Mapping connections of single neurons using a holographic light beam

November 18, 2017

Controlling single neurons using optogenetics (credit: the researchers)

Researchers at MIT and Paris Descartes University have developed a technique for precisely mapping connections of individual neurons for the first time by triggering them with holographic laser light.

The technique is based on optogenetics (using light to stimulate or silence neurons that have been genetically modified to express light-sensitive proteins called “opsins”). Current optogenetics techniques can’t isolate individual neurons (and their connections) because the light strikes a relatively large area — stimulating axons and dendrites of other neurons simultaneously (and these neurons may have different functions, even when nearby).

The new technique stimulates only the soma (body) of the neuron, not its connections. To achieve that, the researchers combined two new advances: an optimized holographic light-shaping microscope* and a localized, more powerful opsin protein called CoChR.

Two-photon computer-generated holography (CGH) was used to create three-dimensional sculptures of light that envelop only a target cell, using a conventional pulsed laser coupled with a widefield epifluorescence imaging system. (credit: Or A. Shemesh et al./Nature Neuroscience)

The researchers used an opsin protein called CoChR, which generates a very strong electric current in response to light, and fused it to a small protein that directs the opsin into the cell bodies of neurons and away from axons and dendrites, which extend from the neuron body, forming “somatic channelrhodopsin” (soCoChR). This new opsin enabled photostimulation of individual cells (regions of stimulation are highlighted by magenta circles) in mouse cortical brain slices with single-cell resolution and with less than 1 millisecond temporal (time) precision — achieving connectivity mapping on intact cortical circuits without crosstalk between neurons. (credit: Or A. Shemesh et al./Nature Neuroscience)

In the new study, by combining this approach with new “somatic channelrhodopsin” opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also great control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability of less than one millisecond, even when the cell is stimulated many times in a row.

“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research. Boyden is co-senior author with Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, of a study that appears in the Nov. 13 issue of Nature Neuroscience.

Mapping neural connections in real time

Using this technique, the researchers were able to stimulate single neurons in brain slices and then measure the responses from cells that are connected to that cell. This may pave the way for more precise diagramming of the connections of the brain, and for analyzing how those connections change in real time as the brain performs a task or learns a new skill.

Optogenetics was co-developed in 2005 by Ed Boyden (credit: MIT)

One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.

“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”

As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.

The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.

* Traditional holography is based on reproducing, with light, the shape of a specific object, in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer without the need of any original object. Combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.
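To make the footnote concrete, here is a minimal Python sketch (an illustration only, not the researchers’ code) of the classic Gerchberg-Saxton algorithm, one standard way a phase mask for a computer-generated hologram is calculated from a desired intensity pattern. The grid size and the single-spot target are arbitrary assumptions.

```python
# A minimal sketch of Gerchberg-Saxton phase retrieval (hypothetical example,
# not the researchers' code). Given a desired intensity pattern in the focal
# plane, it iterates between the hologram plane and the focal plane, keeping
# the phase from each propagation while re-imposing the known amplitudes.
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=50):
    """Return a phase mask whose far-field intensity approximates the target."""
    rng = np.random.default_rng(0)
    phase = 2 * np.pi * rng.random(target_amplitude.shape)  # random initial phase
    for _ in range(n_iter):
        # Propagate a unit-amplitude field with the current phase to the focal plane.
        far_field = np.fft.fft2(np.exp(1j * phase))
        # Keep the propagated phase but impose the desired target amplitude.
        constrained = target_amplitude * np.exp(1j * np.angle(far_field))
        # Propagate back and keep only the phase (a phase-only modulator).
        phase = np.angle(np.fft.ifft2(constrained))
    return phase

# Target: one bright spot, a stand-in for light focused onto a single cell body.
target = np.zeros((128, 128))
target[64, 64] = 1.0
mask = gerchberg_saxton(target)
```

On real hardware, a mask computed this way would be displayed on a spatial light modulator; combined with two-photon excitation, the response stays confined to the illuminated volume.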


Abstract of Temporally precise single-cell-resolution optogenetics

Optogenetic control of individual neurons with high temporal precision within intact mammalian brain circuitry would enable powerful explorations of how neural circuits operate. Two-photon computer-generated holography enables precise sculpting of light and could in principle enable simultaneous illumination of many neurons in a network, with the requisite temporal precision to simulate accurate neural codes. We designed a high-efficacy soma-targeted opsin, finding that fusing the N-terminal 150 residues of kainate receptor subunit 2 (KA2) to the recently discovered high-photocurrent channelrhodopsin CoChR restricted expression of this opsin primarily to the cell body of mammalian cortical neurons. In combination with two-photon holographic stimulation, we found that this somatic CoChR (soCoChR) enabled photostimulation of individual cells in mouse cortical brain slices with single-cell resolution and <1-ms temporal precision. We used soCoChR to perform connectivity mapping on intact cortical circuits.


Peter Diamandis Thinks We’re Evolving Toward “Meta-Intelligence”

November 18, 2017

From Natural Selection to Intelligent Direction

In the next 30 years, humanity is in for a transformation the likes of which we’ve never seen before—and XPRIZE Foundation founder and chairman Peter Diamandis believes that this will give birth to a new species. Diamandis admits that this might sound too far out there for most people. He is convinced, however, that we are evolving towards what he calls “meta-intelligence,” and today’s exponential rate of growth is one clear indication.

In an essay for Singularity Hub, Diamandis outlines the transformative stages in the multi-billion-year pageant of evolution, and takes note of what the recent increasing “temperature” of evolution—a consequence of human activity—may mean for the future. The story, in a nutshell, is this: early prokaryotic life appears about 3.5 billion years ago (bya), representing perhaps a symbiosis of separate metabolic and replicative mechanisms of “life”; at 2.5 bya, eukaryotes emerge as composite organisms incorporating biological “technology” (other living things) within themselves; at 1.5 bya, multicellular metazoans appear, taking the form of eukaryotes that are yoked together in cooperative colonies; and at 400 million years ago, vertebrate fish species emerge onto land to begin life’s adventure beyond the seas.

“Today, at a massively accelerated rate—some 100 million times faster than the steps I outlined above—life is undergoing a similar evolution,” Diamandis writes. He thinks we’ve moved from a simple Darwinian evolution via natural selection into evolution by intelligent direction.

Credits: Richard Bizley/SPL

“I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a ‘Meta-Intelligence,’ a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions,” he writes.

Change is Coming

Diamandis outlines the next stages of humanity’s evolution in four steps, each a parallel to his four evolutionary stages of life on Earth. There are four driving forces behind this evolution: our interconnected or wired world, the emergence of brain-computer interface (BCI), the emergence of artificial intelligence (AI), and man reaching for the final frontier of space.

In the next 30 years, humanity will move from the first stage—where we are today—to the fourth stage. From simple humans dependent on one another, we will incorporate technology into our bodies to allow for more efficient use of information and energy. This is already happening today.

The third stage is a crucial point.

Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.

This brings to mind another futuristic event that many are eagerly anticipating: the technological singularity. “Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence,” said notable futurist Ray Kurzweil, explaining the singularity.

Credits: Lovelace Turing

“It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge.” Kurzweil predicts that this will happen by 2045—within Diamandis’ evolutionary timeline. “The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.”

The fourth and final stage marks humanity’s evolution to becoming a multiplanetary species. “Our journey to the moon, Mars, asteroids and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago,” Diamandis explains.

Buckle up: we have an exciting future ahead of us.

This article was originally published by:
https://futurism.com/peter-diamandis-thinks-were-evolving-toward-meta-intelligence/

Google’s AI Wizard Unveils a New Twist on Neural Networks

November 18, 2017

If you want to blame someone for the hoopla around artificial intelligence, 69-year-old Google researcher Geoff Hinton is a good candidate.

The droll University of Toronto professor jolted the field onto a new trajectory in October 2012. With two grad students, Hinton showed that an unfashionable technology he’d championed for decades called artificial neural networks permitted a huge leap in machines’ ability to understand images. Within six months, all three researchers were on Google’s payroll. Today neural networks transcribe our speech, recognize our pets, and fight our trolls.

But Hinton now belittles the technology he helped bring to the world. “I think the way we’re doing computer vision is just wrong,” he says. “It works better than anything else at present but that doesn’t mean it’s right.”

In its place, Hinton has unveiled another “old” idea that might transform how computers see—and reshape AI. That’s important because computer vision is crucial to applications such as self-driving cars and software that plays doctor.

Late last week, Hinton released two research papers that he says prove out an idea he’s been mulling for almost 40 years. “It’s made a lot of intuitive sense to me for a very long time, it just hasn’t worked well,” Hinton says. “We’ve finally got something that works well.”

Hinton’s new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. In one of the papers posted last week, Hinton’s capsule networks matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits.

In the second, capsule networks almost halved the best previous error rate on a test that challenges software to recognize toys such as trucks and cars from different angles. Hinton has been working on his new technique with colleagues Sara Sabour and Nicholas Frosst at Google’s Toronto office.

Capsule networks aim to remedy a weakness of today’s machine-learning systems that limits their effectiveness. Image-recognition software in use today by Google and others needs a large number of example photos to learn to reliably recognize objects in all kinds of situations. That’s because the software isn’t very good at generalizing what it learns to new scenarios, for example understanding that an object is the same when seen from a new viewpoint.

To teach a computer to recognize a cat from many angles, for example, could require thousands of photos covering a variety of perspectives. Human children don’t need such explicit and extensive training to learn to recognize a household pet.

Hinton’s idea for narrowing the gulf between the best AI systems and ordinary toddlers is to build a little more knowledge of the world into computer-vision software. Capsules—small groups of crude virtual neurons—are designed to track different parts of an object, such as a cat’s nose and ears, and their relative positions in space. A network of many capsules can use that awareness to understand when a new scene is in fact a different view of something it has seen before.
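For readers who want the mechanics, one of the two papers, “Dynamic Routing Between Capsules,” describes a “routing-by-agreement” procedure in which lower-level capsules send their outputs toward the higher-level capsules that agree with them. The numpy sketch below is a bare-bones illustration of that loop and of the paper’s “squash” nonlinearity; the shapes and the toy input are invented for the example, and this is not the authors’ released code.

```python
# A bare-bones numpy sketch of the "squash" nonlinearity and routing-by-
# agreement loop from "Dynamic Routing Between Capsules" (Sabour, Frosst &
# Hinton, 2017). Shapes and the toy input are invented for illustration.
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Scale a capsule vector to length < 1 while preserving its direction."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def route(u_hat, n_iters=3):
    """u_hat: (n_in, n_out, dim) prediction vectors from input capsules."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))  # routing logits, start uniform
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted vote per output capsule
        v = squash(s)                           # output capsule vectors
        b += (u_hat * v[None]).sum(axis=-1)     # agreement strengthens the coupling
    return v

# e.g. 32 part-capsules voting for 10 object-capsules with 16-dim pose vectors
v = route(0.1 * np.random.randn(32, 10, 16))
```

The key design choice is that a capsule’s output is a vector rather than a single activation: its length can encode the probability that an entity is present, while its direction can encode pose, which is what lets agreement between parts signal a whole.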

Hinton formed his intuition that vision systems need such an inbuilt sense of geometry in 1979, when he was trying to figure out how humans use mental imagery. He first laid out a preliminary design for capsule networks in 2011. The fuller picture released last week was long anticipated by researchers in the field. “Everyone has been waiting for it and looking for the next great leap from Geoff,” says Kyunghyun Cho, a professor at NYU who works on image recognition.

It’s too early to say how big a leap Hinton has made—and he knows it. The AI veteran segues from quietly celebrating that his intuition is now supported by evidence, to explaining that capsule networks still need to be proven on large image collections, and that the current implementation is slow compared to existing image-recognition software.

Hinton is optimistic he can address those shortcomings. Others in the field are also hopeful about his long-maturing idea.

Roland Memisevic, cofounder of image-recognition startup Twenty Billion Neurons and a professor at the University of Montreal, says Hinton’s basic design should be capable of extracting more understanding from a given amount of data than existing systems. If proven out at scale, that could be helpful in domains such as healthcare, where image data to train AI systems is much scarcer than the large volume of selfies available around the internet.

In some ways, capsule networks are a departure from a recent trend in AI research. One interpretation of the recent success of neural networks is that humans should encode as little knowledge as possible into AI software, and instead make them figure things out for themselves from scratch. Gary Marcus, a professor of psychology at NYU who sold an AI startup to Uber last year, says Hinton’s latest work represents a welcome breath of fresh air. Marcus argues that AI researchers should be doing more to mimic how the brain has built-in, innate machinery for learning crucial skills like vision and language. “It’s too early to tell how far this particular architecture will go, but it’s great to see Hinton breaking out of the rut that the field has seemed fixated on,” Marcus says.

UPDATED, Nov. 2, 12:55 PM: This article has been updated to include the names of Geoff Hinton’s co-authors.

This article was originally published by:
https://www.wired.com/story/googles-ai-wizard-unveils-a-new-twist-on-neural-networks/

Bionic Contacts: Goodbye Glasses. Hello Vision That’s 3x Better Than 20/20

October 18, 2017

A Clear Problem

Most of us take our vision for granted, and with it the ability to read, write, drive, and complete a multitude of other tasks. Unfortunately, sight does not come so easily for everyone.

For many people, simply seeing is a struggle. In fact, more than 285 million people worldwide have vision problems, according to the World Health Organization (WHO).

Cataracts account for about a third of these. The National Eye Institute reports that more than half of all Americans will have cataracts or will have had cataract surgery by the time they are 80, and in low- and middle-income countries, they’re the leading cause of blindness.

But now, people with vision problems may have new hope.

A Welcome Sight

Soon, cataracts may be a thing of the past, and even better, it may be possible to see a staggering three times better than 20/20 vision. Oh, and you could do it all without wearing glasses or contacts.

So what exactly does having three times better vision mean? If you can currently read a text that is 10 feet away, you would be able to read the same text from 30 feet away. What’s more, people who currently can’t see properly might be able to see a lot better than the average person.

This development comes thanks to the Ocumetics Bionic Lens. This dynamic lens essentially replaces a person’s natural eye lens. It’s placed into the eye via a saline-filled syringe, after which it unravels itself in under 10 seconds.


It may sound painful, but Dr. Garth Webb, the optometrist who invented the Ocumetics Bionic Lens, says that the procedure is identical to cataract surgery and would take just about eight minutes. He adds that people who have the specialized lenses surgically inserted would never get cataracts and that the lenses feel natural and won’t cause headaches or eyestrain.

The Bionic Lens may sound like a fairy tale (or sci-fi dream), but it’s not. It is actually the end result of years and years of research and more than a little funding — so far, the lens has taken nearly a decade to develop and has cost US$3 million.

There is still some way to go before you will be able to buy them, but if the timeline Webb offered in an interview with Eye Design Optometry holds up, human studies will begin in July 2017, and the bionic lenses will be available to the public in March 2018.

Original source: https://futurism.com/bionic-contacts-goodbye-glasses-hello-vision-thats-3x-better-than-2020/

What Does It Cost to Create a Cancer Drug? Less Than You’d Think

October 18, 2017

What does it really cost to bring a drug to market?

The question is central to the debate over rising health care costs and appropriate drug pricing. President Trump campaigned on promises to lower the costs of drugs.

But numbers have been hard to come by. For years, the standard figure has been supplied by researchers at the Tufts Center for the Study of Drug Development: $2.7 billion each, in 2017 dollars.

Yet a new study looking at 10 cancer medications, among the most expensive of new drugs, has arrived at a much lower figure: a median cost of $757 million per drug. (Half cost less, and half more.)

Following approval, the 10 drugs together brought in $67 billion, the researchers also concluded — a more than sevenfold return on investment. Nine out of 10 companies made money, but revenues varied enormously. One drug had not yet earned back its development costs.

The study, published Monday in JAMA Internal Medicine, relied on company filings with the Securities and Exchange Commission to determine research and development costs.

“It seems like they have done a thoughtful and rigorous job,” said Dr. Aaron Kesselheim, director of the program on regulation, therapeutics and the law at Brigham and Women’s Hospital.

“It provides at least something of a reality check,” he added.

The figures were met with swift criticism, however, by other experts and by representatives of the biotech industry, who said that the research did not adequately take into account the costs of the many experimental drugs that fail.

“It’s a bit like saying it’s a good business to go out and buy winning lottery tickets,” Daniel Seaton, a spokesman for the Biotechnology Innovation Organization, said in an email.

Dr. Jerry Avorn, chief of the division of pharmacoepidemiology and pharmacoeconomics at Brigham and Women’s Hospital, predicted that the paper would help fuel the debate over the prices of cancer drugs, which have soared so high “that we are getting into areas that are almost unimaginable economically,” he said.

A leukemia treatment approved recently by the Food and Drug Administration, for example, will cost $475,000 for a single treatment. It is the first of a wave of gene therapy treatments likely to carry staggering price tags.

“This is an important brick in the wall of this developing concern,” he said.

Dr. Vinay Prasad, an oncologist at Oregon Health and Science University, and Dr. Sham Mailankody, of Memorial Sloan Kettering Cancer Center, arrived at their figures after reviewing data on 10 companies that brought a cancer drug to market in the past decade.

Since the companies also were developing other drugs that did not receive approval from the F.D.A., the researchers were able to include the companies’ total spending on research and development, not just what they spent on the drugs that succeeded.

One striking example was ibrutinib, made by Pharmacyclics. It was approved in 2013 for patients with certain blood cancers who did not respond to conventional therapy.

Ibrutinib was the only one of the four drugs the company was developing to receive F.D.A. approval. The company’s research and development costs for its four drugs were $388 million, its S.E.C. filings indicated.

The drug ibrutinib was developed to treat chronic lymphocytic leukemia, shown here in a CT reconstruction of a patient’s neck. The manufacturer’s return on investment was quite high, according to a new study. Credit LLC/Science Source

After the drug was approved, AbbVie acquired its manufacturer, Pharmacyclics, for $21 billion. “That is a 50-fold difference between revenue post-approval and cost to develop,” Dr. Prasad said.
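As a quick sanity check of that arithmetic, using only the two figures quoted above:

```python
# Checking the "50-fold" quote against the two figures given in the article:
# $388 million of R&D across four drug candidates vs. a $21 billion acquisition.
rd_spending = 388e6
acquisition_price = 21e9
print(f"{acquisition_price / rd_spending:.0f}-fold")  # ~54-fold, i.e. roughly 50-fold
```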

Accurate figures on drug development are difficult to find and often disputed. Although it is widely cited, the Tufts study also was fiercely criticized.

One objection was that the researchers, led by Joseph A. DiMasi, did not disclose the companies’ data on development costs. The study involved ten large companies, which were not named, and 106 investigational drugs, also not named.

But Dr. DiMasi found the new study “irredeemably flawed at a fundamental level.”

“The sample consists of relatively small companies that have gotten only one drug approved, with few other drugs of any type in development,” he said. The result is “substantial selection bias,” meaning that the estimates do not accurately reflect the industry as a whole.

Ninety-five percent of cancer drugs that enter clinical trials fail, said Mr. Seaton, of the biotech industry group. “The small handful of successful drugs — those looked at by this paper — must be profitable enough to finance all of the many failures this analysis leaves unexamined.”

“When the rare event occurs that a company does win approval,” he added, “the reward must be commensurate with taking on the multiple levels of risk not seen in any other industry if drug development is to remain economically viable for prospective investors.”

Cancer drugs remain among the most expensive medications, with prices reaching the hundreds of thousands of dollars per patient.

Although the new study was small, its estimates are so much lower than previous figures, and the return on investment so great, that experts say they raise questions about whether soaring drug prices really are needed to encourage investment.

“That seems hard to swallow when they make seven times what they invested in the first four years,” Dr. Prasad said.

The new study has limitations, noted Patricia Danzon, an economist at the University of Pennsylvania’s Wharton School.

It involved just ten small biotech companies whose cancer drugs were aimed at limited groups of patients with less common diseases.

For such drugs, the F.D.A. often permits clinical trials to be very small and sometimes without control groups. Therefore development costs may have been lower for this group than for drugs that require longer and larger studies.

But, Dr. Danzon said, most new cancer drugs today are developed this way: by small companies and for small groups of patients. The companies often license or sell successful drugs to the larger companies.

The new study, she said, “is shining a light on a sector of the industry that is becoming important now.” The evidence, she added, is “irrefutable” that the cost of research and development “is small relative to the revenues.”

When it comes to drug prices, it does not matter what companies spend on research and development, Dr. Kesselheim said.

“They are based on what the market will bear.”

Correction: September 14, 2017
An earlier version of this article incorrectly identified the company that acquired a drug maker. It was AbbVie, not Janssen Biotech (which jointly develops the drug). Additionally, the article incorrectly described what AbbVie acquired. It was the company Pharmacyclics, which developed the drug Imbruvica, not the drug itself.

Original source: https://www.nytimes.com/2017/09/11/health/cancer-drug-costs.html

Is our world a simulation? Why some scientists say it’s more likely than not

October 18, 2017

When Elon Musk isn’t outlining plans to use his massive rocket to leave a decaying Planet Earth and colonize Mars, he sometimes talks about his belief that Earth isn’t even real and we probably live in a computer simulation.

“There’s a billion to one chance we’re living in base reality,” he said at a conference in June.

Musk is just one of the people in Silicon Valley to take a keen interest in the “simulation hypothesis”, which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence. If it sounds a lot like The Matrix, that’s because it is.

According to this week’s New Yorker profile of Y Combinator venture capitalist Sam Altman, there are two tech billionaires secretly engaging scientists to work on breaking us out of the simulation. But what does this mean? And what evidence is there that we are, in fact, living in The Matrix?

One popular argument for the simulation hypothesis, outside of acid trips, came from Oxford University’s Nick Bostrom in 2003 (although the idea dates back as far as the 17th-century philosopher René Descartes). In a paper titled “Are You Living in a Computer Simulation?”, Bostrom suggested that members of an advanced “posthuman” civilization with vast computing power might choose to run simulations of their ancestors in the universe.

This argument is extrapolated from observing current trends in technology, including the rise of virtual reality and efforts to map the human brain.

If we believe that there is nothing supernatural about what causes consciousness and it’s merely the product of a very complex architecture in the human brain, we’ll be able to reproduce it. “Soon there will be nothing technical standing in the way to making machines that have their own consciousness,” said Rich Terrile, a scientist at Nasa’s Jet Propulsion Laboratory.

At the same time, videogames are becoming more and more sophisticated and in the future we’ll be able to have simulations of conscious entities inside them.


“Forty years ago we had Pong – two rectangles and a dot. That’s where we were. Now 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality,” said Musk. “If you assume any rate of improvement at all, then the games will become indistinguishable from reality.”

It’s a view shared by Terrile. “If one progresses at the current rate of technology a few decades into the future, very quickly we will be a society where there are artificial entities living in simulations that are much more abundant than human beings.”

If there are many more simulated minds than organic ones, then the odds that we are among the real minds start to look increasingly slim. As Terrile puts it: “If in the future there are more digital people living in simulated environments than there are today, then what is to say we are not part of that already?”
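The argument is, at bottom, simple counting: the probability of being one of the real minds is the number of real minds divided by the number of all minds, real plus simulated. A toy illustration (all counts invented):

```python
# Toy version of the counting argument: P(real) = real minds / all minds.
# All counts here are invented for illustration.
real_minds = 1e10  # say, ten billion biological minds
for sims_per_real in (0, 1, 100, 1_000_000):
    simulated = real_minds * sims_per_real
    p_real = real_minds / (real_minds + simulated)
    print(f"{sims_per_real:>9,} simulated minds per real one -> P(real) = {p_real:.2e}")
```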

Reasons to believe that the universe is a simulation include the fact that it behaves mathematically and is broken up into pieces (subatomic particles) like a pixelated video game. “Even things that we think of as continuous – time, energy, space, volume – all have a finite limit to their size. If that’s the case, then our universe is both computable and finite. Those properties allow the universe to be simulated,” Terrile said.

“Quite frankly, if we are not living in a simulation, it is an extraordinarily unlikely circumstance,” he added.

So who has created this simulation? “Our future selves,” said Terrile.

Not everyone is so convinced by the hypothesis. “Is it logically possible that we are in a simulation? Yes. Are we probably in a simulation? I would say no,” said Max Tegmark, a professor of physics at MIT.

“In order to make the argument in the first place, we need to know what the fundamental laws of physics are where the simulations are being made. And if we are in a simulation then we have no clue what the laws of physics are. What I teach at MIT would be the simulated laws of physics,” he said.

Harvard theoretical physicist Lisa Randall is even more skeptical. “I don’t see that there’s really an argument for it,” she said. “There’s no real evidence.”

“It’s also a lot of hubris to think we would be what ended up being simulated.”

Terrile believes that recognizing that we are probably living in a simulation is as game-changing as Copernicus realizing that the Earth was not the center of the universe. “It was such a profound idea that it wasn’t even thought of as an assumption,” he said.

Before Copernicus, scientists had tried to explain the peculiar behaviour of the planets’ motion with complex mathematical models. “When they dropped the assumption, everything else became much simpler to understand.”

That we might be in a simulation is, Terrile argues, a simpler explanation for our existence than the idea that we are the first generation to rise up from primordial ooze and evolve into molecules, biology and eventually intelligence and self-awareness. The simulation hypothesis also accounts for peculiarities in quantum mechanics, particularly the measurement problem, whereby things only become defined when they are observed.

“For decades it’s been a problem. Scientists have bent over backwards to eliminate the idea that we need a conscious observer. Maybe the real solution is you do need a conscious entity like a conscious player of a video game,” he said.

For Tegmark, this doesn’t make sense. “We have a lot of problems in physics and we can’t blame our failure to solve them on simulation.”

How can the hypothesis be put to the test? On one hand, neuroscientists and artificial intelligence researchers can check whether it’s possible to simulate the human mind. So far, machines have proven to be good at playing chess and Go and putting captions on images. But can a machine achieve consciousness? We don’t know.

On the other hand, scientists can look for hallmarks of simulation. “Suppose someone is simulating our universe – it would be very tempting to cut corners in ways that makes the simulation cheaper to run. You could look for evidence of that in an experiment,” said Tegmark.

For Terrile, the simulation hypothesis has “beautiful and profound” implications.

First, it provides a scientific basis for some kind of afterlife or larger domain of reality above our world. “You don’t need a miracle, faith or anything special to believe it. It comes naturally out of the laws of physics,” he said.

Second, it means we will soon have the same ability to create our own simulations.

“We will have the power of mind and matter to be able to create whatever we want and occupy those worlds.”

Original source: https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix#img-1

Deus ex machina: former Google engineer is developing an AI god

October 18, 2017

Intranet service? Check. Autonomous motorcycle? Check. Driverless car technology? Check. Obviously the next logical project for a successful Silicon Valley engineer is to set up an AI-worshipping religious organization.

Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s Backchannel. Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

Levandowski was co-founder of autonomous trucking company Otto, which Uber bought in 2016. He was fired from Uber in May amid allegations that he had stolen trade secrets from Google to develop Otto’s self-driving technology. He must be grateful for this religious fall-back project, first registered in 2015.

The Way of the Future team did not respond to requests for more information about their proposed benevolent AI overlord, but history tells us that new technologies and scientific discoveries have continually shaped religion, killing old gods and giving birth to new ones.

As author Yuval Noah Harari notes: “That is why agricultural deities were different from hunter-gatherer spirits, why factory hands and peasants fantasised about different paradises, and why the revolutionary technologies of the 21st century are far more likely to spawn unprecedented religious movements than to revive medieval creeds.”

Religions, Harari argues, must keep up with the technological advancements of the day or they become irrelevant, unable to answer or understand the quandaries facing their disciples.

“The church does a terrible job of reaching out to Silicon Valley types,” acknowledges Christopher Benek, a pastor in Florida and founding chair of the Christian Transhumanist Association.

Silicon Valley, meanwhile, has sought solace in technology and has developed quasi-religious concepts including the “singularity”, the hypothesis that machines will eventually be so smart that they will outperform all human capabilities, leading to a superhuman intelligence that will be so sophisticated it will be incomprehensible to our tiny fleshy, rational brains.

Anthony Levandowski, the former head of Uber’s self-driving program, with one of the company’s driverless cars in San Francisco. Photograph: Eric Risberg/AP

For futurists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, leading to digital immortality. Others like Elon Musk and Stephen Hawking warn that such systems pose an existential threat to humanity.

“With artificial intelligence we are summoning the demon,” Musk said at a conference in 2014. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”

Benek argues that advanced AI is compatible with Christianity – it’s just another technology that humans have created under guidance from God that can be used for good or evil.

“I totally think that AI can participate in Christ’s redemptive purposes,” he said, by ensuring it is imbued with Christian values.

“Even if people don’t buy organized religion, they can buy into ‘do unto others’.”

For transhumanist and “recovering Catholic” Zoltan Istvan, religion and science converge conceptually in the singularity.

“God, if it exists as the most powerful of all singularities, has certainly already become pure organized intelligence,” he said, referring to an intelligence that “spans the universe through subatomic manipulation of physics”.

“And perhaps, there are other forms of intelligence more complicated than that which already exist and which already permeate our entire existence. Talk about ghost in the machine,” he added.

For Istvan, an AI-based God is likely to be more rational and more attractive than current concepts (“the Bible is a sadistic book”) and, he added, “this God will actually exist and hopefully will do things for us.”

We don’t know whether Levandowski’s Godhead ties into any existing theologies or is a manmade alternative, but it’s clear that advancements in technologies including AI and bioengineering kick up the kinds of ethical and moral dilemmas that make humans seek advice and comfort from a higher power: what will humans do once artificial intelligence outperforms us in most tasks? How will society be affected by the ability to create super-smart, athletic “designer babies” that only the rich can afford? Should a driverless car kill five pedestrians or swerve to the side to kill the owner?

If traditional religions don’t have the answer, AI – or at least the promise of AI – might be alluring.

Original source: https://www.theguardian.com/technology/2017/sep/28/artificial-intelligence-god-anthony-levandowski

3D ‘body-on-a-chip’ project aims to accelerate drug testing, reduce costs

October 18, 2017

A team of scientists at Wake Forest Institute for Regenerative Medicine and nine other institutions has engineered miniature 3D human hearts, lungs, and livers to achieve more realistic testing of how the human body responds to new drugs.

The “body-on-a-chip” project, funded by the Defense Threat Reduction Agency, aims to help reduce the estimated $2 billion cost and 90 percent failure rate that pharmaceutical companies face when developing new medications. The research is described in an open-access paper in Scientific Reports, published by Nature.

Using the same expertise they’ve employed to build new organs for patients, the researchers connected together micro-sized 3D liver, heart, and lung organs-on-a-chip (or “organoids”) on a single platform to monitor their function. They selected heart and liver for the system because toxicity to these organs is a major reason for drug candidate failures and drug recalls. And lungs were selected because they’re the point of entry for toxic particles and for aerosol drugs such as asthma inhalers.

The integrated three-tissue organ-on-a-chip platform combines liver, heart, and lung organoids. (Top) Liver and cardiac modules are created by bioprinting spherical organoids using customized bioinks, resulting in 3D hydrogel constructs (upper left) that are placed into the microreactor devices. (Bottom) Lung modules are formed by creating layers of cells over porous membranes within microfluidic devices. TEER (trans-endothelial [or epithelial] electrical resistance) sensors allow for monitoring tissue barrier integrity over time. The three organoids are placed in a sealed, monitored system with a real-time camera. A nutrient-filled liquid that circulates through the system keeps the organoids alive and is used to introduce potential drug therapies into the system. (credit: Aleksander Skardal et al./Scientific Reports)

Why current drug testing fails

Drug compounds are currently screened in the lab using human cells and then tested in animals. But these methods don’t adequately replicate how drugs affect human organs. “If you screen a drug in livers only, for example, you’re never going to see a potential side effect to other organs,” said Aleks Skardal, Ph.D., assistant professor at Wake Forest Institute for Regenerative Medicine and lead author of the paper.

In many cases during testing of new drug candidates — and sometimes even after the drugs have been approved for use — drugs also have unexpected toxic effects in tissues not directly targeted by the drugs themselves, he explained. “By using a multi-tissue organ-on-a-chip system, you can hopefully identify toxic side effects early in the drug development process, which could save lives as well as millions of dollars.”

“There is an urgent need for improved systems to accurately predict the effects of drugs, chemicals and biological agents on the human body,” said Anthony Atala, M.D., director of the institute and senior researcher on the multi-institution study. “The data show a significant toxic response to the drug as well as mitigation by the treatment, accurately reflecting the responses seen in human patients.”

Advanced drug screening, personalized medicine

The scientists conducted multiple scenarios to ensure that the body-on-a-chip system mimics a multi-organ response.

For example, they introduced a drug used to treat cancer into the system. Known to cause scarring of the lungs, the drug also unexpectedly affected the system’s heart. (A control experiment using only the heart failed to show a response.) The scientists theorize that the drug caused inflammatory proteins from the lung to be circulated throughout the system. As a result, the heart beat faster and then later stopped altogether, indicating a toxic side effect.

“This was completely unexpected, but it’s the type of side effect that can be discovered with this system in the drug development pipeline,” Skardal noted.
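Conceptually, the coupling works because every module shares one circulating medium: a perturbation secreted in one compartment is carried to the others. The sketch below is a deliberately crude two-compartment toy model of that idea (invented rate constants, not the study’s data or model), just to show how a drug applied to a “lung” compartment can drive a response in a downstream “heart” compartment.

```python
# A crude two-compartment toy (hypothetical rates, not the study's model):
# drug in the "lung" module releases an inflammatory signal into the shared
# circulating medium, and the "heart" module responds to that signal.
K_RELEASE = 0.5   # signal released per unit drug per hour (invented)
K_CLEAR = 0.2     # fraction of circulating signal cleared per hour (invented)
K_EFFECT = 25.0   # extra beats per minute per unit of signal (invented)

drug_in_lung = 1.0      # arbitrary dose units
signal = 0.0            # inflammatory signal in the shared medium
baseline_bpm = 60.0
heart_rate = baseline_bpm

dt = 0.1                # hours per step
for _ in range(240):    # simulate 24 hours
    signal += (K_RELEASE * drug_in_lung - K_CLEAR * signal) * dt
    heart_rate = baseline_bpm + K_EFFECT * signal

print(f"circulating signal ~ {signal:.2f}, heart rate ~ {heart_rate:.0f} bpm")
```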

Test of “liver on a chip” response to two drugs to demonstrate clinical relevance. Liver construct toxicity response was assessed following exposure to acetaminophen (APAP) and the clinically-used APAP countermeasure N-acetyl-L-cysteine (NAC). Liver constructs in the fluidic system (left) were treated with no drug (b), 1 mM APAP (c), and 10 mM APAP (d) — showing progressive loss of function and cell death, compared to 10 mM APAP +20 mM NAC (e), which mitigated those negative effects. The data shows both a significant cytotoxic (cell-damage) response to APAP as well as its mitigation by NAC treatment — accurately reflecting the clinical responses seen in human patients. (credit: Aleksander Skardal et al./Scientific Reports)

The scientists are now working to increase the speed of the system for large-scale screening and to add additional organs.

“Eventually, we expect to demonstrate the utility of a body-on-a-chip system containing many of the key functional organs in the human body,” said Atala. “This system has the potential for advanced drug screening and also to be used in personalized medicine — to help predict an individual patient’s response to treatment.”

Several patent applications comprising the technology described in the paper have been filed.

The international collaboration included researchers at Wake Forest Institute for Regenerative Medicine at the Wake Forest School of Medicine, Harvard-MIT Division of Health Sciences and Technology, Wyss Institute for Biologically Inspired Engineering at Harvard University, Biomaterials Innovation Research Center at Harvard Medical School, Bloomberg School of Public Health at Johns Hopkins University, Virginia Tech-Wake Forest School of Biomedical Engineering and Sciences, Brigham and Women’s Hospital, University of Konstanz, Konkuk University (Seoul), and King Abdulaziz University.


Abstract of Multi-tissue interactions in an integrated three-tissue organ-on-a-chip platform

Many drugs have progressed through preclinical and clinical trials and have been available – for years in some cases – before being recalled by the FDA for unanticipated toxicity in humans. One reason for such poor translation from drug candidate to successful use is a lack of model systems that accurately recapitulate normal tissue function of human organs and their response to drug compounds. Moreover, tissues in the body do not exist in isolation, but reside in a highly integrated and dynamically interactive environment, in which actions in one tissue can affect other downstream tissues. Few engineered model systems, including the growing variety of organoid and organ-on-a-chip platforms, have so far reflected the interactive nature of the human body. To address this challenge, we have developed an assortment of bioengineered tissue organoids and tissue constructs that are integrated in a closed circulatory perfusion system, facilitating inter-organ responses. We describe a three-tissue organ-on-a-chip system, comprised of liver, heart, and lung, and highlight examples of inter-organ responses to drug administration. We observe drug responses that depend on inter-tissue interaction, illustrating the value of multiple tissue integration for in vitro study of both the efficacy of and side effects associated with candidate drugs.

Video

“The Looking Planet” – by Eric Law Anderson

August 06, 2017

Enjoy this CGI 3D animated short film, winner of over 50 film festival jury and audience awards, including Best Short Film, Best Sci-Fi Film, Best Animated Film, Best Production Design, Best Visual Effects, and Best Sound Design. During the construction of the universe, a young member of the Cosmos Corps of Engineers decides to break some fundamental laws in the name of self-expression.