Eugenics 2.0: We’re at the Dawn of Choosing Embryos by Health, Height, and More

November 18, 2017

Nathan Treff was diagnosed with type 1 diabetes at 24. It’s a disease that runs in families, but it has complex causes. More than one gene is involved. And the environment plays a role too.

So you don’t know who will get it. Treff’s grandfather had it, and lost a leg. But Treff’s three young kids are fine, so far. He’s crossing his fingers they won’t develop it later.

Now Treff, an in vitro fertilization specialist, is working on a radical way to change the odds. Using a combination of computer models and DNA tests, the startup company he’s working with, Genomic Prediction, thinks it has a way of predicting which IVF embryos in a laboratory dish would be most likely to develop type 1 diabetes or other complex diseases. Armed with such statistical scorecards, doctors and parents could huddle and choose to avoid embryos with failing grades.

IVF clinics already test the DNA of embryos to spot rare diseases, like cystic fibrosis, caused by defects in a single gene. But these “preimplantation” tests are poised for a dramatic leap forward as it becomes possible to peer more deeply at an embryo’s genome and create broad statistical forecasts about the person it would become.

The advance is occurring, say scientists, thanks to a growing flood of genetic data collected from large population studies. As statistical models known as predictors gobble up DNA and health information about hundreds of thousands of people, they’re getting more accurate at spotting the genetic patterns that foreshadow disease risk. But they have a controversial side, since the same techniques can be used to project the eventual height, weight, skin tone, and even intelligence of an IVF embryo.

In addition to Treff, who is the company’s chief scientific officer, the founders of Genomic Prediction are Stephen Hsu, a physicist who is vice president for research at Michigan State University, and Laurent Tellier, a Danish bioinformatician who is CEO. Both Hsu and Tellier have been closely involved with a project in China that aims to sequence the genomes of mathematical geniuses, hoping to shed light on the genetic basis of IQ.

Spotting outliers

The company’s plans rely on a tidal wave of new knowledge showing how small genetic differences can add up to put one person, but not another, at high odds for diabetes or a neurotic personality, or make them taller or shorter than average. Already, such “polygenic risk scores” are used in direct-to-consumer gene tests, such as reports from 23andMe that tell customers their genetic chance of being overweight.

For adults, risk scores are little more than a novelty or a source of health advice they can ignore. But if the same information is generated about an embryo, it could lead to existential consequences: who will be born, and who stays in a laboratory freezer.

“I remind my partners, ‘You know, if my parents had this test, I wouldn’t be here,’” says Treff, a prize-winning expert on diagnostic technology who is the author of more than 90 scientific papers.

Genomic Prediction was founded this year and has raised funds from venture capitalists in Silicon Valley, though it declines to say who they are. Tellier, whose inspiration is the science fiction film Gattaca, says the company plans to offer reports to IVF doctors and parents identifying “outliers”—those embryos whose genetic scores put them at the wrong end of a statistical curve for disorders such as diabetes, late-life osteoporosis, schizophrenia, and dwarfism, depending on whether models for those problems prove accurate.

A days-old human embryo in an IVF clinic. Some cells can be removed to perform DNA tests.

dallasfertility.com

The company’s concept, which it calls expanded preimplantation genetic testing, or ePGT, would effectively add a range of common disease risks to the menu of rare ones already available, which it also plans to test for. Its promotional material uses a picture of a mostly submerged iceberg to get the idea across. “We believe it will become a standard part of the IVF process,” says Tellier, just as a test for Down syndrome is a standard part of pregnancy.

Some experts contacted by MIT Technology Review said they believed it’s premature to introduce polygenic scoring technology into IVF clinics—though perhaps not by very much. Matthew Rabinowitz, CEO of the prenatal-testing company Natera, based in California, says he thinks predictions obtained today could be “largely misleading” because DNA models don’t function well enough. But Rabinowitz agrees that the technology is coming.

“You are not going to stop the modeling in genetics, and you are not going to stop people from accessing it,” he says. “It’s going to get better and better.”

Sharp questions

Testing embryos for disease risks, including risks for diseases that develop only late in life, is considered ethically acceptable by U.S. fertility doctors. But the new DNA scoring models mean parents might be able to choose their kids on the basis of traits like IQ or adult weight. That’s because, just like type 1 diabetes, these traits are the result of complex genetic influences the predictor algorithms are designed to find.

“It’s the camel’s nose under the tent. Because if you are doing it for something more serious, then it’s trivially easy to look for anything else,” says Michelle Meyer, a bioethicist at the Geisinger Health System who analyzes issues in reproductive genetics. “Here is the genomic dossier on each embryo. And you flip through the book.” Imagine picking the embryo most likely to get into Harvard like Mom, or to be tall like Dad.

For Genomic Prediction, a tiny startup based out of a tech incubator in New Jersey, such questions will be especially sharply drawn. That is because of Hsu’s long-standing interest in genetic selection for superior intelligence.

In 2014, Hsu authored an essay titled “Super-Intelligent Humans Are Coming,” in which he argued that selecting embryos for intelligence could boost the resulting child’s IQ by 15 points.

Genomic Prediction says it will only report diseases—that is, identify those embryos it thinks would develop into people with serious medical problems. Even so, on his blog and in public statements, Hsu has for years been developing a vision that goes far beyond that.

“Suppose I could tell you embryo four is going to be the tallest, embryo three is going to be the smartest, embryo two is going to be very antisocial. Suppose that level of granularity was available in the reports,” he told the conservative radio and YouTube personality Stefan Molyneux this spring. “That is the near-term future that we as a civilization face. This is going to be here.”

Measuring height

The fuel for the predictive models is a deluge of new data, most recently genetic readouts and medical records for 500,000 middle-aged Britons that were released in July by the U.K. Biobank, a national precision-medicine project in that country.

The data trove included, for each volunteer, a map of about 800,000 single-nucleotide polymorphisms, or SNPs—points where their DNA differs slightly from another person’s. The release caused a pell-mell rush by geneticists to update their calculations about exactly how much of human disease, or even routine behaviors like bread consumption, these genetic differences could explain.

Armed with the U.K. data, Hsu and Tellier claimed a breakthrough. For one easily measured trait, height, they used machine-learning techniques to create a predictor that performed strikingly well: they reported that the model could, for most people, predict height from DNA data to within three or four centimeters.

Height is currently the easiest trait to predict. It’s determined mostly by genes, and it’s always recorded in population databases. But Tellier says genetic databases are “rapidly approaching” the size needed to make accurate predictions about other human features, including risk for diseases whose true causes aren’t even known.

Tellier says Genomic Prediction will zero in on disease traits for which the predictors already perform fairly well, or will soon. Those include autoimmune disorders like the illness Treff suffers from. In those conditions, a smaller set of genes dominates the predictions, sometimes making them more reliable.

A report from Germany in 2014, for instance, found it was possible to distinguish fairly accurately, from a polygenic DNA score alone, between a person with type 1 diabetes and a person without it. While the scores aren’t perfectly accurate, consider how they might influence a prospective parent. On average, children of a man with type 1 diabetes have a one in 17 chance of developing the ailment. Picking the best of several embryos made in an IVF clinic, even with an error-prone predictor, could lower the odds.
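
To see why even an imperfect predictor could shift those odds, consider a toy simulation (my own illustration, not anything Genomic Prediction has published). It uses a standard liability-threshold model: each hypothetical embryo has an unobserved “liability” for the disease, the predictor sees that liability only through noise, and the parents implant whichever embryo scores lowest. Every number in it (the 1-in-17 baseline, five embryos per cycle, the noise level) is an assumption chosen for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative assumptions, not published figures:
BASELINE_RISK = 1 / 17      # quoted risk for children of an affected father
N_EMBRYOS = 5               # hypothetical number of viable embryos per cycle
PREDICTOR_NOISE = 1.0       # std. dev. of the error in the polygenic score
N_CYCLES = 200_000          # simulated IVF cycles

# Liability-threshold model: disease occurs when a standard-normal
# "liability" exceeds the threshold implied by the baseline risk.
threshold = norm.ppf(1 - BASELINE_RISK)

liability = rng.standard_normal((N_CYCLES, N_EMBRYOS))
# The predictor observes liability only through noise, which stands in for
# both measurement error and environmental factors it cannot see.
predicted = liability + PREDICTOR_NOISE * rng.standard_normal((N_CYCLES, N_EMBRYOS))

# Without selection: implant the first embryo from each cycle.
risk_random = (liability[:, 0] > threshold).mean()

# With selection: implant the embryo with the lowest predicted score.
chosen = np.argmin(predicted, axis=1)
risk_selected = (liability[np.arange(N_CYCLES), chosen] > threshold).mean()

print(f"risk without selection:           {risk_random:.2%}")
print(f"risk with noisy-predictor choice: {risk_selected:.2%}")
```

Even with substantial noise in the score, picking the lowest-scoring embryo noticeably reduces the simulated risk, which is the intuition behind the company’s pitch.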

In the case of height, Genomic Prediction hopes to use the model to help identify embryos that would grow into adults shorter than 4’10”, the medical definition of dwarfism, says Tellier. There are many physical and psychological disadvantages to being so short. Eventually the company could also have the ability to identify intellectual problems, such as embryos with a predicted IQ of less than 70.

The company doesn’t intend to give out raw trait scores to parents, only to flag embryos likely to be abnormal. That is because the product has to be “ethically defensible,” says Hsu: “We would only reveal the negative outlier state. We don’t report, ‘This guy is going to be in the NBA.’”

Some scientists doubt the scores will prove useful at picking better people from IVF dishes. Even if they’re accurate on the average, for individuals there’s no guarantee of pinpoint precision. What’s more, environment has as big an impact on most traits as genes do. “There is a high probability that you will get it wrong—that would be my concern,” says Manuel Rivas, a professor at Stanford University who studies the genetics of Crohn’s disease. “If someone is using that information to make decisions about embryos, I don’t know what to make of it.”

Efforts to introduce this type of statistical scoring into reproduction have, in the past, drawn criticism. In 2013, 23andMe provoked outrage when it won a patent on the idea of drop-down menus parents could use to pick sperm or egg donors—say, to try to get a specific eye color. The company, funded by Google, quickly backpedaled.

But since then, polygenic scores have become a routine aspect of novelty DNA tests. A company called HumanCode sells a $199 test online that uses SNP scores to tell two people about how tall their kids might be. In the dairy cattle industry, polygenic tests are widely used to rate young animals for how much milk they’ll produce.

“At a broad level, our understanding of complex traits has evolved. It’s not that there are a few genes contributing to complex traits; it’s tens, or thousands, or even all genes,” says Meyer, the Geisinger professor. “That has led to polygenic risk scores. It’s many variants, each with small contributions of their own, but which have a significant contribution together. You add them up.” In his predictor for height, Hsu eventually made use of 20,000 variants to guess how tall each person was.
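
The arithmetic Meyer describes is essentially a weighted sum over variants. Here is a minimal sketch, with made-up effect sizes and genotypes rather than weights from any real study:

```python
import numpy as np

# Hypothetical inputs: per-SNP effect sizes (as a real study would estimate them)
# and one person's genotype at each SNP, coded as 0, 1, or 2 copies of the
# effect allele. Real height predictors use tens of thousands of SNPs.
effect_sizes = np.array([0.012, -0.008, 0.031, 0.000, -0.015])  # cm per allele, invented
genotype     = np.array([2,      1,      0,     2,      1])     # allele counts, invented

# The polygenic score is the sum of allele count times effect size across variants.
polygenic_score = float(genotype @ effect_sizes)

# For a trait like height the score offsets the population mean; for a disease,
# it is usually converted into a relative risk or a percentile instead.
print(f"polygenic contribution: {polygenic_score:+.3f} cm (toy numbers)")
```

Predictors like Hsu’s differ mainly in how the weights are estimated and how many variants are included, not in this basic summation.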

Measuring embryos

Around the world, a million couples undergo IVF each year; in the U.S., test-tube babies account for 1 percent of births. Preimplantation genetic diagnosis, or PGD, has been part of the technology since the 1990s. In that procedure, a few cells are plucked from a days-old embryo growing in a laboratory so they can be tested.

Until now, doctors have used PGD to detect embryos with major abnormalities, such as missing chromosomes, as well as those with “single gene” defects. Parents who carry the defective gene that causes Huntington’s disease, for instance, can use embryo tests to avoid having a child with the fatal brain ailment.

The obstacle to polygenic tests has been that with so few cells, it’s been difficult to get the broad, accurate view of an embryo’s genome necessary to perform the needed calculations. “It’s very hard to make reliable measurements on that little DNA,” says Rabinowitz, the Natera CEO.

Tellier says Genomic Prediction has developed an improved method for analyzing embryonic DNA, which he says will first be used to improve on traditional PGD, combining many single-gene tests into one. Tellier says the same technique will permit it to collect polygenic scores on embryos, although the company did not describe the method in detail. But other scientists have already demonstrated ways to overcome the accuracy barrier.

In 2015, a team led by Rabinowitz and Jay Shendure of the University of Washington did it by sequencing in detail the genomes of two parents undergoing IVF. That let them infer the embryo’s genome sequence, even though the embryo test itself was no more accurate than before. When the babies were born, they found they’d been right.
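
The inference step can be illustrated with a deliberately simplified sketch (it ignores recombination and is not Natera’s actual algorithm): given the parents’ phased haplotypes and a handful of noisy SNP calls from the embryo biopsy, pick the combination of transmitted haplotypes that best explains the measurements, then read the rest of the embryo’s genome off the parents’ accurately sequenced DNA.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Toy setup: each parent has two phased haplotypes over 20 SNPs, coded 0/1.
N_SNPS = 20
mother = rng.integers(0, 2, size=(2, N_SNPS))
father = rng.integers(0, 2, size=(2, N_SNPS))

# The embryo inherits one whole haplotype from each parent (no recombination
# in this toy). Its true genotype is the allele count at each SNP.
true_m, true_f = 0, 1
true_genotype = mother[true_m] + father[true_f]

# The biopsy yields noisy measurements at only a few SNPs.
measured = rng.choice(N_SNPS, size=6, replace=False)
noisy_calls = true_genotype[measured] + rng.normal(0, 0.4, size=6)

# Inference: try every (maternal, paternal) haplotype pair and keep the one
# that best explains the noisy measurements (least-squares fit).
best_m, best_f = min(
    product(range(2), range(2)),
    key=lambda mf: np.sum(
        (mother[mf[0], measured] + father[mf[1], measured] - noisy_calls) ** 2
    ),
)

# The inferred embryo genome comes from the parents' haplotypes, including at
# SNPs the biopsy never measured.
inferred = mother[best_m] + father[best_f]
print("haplotypes recovered:", (best_m, best_f) == (true_m, true_f))
print("genotype errors:", int(np.sum(inferred != true_genotype)))
```

The point is that the accuracy comes from the parents’ deeply sequenced genomes; the sparse, noisy embryo data only has to identify which parental DNA was inherited.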

“We do have the technology to reconstruct the genome of an embryo and create a polygenic model,” says Rabinowitz, whose publicly traded company is worth about $600 million, and who says he has been mulling whether to enter the embryo-scoring business. “The problem is that the models have not quite been ready for prime time.”

That’s because despite Hsu’s success with height, the scoring algorithms have significant limitations. One is that they’re built using data mostly from Northern Europeans. That means they may not be useful for people from Asia or Africa, where the pattern of SNPs is different, or for people of mixed ancestry. Even their performance for specific families of European background can’t be taken for granted unless the procedure is carefully tested in a clinical study, something that’s never been done, says Akash Kumar, a Stanford resident physician who was lead author of the Natera study.

Kumar, who treats young patients with rare disorders, says the genetic predictors raise some “big issues.” One is that the sheer amount of genetic data becoming available could make it temptingly easy to assess nonmedical traits. “We’ve seen such a crazy change in the number of people we are able to study,” he says. “Not many have schizophrenia, but they all have a height and a body-mass index. So the number of people you can use to build the trait models is much larger. It’s a very unique place to be, thinking what we should do with this technology.”

Smarter kids

This week, Genomic Prediction manned a booth at the annual meeting of the American Society for Reproductive Medicine. That organization, which represents fertility doctors and scientists, has previously said it thinks testing embryos for late-life conditions, like Alzheimer’s, would be “ethically justified.” It cited, among other reasons, the “reproductive liberty” of parents.

The society has been more ambivalent about choosing the sex of embryos (something that conventional PGD allows), leaving it to the discretion of doctors. Combined, the society’s positions seem to open the door to any kind of measurement, perhaps so long as the test is justified for a medical reason.

Hsu has previously said he thinks intelligence is “the most interesting phenotype,” or trait, of all. But when he tried his prediction methods on how far along in school the 500,000 British subjects from the U.K. Biobank had gotten (years of schooling is a rough proxy for IQ), he found that DNA couldn’t predict educational attainment nearly as well as it could predict height.

Yet DNA did explain some of the difference. Daniel Benjamin, a geno-economist at the University of Southern California, says that for large populations, gene scores are already as predictive of educational attainment as whether someone grew up in a rich or poor family. He adds that the accuracy of the scores has been steadily improving. Scoring embryos for high IQ, however, would be “premature” and “ethically contentious,” he says.

Hsu’s prediction is that “billionaires and Silicon Valley types” will be the early adopters of embryo selection technology, becoming among the first “to do IVF even though they don’t need IVF.” As they start producing fewer unhealthy children, and more exceptional ones, the rest of society could follow suit.

“I fully predict it will be possible,” says Hsu of selecting embryos with higher IQ scores. “But we’ve said that we as a company are not going to do it. It’s a difficult issue, like nuclear weapons or gene editing. There will be some future debate over whether this should be legal, or made illegal. Countries will have referendums on it.”

This article was originally published by:
https://www.technologyreview.com/s/609204/eugenics-20-were-at-the-dawn-of-choosing-embryos-by-health-height-and-more/


Mapping connections of single neurons using a holographic light beam

November 18, 2017

Controlling single neurons using optogenetics (credit: the researchers)

Researchers at MIT and Paris Descartes University have developed a technique for precisely mapping connections of individual neurons for the first time by triggering them with holographic laser light.

The technique is based on optogenetics, which uses light to stimulate or silence neurons that have been genetically modified to express light-sensitive proteins called “opsins.” Current optogenetics techniques can’t isolate individual neurons (and their connections) because the light strikes a relatively large area, stimulating the axons and dendrites of other neurons at the same time (and those neurons may have different functions, even when they are nearby).

The new technique stimulates only the soma (body) of the neuron, not its connections. To achieve that, the researchers combined two new advances: an optimized holographic light-shaping microscope* and a localized, more powerful opsin protein called CoChR.

Two-photon computer-generated holography (CGH) was used to create three-dimensional sculptures of light that envelop only a target cell, using a conventional pulsed laser coupled with a widefield epifluorescence imaging system. (credit: Or A. Shemesh et al./Nature Nanoscience)

The researchers used an opsin protein called CoChR, which generates a very strong electric current in response to light, and fused it to a small protein that directs the opsin into the cell bodies of neurons and away from axons and dendrites, which extend from the neuron body, forming “somatic channelrhodopsin” (soCoChR). This new opsin enabled photostimulation of individual cells (regions of stimulation are highlighted by magenta circles) in mouse cortical brain slices with single-cell resolution and with less than 1 millisecond temporal (time) precision — achieving connectivity mapping on intact cortical circuits without crosstalk between neurons. (credit: Or A. Shemesh et al./Nature Nanoscience)

In the new study, by combining this approach with new “somatic channelrhodopsin” opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also great control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability that is less than one millisecond, even when the cell is stimulated many times in a row.

“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research. Boyden is co-senior author with Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, of a study that appears in the Nov. 13 issue of Nature Neuroscience.

Mapping neural connections in real time

Using this technique, the researchers were able to stimulate single neurons in brain slices and then measure the responses from cells that are connected to that cell. This may pave the way for more precise diagramming of the connections of the brain, and analyzing how those connections change in real time as the brain performs a task or learns a new skill.

Optogenetics was co-developed in 2005 by Ed Boyden (credit: MIT)

One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.

“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”

As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.

The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.

* Traditional holography is based on reproducing, with light, the shape of a specific object, in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer without the need of any original object. Combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.
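
The researchers’ own light-shaping pipeline isn’t spelled out here, but the flavor of computer-generated holography can be conveyed with the classic Gerchberg-Saxton algorithm, a standard way to compute the phase pattern a spatial light modulator should display so that a laser focuses into a desired intensity pattern. This is a generic, minimal NumPy sketch under that assumption, not the method used in the study.

```python
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=50, seed=0):
    """Compute a phase-only hologram whose far-field intensity approximates the target."""
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)

    # Start from the target amplitude with a random phase in the sample plane.
    field = target_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    slm_phase = np.zeros_like(target_amp)

    for _ in range(n_iter):
        # Propagate back to the hologram (SLM) plane and keep only the phase,
        # since the modulator shapes phase under uniform illumination.
        slm_phase = np.angle(np.fft.ifft2(field))
        # Propagate a unit-amplitude beam carrying that phase to the sample plane.
        sample_field = np.fft.fft2(np.exp(1j * slm_phase))
        # Enforce the desired amplitude there, keep the computed phase.
        field = target_amp * np.exp(1j * np.angle(sample_field))

    return slm_phase

# Toy target: light two small square spots on a 128 x 128 grid.
target = np.zeros((128, 128))
target[30:36, 30:36] = 1.0
target[90:96, 80:86] = 1.0
phase_mask = gerchberg_saxton(target)
print(phase_mask.shape)  # (128, 128)
```

Two-photon excitation then confines the response even further along the beam axis, which is what lets the sculpted light hug a single cell body.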


Abstract of Temporally precise single-cell-resolution optogenetics

Optogenetic control of individual neurons with high temporal precision within intact mammalian brain circuitry would enable powerful explorations of how neural circuits operate. Two-photon computer-generated holography enables precise sculpting of light and could in principle enable simultaneous illumination of many neurons in a network, with the requisite temporal precision to simulate accurate neural codes. We designed a high-efficacy soma-targeted opsin, finding that fusing the N-terminal 150 residues of kainate receptor subunit 2 (KA2) to the recently discovered high-photocurrent channelrhodopsin CoChR restricted expression of this opsin primarily to the cell body of mammalian cortical neurons. In combination with two-photon holographic stimulation, we found that this somatic CoChR (soCoChR) enabled photostimulation of individual cells in mouse cortical brain slices with single-cell resolution and <1-ms temporal precision. We used soCoChR to perform connectivity mapping on intact cortical circuits.

Peter Diamandis Thinks We’re Evolving Toward “Meta-Intelligence”

November 18, 2017

From Natural Selection to Intelligent Direction

In the next 30 years, humanity is in for a transformation the likes of which we’ve never seen before—and XPRIZE Foundation founder and chairman Peter Diamandis believes that this will give birth to a new species. Diamandis admits that this might sound too far out there for most people. He is convinced, however, that we are evolving towards what he calls “meta-intelligence,” and today’s exponential rate of growth is one clear indication.

In an essay for Singularity Hub, Diamandis outlines the transformative stages in the multibillion-year pageant of evolution, and takes note of what the recent increasing “temperature” of evolution—a consequence of human activity—may mean for the future. The story, in a nutshell, is this: early prokaryotic life appears about 3.5 billion years ago (bya), representing perhaps a symbiosis of separate metabolic and replicative mechanisms of “life”; at 2.5 bya, eukaryotes emerge as composite organisms incorporating biological “technology” (other living things) within themselves; at 1.5 bya, multicellular metazoans appear, taking the form of eukaryotes that are yoked together in cooperative colonies; and at 400 million years ago, vertebrate fish species emerge onto land to begin life’s adventure beyond the seas.

“Today, at a massively accelerated rate—some 100 million times faster than the steps I outlined above—life is undergoing a similar evolution,” Diamandis writes. He thinks we’ve moved from a simple Darwinian evolution via natural selection into evolution by intelligent direction.

Credits: Richard Bizley/SPL

“I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a ‘Meta-Intelligence,’ a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions,” he writes.

Change is Coming

Diamandis outlines the next stages of humanity’s evolution in four steps, each a parallel to his four evolutionary stages of life on Earth. There are four driving forces behind this evolution: our interconnected or wired world, the emergence of brain-computer interface (BCI), the emergence of artificial intelligence (AI), and man reaching for the final frontier of space.

In the next 30 years, humanity will move from the first stage—where we are today—to the fourth stage. We will go from simple humans dependent on one another to humans who incorporate technology into their bodies, allowing for more efficient use of information and energy. This is already happening today.

The third stage is a crucial point.

Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.

This brings to mind another futuristic event that many are eagerly anticipating: the technological singularity. “Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence,” said notable futurist Ray Kurzweil, explaining the singularity.

Credits: Lovelace Turing

“It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge.” Kurzweil predicts that this will happen by 2045—within Diamandis’ evolutionary timeline. “The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.”

The fourth and final stage marks humanity’s evolution to becoming a multiplanetary species. “Our journey to the moon, Mars, asteroids and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago,” Diamandis explains.

Buckle up: we have an exciting future ahead of us.

This article was originally published by:
https://futurism.com/peter-diamandis-thinks-were-evolving-toward-meta-intelligence/

Google’s AI Wizard Unveils a New Twist on Neural Networks

November 18, 2017

If you want to blame someone for the hoopla around artificial intelligence, 69-year-old Google researcher Geoff Hinton is a good candidate.

The droll University of Toronto professor jolted the field onto a new trajectory in October 2012. With two grad students, Hinton showed that an unfashionable technology he’d championed for decades called artificial neural networks permitted a huge leap in machines’ ability to understand images. Within six months, all three researchers were on Google’s payroll. Today neural networks transcribe our speech, recognize our pets, and fight our trolls.

But Hinton now belittles the technology he helped bring to the world. “I think the way we’re doing computer vision is just wrong,” he says. “It works better than anything else at present but that doesn’t mean it’s right.”

In its place, Hinton has unveiled another “old” idea that might transform how computers see—and reshape AI. That’s important because computer vision is crucial to applications such as self-driving cars and software that plays doctor.

Late last week, Hinton released two research papers that he says prove out an idea he’s been mulling for almost 40 years. “It’s made a lot of intuitive sense to me for a very long time, it just hasn’t worked well,” Hinton says. “We’ve finally got something that works well.”

Hinton’s new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. In one of the papers posted last week, Hinton’s capsule networks matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits.

In the second, capsule networks almost halved the best previous error rate on a test that challenges software to recognize toys such as trucks and cars from different angles. Hinton has been working on his new technique with colleagues Sara Sabour and Nicholas Frosst at Google’s Toronto office.

Capsule networks aim to remedy a weakness of today’s machine-learning systems that limits their effectiveness. Image-recognition software in use today by Google and others needs a large number of example photos to learn to reliably recognize objects in all kinds of situations. That’s because the software isn’t very good at generalizing what it learns to new scenarios, for example understanding that an object is the same when seen from a new viewpoint.

To teach a computer to recognize a cat from many angles, for example, could require thousands of photos covering a variety of perspectives. Human children don’t need such explicit and extensive training to learn to recognize a household pet.

Hinton’s idea for narrowing the gulf between the best AI systems and ordinary toddlers is to build a little more knowledge of the world into computer-vision software. Capsules—small groups of crude virtual neurons—are designed to track different parts of an object, such as a cat’s nose and ears, and their relative positions in space. A network of many capsules can use that awareness to understand when a new scene is in fact a different view of something it has seen before.
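
The two papers describe an iterative “routing-by-agreement” procedure for deciding which low-level capsules feed which higher-level ones. The sketch below is a bare-bones NumPy rendering of that idea as I understand it from the published description; the shapes and iteration count are illustrative, and a real capsule network wraps this step inside trained convolutional layers.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Scale vectors to length < 1 while preserving direction (capsule nonlinearity)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def route_by_agreement(u_hat, n_iter=3):
    """Dynamic routing between capsule layers.

    u_hat has shape (n_lower, n_higher, dim): each lower-level capsule's
    prediction of each higher-level capsule's output vector.
    Returns higher-level capsule outputs of shape (n_higher, dim).
    """
    n_lower, n_higher, _ = u_hat.shape
    b = np.zeros((n_lower, n_higher))           # routing logits, start neutral
    v = None
    for _ in range(n_iter):
        c = softmax(b, axis=1)                  # coupling coefficients per lower capsule
        s = np.einsum('ij,ijd->jd', c, u_hat)   # weighted sum of predictions
        v = squash(s)                           # higher-level capsule outputs
        b += np.einsum('ijd,jd->ij', u_hat, v)  # strengthen routes whose predictions agree
    return v

# Toy usage with hypothetical sizes: 8 part-capsules voting on 3 object-capsules
# with 4-dimensional pose vectors.
rng = np.random.default_rng(0)
votes = rng.standard_normal((8, 3, 4))
print(route_by_agreement(votes).shape)  # (3, 4)
```

Capsules whose predictions agree about an object’s pose reinforce one another, which is how the network decides that a nose and two ears in the right spatial relationship add up to the same cat seen from a new angle.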

Hinton formed his intuition that vision systems need such an inbuilt sense of geometry in 1979, when he was trying to figure out how humans use mental imagery. He first laid out a preliminary design for capsule networks in 2011. The fuller picture released last week was long anticipated by researchers in the field. “Everyone has been waiting for it and looking for the next great leap from Geoff,” says Kyunghyun Cho, a professor at NYU who works on image recognition.

It’s too early to say how big a leap Hinton has made—and he knows it. The AI veteran segues from quietly celebrating that his intuition is now supported by evidence, to explaining that capsule networks still need to be proven on large image collections, and that the current implementation is slow compared to existing image-recognition software.

Hinton is optimistic he can address those shortcomings. Others in the field are also hopeful about his long-maturing idea.

Roland Memisevic, cofounder of image-recognition startup Twenty Billion Neurons, and a professor at University of Montreal, says Hinton’s basic design should be capable of extracting more understanding from a given amount of data than existing systems. If proven out at scale, that could be helpful in domains such as healthcare, where image data to train AI systems is much scarcer than the large volume of selfies available around the internet.

In some ways, capsule networks are a departure from a recent trend in AI research. One interpretation of the recent success of neural networks is that humans should encode as little knowledge as possible into AI software, and instead make them figure things out for themselves from scratch. Gary Marcus, a professor of psychology at NYU who sold an AI startup to Uber last year, says Hinton’s latest work represents a welcome breath of fresh air. Marcus argues that AI researchers should be doing more to mimic how the brain has built-in, innate machinery for learning crucial skills like vision and language. “It’s too early to tell how far this particular architecture will go, but it’s great to see Hinton breaking out of the rut that the field has seemed fixated on,” Marcus says.

UPDATED, Nov. 2, 12:55 PM: This article has been updated to include the names of Geoff Hinton’s co-authors.

This article was originally published by:
https://www.wired.com/story/googles-ai-wizard-unveils-a-new-twist-on-neural-networks/