Canada is testing a basic income to discover what impact the policy has on unemployed people and those on low incomes.
The province of Ontario is planning to give 4,000 citizens thousands of dollars a month and assess how it affects their health, wellbeing, earnings and productivity.
It is among a number of regions and countries across the globe that are now piloting the scheme, which sees residents given a certain amount of money each month regardless of whether or not they are in work.
Although it is too early for the Ontario pilot to deliver clear results, some of those involved have already reported a significant change.
One recipient, Tim Button, said the monthly payments were making a “huge difference” to his life. He worked as a security guard until a fall from a roof left him unable to continue in the job.
“It takes me out of depression”, he told the Associated Press. “I feel more sociable.”
The basic income payments have boosted his income by almost 60 per cent and have allowed him to make plans to visit his family for Christmas for the first time in years. He has also been able to buy healthier food, see a dentist and look into taking an educational course to help him find work.
Under the Ontario experiment, unemployed people or those with a low income can receive up to C$17,000 (£9,900) and are allowed to also keep half of what they earn at work, meaning there is still an incentive to work. Couples are entitled to C$24,000 (£13,400).
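The arithmetic behind that incentive can be sketched in a few lines of Python. This assumes the commonly reported design, in which the maximum payment is reduced by 50 cents for every dollar earned; the exact formula and cut-offs are illustrative assumptions, not the pilot’s official rules.

```python
# Illustrative sketch of the reported Ontario design: a maximum annual payment
# reduced by $0.50 for every dollar earned at work. The formula and figures
# are assumptions for illustration, not the pilot's official rules.

MAX_SINGLE = 17_000  # approximate annual maximum for a single person (C$)
MAX_COUPLE = 24_000  # approximate annual maximum for a couple (C$)

def annual_payment(earned_income: float, couple: bool = False) -> float:
    """Basic income payment after the assumed 50% earnings clawback."""
    ceiling = MAX_COUPLE if couple else MAX_SINGLE
    return max(0.0, ceiling - 0.5 * earned_income)

for earnings in (0, 10_000, 20_000, 34_000):
    total = earnings + annual_payment(earnings)
    print(f"earned C${earnings:>6,} -> total income C${total:,.0f}")
```

Under this reading, total income always rises with earnings, which is the work incentive the designers point to.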
If the trial proves successful, the scheme could be expanded to more of the province’s 14.2 million residents and may inspire more regions of Canada and other nations to adopt the policy.
Support for a basic income has grown in recent years, fuelled in part by fears about the impact that new technology will have on jobs. As machines and robots are able to complete an increasing number of tasks, attention has turned to how people will live when there are not enough jobs to go round.
Ontario’s Premier, Kathleen Wynne, said this was a major factor in the decision to trial a basic income in the province.
She said: “I see it on a daily basis. I go into a factory and the floor plant manager can tell me where there were 20 people and there is one machine. We need to understand what it might look like if there is, in fact, the labour disruption that some economists are predicting.”
Ontario officials have found that many people are reluctant to sign up to the scheme, fearing there is a catch or that they will be left without money once the pilot finishes.
Many of those who are receiving payments, however, say their lives have already been changed for the better.
Dave Cherkewski, 46, said the extra C$750 (£436) a month he receives has helped him to cope with the mental illness that has kept him out of work since 2002.
“I’ve never been better after 14 years of living in poverty,” he said.
He hopes to soon find work helping other people with mental health challenges.
He said: “With basic income I will be able to clarify my dream and actually make it a reality, because I can focus all my effort on that and not worry about, ‘Well, I need to pay my $520 rent, I need to pay my $50 cellphone, I need to eat and do other things’.”
Finland is also trialling a basic income, as are the state of Hawaii, Oakland in California and the Dutch city of Utrecht.
As the December Federal Reserve (Fed) meeting nears, discussions and speculation about the precise timing of Fed liftoff are certain to take center stage.
But while I’ve certainly weighed in on this debate many times, I believe it’s just one example of a topic that receives far too much attention from investors and market watchers alike.
The Fed has been abundantly clear that the forthcoming rate hiking cycle, likely to begin this month, will be incredibly gradual and sensitive to how economic data evolves. That means the central bank is likely to be extraordinarily cautious about derailing the recovery, and rates will likely remain historically low for an extended period of time. In other words, when the Fed does begin rate normalization, not much is likely to change.
Shifting the Focus to Other Economic Trends
In contrast, there are a number of important longer-term trends more worthy of our focus, as they’re likely to have a bigger, longer-sustaining impact on markets than the Fed’s first rate move. One such market influence that I believe should be getting more attention: the advances in technology happening all around us, innovations that are already having a huge disruptive influence on the economy and markets. These three charts help explain why.
1. ADOPTION OF TECHNOLOGY IN THE U.S., 1900 TO PRESENT
As the chart above shows, people in the U.S. today are adopting new technologies, including tablets and smartphones, at the swiftest pace we’ve seen since the advent of the television. However, while television arguably detracted from U.S. productivity, today’s advances in technology are generally geared toward greater efficiency at lower costs. Indeed, when you take into account technology’s downward influence on price, U.S. consumption and productivity figures look much better than headline numbers would suggest.
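To see why the deflator matters, here is a toy calculation in Python, with invented numbers: the same nominal growth translates into very different “real” growth depending on how much of technology’s price decline the official price index captures.

```python
# Toy example (all numbers invented): identical nominal consumption growth
# looks stronger in real terms if the deflator captures more of the
# technology-driven decline in quality-adjusted prices.

nominal_growth = 0.03               # 3% growth in nominal consumption
headline_deflator = 0.020           # official inflation estimate
quality_adjusted_deflator = 0.005   # if tech price declines were fully counted

real_growth_headline = (1 + nominal_growth) / (1 + headline_deflator) - 1
real_growth_adjusted = (1 + nominal_growth) / (1 + quality_adjusted_deflator) - 1

print(f"real growth with headline deflator:         {real_growth_headline:.2%}")
print(f"real growth with quality-adjusted deflator: {real_growth_adjusted:.2%}")
```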
2. PERCENTAGE OF TOP 1500 U.S. STOCKS WITH ZERO INVENTORY, THROUGH Q2 2015
Meanwhile, on the labor market front, greater utilization of technology in business has placed a premium on high-skilled workers who can navigate and innovate alongside that technology. As such, over the past 15 years, we’ve seen considerably faster jobs growth in skilled positions than in lesser skilled ones, as shown in the chart above.
This shift reflects some of the significant influences of technological innovation on the labor market: highly skilled labor is rewarded for compatibility with new technologies and is less likely to be replaced by automation or robotics, while the opposite is true for lower-skilled workers, a trend that has kept job growth from being even more robust. This skills divide also highlights the need for fiscal policies that emphasize education and retraining. In my view, the adoption of such policies will ultimately be much more important to the trajectory of the U.S. labor market and economy than whether the Fed moves away from emergency-rate levels this year or next.
Above all, if there’s one common theme in all three of these charts, it’s this: Technology is advancing so fast that traditional economic metrics haven’t kept up. This has serious implications. It helps to explain widespread misconceptions about the state of the U.S. economy, including the assertion that we reside in a period of low productivity growth, despite the many remarkable advances we see around us. It also makes monetary policy evolution more difficult, and is one reason why I’ve found recent policy debates somewhat myopic and distorted from reality.
So, let’s all make this New Year’s resolution: Instead of focusing so much on the Fed, let’s give some attention to how technology is changing the entire world in ways never before witnessed, and let’s focus on education and training policies that can help our workforce adapt. Such initiatives are more important and durable than policies designed to distort real rates of interest, and they should have fewer unintended negative economic consequences.
Nathan Treff was diagnosed with type 1 diabetes at 24. It’s a disease that runs in families, but it has complex causes. More than one gene is involved. And the environment plays a role too.
So you don’t know who will get it. Treff’s grandfather had it, and lost a leg. But Treff’s three young kids are fine, so far. He’s crossing his fingers they won’t develop it later.
Now Treff, an in vitro fertilization specialist, is working on a radical way to change the odds. Using a combination of computer models and DNA tests, the startup company he’s working with, Genomic Prediction, thinks it has a way of predicting which IVF embryos in a laboratory dish would be most likely to develop type 1 diabetes or other complex diseases. Armed with such statistical scorecards, doctors and parents could huddle and choose to avoid embryos with failing grades.
IVF clinics already test the DNA of embryos to spot rare diseases, like cystic fibrosis, caused by defects in a single gene. But these “preimplantation” tests are poised for a dramatic leap forward as it becomes possible to peer more deeply at an embryo’s genome and create broad statistical forecasts about the person it would become.
The advance is occurring, say scientists, thanks to a growing flood of genetic data collected from large population studies. As statistical models known as predictors gobble up DNA and health information about hundreds of thousands of people, they’re getting more accurate at spotting the genetic patterns that foreshadow disease risk. But they have a controversial side, since the same techniques can be used to project the eventual height, weight, skin tone, and even intelligence of an IVF embryo.
In addition to Treff, who is the company’s chief scientific officer, the founders of Genomic Prediction are Stephen Hsu, a physicist who is vice president for research at Michigan State University, and Laurent Tellier, a Danish bioinformatician who is CEO. Both Hsu and Tellier have been closely involved with a project in China that aims to sequence the genomes of mathematical geniuses, hoping to shed light on the genetic basis of IQ.
The company’s plans rely on a tidal wave of new knowledge showing how small genetic differences can add up to put one person, but not another, at high odds for diabetes, a neurotic personality, or a taller or shorter height. Already, such “polygenic risk scores” are used in direct-to-consumer gene tests, such as reports from 23andMe that tell customers their genetic chance of being overweight.
For adults, risk scores are little more than a novelty or a source of health advice they can ignore. But if the same information is generated about an embryo, it could lead to existential consequences: who will be born, and who stays in a laboratory freezer.
“I remind my partners, ‘You know, if my parents had this test, I wouldn’t be here,’” says Treff, a prize-winning expert on diagnostic technology who is the author of more than 90 scientific papers.
Genomic Prediction was founded this year and has raised funds from venture capitalists in Silicon Valley, though it declines to say who they are. Tellier, whose inspiration is the science fiction film Gattaca, says the company plans to offer reports to IVF doctors and parents identifying “outliers”—those embryos whose genetic scores put them at the wrong end of a statistical curve for disorders such as diabetes, late-life osteoporosis, schizophrenia, and dwarfism, depending on whether models for those problems prove accurate.
The company’s concept, which it calls expanded preimplantation genetic testing, or ePGT, would effectively add a range of common disease risks to the menu of rare ones already available, which it also plans to test for. Its promotional material uses a picture of a mostly submerged iceberg to get the idea across. “We believe it will become a standard part of the IVF process,” says Tellier, just as a test for Down syndrome is a standard part of pregnancy.
Some experts contacted by MIT Technology Review said they believed it’s premature to introduce polygenic scoring technology into IVF clinics—though perhaps not by very much. Matthew Rabinowitz, CEO of the prenatal-testing company Natera, based in California, says he thinks predictions obtained today could be “largely misleading” because DNA models don’t function well enough. But Rabinowitz agrees that the technology is coming.
“You are not going to stop the modeling in genetics, and you are not going to stop people from accessing it,” he says. “It’s going to get better and better.”
Testing embryos for disease risks, including risks for diseases that develop only late in life, is considered ethically acceptable by U.S. fertility doctors. But the new DNA scoring models mean parents might be able to choose their kids on the basis of traits like IQ or adult weight. That’s because, just like type 1 diabetes, these traits are the result of complex genetic influences the predictor algorithms are designed to find.
“It’s the camel’s nose under the tent. Because if you are doing it for something more serious, then it’s trivially easy to look for anything else,” says Michelle Meyer, a bioethicist at the Geisinger Health System who analyzes issues in reproductive genetics. “Here is the genomic dossier on each embryo. And you flip through the book.” Imagine picking the embryo most likely to get into Harvard like Mom, or to be tall like Dad.
For Genomic Prediction, a tiny startup based out of a tech incubator in New Jersey, such questions will be especially sharply drawn. That is because of Hsu’s long-standing interest in genetic selection for superior intelligence.
In 2014, Hsu authored an essay titled “Super-Intelligent Humans Are Coming,” in which he argued that selecting embryos for intelligence could boost the resulting child’s IQ by 15 points.
Genomic Prediction says it will only report diseases—that is, identify those embryos it thinks would develop into people with serious medical problems. Even so, on his blog and in public statements, Hsu has for years been developing a vision that goes far beyond that.
“Suppose I could tell you embryo four is going to be the tallest, embryo three is going to be the smartest, embryo two is going to be very antisocial. Suppose that level of granularity was available in the reports,” he told the conservative radio and YouTube personality Stefan Molyneux this spring. “That is the near-term future that we as a civilization face. This is going to be here.”
The fuel for the predictive models is a deluge of new data, most recently genetic readouts and medical records for 500,000 middle-aged Britons that were released in July by the U.K. Biobank, a national precision-medicine project in that country.
The data trove included, for each volunteer, a map of about 800,000 single-nucleotide polymorphisms, or SNPs—points where their DNA differs slightly from another person’s. The release caused a pell-mell rush by geneticists to update their calculations about exactly how much of human disease, or even routine behaviors like bread consumption, these genetic differences could explain.
Armed with the U.K. data, Hsu and Tellier claimed a breakthrough. For one easily measured trait, height, they used machine-learning techniques to create a predictor that performs strikingly well. They reported that the model could, for the most part, predict people’s height from their DNA data to within three or four centimeters.
Height is currently the easiest trait to predict. It’s determined mostly by genes, and it’s always recorded in population databases. But Tellier says genetic databases are “rapidly approaching” the size needed to make accurate predictions about other human features, including risk for diseases whose true causes aren’t even known.
Tellier says Genomic Prediction will zero in on disease traits for which the predictors already perform fairly well, or will soon. Those include autoimmune disorders like the illness Treff suffers from. In those conditions, a smaller set of genes dominates the predictions, sometimes making them more reliable.
A report from Germany in 2014, for instance, found it was possible to distinguish fairly accurately, from a polygenic DNA score alone, between a person with type 1 diabetes and a person without it. While the scores aren’t perfectly accurate, consider how they might influence a prospective parent. On average, children of a man with type 1 diabetes have a one in 17 chance of developing the ailment. Picking the best of several embryos made in an IVF clinic, even with an error-prone predictor, could lower the odds.
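The logic of that last sentence can be made concrete with a small simulation. The Python sketch below uses invented parameters and a simple liability-threshold model that is not taken from Genomic Prediction; it picks the lowest-scoring of five embryos using a deliberately noisy predictor and compares the resulting disease rate with the 1-in-17 baseline.

```python
# Monte Carlo sketch of the selection argument: even a noisy polygenic
# predictor can lower average risk if the lowest-scoring of several embryos
# is chosen. All parameters and the liability-threshold model are
# illustrative assumptions, not Genomic Prediction's method.
import random
from statistics import NormalDist

random.seed(0)
BASELINE_RISK = 1 / 17    # reported risk for children of an affected father
N_EMBRYOS = 5             # embryos available to choose among
PREDICTOR_NOISE = 1.0     # higher = less accurate predictor
TRIALS = 100_000

# Disease occurs when a standard-normal genetic liability exceeds a threshold
# calibrated so a randomly chosen embryo has the 1-in-17 baseline risk.
THRESHOLD = NormalDist().inv_cdf(1 - BASELINE_RISK)

def lowest_scoring_embryo_gets_disease() -> bool:
    embryos = []
    for _ in range(N_EMBRYOS):
        liability = random.gauss(0.0, 1.0)
        noisy_score = liability + random.gauss(0.0, PREDICTOR_NOISE)
        embryos.append((noisy_score, liability))
    _, chosen_liability = min(embryos)   # pick the lowest predicted risk
    return chosen_liability > THRESHOLD

selected_risk = sum(lowest_scoring_embryo_gets_disease() for _ in range(TRIALS)) / TRIALS
print(f"baseline risk ~ {BASELINE_RISK:.3f}, risk after selection ~ {selected_risk:.3f}")
```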
In the case of height, Genomic Prediction hopes to use the model to help identify embryos that would grow into adults shorter than 4’10”, the medical definition of dwarfism, says Tellier. There are many physical and psychological disadvantages to being so short. Eventually the company could also have the ability to identify intellectual problems, such as embryos with a predicted IQ of less than 70.
The company doesn’t intend to give out raw trait scores to parents, only to flag embryos likely to be abnormal. That is because the product has to be “ethically defensible,” says Hsu: “We would only reveal the negative outlier state. We don’t report, ‘This guy is going to be in the NBA.’”
Some scientists doubt the scores will prove useful at picking better people from IVF dishes. Even if they’re accurate on the average, for individuals there’s no guarantee of pinpoint precision. What’s more, environment has as big an impact on most traits as genes do. “There is a high probability that you will get it wrong—that would be my concern,” says Manuel Rivas, a professor at Stanford University who studies the genetics of Crohn’s disease. “If someone is using that information to make decisions about embryos, I don’t know what to make of it.”
Efforts to introduce this type of statistical scoring into reproduction have, in the past, drawn criticism. In 2013, 23andMe provoked outrage when it won a patent on the idea of drop-down menus parents could use to pick sperm or egg donors—say, to try to get a specific eye color. The company, funded by Google, quickly backpedaled.
But since then, polygenic scores have become a routine aspect of novelty DNA tests. A company called HumanCode sells a $199 test online that uses SNP scores to tell two people about how tall their kids might be. In the dairy cattle industry, polygenic tests are widely used to rate young animals for how much milk they’ll produce.
“At a broad level, our understanding of complex traits has evolved. It’s not that there are a few genes contributing to complex traits; it’s tens, or thousands, or even all genes,” says Meyer, the Geisinger professor. “That has led to polygenic risk scores. It’s many variants, each with small contributions of their own, but which have a significant contribution together. You add them up.” In his predictor for height, Hsu eventually made use of 20,000 variants to guess how tall each person was.
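In code, the “add them up” step is little more than a dot product: multiply how many copies of each effect allele a person carries by that variant’s estimated effect, then sum. The sketch below uses invented genotypes and effect sizes purely to show the shape of the calculation; it is not Hsu’s model.

```python
# Minimal sketch of a polygenic score: a weighted sum over many variants.
# Genotypes and effect sizes here are randomly generated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_variants = 3, 20_000   # ~20,000 variants, as in the height predictor

# Genotypes: 0, 1 or 2 copies of the effect allele at each variant.
genotypes = rng.integers(0, 3, size=(n_people, n_variants))

# Per-variant effect sizes, each individually tiny.
effect_sizes = rng.normal(0.0, 0.01, size=n_variants)

# The polygenic score is the dot product of genotypes and effect sizes.
scores = genotypes @ effect_sizes
print(scores)
```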
Around the world, a million couples undergo IVF each year; in the U.S., test-tube babies account for 1 percent of births. Preimplantation genetic diagnosis, or PGD, has been part of the technology since the 1990s. In that procedure, a few cells are plucked from a days-old embryo growing in a laboratory so they can be tested.
Until now, doctors have used PGD to detect embryos with major abnormalities, such as missing chromosomes, as well as those with “single gene” defects. Parents who carry the defective gene that causes Huntington’s disease, for instance, can use embryo tests to avoid having a child with the fatal brain ailment.
The obstacle to polygenic tests has been that with so few cells, it’s been difficult to get the broad, accurate view of an embryo’s genome necessary to perform the needed calculations. “It’s very hard to make reliable measurements on that little DNA,” says Rabinowitz, the Natera CEO.
Tellier says Genomic Prediction has developed an improved method for analyzing embryonic DNA, which he says will first be used to improve on traditional PGD, combining many single-gene tests into one. He says the same technique will permit it to collect polygenic scores on embryos, although the company did not describe the method in detail. But other scientists have already demonstrated ways to overcome the accuracy barrier.
In 2015, a team led by Rabinowitz and Jay Shendure of the University of Washington did it by sequencing in detail the genomes of two parents undergoing IVF. That let them infer the embryo’s genome sequence, even though the embryo test itself was no more accurate than before. When the babies were born, they found they’d been right.
“We do have the technology to reconstruct the genome of an embryo and create a polygenic model,” says Rabinowitz, whose publicly traded company is worth about $600 million, and who says he has been mulling whether to enter the embryo-scoring business. “The problem is that the models have not quite been ready for prime time.”
That’s because despite Hsu’s success with height, the scoring algorithms have significant limitations. One is that they’re built using data mostly from Northern Europeans. That means they may not be useful for people from Asia or Africa, where the pattern of SNPs is different, or for people of mixed ancestry. Even their performance for specific families of European background can’t be taken for granted unless the procedure is carefully tested in a clinical study, something that’s never been done, says Akash Kumar, a Stanford resident physician who was lead author of the Natera study.
Kumar, who treats young patients with rare disorders, says the genetic predictors raise some “big issues.” One is that the sheer amount of genetic data becoming available could make it temptingly easy to assess nonmedical traits. “We’ve seen such a crazy change in the number of people we are able to study,” he says. “Not many have schizophrenia, but they all have a height and a body-mass index. So the number of people you can use to build the trait models is much larger. It’s a very unique place to be, thinking what we should do with this technology.”
This week, Genomic Prediction manned a booth at the annual meeting of the American Society for Reproductive Medicine. That organization, which represents fertility doctors and scientists, has previously said it thinks testing embryos for late-life conditions, like Alzheimer’s, would be “ethically justified.” It cited, among other reasons, the “reproductive liberty” of parents.
The society has been more ambivalent about choosing the sex of embryos (something that conventional PGD allows), leaving it to the discretion of doctors. Combined, the society’s positions seem to open the door to any kind of measurement, perhaps so long as the test is justified for a medical reason.
Hsu has previously said he thinks intelligence is “the most interesting phenotype,” or trait, of all. But when he tried his predictor to see what it could say about how far along in school the 500,000 British subjects from the U.K. Biobank had gotten (years of schooling is a proxy for IQ), he found that DNA couldn’t predict it nearly as well as it could predict height.
Yet DNA did explain some of the difference. Daniel Benjamin, a geno-economist at the University of Southern California, says that for large populations, gene scores are already as predictive of educational attainment as whether someone grew up in a rich or poor family. He adds that the accuracy of the scores has been steadily improving. Scoring embryos for high IQ, however, would be “premature” and “ethically contentious,” he says.
Hsu’s prediction is that “billionaires and Silicon Valley types” will be the early adopters of embryo selection technology, becoming among the first “to do IVF even though they don’t need IVF.” As they start producing fewer unhealthy children, and more exceptional ones, the rest of society could follow suit.
“I fully predict it will be possible,” says Hsu of selecting embryos with higher IQ scores. “But we’ve said that we as a company are not going to do it. It’s a difficult issue, like nuclear weapons or gene editing. There will be some future debate over whether this should be legal, or made illegal. Countries will have referendums on it.”
Controlling single neurons using optogenetics (credit: the researchers)
Researchers at MIT and Paris Descartes University have developed a technique for precisely mapping connections of individual neurons for the first time by triggering them with holographic laser light.
The technique is based on optogenetics: using light to stimulate or silence neurons that have been genetically modified to carry light-sensitive proteins called “opsins.” Current optogenetics techniques can’t isolate individual neurons (and their connections) because the light strikes a relatively large area, stimulating the axons and dendrites of other neurons simultaneously (and these neurons may have different functions, even when nearby).
The new technique stimulates only the soma (body) of the neuron, not its connections. To achieve that, the researchers combined two new advances: an optimized holographic light-shaping microscope* and a localized, more powerful opsin protein called CoChR.
Two-photon computer-generated holography (CGH) was used to create three-dimensional sculptures of light that envelop only a target cell, using a conventional pulsed laser coupled with a widefield epifluorescence imaging system. (credit: Or A. Shemesh et al./Nature Nanoscience)
The researchers used an opsin protein called CoChR, which generates a very strong electric current in response to light, and fused it to a small protein that directs the opsin into the cell bodies of neurons and away from axons and dendrites, which extend from the neuron body, forming “somatic channelrhodopsin” (soCoChR). This new opsin enabled photostimulation of individual cells in mouse cortical brain slices with single-cell resolution and with less than 1 millisecond temporal precision — achieving connectivity mapping on intact cortical circuits without crosstalk between neurons. (credit: Or A. Shemesh et al./Nature Nanoscience)
In the new study, by combining this approach with new “somatic channelrhodopsin” opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also great control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability that is less than one millisecond, even when the cell is stimulated many times in a row.
“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research. Boyden is co-senior author with Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, of a study that appears in the Nov. 13 issue of Nature Neuroscience.
Mapping neural connections in real time
Using this technique, the researchers were able to stimulate single neurons in brain slices and then measure the responses from cells that are connected to that cell. This may pave the way for more precise diagramming of the connections of the brain, and analyzing how those connections change in real time as the brain performs a task or learns a new skill.
Optogenetics was co-developed in 2005 by Ed Boyden (credit: MIT)
One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.
“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”
As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.
The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.
* Traditional holography is based on reproducing, with light, the shape of a specific object, in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer without the need for any original object. Combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.
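The footnote describes CGH only conceptually. One standard way to compute such a phase-only hologram is the Gerchberg-Saxton iteration, sketched below in Python with an invented target pattern; it is an illustrative method, not necessarily the one used by the MIT and Paris Descartes teams.

```python
# Gerchberg-Saxton sketch: compute a phase-only hologram whose far-field
# (modelled as a Fourier transform) approximates a target intensity pattern.
# The target and all parameters are invented for illustration.
import numpy as np

N = 256
rng = np.random.default_rng(0)

# Target intensity in the image plane: a small bright disc ("one cell").
y, x = np.mgrid[:N, :N]
target_amplitude = (((x - 96) ** 2 + (y - 140) ** 2) < 8 ** 2).astype(float)

# Uniform laser illumination at the hologram plane.
source_amplitude = np.ones((N, N))

phase = rng.uniform(0, 2 * np.pi, size=(N, N))   # initial random phase guess
for _ in range(50):
    # Propagate to the image plane.
    image_field = np.fft.fft2(source_amplitude * np.exp(1j * phase))
    # Keep the phase there, impose the desired target amplitude.
    image_field = target_amplitude * np.exp(1j * np.angle(image_field))
    # Propagate back and keep only the phase for the phase-only hologram.
    phase = np.angle(np.fft.ifft2(image_field))

# 'phase' is the computed hologram: displayed on a phase modulator and lit by
# the laser, it approximately reproduces the target spot in the image plane.
```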
Abstract of Temporally precise single-cell-resolution optogenetics
Optogenetic control of individual neurons with high temporal precision within intact mammalian brain circuitry would enable powerful explorations of how neural circuits operate. Two-photon computer-generated holography enables precise sculpting of light and could in principle enable simultaneous illumination of many neurons in a network, with the requisite temporal precision to simulate accurate neural codes. We designed a high-efficacy soma-targeted opsin, finding that fusing the N-terminal 150 residues of kainate receptor subunit 2 (KA2) to the recently discovered high-photocurrent channelrhodopsin CoChR restricted expression of this opsin primarily to the cell body of mammalian cortical neurons. In combination with two-photon holographic stimulation, we found that this somatic CoChR (soCoChR) enabled photostimulation of individual cells in mouse cortical brain slices with single-cell resolution and <1-ms temporal precision. We used soCoChR to perform connectivity mapping on intact cortical circuits.
In the next 30 years, humanity is in for a transformation the likes of which we’ve never seen before—and XPRIZE Foundation founder and chairman Peter Diamandis believes that this will give birth to a new species. Diamandis admits that this might sound too far out there for most people. He is convinced, however, that we are evolving towards what he calls “meta-intelligence,” and today’s exponential rate of growth is one clear indication.
In an essay for Singularity Hub, Diamandis outlines the transformative stages in the multi-billion year pageant of evolution, and takes note of what the recent increasing “temperature” of evolution—a consequence of human activity—may mean for the future. The story, in a nutshell, is this—early prokaryotic life appears about 3.5 billion years ago (bya), representing perhaps a symbiosis of separate metabolic and replicative mechanisms of “life;” at 2.5 bya, eukaryotes emerge as composite organisms incorporating biological “technology” (other living things) within themselves; at 1.5 bya, multicellular metazoans appear, taking the form of eukaryotes that are yoked together in cooperative colonies; and at 400 million years ago, vertebrate fish species emerge onto land to begin life’s adventure beyond the seas.
“Today, at a massively accelerated rate—some 100 million times faster than the steps I outlined above—life is undergoing a similar evolution,” Diamandis writes. He thinks we’ve moved from a simple Darwinian evolution via natural selection into evolution by intelligent direction.
“I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a ‘Meta-Intelligence,’ a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions,” he writes.
Change is Coming
Diamandis outlines the next stages of humanity’s evolution in four steps, each a parallel to his four evolutionary stages of life on Earth. There are four driving forces behind this evolution: our interconnected or wired world, the emergence of brain-computer interface (BCI), the emergence of artificial intelligence (AI), and man reaching for the final frontier of space.
In the next 30 years, humanity will move from the first stage—where we are today—to the fourth stage. From simple humans dependent on one another, humanity will incorporate technology into our bodies to allow for more efficient use of information and energy. This is already happening today.
The third stage is a crucial point.
Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.
Ray Kurzweil has predicted that machine intelligence will first match human intelligence. “It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge.” Kurzweil predicts that this will happen by 2045—within Diamandis’ evolutionary timeline. “The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.”
The fourth and final stage marks humanity’s evolution to becoming a multiplanetary species. “Our journey to the moon, Mars, asteroids and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago,” Diamandis explains.
Buckle up: we have an exciting future ahead of us.
If you want to blame someone for the hoopla around artificial intelligence, 69-year-old Google researcher Geoff Hinton is a good candidate.
The droll University of Toronto professor jolted the field onto a new trajectory in October 2012. With two grad students, Hinton showed that an unfashionable technology he’d championed for decades called artificial neural networks permitted a huge leap in machines’ ability to understand images. Within six months, all three researchers were on Google’s payroll. Today neural networks transcribe our speech, recognize our pets, and fight our trolls.
But Hinton now belittles the technology he helped bring to the world. “I think the way we’re doing computer vision is just wrong,” he says. “It works better than anything else at present but that doesn’t mean it’s right.”
In its place, Hinton has unveiled another “old” idea that might transform how computers see—and reshape AI. That’s important because computer vision is crucial to applications such as self-driving cars and software that can play doctor.
Late last week, Hinton released two research papers that he says prove out an idea he’s been mulling for almost 40 years. “It’s made a lot of intuitive sense to me for a very long time, it just hasn’t worked well,” Hinton says. “We’ve finally got something that works well.”
Hinton’s new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. In one of the papers posted last week, Hinton’s capsule networks matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits.
In the second, capsule networks almost halved the best previous error rate on a test that challenges software to recognize toys such as trucks and cars from different angles. Hinton has been working on his new technique with colleagues Sara Sabour and Nicholas Frosst at Google’s Toronto office.
Capsule networks aim to remedy a weakness of today’s machine-learning systems that limits their effectiveness. Image-recognition software in use today by Google and others needs a large number of example photos to learn to reliably recognize objects in all kinds of situations. That’s because the software isn’t very good at generalizing what it learns to new scenarios, for example understanding that an object is the same when seen from a new viewpoint.
To teach a computer to recognize a cat from many angles, for example, could require thousands of photos covering a variety of perspectives. Human children don’t need such explicit and extensive training to learn to recognize a household pet.
Hinton’s idea for narrowing the gulf between the best AI systems and ordinary toddlers is to build a little more knowledge of the world into computer-vision software. Capsules—small groups of crude virtual neurons—are designed to track different parts of an object, such as a cat’s nose and ears, and their relative positions in space. A network of many capsules can use that awareness to understand when a new scene is in fact a different view of something it has seen before.
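One concrete ingredient of the published design is the “squash” nonlinearity from the paper Hinton wrote with Sabour and Frosst: a capsule’s output is a vector whose direction encodes the pose of the part it tracks, and squashing maps the vector’s length into the range 0 to 1 so the length can act as the probability that the part is present. A minimal Python sketch:

```python
# "Squash" nonlinearity from Sabour, Frosst and Hinton (2017): preserve a
# capsule vector's direction, but compress its length into [0, 1) so the
# length can be read as the probability that the tracked part is present.
import numpy as np

def squash(s: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    norm = np.sqrt(norm_sq + eps)
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

capsule_outputs = np.array([[0.1, 0.2, 0.0],     # weak evidence for its part
                            [3.0, -4.0, 1.0]])   # strong evidence for its part
print(np.linalg.norm(squash(capsule_outputs), axis=-1))  # lengths below 1
```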
Hinton formed his intuition that vision systems need such an inbuilt sense of geometry in 1979, when he was trying to figure out how humans use mental imagery. He first laid out a preliminary design for capsule networks in 2011. The fuller picture released last week was long anticipated by researchers in the field. “Everyone has been waiting for it and looking for the next great leap from Geoff,” says Kyunghyun Cho, a professor at NYU who works on image recognition.
It’s too early to say how big a leap Hinton has made—and he knows it. The AI veteran segues from quietly celebrating that his intuition is now supported by evidence, to explaining that capsule networks still need to be proven on large image collections, and that the current implementation is slow compared to existing image-recognition software.
Hinton is optimistic he can address those shortcomings. Others in the field are also hopeful about his long-maturing idea.
Roland Memisevic, cofounder of image-recognition startup Twenty Billion Neurons, and a professor at University of Montreal, says Hinton’s basic design should be capable of extracting more understanding from a given amount of data than existing systems. If proven out at scale, that could be helpful in domains such as healthcare, where image data to train AI systems is much scarcer than the large volume of selfies available around the internet.
In some ways, capsule networks are a departure from a recent trend in AI research. One interpretation of the recent success of neural networks is that humans should encode as little knowledge as possible into AI software, and instead make them figure things out for themselves from scratch. Gary Marcus, a professor of psychology at NYU who sold an AI startup to Uber last year, says Hinton’s latest work represents a welcome breath of fresh air. Marcus argues that AI researchers should be doing more to mimic how the brain has built-in, innate machinery for learning crucial skills like vision and language. “It’s too early to tell how far this particular architecture will go, but it’s great to see Hinton breaking out of the rut that the field has seemed fixated on,” Marcus says.
Most of us take our vision for granted, and with it the ability to read, write, drive, and complete a multitude of other tasks. Unfortunately, sight does not come so easily for everyone.
Cataracts account for about a third of the world’s cases of visual impairment. The National Eye Institute reports that more than half of all Americans will have cataracts or will have had cataract surgery by the time they are 80, and in low- and middle-income countries, they’re the leading cause of blindness.
But now, people with vision problems may have new hope.
A Welcome Sight
Soon, cataracts may be a thing of the past, and even better, it may be possible to see a staggering three times better than 20/20 vision. Oh, and you could do it all without wearing glasses or contacts.
So what exactly does having three times better vision mean? If you can currently read a text that is 10 feet away, you would be able to read the same text from 30 feet away. What’s more, people who currently can’t see properly might be able to see a lot better than the average person.
This development comes thanks to the Ocumetics Bionic Lens. This dynamic lens essentially replaces a person’s natural eye lens. It’s placed into the eye via a saline-filled syringe, after which it unravels itself in under 10 seconds.
It may sound painful, but Dr. Garth Webb, the optometrist who invented the Ocumetics Bionic Lens, says that the procedure is identical to cataract surgery and would take just about eight minutes. He adds that people who have the specialized lenses surgically inserted would never get cataracts and that the lenses feel natural and won’t cause headaches or eyestrain.
The Bionic Lens may sound like a fairy tale (or sci-fi dream), but it’s not. It is actually the end result of years and years of research and more than a little funding — so far, the lens has taken nearly a decade to develop and has cost US$3 million.
What does it really cost to bring a drug to market?
The question is central to the debate over rising health care costs and appropriate drug pricing. President Trump campaigned on promises to lower the costs of drugs.
But numbers have been hard to come by. For years, the standard figure has been supplied by researchers at the Tufts Center for the Study of Drug Development: $2.7 billion each, in 2017 dollars.
Yet a new study looking at 10 cancer medications, among the most expensive of new drugs, has arrived at a much lower figure: a median cost of $757 million per drug. (Half cost less, and half more.)
Following approval, the 10 drugs together brought in $67 billion, the researchers also concluded — a more than sevenfold return on investment. Nine out of 10 companies made money, but revenues varied enormously. One drug had not yet earned back its development costs.
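The headline figures are simple arithmetic over the ten companies’ filings: a median of the per-company development costs and a ratio of total revenue to total spending. The snippet below reproduces the shape of that calculation with placeholder numbers, not the study’s actual data.

```python
# Shape of the study's headline arithmetic, using placeholder figures
# (the per-company costs below are NOT the study's actual data).
from statistics import median

dev_costs_millions = [204, 320, 473, 648, 757, 766, 899, 1_101, 1_551, 2_602]
total_revenue_millions = 67_000   # roughly $67 billion reported across the 10 drugs

median_cost = median(dev_costs_millions)
roi = total_revenue_millions / sum(dev_costs_millions)
print(f"median development cost: ${median_cost:,.0f} million")
print(f"aggregate return: {roi:.1f}x total R&D spending")
```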
The study, published Monday in JAMA Internal Medicine, relied on company filings with the Securities and Exchange Commission to determine research and development costs.
“It seems like they have done a thoughtful and rigorous job,” said Dr. Aaron Kesselheim, director of the program on regulation, therapeutics and the law at Brigham and Women’s Hospital.
“It provides at least something of a reality check,” he added.
The figures were met with swift criticism, however, by other experts and by representatives of the biotech industry, who said that the research did not adequately take into account the costs of the many experimental drugs that fail.
“It’s a bit like saying it’s a good business to go out and buy winning lottery tickets,” Daniel Seaton, a spokesman for the Biotechnology Innovation Organization, said in an email.
Dr. Jerry Avorn, chief of the division of pharmacoepidemiology and pharmacoeconomics at Brigham and Women’s Hospital, predicted that the paper would help fuel the debate over the prices of cancer drugs, which have soared so high “that we are getting into areas that are almost unimaginable economically,” he said.
A leukemia treatment approved recently by the Food and Drug Administration, for example, will cost $475,000 for a single treatment. It is the first of a wave of gene therapy treatments likely to carry staggering price tags.
“This is an important brick in the wall of this developing concern,” he said.
Dr. Vinay Prasad, an oncologist at Oregon Health and Science University, and Dr. Sham Mailankody, of Memorial Sloan Kettering Cancer Center, arrived at their figures after reviewing data on 10 companies that brought a cancer drug to market in the past decade.
Since the companies also were developing other drugs that did not receive approval from the F.D.A., the researchers were able to include the companies’ total spending on research and development, not just what they spent on the drugs that succeeded.
One striking example was ibrutinib, made by Pharmacyclics. It was approved in 2013 for patients with certain blood cancers who did not respond to conventional therapy.
Ibrutinib was the only drug out of four the company was developing to receive F.D.A. approval. The company’s research and development costs for all four drugs were $388 million, its S.E.C. filings indicated.
Accurate figures on drug development are difficult to find and often disputed. Although it is widely cited, the Tufts study also was fiercely criticized.
One objection was that the researchers, led by Joseph A. DiMasi, did not disclose the companies’ data on development costs. The study involved ten large companies, which were not named, and 106 investigational drugs, also not named.
But Dr. DiMasi found the new study “irredeemably flawed at a fundamental level.”
“The sample consists of relatively small companies that have gotten only one drug approved, with few other drugs of any type in development,” he said. The result is “substantial selection bias,” meaning that the estimates do not accurately reflect the industry as a whole.
Ninety-five percent of cancer drugs that enter clinical trials fail, said Mr. Seaton, of the biotech industry group. “The small handful of successful drugs — those looked at by this paper — must be profitable enough to finance all of the many failures this analysis leaves unexamined.”
“When the rare event occurs that a company does win approval,” he added, “the reward must be commensurate with taking on the multiple levels of risk not seen in any other industry if drug development is to remain economically viable for prospective investors.”
Cancer drugs remain among the most expensive medications, with prices reaching the hundreds of thousands of dollars per patient.
Although the new study was small, its estimates are so much lower than previous figures, and the return on investment so great, that experts say they raise questions about whether soaring drug prices really are needed to encourage investment.
“That seems hard to swallow when they make seven times what they invested in the first four years,” Dr. Prasad said.
The new study has limitations, noted Patricia Danzon, an economist at the University of Pennsylvania’s Wharton School.
It involved just ten small biotech companies whose cancer drugs were aimed at limited groups of patients with less common diseases.
For such drugs, the F.D.A. often permits clinical trials to be very small and sometimes without control groups. Therefore development costs may have been lower for this group than for drugs that require longer and larger studies.
But, Dr. Danzon said, most new cancer drugs today are developed this way: by small companies and for small groups of patients. The companies often license or sell successful drugs to the larger companies.
The new study, she said, “is shining a light on a sector of the industry that is becoming important now.” The evidence, she added, is “irrefutable” that the cost of research and development “is small relative to the revenues.”
When it comes to drug prices, it does not matter what companies spend on research and development, Dr. Kesselheim said.
“They are based on what the market will bear.”
Correction: September 14, 2017
An earlier version of this article incorrectly identified the company that acquired a drug maker. It was AbbVie, not Janssen Biotech (which jointly develops the drug). Additionally, the article incorrectly described what AbbVie acquired. It was the company Pharmacyclics, which developed the drug Imbruvica, not the drug itself.
Elon Musk is just one of the people in Silicon Valley to take a keen interest in the “simulation hypothesis”, which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence. If it sounds a lot like The Matrix, that’s because it is.
One popular argument for the simulation hypothesis, outside of acid trips, came from Oxford University’s Nick Bostrom in 2003 (although the idea dates back as far as the 17th-century philosopher René Descartes). In a paper titled “Are You Living in a Computer Simulation?”, Bostrom suggested that members of an advanced “posthuman” civilization with vast computing power might choose to run simulations of their ancestors in the universe.
This argument is extrapolated from observing current trends in technology, including the rise of virtual reality and efforts to map the human brain.
If we believe that there is nothing supernatural about what causes consciousness and it’s merely the product of a very complex architecture in the human brain, we’ll be able to reproduce it. “Soon there will be nothing technical standing in the way to making machines that have their own consciousness,” said Rich Terrile, a scientist at Nasa’s Jet Propulsion Laboratory.
At the same time, videogames are becoming more and more sophisticated and in the future we’ll be able to have simulations of conscious entities inside them.
“Forty years ago we had Pong – two rectangles and a dot. That’s where we were. Now 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality,” said Musk. “If you assume any rate of improvement at all, then the games will become indistinguishable from reality.”
It’s a view shared by Terrile. “If one progresses at the current rate of technology a few decades into the future, very quickly we will be a society where there are artificial entities living in simulations that are much more abundant than human beings.”
If there are many more simulated minds than organic ones, then the chances that we are among the real minds start to look slimmer and slimmer. As Terrile puts it: “If in the future there are more digital people living in simulated environments than there are today, then what is to say we are not part of that already?”
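The quantitative core of this argument is just a ratio: the more simulated minds there are for every organic one, the smaller the prior probability that any given mind, ours included, is organic. A toy calculation:

```python
# Toy version of the simulation argument's arithmetic: the prior probability
# of being a "real" (organic) mind is the share of minds that are organic.
def probability_we_are_real(organic_minds: int, simulated_minds: int) -> float:
    return organic_minds / (organic_minds + simulated_minds)

for simulated in (0, 10, 1_000, 1_000_000):
    p = probability_we_are_real(organic_minds=1, simulated_minds=simulated)
    print(f"{simulated:>9,} simulated minds per organic mind -> P(real) = {p:.6f}")
```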
Reasons to believe that the universe is a simulation include the fact that it behaves mathematically and is broken up into pieces (subatomic particles) like a pixelated video game. “Even things that we think of as continuous – time, energy, space, volume – all have a finite limit to their size. If that’s the case, then our universe is both computable and finite. Those properties allow the universe to be simulated,” Terrile said.
“Quite frankly, if we are not living in a simulation, it is an extraordinarily unlikely circumstance,” he added.
So who has created this simulation? “Our future selves,” said Terrile.
Not everyone is so convinced by the hypothesis. “Is it logically possible that we are in a simulation? Yes. Are we probably in a simulation? I would say no,” said Max Tegmark, a professor of physics at MIT.
“In order to make the argument in the first place, we need to know what the fundamental laws of physics are where the simulations are being made. And if we are in a simulation then we have no clue what the laws of physics are. What I teach at MIT would be the simulated laws of physics,” he said.
Harvard theoretical physicist Lisa Randall is even more skeptical. “I don’t see that there’s really an argument for it,” she said. “There’s no real evidence.”
“It’s also a lot of hubris to think we would be what ended up being simulated.”
Terrile believes that recognizing that we are probably living in a simulation is as game-changing as Copernicus realizing that the Earth was not the center of the universe. “It was such a profound idea that it wasn’t even thought of as an assumption,” he said.
Before Copernicus, scientists had tried to explain the peculiar behaviour of the planets’ motion with complex mathematical models. “When they dropped the assumption, everything else became much simpler to understand.”
That we might be in a simulation is, Terrile argues, a simpler explanation for our existence than the idea that we are the first generation to rise up from primordial ooze and evolve into molecules, biology and eventually intelligence and self-awareness. The simulation hypothesis also accounts for peculiarities in quantum mechanics, particularly the measurement problem, whereby things only become defined when they are observed.
“For decades it’s been a problem. Scientists have bent over backwards to eliminate the idea that we need a conscious observer. Maybe the real solution is you do need a conscious entity like a conscious player of a video game,” he said.
For Tegmark, this doesn’t make sense. “We have a lot of problems in physics and we can’t blame our failure to solve them on simulation.”
How can the hypothesis be put to the test? On one hand, neuroscientists and artificial intelligence researchers can check whether it’s possible to simulate the human mind. So far, machines have proven to be good at playing chess and Go and putting captions on images. But can a machine achieve consciousness? We don’t know.
On the other hand, scientists can look for hallmarks of simulation. “Suppose someone is simulating our universe – it would be very tempting to cut corners in ways that makes the simulation cheaper to run. You could look for evidence of that in an experiment,” said Tegmark.
For Terrile, the simulation hypothesis has “beautiful and profound” implications.
First, it provides a scientific basis for some kind of afterlife or larger domain of reality above our world. “You don’t need a miracle, faith or anything special to believe it. It comes naturally out of the laws of physics,” he said.
Second, it means we will soon have the same ability to create our own simulations.
“We will have the power of mind and matter to be able to create whatever we want and occupy those worlds.”