Sometimes the solution to a problem is staring you in the face all along. Chip maker Intel is betting that will be true in the race to build quantum computers—machines that should offer immense processing power by exploiting the oddities of quantum mechanics.
Competitors IBM, Microsoft, and Google are all developing quantum components that are different from the ones crunching data in today’s computers. But Intel is trying to adapt the workhorse of existing computers, the silicon transistor, for the task.
Intel has a team of quantum hardware engineers in Portland, Oregon, who collaborate with researchers in the Netherlands, at TU Delft’s QuTech quantum research institute, under a $50 million grant established last year. Earlier this month Intel’s group reported that they can now layer the ultra-pure silicon needed for a quantum computer onto the standard wafers used in chip factories.
This strategy makes Intel an outlier among industry and academic groups working on qubits, as the basic components needed for quantum computers are known. Other companies can run code on prototype chips with several qubits made from superconducting circuits (see “Google’s Quantum Dream Machine”). No one has yet advanced silicon qubits that far.
A quantum computer would need to have thousands or millions of qubits to be broadly useful, though. And Jim Clarke, who leads Intel’s project as director of quantum hardware, argues that silicon qubits are more likely to get to that point (although Intel is also doing some research on superconducting qubits). One thing in silicon’s favor, he says: the expertise and equipment used to make conventional chips with billions of identical transistors should allow work on perfecting and scaling up silicon qubits to progress quickly.
Intel’s silicon qubits represent data in a quantum property called the “spin” of a single electron trapped inside a modified version of the transistors in its existing commercial chips. “The hope is that if we make the best transistors, then with a few material and design changes we can make the best qubits,” says Clarke.
The new process that helps Intel experiment with silicon qubits on standard chip wafers, developed with the materials companies Urenco and Air Liquide, should help speed up its research, says Andrew Dzurak, who works on silicon qubits at the University of New South Wales in Australia. “To get to hundreds of thousands of qubits, we will need incredible engineering reliability, and that is the hallmark of the semiconductor industry,” he says.
Companies developing superconducting qubits also make them using existing chip fabrication methods. But the resulting devices are larger than transistors, and there is no template for how to manufacture and package them up in large numbers, says Dzurak.
Chad Rigetti, founder and CEO of Rigetti Computing, a startup working on superconducting qubits similar to those Google and IBM are developing, agrees that this presents a challenge. But he argues that his chosen technology’s head start will afford ample time and resources to tackle the problem.
Google and Rigetti have both said that in just a few years they could build a quantum chip with tens or hundreds of qubits that dramatically outperforms conventional computers on certain problems, even doing useful work on problems in chemistry or machine learning.
The good news is that mental health professionals have smarter tools than ever before, with artificial intelligence-related technology coming to the forefront to help diagnose patients, often with much greater accuracy than humans.
A new study published in the journal Suicide and Life-Threatening Behavior, for example, showed that machine learning is up to 93 percent accurate in identifying a suicidal person. The research, led by John Pestian, a professor at Cincinnati Children’s Hospital Medical Center, involved 379 teenage patients from three area hospitals.
Each patient completed standardized behavioral rating scales and participated in a semi-structured interview, answering five open-ended questions such as “Are you angry?” to stimulate conversation, according to a press release from the university.
The researchers analyzed both verbal and non-verbal language from the data, then sent the information through a machine-learning algorithm that was able to determine with remarkable accuracy whether the person was suicidal, mentally ill but not suicidal, or neither.
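The pipeline described — extract features from transcripts, then classify each patient into one of three groups — can be sketched in miniature. The study's actual features and model are not specified here, so everything below (the training snippets, the bag-of-words features, the nearest-centroid classifier) is a hypothetical stand-in, not the Pestian group's method:

```python
from collections import Counter

# Hypothetical training transcripts labelled with the study's three classes.
TRAIN = [
    ("i feel hopeless and i want it all to end", "suicidal"),
    ("i hear voices but i do not want to hurt myself", "mentally ill"),
    ("school is fine and i get along with my friends", "neither"),
]

def bag_of_words(text):
    """Lower-cased word counts -- a crude stand-in for the verbal features."""
    return Counter(text.lower().split())

def centroids(examples):
    """Average word counts per class label."""
    sums, counts = {}, Counter()
    for text, label in examples:
        counts[label] += 1
        sums.setdefault(label, Counter()).update(bag_of_words(text))
    return {label: {w: n / counts[label] for w, n in c.items()}
            for label, c in sums.items()}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def classify(text, cents):
    """Assign the class whose centroid is most similar to the transcript."""
    vec = bag_of_words(text)
    return max(cents, key=lambda label: cosine(vec, cents[label]))

cents = centroids(TRAIN)
label = classify("i want it to end", cents)
```

A real system would use far richer features (including the non-verbal cues the study mentions) and a properly validated model; the sketch only shows the shape of the classification step.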
“These computational approaches provide novel opportunities to apply technological innovations in suicide care and prevention, and it surely is needed,” Pestian says in the press release.
In 2014, suicide was ranked as the tenth leading cause of death in the United States, but the No. 2 cause of death for people age 15 to 24, according to the American Association of Suicidology.
A study just published in the journal Psychological Bulletin further punctuated the need for better tools to help with suicide prevention. A meta-analysis of 365 studies conducted over the last 50 years found that the ability of mental health experts to predict if someone will attempt suicide is “no better than chance.”
“One of the major reasons for this is that researchers have almost always tried to use a single factor (e.g., a depression diagnosis) to predict these things,” says lead author Joseph Franklin of Harvard University in an email exchange with Singularity Hub.
Franklin says that the complex nature behind such thoughts and behaviors requires consideration of tens if not hundreds of factors to make accurate predictions. He and others argue in a correspondence piece published earlier this year in Psychological Medicine that machine learning and related techniques are an ideal option. A search engine using only one factor would be ineffective at returning results; the same is true of today’s attempts to predict suicidal behavior.
He notes that researchers in Boston, including colleague Matthew K. Nock at Harvard, have already used machine learning to predict suicidal behaviors with 70 to 85 percent accuracy. Calling the work “amazing,” Franklin notes that the research is still in the preliminary stages, with small sample sizes.
“The work by the Pestian group is also interesting, with their use of vocal patterns/natural language processing being unique from most other work in this area so far,” Franklin says, adding that there are also limits as to what can be drawn from their findings at this point. “Nevertheless, this is a very interesting line of work that also represents a sharp and promising departure from what the field has been doing for the past 50 years.”
Machine learning has yet to be used in therapy itself, according to Franklin, even as most conventional treatments for suicide fall short.
“So even though several groups are on the verge of being able to accurately predict suicidality on the scale of entire healthcare systems [with AI], it’s unclear what we should do with these at-risk people to reduce their risk,” Franklin says.
To that end, Franklin and colleagues have developed a free app called Tec-Tec that appears effective at “reducing self-cutting, suicide plans, and suicidal behaviors.”
The app is based on a psychological technique called evaluative conditioning. By continually pairing certain words and images, it changes a user’s associations with certain objects and concepts, according to the website. Within a game-like design, Tec-Tec seeks to shift associations with factors that may increase the risk of self-injurious behavior.
“We’re working on [additional] trials and soon hope to use machine learning to tailor the app to each individual over time,” Franklin says, “and to connect the people most in need with the app.”
Thirty-four participants were interviewed and assessed quarterly for two and a half years. Using automated analysis, transcripts of the interviews were evaluated for coherence and two syntactic markers of speech complexity—the length of a sentence and the number of clauses it contained.
The speech features analyzed by the computer predicted later psychosis development with 100 percent accuracy, outperforming classification from clinical interviews, according to the researchers.
“Recent developments in computer science, including natural language processing, could provide the foundation for future development of objective clinical tests for psychiatry,” they wrote.
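The two syntactic markers the researchers name are simple to compute from a transcript. Here is a minimal sketch; the clause count uses a naive keyword heuristic (real pipelines use syntactic parsers), and the example sentence is invented:

```python
import re

# Crude proxy for subordinate clauses: common subordinating markers.
MARKERS = {"that", "which", "because", "when", "while", "although", "if"}

def syntactic_features(transcript):
    """Return (mean sentence length in words, mean clauses per sentence).

    Clauses are approximated as one main clause plus one per
    subordinating marker -- a heuristic stand-in for a parser.
    """
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    lengths, clauses = [], []
    for s in sentences:
        words = s.lower().split()
        lengths.append(len(words))
        clauses.append(1 + sum(w in MARKERS for w in words))
    n = len(sentences)
    return sum(lengths) / n, sum(clauses) / n

length, complexity = syntactic_features(
    "I went out because it was sunny. It was nice.")
```

Coherence, the third measure, is harder: published approaches typically compare semantic similarity between successive sentences, which needs word embeddings rather than counts.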
The research uses “the latest methods in computer vision, machine learning and data mining” to assess children while they are performing certain physical and computer exercises, according to a press release from UTA. The exercises test a child’s attention, decision-making and ability to manage emotions. The data are then analyzed to determine the best type of intervention.
“We believe that the proposed computational methods will help provide quantifiable early diagnosis and allow us to monitor progress over time. In particular, it will help children overcome learning difficulties and lead them to healthy and productive lives,” says Fillia Makedon, a professor in UTA’s Department of Computer Science and Engineering.
Keeping an eye out for autism
Meanwhile, a group at the University at Buffalo has developed a mobile app that can detect autism spectrum disorder (ASD) in children as young as two years old with nearly 94 percent accuracy. The results were recently presented at the IEEE Wireless Health conference at the National Institutes of Health.
The app tracks eye movements of a child looking at pictures of social scenes, such as those showing multiple people, according to a press release from the university. The eye movements of someone with ASD are often different from those of a person without autism.
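One plausible feature such an app could derive from gaze data is the fraction of eye-movement samples that land on faces in a scene — people with ASD often attend less to faces. The app's actual features are not published here, so the function, coordinates, and threshold below are purely illustrative:

```python
def face_fixation_ratio(gaze_points, face_boxes):
    """Fraction of gaze samples that fall inside any face bounding box.

    gaze_points: list of (x, y) screen coordinates.
    face_boxes:  list of (x0, y0, x1, y1) rectangles around faces.
    """
    def in_box(p, box):
        x, y = p
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    hits = sum(any(in_box(p, b) for b in face_boxes) for p in gaze_points)
    return hits / len(gaze_points)

# Hypothetical gaze samples over a scene with two face regions.
ratio = face_fixation_ratio(
    [(10, 10), (50, 50), (90, 90), (200, 200)],
    [(0, 0, 20, 20), (40, 40, 60, 60)])
```

A classifier would combine features like this across many pictures before flagging a child for professional evaluation.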
About one in 68 children in the United States has been diagnosed with ASD, according to the CDC. The UB study included 32 children ranging in age from two to 10. A larger study is planned for the future.
It takes less than a minute to administer the test, which can be done by a parent at home to determine if a child requires professional evaluation.
“This technology fills the gap between someone suffering from autism to diagnosis and treatment,” says Wenyao Xu, an assistant professor in UB’s School of Engineering and Applied Sciences.
Technology that helps treat our most vulnerable populations? Turns out, there is an app for that.
Whether the world we experience is real used to be a question that only philosophers worried about. Scientists just got on with figuring out how the world is, and why. But some of the current best guesses about how the world is seem to leave the question hanging over science too.
Several physicists, cosmologists and technologists are now happy to entertain the idea that we are all living inside a gigantic computer simulation, experiencing a Matrix-style virtual world that we mistakenly think is real.
Our instincts rebel, of course. It all feels too real to be a simulation. The weight of the cup in my hand, the rich aroma of the coffee it contains, the sounds all around me – how can such richness of experience be faked?
But then consider the extraordinary progress in computer and information technologies over the past few decades. Computers have given us games of uncanny realism – with autonomous characters responding to our choices – as well as virtual-reality simulators of tremendous persuasive power.
It is enough to make you paranoid.
The Matrix formulated the narrative with unprecedented clarity. In that story, humans are locked by a malignant power into a virtual world that they accept unquestioningly as “real”. But the science-fiction nightmare of being trapped in a universe manufactured within our minds can be traced back further, for instance to David Cronenberg’s Videodrome (1983) and Terry Gilliam’s Brazil (1985).
Over all these dystopian visions, there loom two questions. How would we know? And would it matter anyway?
The idea that we live in a simulation has some high-profile advocates.
In June 2016, technology entrepreneur Elon Musk asserted that the odds are “a billion to one” against us living in “base reality”.
Similarly, Google’s machine-intelligence guru Ray Kurzweil has suggested that “maybe our whole universe is a science experiment of some junior high-school student in another universe”.
What’s more, some physicists are willing to entertain the possibility. In April 2016, several of them debated the issue at the American Museum of Natural History in New York, US.
None of these people are proposing that we are physical beings held in some gloopy vat and wired up to believe in the world around us, as in The Matrix.
Instead, there are at least two other ways that the Universe around us might not be the real one.
Cosmologist Alan Guth of the Massachusetts Institute of Technology, US, has suggested that our entire Universe might be real yet still a kind of lab experiment. The idea is that our Universe was created by some super-intelligence, much as biologists breed colonies of micro-organisms.
A 14-year-old girl who said before dying of cancer that she wanted a chance to live longer has been allowed by the high court to have her body cryogenically frozen in the hope that she can be brought back to life at a later time.
The court ruled that the teenager’s mother, who supported the girl’s wish to be cryogenically preserved, should be the only person allowed to make decisions about the disposal of her body. Her estranged father had initially opposed her wishes.
During the last months of her life, the teenager, who had a rare form of cancer, used the internet to investigate cryonics. Known only as JS, she sent a letter to the court: “I have been asked to explain why I want this unusual thing done. I’m only 14 years old and I don’t want to die, but I know I am going to. I think being cryo‐preserved gives me a chance to be cured and woken up, even in hundreds of years’ time.
“I don’t want to be buried underground. I want to live and live longer and I think that in the future they might find a cure for my cancer and wake me up. I want to have this chance. This is my wish.”
Following the ruling, in a case described by the judge as exceptional, the body of JS has now been preserved and transported from where she lived in London to the US, where it has been frozen “in perpetuity” by a commercial company at a cost of £37,000.
The girl’s parents are divorced. She had lived with her mother for most of her life and had had no face-to-face contact with her father since 2008. She resisted his attempts to get back in touch when he learnt of her illness in 2015.
The judge, Mr Justice Peter Jackson, ruled that nothing about the case should be reported while she was alive because media coverage would distress her. She was too ill to attend the court hearing but the judge visited her in hospital.
Jackson wrote: “I was moved by the valiant way in which she was facing her predicament. It is no surprise that this application is the only one of its kind to have come before the courts in this country, and probably anywhere else. It is an example of the new questions that science poses to the law, perhaps most of all to family law … No other parent has ever been put in [the] position [of JS’s father].”
He added: “A dispute about a parent being able to see his child after death would be momentous enough on its own if the case did not also raise the issue of cryonic preservation.”
Since the first preservation by freezing in the 1960s, the process has been performed only a few hundred times. The body has to be prepared shortly after death, ideally within minutes. Arrangements then have to be made for the body to be transported by a registered funeral director.
“The scientific theory underlying cryonics is speculative and controversial, and there is considerable debate about its ethical implications,” Jackson said. “On the other hand, cryopreservation, the preservation of cells and tissues by freezing, is now a well-known process in certain branches of medicine, for example the preservation of sperm and embryos as part of fertility treatment. Cryonics is cryopreservation taken to its extreme.”
The judge said the girl’s family was not well off but that her mother’s parents had raised the money. A voluntary UK group of cryonics enthusiasts, who were not medically trained, had offered to help make arrangements.
Co-operation of a hospital was required. “This situation gives rise to serious legal and ethical issues for the hospital trust,” the judge observed, “which has to act within the law and has duties to its other patients and to its staff.”
The hospital trust in the case was willing to help although it stressed it was not endorsing cryonics. “On the contrary, all the professionals feel deep unease about it,” the judge said.
The Human Tissue Authority (HTA), which regulates organisations which remove, store and use human tissue, had been consulted but said it had no remit to intervene in such a case.
“The HTA would be likely to make representations that activities of the present kind should be brought within the regulatory framework if they showed signs of increasing,” Jackson said.
The HTA said: “We are gathering information about cryopreservation to determine how widespread it is currently, or could become in the future, and any risks it may pose to the individual, or public confidence more broadly. We are in discussion with key stakeholders … and the possible need for regulatory oversight.”
The government may need to intervene in future, Jackson said: “It may be … events in this case suggest the need for proper regulation of cryonic preservation in this country if it is to happen in future.”
Inquiries made of American authorities revealed that there was no prohibition on human remains being shipped to the US for cryonic preservation, providing certain provisions were made.
During the course of the 14-year-old’s case, the father changed his mind and told the court: “I respect the decisions [my daughter] is making. This is the last and only thing she has asked from me.”
A child cannot make a will and the court had to decide where the girl’s best interests lay. The judge concluded that allowing the mother to make a decision about her daughter would be in her best interests. The girl died peacefully knowing that her body would be frozen, the judge recorded.
The Department of Health said: “Cases such as this are rare. Although there are no current plans for legislative change in this area, this is an area we will continue to keep under review with the Human Tissue Authority.”
By Helen Thomson
“WE’RE taking people to the future!” says architect Stephen Valentine, as we drive through two gigantic gates into a massive plot of land in the middle of the sleepy, unassuming town that is Comfort, Texas. The scene from here is surreal. A lake with a newly restored wooden gazebo sits empty, waiting to be filled. A pregnant zebra strolls across a nearby field. And out in the distance some men in cowboy hats are starting to clear a huge area of shrub land. Soon the first few bricks will be laid here, marking the start of a scientific endeavour like no other.
After years of searching, Valentine chose this site as the unlikely home of the new Mecca of cryogenics. Called Timeship, the monolithic building will become the world’s largest structure devoted to cryopreservation, and will be home to thousands of people who are neither dead nor alive, frozen in time in the hope that one day technology will be able to bring them back to life. And last month, building work began.
Cryonics, the cooling of humans in the hope of reanimating them later, has a reputation as a vanity project for those who have more money than sense, but this “centre for immortality” is designed to be about much more than that. As well as bodies, it will store cells, tissues and organs, in a bid to drive forward the capabilities of cryogenics, the study of extremely low temperatures that has, in the last few years, made remarkable inroads in areas of science that affect us all; fertility therapy, organ transplantation and emergency medicine. What’s more, the cutting-edge facilities being built here should break through the limitations of current cryopreservation, making it more likely that tissues – and whole bodies – can be successfully defrosted in the future.
Timeship is the brainchild of Bill Faloon and Saul Kent, two entrepreneurs and prominent proponents of life extension research. Their vision was to create a building that would house research laboratories, DNA from near-extinct species, the world’s largest human organ biobank, and 50,000 cryogenically frozen bodies. Kent called it “all part of a plan to conquer ageing and death”.
In 1997, Kent asked Valentine, an architect based in New York, whether he could design a building that was stable enough to operate continuously for 100 years with minimal human input. It needed to withstand earthquakes, to be protected from natural disasters and acts of violence, and to survive without the main power supply for months on end. It was a list of demands that no building in the world currently satisfies.
Valentine spent months drawing up proposals for the building, together with advice from engineers who had previously worked for NASA and security experts from around the world. “We had to address everything from pandemics and cyberattacks to snipers and global warming,” says Fred Waterman, a risk mitigation expert on the Timeship team. The designs were approved by Kent but immediately put on ice. He believed the technology that would make the building worthwhile was not yet advanced enough to warrant its construction.
At body temperature, cells need a constant supply of oxygen. Without it they start to die and tissues decay. At low temperatures, cells need less oxygen because the chemical activity of metabolism slows down. At very low temperatures, metabolism stops altogether. The problem faced when trying to preserve human tissue by freezing it is that water in the tissue forms ice and causes damage. The trick is to replace the water with cryoprotectants, essentially antifreeze, which prevent ice from forming. This works well for small, uncomplicated structures like sperm and eggs. But when you try to scale it up to larger organs, damage still occurs.
But in 2000, Greg Fahy, a cryobiologist at 21st Century Medicine in Fontana, California, made a breakthrough with a technique called vitrification. It involves adding cryoprotectants then rapidly cooling an organ to prevent any freezing; instead the tissue turns into a glass-like state. Fahy later showed that you could vitrify a whole rabbit kidney that functioned well after thawing and transplantation. This was the breakthrough Kent and Faloon had been waiting for.
Cold comfort farm
The pair gave Valentine a multimillion-dollar budget and told him to find land on which to build Timeship. Valentine spent five years scouring the US, believing it to be the country most likely to remain politically stable for the next 100 years. He homed in on four states that fitted his exacting criteria. And after evaluating more than 200 sites in Texas alone, Valentine ended up in Comfort. Here he discovered the Bildarth Estate, which came with acres of land, a 1670-square-metre mansion and even a few zebras.
Since then, Valentine, together with a team of specialists, has fine-tuned the project. Timeship’s architectural plans make it look like something between a fortress and a spaceship. The central building is a low-lying square with a single entrance. This sits inside a circular wall surrounded by concentric concrete rings. Inside are what Valentine calls “neighbourhoods”, collections of thermos-like dewars that will store the cryopreserved DNA, organs and bodies (see “Cool design”).
Parts of the project are somewhat theatrical – backup liquid nitrogen storage tanks are covered overhead by a glass-floored plaza on which you can walk surrounded by a fine mist of clouds – others are purely functional, like the three wind turbines that will provide year-round back-up energy.
The question is, do we need Timeship? Such an extravagant endeavour might not be vital, but it looks as if something similar will be necessary sooner or later. In fact, the strongest argument for such a facility, and the technological developments it promises, might have nothing to do with the desire to be frozen for the future.
We already have small biobanks for storing bones from human donors, as well as tendons, ligaments and stem cells. But with rapid advances in regenerative medicine, there is a growing need for large-scale facilities in which we can store more cryogenically frozen biological material.
Stem cells, for instance, are increasingly cryopreserved after being extracted and grown outside the body for use in regenerative therapies. “Beyond the age of 50, it’s harder to isolate stem cells for regenerative medicine,” says Mark Lowdell at University College London. “If I were in my 30s, I would certainly be cryopreserving some bone marrow for future tissue to fix my tennis injuries.” Lowdell will soon do the first transplant of a tissue-engineered larynx created from a donor larynx that has been seeded with cryopreserved stem cells to reduce the risk of rejection.
Then there’s the problem of organ shortage. In the US, almost 31,000 transplants were carried out in 2015, but at least six times as many people are on the waiting list – each day 12 people die before they can get a kidney. To make matters worse, many organs go to waste because their shelf life is too short to find a well-matched patient. Nearly 500 kidneys went unused in the US last year because the recipient couldn’t get the organ in time.
So there’s an urgent need to be able to store whole organs for longer. The issue is so important that the US government this month pledged to start funding research into this very area. We can already reversibly cryopreserve small bundles of cells – many thousands of babies have been born from vitrified human embryos. Doing the same with large organs, like kidneys or hearts, is harder, but not impossible. Over the past decade, for instance, several babies have been born from ovarian tissue that was removed before chemotherapy, cryopreserved and later replaced. Similarly, rabbit kidneys and rat limbs have been cryopreserved, thawed and placed in a new body. Fahy says his team is well on its way to the first human trial of a cryogenically frozen organ. “After decades of research, we’re now at a tipping point,” he says. Having improved both the vitrification technique and the cryoprotectant solution, they are moving to trials in pigs, and human trials could follow within five years, he says.
That might help prevent wastage, but we would still have a shortage of organs for transplant. Another solution is to grow them from scratch using our own stem cells, and keep them until we need them. So far, tiny 3D heart-like organs have been made from stem cells alone, as well as mini kidneys and livers, all with the ultimate aim of bioengineering replacement organs for transplantation.
Once organs can be produced like this, we will need a way of storing either the raw material or the organs themselves. “I’m not enthusiastic about the notion of freezing whole heads, but I can certainly imagine people needing to freeze cells, or ‘starter kits’ for the development of tissues, or even whole organs – and in the not-so-distant future,” says Arthur Caplan, a bioethicist at New York University Langone Medical Center.
Like Caplan, most scientists I spoke to said it was becoming more likely that we could bring individual cryopreserved organs back to life, but were less convinced by the idea of freezing whole bodies. So I decided to visit Alcor Life Extension Foundation, the world’s biggest cryonics facility, in Scottsdale, Arizona, to find out what happens when a body is put on ice.
Alcor’s lobby has the feel of a doctor’s waiting room, except that lining the walls are portraits of men, women, children and the occasional dog. The people in the pictures are preserved there, some alongside their beloved pets.
Aaron Drake, head of Alcor’s medical response team, says the company has more than 1000 clients signed up worldwide – 99 per cent are healthy, but 1 per cent have a terminal disease. Some of them want to freeze their whole body, others – known as “neuros” – opt for just the head.
Drake admits that the techniques his firm uses aren’t perfect, which is why they continue to research the process. Recently, Alcor scientists placed acoustical devices on the brains of neuros as they were lowered into liquid nitrogen, listening as the heads cooled to -196 °C. The colder they got, the more frequently the team heard acoustical anomalies, which they attribute to micro-fracturing of the tissue. “That’s damage happening,” says Drake. It’s difficult to say what effects this might have. “It’s not universal or consistent, but it’s something we know doesn’t happen at around -140 °C.”
The problem is, to store a person at -140 °C, you have to keep them warmer than nitrogen’s boiling point, which is incredibly hard to do – certainly much harder than placing a body in a giant thermos full of liquid nitrogen, letting it boil and occasionally topping it up.
But at Timeship, Valentine thinks he’s cracked the problem. After years of experimentation, he has designed a system called a Temperature Control Vessel (TCV), a dewar that houses cryogenically preserved bodies, heads or tissues. Inside the dewar are moving rods that can be dipped into a pool of liquid nitrogen whenever a sensor notes that the temperature has risen from -140 °C. This would provide a relatively autonomous way of maintaining the contents at an ideal temperature (see “Cool design”).
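The control logic described — dip the rods when a sensor reports the vessel warming past −140 °C, lift them once it has cooled back — is essentially a bang-bang thermostat with a dead band. A toy sketch of that loop; the function name, tolerance value, and hysteresis scheme are illustrative, not Timeship's actual design:

```python
TARGET = -140.0    # desired storage temperature, degrees Celsius
TOLERANCE = 1.0    # allowed warming drift before the rods are dipped

def control_step(temperature_c, rods_lowered):
    """Decide whether the cooling rods should sit in the liquid nitrogen.

    Lower them when the vessel warms past TARGET + TOLERANCE; raise them
    once it is back at or below TARGET, so the contents are not dragged
    down toward liquid nitrogen's -196 degrees C, where micro-fracturing
    is heard. Inside the dead band, the rods stay where they are.
    """
    if temperature_c > TARGET + TOLERANCE:
        return True       # dip rods into the liquid nitrogen pool
    if temperature_c <= TARGET:
        return False      # lift rods clear
    return rods_lowered   # within the dead band: no change
```

The dead band is what makes the vessel relatively autonomous: the rods do not chatter in and out on every small sensor fluctuation.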
Each TCV can carry hundreds of samples of tissue and organs, or four bodies and five heads. They are designed to be stacked together in a tessellating pattern that forms the neighbourhoods within the main building.
This should reduce some of the damage to brain tissue that the Alcor team heard. But even with that technology, is there any hope of reanimating a brain?
There is some evidence to suggest that certain properties of the mind – memories, for instance – can survive cryopreservation. In 2015, researchers trained worms to recognise a smell, then froze them. On thawing, the worms retained the smell memories. And this year, Fahy’s team cryopreserved a rabbit brain in a near-perfect state. Although the group used a chemical fixative that is not yet used in human preservation, the thawed rabbit brain appeared “uniformly excellent” when examined using electron microscopy.
“These kinds of experiments show that it’s not such a massive leap of faith to think that we could preserve the human mind,” says Max More, president and CEO of Alcor. But not everyone is convinced. Even if you could preserve the delicate structures of the human brain, the cryoprotectants themselves are toxic. “No matter how smart scientists are in the future, you can’t change mush into a functional brain,” says Caplan, “and I just don’t think that what we’re able to do right now to preserve the brain is good enough to ever bring it back to life.”
There are precedents for the idea that the human brain can be revived after being cooled, however. In 1986, two-and-a-half-year-old Michelle Funk fell into an icy creek where she was submerged for just over an hour. Despite showing no signs of life, doctors spent 2 hours warming her blood through a heart-lung machine. Eventually, she recovered fully. Her doctors figured that the sudden cooling of her brain must have slowed the organ’s need for oxygen, staving off brain damage.
Funk’s recovery was so remarkable it spurred researchers to repeat the scenario experimentally in pigs and dogs – cooling them for hours before bringing them back to life. The same procedure is now being tested in humans in a groundbreaking trial by surgeons at UPMC Presbyterian Hospital in Pittsburgh, Pennsylvania. There they are placing patients in suspended animation for a few hours, to buy time to fix injuries that would otherwise be lethal, such as gunshot wounds. The technique involves replacing the person’s blood with a cold saline solution and cooling the body. Surgeons then try to fix the injuries and bring the patient back to life by returning warm blood to the body.
That’s not so different from what goes on at Alcor, says More. “What we’re doing is trying to stretch the time in which the person is suspended. It’s just an extension of emergency medicine.” I ask More whether he really believes that his members will be brought back to life. “I don’t know if it will ever happen,” he says, “but we’re breaking no laws of physics here. Who is to say that in 100 years we won’t have the medical tools – some kind of nanotechnology perhaps – that can fix cells at an individual level and repair what’s necessary to revive someone in good health.”
This is the central argument in favour of cryonics – the possibility, no matter how slim, that it offers a chance of survival. “We think of cryonics as a scientific experiment,” says More. “People that are buried or cremated are our control group, and so far, everyone in the control group has died.”
Facing the future
It is an expensive experiment, however. Cryopreserving your body will set you back up to $220,000, payable on death – often via life insurance, with Alcor as the beneficiary.
“People often say that the money would be better spent on family or given to charity,” says Ole Martin Moen, a philosopher and ethicist at the University of Oslo, Norway. “But what’s strange about this is that nobody complains when people spend money on expensive cancer treatments or long-term care – people drain the public healthcare budget trying to stay alive all the time,” he says. “So why complain when people want to spend their own money trying to live longer via cryonics?”
If you’re happy to fork out, there’s the big question of what kind of future you’d wake up to. “Even if you could get this technique up and running by some magical future science I believe you’d be a freak – you’d be so far out of it culturally, so lost, that you’d be at risk of being driven mad,” Caplan says.
With so many big unknowns, I leave Alcor and Timeship undecided on the utility of cryonics. What’s clear, though, is that the underlying research into cryopreservation is worthwhile. Whether it’s to help me have children, fix a future tennis injury or potentially even provide me with a new heart, I’d be first in line to freeze cells and tissues today that might help my future self live longer, and healthier.
On my way out of Alcor, I ask Drake whether he wants to be frozen, given that he has cryopreserved so many others. “Yes,” he says. “Not because I want to be immortal, I don’t think that’s possible. I just want to see if all this work was futile. I was the last person these people saw before they took their last breath. Will they see me again? Will they thank me? I don’t know if that will ever happen. But wouldn’t that be nice?”
What is death?
Death has been redefined several times over the past century. It was once considered the cessation of a heartbeat and breathing. Today it includes other scenarios, such as the cessation of brain activity. But even that’s not good enough for some.
“Death is a process, not a switch,” says Max More, president and CEO of the Alcor Life Extension Foundation in Scottsdale, Arizona. “If you go back 100 years and someone falls over in the street and stops breathing, doctors would say ‘this person is dead’. Today we can do CPR and defibrillation to restart their heart and they can be brought back to life. So when that doctor declared them dead, were they? With today’s standards, no they weren’t.” Instead, says More, what we’re really saying is “given today’s technology and the medicine I have available to me right now, there’s nothing more I can do for you”.
A definition that emerged in the 1990s in response to this problem is the information-theoretic definition of death. It states that a person is dead only when the structures that encode memory and personality are so disrupted that it is no longer possible in principle to restore them.
Therefore a person who is cryogenically frozen, with brain structures preserved in a state close to what they were before the pronouncement of clinical death, is not, by this definition, actually dead. So if the people frozen at Alcor aren’t dead, what are they? “There’s no good word for what they are,” says More (see Interview “I want to put your death on ice so that you can live again”). “Some people say they are de-animated.”
This article appeared in print under the headline “The big freeze”
Wrinkles, grey hair and niggling aches are normally regarded as an inevitable part of growing older, but now scientists claim that the ageing process may be reversible.
The team showed that a new form of gene therapy produced a remarkable rejuvenating effect in mice. After six weeks of treatment, the animals looked younger, had straighter spines and better cardiovascular health, healed quicker when injured, and lived 30% longer.
Juan Carlos Izpisua Belmonte, who led the work at the Salk Institute in La Jolla, California, said: “Our study shows that ageing may not have to proceed in one single direction. With careful modulation, ageing might be reversed.”
The genetic techniques used do not lend themselves to immediate use in humans, and the team predict that clinical applications are a decade away. However, the discovery raises the prospect of a new approach to healthcare in which ageing itself is treated, rather than the various diseases associated with it.
The findings also challenge the notion that ageing is simply the result of physical wear and tear over the years. Instead, they add to a growing body of evidence that ageing is partially – perhaps mostly – driven by an internal genetic clock that actively causes our body to enter a state of decline.
The scientists are not claiming that ageing can be eliminated, but say that in the foreseeable future treatments designed to slow the ticking of this internal clock could increase life expectancy.
“We believe that this approach will not lead to immortality,” said Izpisua Belmonte. “There are probably still limits that we will face in terms of complete reversal of ageing. Our focus is not only extension of lifespan but most importantly health-span.”
Wolf Reik, a professor of epigenetics at the Babraham Institute, Cambridge, who was not involved in the work, described the findings as “pretty amazing” and agreed that the idea of life-extending therapies was plausible. “This is not science fiction,” he said.
The rejuvenating treatment given to the mice was based on a technique that has previously been used to “rewind” adult cells, such as skin cells, back into powerful stem cells, very similar to those seen in embryos. These so-called induced pluripotent stem (iPS) cells have the ability to multiply and turn into any cell type in the body and are already being tested in trials designed to provide “spare parts” for patients.
The latest study is the first to show that the same technique can be used to partially rewind the clock on cells – enough to make them younger, but without the cells losing their specialised function.
“Obviously there is a logic to it,” said Reik. “In iPS cells you reset the ageing clock and go back to zero. Going back to zero, to an embryonic state, is probably not what you want, so you ask: where do you want to go back to?”
The treatment involved intermittently switching on the same four genes that are used to turn skin cells into iPS cells. The mice were genetically engineered in such a way that the four genes could be artificially switched on when the mice were exposed to a chemical in their drinking water.
The scientists tested the treatment in mice with a genetic disorder, called progeria, which is linked to accelerated ageing, DNA damage, organ dysfunction and dramatically shortened lifespan.
After six weeks of treatment, the mice looked visibly younger, skin and muscle tone improved and they lived 30% longer. When the same genes were targeted in cells, DNA damage was reduced and the function of the cellular batteries, called the mitochondria, improved.
“This is the first time that someone has shown that reprogramming in an animal can provide a beneficial effect in terms of health and extend their lifespan,” said Izpisua Belmonte.
Crucially, the mice did not have an increased cancer risk, suggesting that the treatment had successfully rewound cells without turning them all the way back into stem cells, which can proliferate uncontrollably in the body.
The potential for carcinogenic side-effects means that the first people to benefit are likely to be those with serious genetic conditions, such as progeria, where there is more likely to be a medical justification for experimental treatments. “Obviously the tumour risk is lurking in the background,” said Reik.
The approach used in the mice could not be readily applied to humans as it would require embryos to be genetically manipulated, but the Salk team believe the same genes could be targeted with drugs.
“These chemicals could be administered in creams or injections to rejuvenate skin, muscle or bones,” said Izpisua Belmonte. “We think these chemical approaches might be in human clinical trials in the next ten years.”
The findings are published in the journal Cell.
This article was amended on 16 December 2016. A previous version erroneously gave Wolf Reik’s affiliation as the University of Cambridge. This has now been corrected to the Babraham Institute, Cambridge.
A free AI-based scholarly search engine that aims to outdo Google Scholar is expanding its corpus of papers to cover some 10 million research articles in computer science and neuroscience, its creators announced on 11 November. Since its launch last year, it has been joined by several other AI-based academic search engines, most notably a relaunched effort from computing giant Microsoft.
Semantic Scholar, from the non-profit Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington, unveiled its new format at the Society for Neuroscience annual meeting in San Diego. Some scientists who were given an early view of the site are impressed. “This is a game changer,” says Andrew Huberman, a neurobiologist at Stanford University, California. “It leads you through what is otherwise a pretty dense jungle of information.”
The search engine first launched in November 2015, promising to sort and rank academic papers using a more sophisticated understanding of their content and context. The popular Google Scholar has access to about 200 million documents and can scan articles that are behind paywalls, but it searches merely by keywords. By contrast, Semantic Scholar can, for example, assess which citations to a paper are most meaningful, and rank papers by how quickly citations are rising—a measure of how ‘hot’ they are.
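The “hotness” idea, ranking papers by how quickly their citations are rising, can be sketched in a few lines. The windowing and scoring below are illustrative assumptions, not Semantic Scholar’s published formula:

```python
# Toy sketch of a citation "hotness" score. A paper is hot if its recent
# citation rate is accelerating. The 3-year windows are an assumption
# made for illustration only.

def citation_velocity(citations_per_year, window=3):
    """Average citations per year over the most recent `window` years."""
    recent = citations_per_year[-window:]
    return sum(recent) / len(recent)

def acceleration(citations_per_year, window=3):
    """Change in velocity between the latest window and the one before it."""
    earlier = citations_per_year[-2 * window:-window]
    if not earlier:
        return citation_velocity(citations_per_year, window)
    return citation_velocity(citations_per_year, window) - sum(earlier) / len(earlier)

# Two hypothetical papers: one heavily but steadily cited, one taking off.
steady = [10, 10, 10, 10, 10, 10]
rising = [2, 3, 5, 9, 16, 28]
print(acceleration(steady))  # 0.0 -- established, but not "hot"
print(acceleration(rising))  # positive -- citations are accelerating
```

A ranking like this would surface the second paper above the first even though the first has more total citations, which is the behaviour the article describes.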
When first launched, Semantic Scholar was restricted to 3 million papers in the field of computer science. Thanks in part to a collaboration with AI2’s sister organization, the Allen Institute for Brain Science, the site has now added millions more papers and new filters catering specifically for neurology and medicine; these filters enable searches based, for example, on which part of the brain or cell type a paper investigates, which model organisms were studied and what methodologies were used. Next year, AI2 aims to index all of PubMed and expand to all the medical sciences, says chief executive Oren Etzioni.
“The one I still use the most is Google Scholar,” says Jose Manuel Gómez-Pérez, who works on semantic searching for the software company Expert System in Madrid. “But there is a lot of potential here.”
Semantic Scholar is not the only AI-based search engine around, however. Computing giant Microsoft quietly released its own AI scholarly search tool, Microsoft Academic, to the public this May, replacing its predecessor, Microsoft Academic Search, which the company stopped adding to in 2012.
Microsoft’s academic search algorithms and data are available for researchers through an application programming interface (API) and the Open Academic Society, a partnership between Microsoft Research, AI2 and others. “The more people working on this the better,” says Kuansan Wang, who is in charge of Microsoft’s effort. He says that Semantic Scholar is going deeper into natural-language processing—that is, understanding the meaning of full sentences in papers and queries—but that Microsoft’s tool, which is powered by the semantic search capabilities of the firm’s web-search engine Bing, covers more ground, with 160 million publications.
Like Semantic Scholar, Microsoft Academic provides useful (if less extensive) filters, including by author, journal or field of study. And it compiles a leaderboard of most-influential scientists in each subdiscipline. These are the people with the most ‘important’ publications in the field, as judged by a recursive algorithm (freely available) that ranks a paper as important if it is cited by other important papers. The top neuroscientist for the past six months, according to Microsoft Academic, is Clifford Jack of the Mayo Clinic, in Rochester, Minnesota.
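A recursive importance score of this kind can be sketched as a PageRank-style iteration over the citation graph. This is an illustration only: the toy graph and damping parameter below are assumptions, and Microsoft Academic’s production algorithm is more elaborate.

```python
# PageRank-style "importance" over a citation graph: a paper is important
# if important papers cite it. Toy data; not Microsoft's actual algorithm.

def rank_papers(citations, damping=0.85, iterations=50):
    """citations maps each paper to the list of papers it cites."""
    papers = list(citations)
    n = len(papers)
    score = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in papers}
        for p, cited in citations.items():
            if cited:
                # A paper passes its importance to the papers it cites.
                share = damping * score[p] / len(cited)
                for q in cited:
                    new[q] += share
            else:
                # A paper with no outgoing citations spreads its score uniformly.
                for q in papers:
                    new[q] += damping * score[p] / n
        score = new
    return score

# Hypothetical graph: A and B both cite C; C cites A.
scores = rank_papers({"A": ["C"], "B": ["C"], "C": ["A"]})
print(max(scores, key=scores.get))  # C accumulates the most importance
```

The recursion converges because each pass redistributes a fixed fraction of the total score; the damping term keeps every paper's score above zero.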
Other scholars say that they are impressed by Microsoft’s effort. The search engine is getting close to combining the advantages of Google Scholar’s massive scope with the more-structured results of subscription bibliometric databases such as Scopus and the Web of Science, says Anne-Wil Harzing, who studies science metrics at Middlesex University, UK, and has analysed the new product. “The Microsoft Academic phoenix is undeniably growing wings,” she says. Microsoft Research says it is working on a personalizable version—where users can sign in so that Microsoft can bring applicable new papers to their attention or notify them of citations to their own work—by early next year.
Other companies and academic institutions are also developing AI-driven software to delve more deeply into content found online. The Max Planck Institute for Informatics, based in Saarbrücken, Germany, for example, is developing an engine called DeepLife specifically for the health and life sciences. “These are research prototypes rather than sustainable long-term efforts,” says Etzioni.
In the long term, AI2 aims to create a system that will answer science questions, propose new experimental designs or throw up useful hypotheses. “In 20 years’ time, AI will be able to read—and more importantly, understand—scientific text,” Etzioni says.
This article is reproduced with permission and was first published on November 11, 2016.
When I published Abundance: The Future is Better Than You Think in February 2012, I included about 80 charts in the back of the book showing very strong evidence that the world is getting better.
Over the last five years, this trend has continued and accelerated.
This blog includes additional “Evidence for Abundance” that you can share with friends and family to change their mindset.
We truly are living in the most exciting time to be alive.
By the way, if you have additional ‘Evidence for Abundance’ (charts, data, etc.) that you’ve encountered, please email them to me at email@example.com.
Why This Is Important
Before I share the new “data” with you, it’s essential that you understand why this matters.
We live in a world where we are constantly bombarded by negative news from every angle. If you turn on CNN (what I call the Crisis News Network), you’ll predominantly hear about death, terrorism, airplane crashes, bombings, financial crisis and political scandal.
I think of the news as a drug pusher, and negative news as their drug.
There’s a reason for this.
We humans are wired to pay 10x more attention to negative news than positive news.
Being able to rapidly notice and pay attention to negative news (like a predator or a dangerous fire) was an evolutionary advantage to keep you alive on the savannas of Africa millions of years ago.
Today, we still pay more attention to negative news, and the news media knows this. They take advantage of it to drive our eyeballs to their advertisers. Typically, good news networks fail as businesses.
It’s not that the news media is lying — it’s just not a balanced view of what’s going on in the world.
And because your mindset matters a lot, the purpose of my work and this post is to share with you the data supporting the positive side of the equation and to give you insight into some fundamental truths about where humanity really is going…
The truth is, driven by advances in exponential technologies, things are getting much better around the world at an accelerating rate.
NOTE: This is not to say that there aren’t major issues we still face, like climate crisis, religious radicalism, terrorism, and so on. It’s just that we forget and romanticize the world in centuries past — and life back then was short and brutal.
My personal mission, and that of XPRIZE and Singularity University, is to help build a “bridge to abundance”: a world in which we are able to meet the basic needs of every man, woman and child.
So, now, let’s look at 10 new charts.
More Evidence for Abundance
Below are 10 powerful charts illustrating the positive developments we’ve made in recent years.
1. Living in Absolute Poverty (1981-2011)
Declining rates of absolute poverty (Source: Our World in Data, Max Roser)
Absolute poverty is defined as living on less than $1.25/day. Over the last 30 years, the share of the global population living in absolute poverty has declined from 53% to under 17%.
While there is still room for improvement (especially in sub-Saharan Africa and South Asia), the quality of life in every region above has been steadily improving and will continue to do so. Over the next 20 years, we have the ability to extinguish absolute poverty on Earth.
2. Child Labor Is on the Decline (2000-2020)
Child Labor on the decline (Source: International Labor Organization)
This chart depicts the actual and projected changes in the number of children (in millions) in hazardous work conditions and performing child labor between 2000 and 2020.
As you can see, in the last 16 years, the number of children in these conditions has been reduced by more than 50%. As we head to a world of low-cost robotics, where such machines can operate far faster, far cheaper and around the clock, the basic rationale for child labor will completely disappear, and it will drop to zero.
3. Income Spent on Food
Income spent on food (Source: USDA, Economic Research Service, Food Expenditure Series)
This chart shows the percent per capita of disposable income spent on food in the U.S. from 1960 to 2012.
If you focus on the blue line, ‘Food at home,’ you can see that over the last 50 years, the percent of our disposable income spent on food has dropped by more than 50 percent, from 14% to less than 6%.
This is largely a function of better food production technology, distribution processes and policies that have reduced the cost of food. We’re demonetizing food rapidly.
4. Infant Mortality Rates
Infant Mortality Rate (Source: Devpolicy, UN Interagency Group for Child Mortality Est. 2013)
This chart depicts global mortality rates for children under five years old between 1990 and 2012, based on the number of deaths per 1,000 live births.
In the last 25 years, under-five mortality rates have dropped by 50%. Infant mortality rates and neonatal mortality rates have also dropped significantly.
And this is just in the last 25 years. If you looked at the last 100 years, which I talk about in Abundance, the improvements have been staggering.
5. Annual Cases of Guinea Worm
Guinea worm cases (Source: GiveWell, Carter Center)
Guinea worm is a nasty parasite that used to affect over 3.5 million people only 30 years ago. Today, thanks to advances in medical technologies, research and therapeutics, the parasite has almost been eradicated. In 2008, there were just 4,647 cases.
I’m sharing the chart above because it represents humanity’s growing ability to address and cure diseases that have plagued us for ages. Expect that through technologies such as gene drive/CRISPR-Cas9 and other genomic technologies, we will rapidly begin to eliminate dozens or hundreds of similar plagues.
6. Teen Birth Rates in the United States
Teen birth rates (Source: Vox, Centers for Disease Control)
The chart above shows the dramatic decline in the teen (ages 15 to 19) birth rate in the United States since 1950. At its peak, 89.1 out of every 1,000 teenage women were giving birth. Today, the figure has dropped below 29 per 1,000.
This is largely a function of the population becoming better educated, the cost of birth control being reduced and becoming more widely available, and cultural shifts in the United States.
7. Homicide Rates in Western Europe
Homicide rates in Europe (Source: Our World in Data, Max Roser & Manuel Eisner)
The chart above shows the number of homicides per 100,000 people per year in five Western European regions from 1300 to 2010.
As you can see, Western Europe used to be a very dangerous place to live. Over the last 700+ years, the number of homicides per 100,000 people has decreased to almost zero.
It is important to look back this far (700 years) because we humans lose perspective and tend to romanticize the past, but forget how violent life truly was in, say, the Middle Ages, or even just a couple of hundred years ago.
We have made dramatic and positive changes. On an evolutionary time scale, 700 years is NOTHING, and our progress as a species is impressive.
8. U.S. Violent Crime Rates, 1973 – 2010
U.S. violent crime rates (Source: Gallup, Bureau of Justice Statistics)
In light of the recent terrorist shooting in Orlando, and the school shootings in years past, it is sometimes easy to lose perspective.
The truth is, in aggregate, we’ve made significant progress in reducing violent crimes in the United States in the last 50 years.
As recently as the early 80s and mid-90s, there were over 50 violent crime victims per 1,000 individuals. Since then, this number has dropped more than threefold, to 15 victims per 1,000 people.
We continue to make our country (and the world) a safer place to live.
9. Average Years of Education, 1820-2003
Average years of education (Source: Our World in Data, Max Roser)
I love this chart. In the last 200 years, the average number of ‘years of education’ received by people worldwide has increased dramatically.
In the U.S. in 1820, the average person received less than 2 years of education. These days, it’s closer to 21 years of education, a 10X improvement.
We are rapidly continuing the demonetization, dematerialization and democratization of education. Today, I’m very proud of the $15 million Global Learning XPRIZE as a major step in that direction.
Within the next 20 years, the best possible education on Earth will be delivered by AI for free — and the quality will be the same for the son or daughter of a billionaire as it is for the son or daughter of the poorest parents on the planet.
10. Global Literacy Rates
Global literacy rates (Source: Our World in Data, Max Roser)
Along those same lines, the extraordinary chart above shows how global literacy rates have increased from around 10% to close to 100% in the last 500 years.
This is both a function of technology democratizing access to education, as well as abundance giving us the freedom of time to learn.
Education and literacy are core to my abundance thesis — a better-educated world lifts all boats.
Again, if you have other great examples of abundance (charts and data), please send them to me at firstname.lastname@example.org.
We live in the most exciting time to be alive! Enjoy it.
Microsoft has vowed to “solve the problem of cancer” within a decade by using ground-breaking computer science to crack the code of diseased cells so they can be reprogrammed back to a healthy state.
In a dramatic change of direction for the technology giant, the company has assembled a “small army” of the world’s best biologists, programmers and engineers who are tackling cancer as if it were a bug in a computer system.
This summer Microsoft opened its first wet laboratory where it will test out the findings of its computer scientists who are creating huge maps of the internal workings of cell networks.
Microsoft opened its first ‘wet’ laboratory this summer
The researchers are even working on a computer made from DNA which could live inside cells and look for faults in bodily networks, like cancer. If it spotted cancerous changes it would reboot the system and clear out the diseased cells.
Chris Bishop, laboratory director at Microsoft Research, said: “I think it’s a very natural thing for Microsoft to be looking at because we have tremendous expertise in computer science and what is going on in cancer is a computational problem.
“It’s not just an analogy, it’s a deep mathematical insight. Biology and computing are disciplines which seem like chalk and cheese but which have very deep connections on the most fundamental level.”
The biological computation group at Microsoft are developing molecular computers built from DNA which act like a doctor to spot cancer cells and destroy them.
Andrew Phillips, head of the group, said: “It’s long term, but… I think it will be technically possible in five to 10 years time to put in a smart molecular system that can detect disease.”
The programming principles and tools group has already developed software that mimics the healthy behavior of a cell, so that it can be compared to that of a diseased cell, to work out where the problem occurred and how it can be fixed.
The Bio Model Analyser software is already being used to help researchers understand how to treat leukemia more effectively.
Dr Jasmin Fisher, senior researcher and an associate professor at Cambridge University, said: “If we are able to control and regulate cancer then it becomes like any chronic disease and then the problem is solved.”
“I think for some of the cancers five years, but definitely within a decade. Then we will probably have a century free of cancer.”
She believes that in the future smart devices will monitor health continually and compare it to how the human body should be operating, so that it can quickly detect problems.
“My own personal vision is that in the morning you wake up, you check your email and at the same time all of our genetic data, our pulse, our sleep patterns, how much we exercised, will be fed into a computer which will check your state of well-being and tell you how prone you are to getting flu, or some other horrible thing,” she added.
“In order to get there we need these kind of computer models which mimic and model the fundamental processes that are happening in our bodies.
“Under normal development cells divide and they die and there is a certain balance, the problems start when that balance is broken and that’s how we had uncontrolled proliferation and tumours.
“If we could have all of that sitting on your personal computer and monitoring your health state then it will alert us when something is coming.”
Improved scanning technology offers hope
Patients undergoing radiotherapy could see treatment slashed from hours to just minutes with a new innovation to quickly map the size of a tumour.
Currently radiologists must scan a tumour and then painstakingly draw the outline of the cancer on dozens of sections by hand to create a 3D map before treatment, a process which can take up to four hours.
They also must outline nearby important organs to make sure they are protected from the blast of radiation.
But Microsoft engineers have developed a programme which can delineate a tumour within minutes, meaning treatment can happen immediately.
The programme can also show doctors how effective each treatment has been, so the dose can be altered depending on how much the tumour has been shrunk.
“Eyeballing works very well for diagnosing,” said Antonio Criminisi, a machine learning and computer vision expert who heads radiomics research in Microsoft’s Cambridge, UK, lab.
“Expert radiologists can look at an image – say a scan of someone’s brain – and be able to say in two seconds, ‘Yes, there’s a tumour. No, there isn’t a tumour.’ But delineating a tumour by hand is not very accurate.”
The system could eventually evaluate 3D scans pixel by pixel to tell the radiologist exactly how much the tumour has grown, shrunk or changed shape since the last scan.
It also could provide information about things like tissue density, to give the radiologist a better sense of whether something is more likely a cyst or a tumour. And it could provide more fine-grained analysis of the health of cells surrounding a tumour.
“Doing all of that by eye is pretty much impossible,” added Dr Criminisi.
The images could also be 3D printed so that surgeons could practise a tricky operation, such as removing a hard-to-reach brain tumour, before surgery.
Artificial intelligence (AI) has already transformed our lives — from the autonomous cars on the roads to the robotic vacuums and smart thermostats in our homes. Over the next 15 years, AI technologies will continue to make inroads in nearly every area of our lives, from education to entertainment, health care to security.
The question is, are we ready? Do we have the answers to the legal and ethical quandaries that will certainly arise from the increasing integration of AI into our daily lives? Are we even asking the right questions?
Now, a panel of academics and industry thinkers has looked ahead to 2030 to forecast how advances in AI might affect life in a typical North American city and spark discussion about how to ensure the safe, fair, and beneficial development of these rapidly developing technologies.
“Now is the time to consider the design, ethical, and policy challenges that AI technologies raise,” said Barbara Grosz, the Harvard University computer scientist who chairs the AI100 standing committee. “If we tackle these issues now and take them seriously, we will have systems that are better designed in the future and more appropriate policies to guide their use.”
“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas, Austin, and chair of the report. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”
The report investigates eight areas of human activity in which AI technologies are already affecting urban life and will be even more pervasive by 2030: transportation, home/service robots, health care, education, entertainment, low-resource communities, public safety and security, and employment and the workplace.
Some of the biggest challenges in the next 15 years will be creating safe and reliable hardware for autonomous cars and health care robots; gaining public trust for AI systems, especially in low-resource communities; and overcoming fears that the technology will marginalize humans in the workplace.
Issues of liability and accountability also arise with questions such as: Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can we prevent AI applications from being used for racial discrimination or financial cheating?
The report doesn’t offer solutions but rather is intended to start a conversation between scientists, ethicists, policymakers, industry leaders, and the general public.
Grosz said she hopes the AI 100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”