July 10, 2016
This open-source robot takes care of planting the seeds, pouring water and removing weeds. You can also design your own garden with various types of seeds.
July 10, 2016
Bill Gates is excited about the rise of artificial intelligence but acknowledges that the arrival of machines with greater-than-human capabilities will create some unique challenges.
After years of working on the building blocks of speech recognition and computer vision, Gates said enough progress has been made to ensure that in the next 10 years there will be robots to do tasks like driving and warehouse work as well as machines that can outpace humans in certain areas of knowledge.
“The dream is finally arriving,” Gates said, speaking with wife Melinda Gates on Wednesday at the Code Conference. “This is what it was all leading up to.”
However, as he said in an interview with Recode last year, such machine capabilities will pose two big problems.
The first is that machines will eliminate many existing types of jobs. Gates said that creates a need for a lot of retraining, but he noted that until schools have class sizes under 10 and people can retire at a reasonable age and take ample vacation, he isn’t worried about a lack of need for human labor.
The second issue is, of course, making sure humans remain in control of the machines. Gates has talked about that in the past, saying that he plans to spend time with people who have ideas on how to address that issue, noting work being done at Stanford, among other places.
Melinda Gates noted that you can tell a lot about where her husband’s interest is by the books he has been reading. “There have been a lot of AI books,” she said.
July 10, 2016
Scientists are now contemplating the fabrication of a human genome, meaning they would use chemicals to manufacture all the DNA contained in human chromosomes.
The prospect is spurring both intrigue and concern in the life sciences community because it might be possible, such as through cloning, to use a synthetic genome to create human beings without biological parents.
While the project is still in the idea phase, and also involves efforts to improve DNA synthesis in general, it was discussed at a closed-door meeting on Tuesday at Harvard Medical School in Boston. The nearly 150 attendees were told not to contact the news media or to post on Twitter during the meeting.
Organizers said the project could have a big scientific payoff and would be a follow-up to the original Human Genome Project, which was aimed at reading the sequence of the three billion chemical letters in the DNA blueprint of human life. The new project, by contrast, would involve not reading, but rather writing the human genome — synthesizing all three billion units from chemicals.
But such an attempt would raise numerous ethical issues. Could scientists create humans with certain kinds of traits, perhaps people born and bred to be soldiers? Or might it be possible to make copies of specific people?
“Would it be O.K., for example, to sequence and then synthesize Einstein’s genome?” Drew Endy, a bioengineer at Stanford, and Laurie Zoloth, a bioethicist at Northwestern University, wrote in an essay criticizing the proposed project. “If so, how many Einstein genomes should be made and installed in cells, and who would get to make them?”
George Church, a professor of genetics at Harvard Medical School and an organizer of the proposed project, said there had been a misunderstanding. The project was not aimed at creating people, just cells, and would not be restricted to human genomes, he said. Rather it would aim to improve the ability to synthesize DNA in general, which could be applied to various animals, plants and microbes.
“They’re painting a picture which I don’t think represents the project,” Dr. Church said in an interview.
He said the meeting was closed to the news media, and people were asked not to tweet because the project organizers, in an attempt to be transparent, had submitted a paper to a scientific journal. They were therefore not supposed to discuss the idea publicly before publication. He and other organizers said ethical aspects have been amply discussed since the beginning.
The project was initially called HGP2: The Human Genome Synthesis Project, with HGP referring to the Human Genome Project. An invitation to the meeting at Harvard said that the primary goal “would be to synthesize a complete human genome in a cell line within a period of 10 years.”
But by the time the meeting was held, the name had been changed to “HGP-Write: Testing Large Synthetic Genomes in Cells.”
The project does not yet have funding, Dr. Church said, though various companies and foundations would be invited to contribute, and some have indicated interest. The federal government will also be asked. A spokeswoman for the National Institutes of Health declined to comment, saying the project was in too early a stage.
Besides Dr. Church, the organizers include Jef Boeke, director of the institute for systems genetics at NYU Langone Medical Center, and Andrew Hessel, a self-described futurist who works at the Bay Area software company Autodesk and who first proposed such a project in 2012.
Scientists and companies can now change the DNA in cells, for example, by adding foreign genes or changing the letters in the existing genes. This technique is routinely used to make drugs, such as insulin for diabetes, inside genetically modified cells, as well as to make genetically modified crops. And scientists are now debating the ethics of new technology that might allow genetic changes to be made in embryos.
But synthesizing a gene, or an entire genome, would provide the opportunity to make even more extensive changes in DNA.
For instance, companies are now using organisms like yeast to make complex chemicals, like flavorings and fragrances. That requires adding not just one gene to the yeast, as in insulin production, but numerous genes in order to create an entire chemical production process within the cell. With that much tinkering needed, it can be easier to synthesize the DNA from scratch.
Right now, synthesizing DNA is difficult and error-prone. Existing techniques can reliably make strands only about 200 base pairs long (base pairs are the chemical units of DNA). A single gene can be hundreds or thousands of base pairs long, so synthesizing one requires splicing together multiple 200-unit segments.
But the cost and capabilities are rapidly improving. Dr. Endy of Stanford, who is a co-founder of a DNA synthesis company called Gen9, said the cost of synthesizing genes has plummeted from $4 per base pair in 2003 to 3 cents now. But even at that rate, the cost for three billion letters would be $90 million. He said if costs continued to decline at the same pace, that figure could reach $100,000 in 20 years.
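Those cost figures can be sanity-checked with a few lines of arithmetic. The prices, dates, and genome size are the ones quoted above; the compound-decline projection is a rough sketch, not Dr. Endy’s own calculation:

```python
# Back-of-envelope check of the DNA synthesis cost figures quoted above.

GENOME_BP = 3_000_000_000  # base pairs in a human genome

# Cost per base pair fell from $4 (2003) to $0.03 (2016).
years_past = 2016 - 2003
annual_decline = 1 - (0.03 / 4.00) ** (1 / years_past)
print(f"Implied annual cost decline: {annual_decline:.0%}")  # ~31% per year

# At 3 cents per base pair, a whole genome costs:
cost_now = GENOME_BP * 0.03
print(f"Whole-genome synthesis today: ${cost_now:,.0f}")  # $90,000,000

# Projecting the same rate of decline forward 20 years:
cost_future = cost_now * (1 - annual_decline) ** 20
print(f"Projected cost in 20 years: ${cost_future:,.0f}")
```

Projecting the implied decline forward 20 years lands in the tens of thousands of dollars, the same ballpark as the $100,000 figure Dr. Endy cited.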
J. Craig Venter, the genetic scientist, synthesized a bacterial genome consisting of about a million base pairs. The synthetic genome was inserted into a cell and took control of that cell. While his first synthetic genome was mainly a copy of an existing genome, Dr. Venter and colleagues this year synthesized a more original bacterial genome, about 500,000 base pairs long.
Dr. Boeke is leading an international consortium that is synthesizing the genome of yeast, which consists of about 12 million base pairs. The scientists are making changes, such as deleting stretches of DNA that do not have any function, in an attempt to make a more streamlined and stable genome.
But the human genome is more than 200 times as large as that of yeast and it is not clear if such a synthesis would be feasible.
Jeremy Minshull, chief executive of DNA2.0, a DNA synthesis company, questioned if the effort would be worth it.
“Our ability to understand what to build is so far behind what we can build,” said Dr. Minshull, who was invited to the meeting at Harvard but did not attend. “I just don’t think that being able to make more and more and more and cheaper and cheaper and cheaper is going to get us the understanding we need.”
July 10, 2016
Japanese scientists have reported the first successful skin-to-eye stem cell transplant in humans, where stem cells derived from a patient’s skin were transplanted into her eye to partially restore lost vision.
The patient, a 70-year-old woman diagnosed with age-related macular degeneration (AMD) – the leading cause of vision impairment in older people – received the experimental treatment back in 2014 as part of a pilot study. Now, closing in on two years after the transplant took place, the scientists are sharing the results.
The researchers took a small piece of skin from her arm (4 mm in diameter) and modified its cells, effectively reprogramming them into induced pluripotent stem cells (iPSC).
Pluripotent stem cells have the ability to differentiate into almost any type of tissue within the body, which is why skin cells taken from an arm can be repurposed into retinal tissue.
Once the cells were coaxed to develop into retinal pigment epithelium (RPE), they were cultured in the lab to grow into an ultra-thin sheet, which was then transplanted behind the retina of the patient.
“I am very pleased that there were no complications with the transplant surgery,” said project leader Masayo Takahashi from the Riken Centre for Developmental Biology in 2014. “However, this is only the first step for use of iPSC in regenerative medicine. I have renewed my resolve to continue forging ahead until this treatment becomes available to many patients.”
While it’s definitely still early days for this experimental procedure, the signs so far are promising.
The team held off on reporting their results until now in order to monitor the patient’s progress and gauge how well the modified cells lasted. They’ve now reported that the transplanted cells survived without any adverse events for over a year, resulting in slightly improved vision for the patient.
“The transplanted RPE sheet survived well without any findings [or] indication of immune rejections nor adverse unexpected proliferation for one and a half years, achieving our primary purpose of this pilot study,” the team said in a statement this week.
“I am glad I received the treatment,” the patient told The Japan Times last year. “I feel my eyesight has brightened and widened.”
While it’s not a complete restoration of the patient’s vision, the study shows a significant step forward in the use of induced pluripotent stem cells – which scientists think might be used to treat a range of illnesses, such as Parkinson’s and Alzheimer’s disease, not just vision problems.
A number of other studies are also showing positive results in restoring sight with stem cell treatments. Earlier in the year, researchers in China and the US were able to improve the vision of babies with cataracts by manipulating protein levels in stem cells.
Even more remarkably, a woman in Baltimore who was blind for more than five years had some of her vision restored after stem cells were extracted from her bone marrow and injected into her eyes. While many questions remain about that particular treatment, there’s no denying that stem cell research is a hugely exciting field of study.
The findings were presented at the 2016 annual meeting of the Association for Research in Vision and Ophthalmology (ARVO) in Seattle.
July 10, 2016
Across religions and cultures, humans have attempted to bridge the gap between life and death. The human death rate is 100%. Everybody dies. Yet, that hasn’t stopped us from trying to postpone death or to find ways to reverse it.
In countless works spanning every genre of literature and film, death and exploration of the afterlife has been a recurring theme. Orpheus, a Greek mythological figure, ventures to the underworld to retrieve his recently departed wife, Eurydice. One of the hallmark works of the Renaissance is Dante Alighieri’s Divine Comedy, a poem detailing the journey through hell, purgatory and heaven. While the humanities have served to muse on the magnitude of our ignorance when it comes to death, science has steadily progressed in finding ways to beat it.
The biotech firm BioQuark was recently granted permission by the National Institutes of Health to begin clinical trials on 20 brain-dead patients on life support. In an attempt to bring them back from the dead, scientists will test a variety of therapies over the course of a month—from injecting stem cells to deploying nerve-stimulating techniques often used on coma patients.
“Even if you could get cells to grow—even if you could replicate some semblance of the architecture which existed previously—replicating all of those neurons and all of those connections in a way that makes it possible even for basic brain function to continue, that is a huge challenge,” cautioned Dr. David Casarett, Professor of Medicine at the University of Pennsylvania Perelman School of Medicine, in an interview with the Observer. In 2014, Dr. Casarett wrote Shocked: Adventures in Bringing Back The Recently Dead. The clinical trials, he noted, also raise ethical concerns.
“You don’t really know what is going to happen when they start trying to regrow neurons,” he explained. “One possibility is absolutely nothing happens. Another possibility is function increases to varying degrees in varying people, leaving people in a strange in-between state.” These are decisions to be made by consenting family members, as one potential outcome could leave participants in a state somewhere in between brain-dead and comatose. “You wouldn’t necessarily be doing the patient or their family any favors by creating that condition.”
Less ambitious—but just as controversial—are other research projects testing death as a means to buy valuable time to mend life-threatening injuries.
A clinical trial is currently underway at the University of Pittsburgh Medical Center in which emergency room patients have their blood drawn and replaced with a cold saline solution to induce hypothermia, slowing their metabolism so that transport and resuscitation efforts can be more effective. Similar procedures have shown high success rates in dogs and pigs, without functional complications. Hydrogen sulfide has also been used to induce the same effect in mice, an approach that doesn’t demand the equipment and cooling process needed to induce hypothermia. The jury is still out on whether this method could be applied to humans.
The use of cryonics, for now, borders on science fiction, but that hasn’t stopped scientists and wealthy enthusiasts from trying to make it a reality.
Humai, an L.A.-based robotics company, hopes to freeze human brains after death with the expectation that technology will soon catch up, allowing the brain to be resurrected in an artificial body. Neuroscientists have repeatedly cautioned against lending cryonics credence, but scientific research has blurred the definition of death and the consensus on when it occurs.
For centuries, death was called at the moment the heart stopped beating. However, medicine has evolved to the point that cardiopulmonary resuscitation (CPR) is now a common life-saving technique incorporated in basic first aid training, along with more advanced forms of resuscitation, like defibrillators, that can restart the heart. Several cases have been cited in which a person in cardiac arrest was brought back to life hours after technically dying, when cooling processes and correct resuscitation procedures were implemented. According to a 2012 study published in Nature, skeletal muscle stem cells can retain their ability to regenerate for up to 17 days after death, redefining death as occurring in steps rather than at one single moment.
Despite groundbreaking progress in the medical field to extend life expectancy and cure illnesses and ailments which were once considered to be fatal, the human imagination will always far outpace the realms of what is logically applicable. Efforts to bring back the dead and prolong life are embedded in our biology, as exhibited by humanity’s obsession with mortality. There will always be limitations to how far science can push back against death, but the ways we figure out how to do so—in theory, fantasy and practical application—are certainly thought provoking.
June 28, 2016
Results from quantitative MRI and neuropsychological testing show unprecedented improvements in ten patients with early Alzheimer’s disease (AD) or its precursors following treatment with a programmatic and personalized therapy. Results from an approach dubbed metabolic enhancement for neurodegeneration are now available online in the journal Aging.
The study, which comes jointly from the Buck Institute for Research on Aging and the UCLA Easton Laboratories for Neurodegenerative Disease Research, is the first to objectively show that memory loss in patients can be reversed, and improvement sustained, using a complex, 36-point therapeutic personalized program that involves comprehensive changes in diet, brain stimulation, exercise, optimization of sleep, specific pharmaceuticals and vitamins, and multiple additional steps that affect brain chemistry.
“All of these patients had either well-defined mild cognitive impairment (MCI), subjective cognitive impairment (SCI) or had been diagnosed with AD before beginning the program,” said author Dale Bredesen, MD, a professor at the Buck Institute and professor at the Easton Laboratories for Neurodegenerative Disease Research at UCLA, who noted that patients who had had to discontinue work were able to return to work and those struggling at their jobs were able to improve their performance. “Follow up testing showed some of the patients going from abnormal to normal.”
One of the more striking cases involved a 66-year-old professional man whose neuropsychological testing was compatible with a diagnosis of MCI and whose PET scan showed reduced glucose utilization indicative of AD. An MRI showed hippocampal volume at only the 17th percentile for his age. After 10 months on the protocol, a follow-up MRI showed a dramatic increase of his hippocampal volume to the 75th percentile, an associated absolute increase in volume of nearly 12 percent.
In another instance, a 69-year-old professional man and entrepreneur, who was in the process of shutting down his business, went on the protocol after 11 years of progressive memory loss. After six months, his wife, his co-workers and he all noted improvement in his memory. A life-long ability to add columns of numbers rapidly in his head returned, and he reported an ability to remember his schedule and recognize faces at work. After 22 months on the protocol he returned for follow-up quantitative neuropsychological testing; the results showed marked improvements in all categories, with his long-term recall increasing from the 3rd to the 84th percentile. He is expanding his business.
Another patient, a 49-year-old woman who noted progressive difficulty with word finding and facial recognition, went on the protocol after undergoing quantitative neuropsychological testing at a major university. She had been told she was in the early stages of cognitive decline and was therefore ineligible for an Alzheimer’s prevention program. After several months on the protocol she noted a clear improvement in recall, reading, navigating, vocabulary, mental clarity and facial recognition. Her foreign language ability had returned. Nine months after beginning the program she repeated the neuropsychological testing at the same university site. She no longer showed evidence of cognitive decline.
All but one of the ten patients included in the study are at genetic risk for AD, carrying at least one copy of the APOE4 allele. Five of the patients carry two copies of APOE4, which gives them a 10- to 12-fold increased risk of developing AD. “We’re entering a new era,” said Bredesen. “The old advice was to avoid testing for APOE because there was nothing that could be done about it. Now we’re recommending that people find out their genetic status as early as possible so they can go on prevention.” Sixty-five percent of the Alzheimer’s cases in this country involve APOE4; seven million people carry two copies of the allele.
Bredesen’s systems-based approach to reversing memory loss follows the abject failure of monotherapies designed to treat AD and the success of combination therapies in treating other chronic illnesses such as cardiovascular disease, cancer and HIV. Bredesen says decades of biomedical research, both in his and other labs, has revealed that an extensive network of molecular interactions is involved in AD pathogenesis, suggesting that a broader-based therapeutic approach may be more effective. “Imagine having a roof with 36 holes in it, and your drug patched one hole very well. The drug may have worked, and a single ‘hole’ may have been fixed, but you still have 35 other leaks, and so the underlying process may not be affected much,” Bredesen said. “We think addressing multiple targets within the molecular network may be additive, or even synergistic, and that such a combinatorial approach may enhance drug candidate performance, as well.”
While encouraged by the results of the study, Bredesen admits more needs to be done. “The magnitude of improvement in these ten patients is unprecedented, providing additional objective evidence that this programmatic approach to cognitive decline is highly effective,” Bredesen said. “Even though we see the far-reaching implications of this success, we also realize that this is a very small study that needs to be replicated in larger numbers at various sites.” Plans for larger studies are underway.
Cognitive decline is often listed as the major concern of older adults. Already, Alzheimer’s disease affects approximately 5.4 million Americans and 30 million people globally. Without effective prevention and treatment, the prospects for the future are bleak. By 2050, it’s estimated that 160 million people globally will have the disease, including 13 million Americans, leading to potential bankruptcy of the Medicare system. Unlike several other chronic illnesses, Alzheimer’s disease is on the rise; recent estimates suggest that AD has become the third leading cause of death in the United States, behind cardiovascular disease and cancer.
June 28, 2016
Your brain has approximately 86 billion neurons joined together through some 100 trillion connections, giving rise to a complex biological machine capable of pulling off amazing feats. Yet it’s difficult to truly grasp the sophistication of this interconnected web of cells.
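To make that connectivity figure concrete, a one-line calculation using the article’s counts gives the average number of connections per neuron:

```python
# The brain's connectivity figures, made concrete: average connections per neuron.
neurons = 86e9        # ~86 billion neurons
connections = 100e12  # ~100 trillion connections
print(f"Average connections per neuron: {connections / neurons:,.0f}")  # ~1,163
```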
Now, a new work of art based on actual scientific data provides a glimpse into this complexity.
The 8-by-12-foot gold panel, depicting a sagittal slice of the human brain, blends hand drawing and multiple human brain datasets from several universities. The work was created by Greg Dunn, a neuroscientist-turned-artist, and Brian Edwards, a physicist at the University of Pennsylvania, and goes on display Saturday at The Franklin Institute in Philadelphia. There will be a public unveiling and a lecture by the artists at 3 p.m.
“The human brain is insanely complicated,” Dunn said. “Rather than being told that your brain has 80 billion neurons, you can see with your own eyes what the activity of 500,000 of them looks like, and that has a much greater capacity to make an emotional impact than does a factoid in a book someplace.”
To reflect the neural activity within the brain, Dunn and Edwards have developed a technique called micro-etching: They paint the neurons by making microscopic ridges on a reflective sheet in such a way that they catch and reflect light from certain angles. When the light source moves in relation to the gold panel, the image appears to be animated, as if waves of activity are sweeping through it.
First, the visual cortex at the back of the brain lights up, then light propagates to the rest of the brain, gleaming and dimming in various regions — just as neurons would signal inside a real brain when you look at a piece of art.
That’s the idea behind the name of Dunn and Edwards’ piece: “Self Reflected.” It’s basically an animated painting of your brain perceiving itself in an animated painting.
To make the artwork resemble a real brain as closely as possible, the artists used actual MRI scans and human brain maps, but the datasets were not detailed enough. “There were a lot of holes to fill in,” Dunn said. Several students working with the duo explored scientific literature to figure out what types of neurons are in a given brain region, what they look like and what they are connected to. Then the artists drew each neuron.
Dunn and Edwards then used data from DTI scans — a special type of imaging that maps bundles of white matter connecting different regions of the brain. This completed the picture, and the results were scanned into a computer.
Using photolithography, the artists etched the image onto a panel covered with gold leaf. Then, they switched on the lights.
“A lot of times in science and engineering, we take a complex object and distill it down to its bare essential components, and study that component really well,” Edwards said. But when it comes to the brain, understanding one neuron is very different from understanding how billions of neurons work together and give rise to consciousness.
“Of course, we can’t explain consciousness through an art piece, but we can give a sense of the fact that it is more complicated than just a few neurons,” he added.
The artists hope their work will inspire people, even professional neuroscientists, “to take a moment and remember that our brains are absolutely insanely beautiful and they are buzzing with activity every instant of our lives,” Dunn said. “Everybody takes it for granted, but we have, at the very core of our being, the most complex machine in the entire universe.”
June 18, 2016
While traveling in Western Samoa many years ago, I met a young Harvard University graduate student researching ants. He invited me on a hike into the jungles to assist with his search for the tiny insect. He told me his goal was to discover a new species of ant, in hopes it might be named after him one day.
Whenever I look up at the stars at night pondering the cosmos, I think of my ant collector friend, kneeling in the jungle with a magnifying glass, scouring the earth. I think of him, because I believe in aliens—and I’ve often wondered if aliens are doing the same to us.
Believing in aliens—or insanely smart artificial intelligences existing in the universe—has become very fashionable in the last 10 years. Discussing its central dilemma, the Fermi paradox, has become even more so. The paradox starts from the observation that the universe is very big—with maybe a trillion galaxies that might contain 500 billion stars and planets each—and out of that insanely large number, it would take only a tiny fraction of them to have habitable planets capable of bringing forth life.
Whatever you think, the numbers point to the insane fact that aliens don’t just exist, but probably billions of species of aliens exist. And the Fermi paradox asks: With so many alien civilizations out there, why haven’t we found them? Or why haven’t they found us?
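A quick back-of-envelope pass over those numbers shows why the argument feels so forceful. The galaxy and star counts are the article’s; the habitable fractions are purely illustrative assumptions, not estimates from the text:

```python
# Rough scale check of the numbers in the Fermi-paradox argument above.
galaxies = 1e12          # "maybe a trillion galaxies"
stars_per_galaxy = 5e11  # "500 billion stars and planets each"
total_stars = galaxies * stars_per_galaxy
print(f"Stars in the observable universe (article's figures): {total_stars:.0e}")  # 5e+23

# Even an absurdly small life-bearing fraction leaves a huge count of worlds.
for fraction in (1e-9, 1e-12, 1e-15):
    print(f"fraction {fraction:.0e} -> {total_stars * fraction:.0e} living worlds")
```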
The Fermi paradox’s Wikipedia page has dozens of answers about why we haven’t heard from superintelligent aliens, ranging from “it is too expensive to spread physically throughout the galaxy” to “intelligent civilizations are too far apart in space or time” to crazy talk like “it is the nature of intelligent life to destroy itself.”
Given that our planet is only 4.5 billion years old in a universe that many experts think is pushing 14 billion years, it’s safe to say most aliens are way smarter than us. After all, intelligence comes in vastly different grades. There’s ant-level intelligence. There’s human intelligence. And then there’s the hypothetical intelligence of aliens—presumably ones who have reached the singularity.
The singularity, Kevin Kelly, co-founder of Wired magazine, says, is the point at which “all the change in the last million years will be superseded by the change in the next five minutes.”
If Kelly is correct about how fast the singularity accelerates change—and I think he is—then in all probability, many alien species are trillions of times more intelligent than people.
Put yourself in the shoes of an extraterrestrial intelligence and consider what that means. If you were a trillion times smarter than a human being, would you notice the human race at all? And if you did, would you care? After all, do you notice the 100 trillion or more microbes in your body? No, unless they happen to cause you health problems, as E. coli and other pathogens do. More on that later.
One of the big problems with our understandings of aliens has to do with Hollywood. Movies and television have led us to think of aliens as green, slimy creatures traveling around in flying saucers. Nonsense. I think if advanced aliens have just 250 years more evolution than us, they almost certainly won’t be static physical beings anymore—at least not in the molecular sense. They also won’t be artificial intelligences living in machines either, which is what I believe humans are evolving into this century. No, becoming machine intelligence is just another passing phase of evolution—one that might only last a few decades for humans, if that.
Truly advanced intelligence will likely be organized intelligently on the atomic scale, and likely even on scales far smaller. Aliens will evolve until they are pure, willful conscious energy—and maybe even something beyond that. They long ago realized that biology, and the ones and zeroes of machines, were simply too rudimentary to be very functional. Truly advanced intelligence will be spirit-like—maybe even on par with some people’s ideas of ghosts.
On a long enough time horizon, every biological species would at some point evolve into machines, and then into intelligent energy with a consciousness. Such brilliant life might have the ability to span millions of light-years nearly instantaneously throughout the universe, morphing into whatever form it wanted.
Like all evolving life, the key to attaining the highest form of being and intelligence possible was to intimately become and control the best universal elements—those that are conducive to such goals, especially personal power over nature. Everything else in advanced alien evolution is discarded as nonfunctional and nonessential.
All intelligence in the universe, like all matter and energy, follows patterns based on the rules of physics. We engage—and often battle—those patterns and rules until we understand them and utilize them as best we can. Such is evolution. And the universe appears imbued with a drive for life to arise and evolve, as MIT physicist Jeremy England points out in the Quanta Magazine article “A New Physics Theory of Life.”
Back to my ant collector friend in Western Samoa. It would be nice to believe that the difference between the ant collector and the ant’s intelligence was the same between humans and very sophisticated aliens. Sadly, that is not the case. Not even close.
The gap between us and a species with just 100 more years of evolution could be a billion times the gap between an ant and a human, given the acceleration of intelligence. Now consider an added billion years of evolution. This is way beyond comparing apples and oranges.
The crux of the problem with aliens and humans is we’re not hearing or seeing them because we don’t have ways to understand their language. It’s simply beyond our comprehension and physical abilities. Millions of singularities have already happened, but we’re similar to blind bacteria in our bodies running around cluelessly.
The good news, though, is we’re about to make contact with the best of the aliens out there. Or rather they’re about to school us. The reason: The universe is precious, and in approximately a century’s time, humans may be able to conduct physics experiments that could level the entire universe—such as building massive particle accelerators that make the God particle swallow the cosmos whole.
Like a grumpy landlord at the door, alien intelligence will make contact and let us know what we can and can’t do when it comes to messing with the real estate of the universe. Knock. Knock.
Zoltan Istvan is a futurist, journalist, and author of the novel The Transhumanist Wager. He writes an occasional column for Motherboard in which he ruminates on the future beyond natural human ability.
June 18, 2016
Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the organ shortage and decrease the chance that a patient’s body will reject a donor organ, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.
Ideally, scientists would be able to grow working hearts from patients’ own tissues, but they’re not quite there yet. That’s because organs have a particular architecture. It’s easier to grow them in the lab if they have a scaffolding on which the cells can build, like building a house with the frame already constructed.
In their previous work, the scientists developed a technique in which they use a detergent solution to strip a donor organ of cells that might set off an immune response in the recipient. They had done that in mouse hearts, but for this study, the researchers applied it to human hearts, stripping away many of the cells from 73 donor hearts that were deemed unfit for transplantation. Then the researchers took adult skin cells and used a new technique with messenger RNA to turn them into pluripotent stem cells, the cells that can become specialized to any type of cell in the human body, and then induced them to become two different types of cardiac cells.
After making sure the remaining matrix would provide a strong foundation for new cells, the researchers put the induced cells into them. For two weeks they infused the hearts with a nutrient solution and allowed them to grow under similar forces to those a heart would be subject to inside the human body. After those two weeks, the hearts contained well-structured tissue that looked similar to immature hearts; when the researchers gave the hearts a shock of electricity, they started beating.
While this isn’t the first time heart tissue has been grown in the lab, it’s the closest researchers have come to their end goal: growing an entire working human heart. But the researchers admit that they’re not quite ready to do that. They next plan to improve their yield of pluripotent stem cells (a whole heart would take tens of billions, one researcher said in a press release), find a way to help the cells mature more quickly, and perfect the body-like conditions in which the heart develops. In the end, the researchers hope that they can create individualized hearts for their patients so that transplant rejection will no longer be a likely side effect.
June 18, 2016
There’s a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here’s how a recursively self-improving AI could transform itself into a superintelligent machine.
When it comes to understanding the potential for artificial intelligence, it’s critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.
Once sophisticated enough, an AI will be able to engage in what’s called “recursive self-improvement.” As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It’s an advantage that we biological humans simply don’t have.
As AI theorist Eliezer Yudkowsky notes in his essay, “Artificial Intelligence as a Positive and Negative Factor in Global Risk”:
An artificial intelligence could rewrite its code from scratch — it could change the underlying dynamics of optimization. Such an optimization process would wrap around much more strongly than either evolution accumulating adaptations or humans accumulating knowledge. The key implication for our purposes is that AI might make a huge jump in intelligence after reaching some threshold of criticality.
When it comes to the speed of these improvements, Yudkowsky says it’s important not to confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What’s more, there’s no reason to believe that an AI won’t show a sudden huge leap in intelligence, resulting in an ensuing “intelligence explosion” (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; “we went from caves to skyscrapers in the blink of an evolutionary eye.”
Code that’s capable of altering its own instructions while it’s still executing has been around for a while. Typically, it’s done to reduce the instruction path length and improve performance, or simply to reduce repetitive code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.
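The idea is easier to see in miniature. The sketch below is a toy illustration of a program that rewrites its own instructions by generating and executing new source code; the names `TEMPLATE` and `regenerate` are invented for this example and come from no real system.

```python
# A toy sketch of self-modifying code: the program holds its own logic as a
# source template, rewrites a constant in it, and compiles a fresh version.
TEMPLATE = "def step(x):\n    return x + {increment}\n"

def regenerate(increment):
    """Emit and execute a new version of `step` with a new constant baked in."""
    namespace = {}
    exec(TEMPLATE.format(increment=increment), namespace)
    return namespace["step"]

step = regenerate(1)
print(step(10))   # prints 11

# The program has "modified itself": the next call runs different instructions.
step = regenerate(5)
print(step(10))   # prints 15
```

Real self-modifying code usually works at the machine-instruction level rather than through source templates, but the principle — a running program producing the code it will execute next — is the same.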
But as Our Final Invention author James Barrat told me, we do have software that can write software.
“Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve,” he told io9. “It’s also used to write innovative, high-powered software.”
For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They have chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing “Hello World!” with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.
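A genetic algorithm of this general shape can be sketched in a few dozen lines. The version below evolves the string “Hello World!” directly rather than evolving brainfuck programs, and every parameter (population size, mutation rate, selection scheme) is an illustrative choice, not taken from the Primary Objects project.

```python
import random
import string

TARGET = "Hello World!"
ALPHABET = string.ascii_letters + string.punctuation + " "
POP_SIZE = 200
MUTATION_RATE = 0.05  # per-character probability of a random change

def fitness(candidate):
    # Count the positions that already match the target string.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else ch
        for ch in candidate
    )

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto the other.
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

def evolve(seed=1, max_generations=5000):
    random.seed(seed)
    population = [
        "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        for _ in range(POP_SIZE)
    ]
    for generation in range(max_generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            return population[0], generation
        parents = population[: POP_SIZE // 4]
        # Elitism: carry the two best forward unchanged, breed the rest.
        population = population[:2] + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE - 2)
        ]
    return population[0], max_generations
```

As in the brainfuck experiment, nothing here “understands” the goal; random variation plus selection pressure is doing all the work, which is why calling it intelligence is a stretch.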
Barrat also told me about software that learns — programming techniques that are grouped under the term “machine learning.”
The Pentagon is particularly interested in this game. Through DARPA, it’s hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming.
In conjunction with this kind of research, cognitive approaches to brain emulation could also lead to human-like AI. Given that they’d be computer-based, and assuming they could have access to their own source code, these agents could embark upon self-modification. More realistically, however, it’s likely that a superintelligence will emerge from an expert system set with the task of improving its own intelligence. Alternatively, specialised expert systems could design other artificial intelligences, and through their cumulative efforts, develop a system that eventually becomes greater than the sum of its parts.
Given that ASI poses an existential risk, it’s important to consider the ways in which we might be able to prevent an AI from improving itself beyond our capacity to control. That said, limitations or provisions may exist that will preclude an AI from embarking on the path towards self-engineering. James D. Miller, author of Singularity Rising, provided me with a list of four reasons why an AI might not be able to do so:
1. It might have source code that causes it to not want to modify itself.
2. The first human equivalent AI might require massive amounts of hardware and so for a short time it would not be possible to get the extra hardware needed to modify itself.
3. The first human equivalent AI might be a brain emulation (as suggested by Robin Hanson) and this would be as hard to modify as it is for me to modify, say, the copy of Minecraft that my son constantly uses. This might happen if we’re able to copy the brain before we really understand it. But still you would think we could at least speed up everything.
4. If it has terminal values, it wouldn’t want to modify these values because doing so would make it less likely to achieve its terminal values.
And by terminal values Miller is referring to an ultimate goal, or an end-in-itself. Yudkowsky describes it as a “supergoal.” A major concern is that an amoral ASI will sweep humanity aside as it works to accomplish its terminal value, or that its ultimate goal is the re-engineering of humanity in a grossly undesirable way (at least from our perspective).
Miller says it could get faster simply by running on faster processors.
“It could also make changes to its software to get more efficient, or design or steal better hardware. It would do this so it could better achieve its terminal values,” he says. “An AI that mastered nanotechnology would probably expand at almost the speed of light, incorporating everything into itself.”
But we may not be completely helpless. According to Barrat, once scientists have achieved Artificial General Intelligence — a human-like AI — they could restrict its access to networks, hardware, and software, in order to prevent an intelligence explosion.
“However, as I propose in my book, an AI approaching AGI may develop survival skills like deceiving its makers about its rate of development. It could play dumb until it comprehended its environment well enough to escape it.”
In terms of being able to control this process, Miller says that the best way would be to create an AI that only wanted to modify itself in ways we would approve.
“So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity,” he says. “This way as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us.”
As noted earlier, a recursively improving AI could increase its intelligence extremely quickly. Or the process could take time for various reasons, such as technological complexity or limited access to resources. It’s an open question whether we can expect a fast or slow take-off event.
“I’m a believer in the fast take-off version of the intelligence explosion,” says Barrat. “Once a self-aware, self-improving AI of human-level or better intelligence exists, it’s hard to know how quickly it will be able to improve itself. Its rate of improvement will depend on its software, hardware, and networking capabilities.”
But to be safe, Barrat says we should assume that the recursive self-improvement of an AGI will occur very rapidly. As a computer it will wield computer superpowers — the ability to run 24/7 without pause, rapidly access vast databases, conduct complex experiments, perhaps even clone itself to swarm computational problems, and more.
“From there, the AGI would be interested in pursuing whatever goals it was programmed with — such as research, exploration, or finance. According to AI theorist Steve Omohundro’s Basic Drives analysis, self-improvement would be a sure-fire way to improve its chances of success,” says Barrat. “So would self-protection, resource acquisition, creativity, and efficiency. Without a provably reliable ethical system, its drives would conflict with ours, and it would pose an existential threat.”
“I think shortly after an AI achieves human level intelligence it will upgrade itself to super intelligence,” he told me. “At the very least, the AI could make many copies of itself, each with a minor change, and then see if any of the new versions were better. It could then make the best one the new ‘official’ version of itself and keep doing this. Any AI would have to fear that if it doesn’t quickly upgrade itself, another AI would, and would take all of the resources of the universe for itself.”
Which brings up a point that’s not often discussed in AI circles — the potential for AGIs to compete with other AGIs. If even a modicum of self-preservation is coded into a strong artificial intelligence (and that sense of self-preservation could be the detection of an obstruction to its terminal value), it could enter into a lightning-fast arms race along those verticals designed to ensure its ongoing existence and future freedom-of-action. And in fact, while many people fear a so-called “robot apocalypse” aimed directly at extinguishing our civilisation, I personally feel that the real danger to our ongoing existence lies in the potential for us to be collateral damage as advanced AGIs battle it out for supremacy; we may find ourselves in the line of fire. Indeed, building a safe AI will be a monumental — if not intractable — task.
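Miller’s copy-vary-select upgrade loop is, in effect, a simple evolutionary strategy. Here is a toy numeric version of it; the objective function and all parameters are invented for illustration (a real self-improving system would be scoring candidate versions of its own code, not a number).

```python
import random

def copy_vary_select(score, official, copies=20, rounds=200, seed=0):
    """Make many slightly varied copies of the current 'official' version each
    round, then promote whichever candidate scores best (the incumbent may
    survive if no copy beats it)."""
    rng = random.Random(seed)
    for _ in range(rounds):
        variants = [official + rng.gauss(0, 0.1) for _ in range(copies)]
        official = max(variants + [official], key=score)
    return official

# Toy objective: get as close to 3.0 as possible, starting from 0.0.
best = copy_vary_select(lambda x: -abs(x - 3.0), official=0.0)
```

Because the incumbent is never replaced by anything worse, the “official” version can only improve — which is precisely what makes the dynamic Miller describes so fast and so hard to interrupt.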