Pre and post testing show reversal of memory loss from Alzheimer’s disease in 10 patients

June 28, 2016


Results from quantitative MRI and neuropsychological testing show unprecedented improvements in ten patients with early Alzheimer’s disease (AD) or its precursors following treatment with a programmatic and personalized therapy. Results from an approach dubbed metabolic enhancement for neurodegeneration are now available online in the journal Aging.

The study, which comes jointly from the Buck Institute for Research on Aging and the UCLA Easton Laboratories for Neurodegenerative Disease Research, is the first to objectively show that memory loss in patients can be reversed, and improvement sustained, using a complex, 36-point therapeutic personalized program that involves comprehensive changes in diet, brain stimulation, exercise, optimization of sleep, specific pharmaceuticals and vitamins, and multiple additional steps that affect brain chemistry.

“All of these patients had either well-defined mild cognitive impairment (MCI), subjective cognitive impairment (SCI) or had been diagnosed with AD before beginning the program,” said author Dale Bredesen, MD, a professor at the Buck Institute and professor at the Easton Laboratories for Neurodegenerative Disease Research at UCLA, who noted that patients who had had to discontinue work were able to return to work and those struggling at their jobs were able to improve their performance. “Follow-up testing showed some of the patients going from abnormal to normal.”

One of the more striking cases involved a 66-year-old professional man whose neuropsychological testing was compatible with a diagnosis of MCI and whose PET scan showed reduced glucose utilization indicative of AD. An MRI showed hippocampal volume at only the 17th percentile for his age. After 10 months on the protocol, a follow-up MRI showed a dramatic increase of his hippocampal volume to the 75th percentile, with an associated absolute increase in volume of nearly 12 percent.

In another instance, a 69-year-old professional man and entrepreneur, who was in the process of shutting down his business, went on the protocol after 11 years of progressive memory loss. After six months, he, his wife, and his co-workers noted improvement in his memory. A lifelong ability to add columns of numbers rapidly in his head returned, and he reported an ability to remember his schedule and recognize faces at work. After 22 months on the protocol he returned for follow-up quantitative neuropsychological testing; results showed marked improvements in all categories, with his long-term recall increasing from the 3rd to the 84th percentile. He is expanding his business.

Another patient, a 49-year-old woman who noted progressive difficulty with word finding and facial recognition, went on the protocol after undergoing quantitative neuropsychological testing at a major university. She had been told she was in the early stages of cognitive decline and was therefore ineligible for an Alzheimer’s prevention program. After several months on the protocol she noted a clear improvement in recall, reading, navigating, vocabulary, mental clarity and facial recognition. Her foreign language ability had returned. Nine months after beginning the program she repeated the neuropsychological testing at the same university site. She no longer showed evidence of cognitive decline.

All but one of the ten patients included in the study are at genetic risk for AD, carrying at least one copy of the APOE4 allele. Five of the patients carry two copies of APOE4, which gives them a 10- to 12-fold increased risk of developing AD. “We’re entering a new era,” said Bredesen. “The old advice was to avoid testing for APOE because there was nothing that could be done about it. Now we’re recommending that people find out their genetic status as early as possible so they can go on prevention.” Sixty-five percent of the Alzheimer’s cases in this country involve APOE4, with seven million people carrying two copies of the allele.

Bredesen’s systems-based approach to reversing memory loss follows the abject failure of monotherapies designed to treat AD and the success of combination therapies in treating other chronic illnesses such as cardiovascular disease, cancer and HIV. Bredesen says decades of biomedical research, both in his and other labs, have revealed that an extensive network of molecular interactions is involved in AD pathogenesis, suggesting that a broader-based therapeutic approach may be more effective. “Imagine having a roof with 36 holes in it, and your drug patched one hole very well — the drug may have worked, a single ‘hole’ may have been fixed, but you still have 35 other leaks, and so the underlying process may not be affected much,” Bredesen said. “We think addressing multiple targets within the molecular network may be additive, or even synergistic, and that such a combinatorial approach may enhance drug candidate performance, as well.”

While encouraged by the results of the study, Bredesen admits more needs to be done. “The magnitude of improvement in these ten patients is unprecedented, providing additional objective evidence that this programmatic approach to cognitive decline is highly effective,” Bredesen said. “Even though we see the far-reaching implications of this success, we also realize that this is a very small study that needs to be replicated in larger numbers at various sites.” Plans for larger studies are underway.

Cognitive decline is often listed as the major concern of older adults. Already, Alzheimer’s disease affects approximately 5.4 million Americans and 30 million people globally. Without effective prevention and treatment, the prospects for the future are bleak. By 2050, it’s estimated that 160 million people globally will have the disease, including 13 million Americans, leading to potential bankruptcy of the Medicare system. Unlike several other chronic illnesses, Alzheimer’s disease is on the rise–recent estimates suggest that AD has become the third leading cause of death in the United States behind cardiovascular disease and cancer.

Story Source:

The above post is reprinted from materials provided by Buck Institute for Research on Aging. Note: Materials may be edited for content and length.


Journal Reference:

  1. Dale E. Bredesen et al. Reversal of cognitive decline in Alzheimer’s disease. Aging, June 2016 [link]

https://www.sciencedaily.com/releases/2016/06/160616071933.htm


Giant Artwork Reflects The Gorgeous Complexity of The Human Brain

June 28, 2016


Your brain has approximately 86 billion neurons joined together through some 100 trillion connections, giving rise to a complex biological machine capable of pulling off amazing feats. Yet it’s difficult to truly grasp the sophistication of this interconnected web of cells.

Now, a new work of art based on actual scientific data provides a glimpse into this complexity.

The 8-by-12-foot gold panel, depicting a sagittal slice of the human brain, blends hand drawing and multiple human brain datasets from several universities. The work was created by Greg Dunn, a neuroscientist-turned-artist, and Brian Edwards, a physicist at the University of Pennsylvania, and goes on display Saturday at The Franklin Institute in Philadelphia. There will be a public unveiling and a lecture by the artists at 3 p.m.

“The human brain is insanely complicated,” Dunn said. “Rather than being told that your brain has 80 billion neurons, you can see with your own eyes what the activity of 500,000 of them looks like, and that has a much greater capacity to make an emotional impact than does a factoid in a book someplace.”

Artists Greg Dunn and Brian Edwards present their work at the Franklin Institute in Philadelphia. (Photo: Will Drinker)

 

To reflect the neural activity within the brain, Dunn and Edwards have developed a technique called micro-etching: They paint the neurons by making microscopic ridges on a reflective sheet in such a way that they catch and reflect light from certain angles. When the light source moves in relation to the gold panel, the image appears to be animated, as if waves of activity are sweeping through it.

First, the visual cortex at the back of the brain lights up, then light propagates to the rest of the brain, gleaming and dimming in various regions — just as neurons would signal inside a real brain when you look at a piece of art.

That’s the idea behind the name of Dunn and Edwards’ piece: “Self Reflected.” It’s basically an animated painting of your brain perceiving itself in an animated painting.

Here’s a video to give you an idea of how the etched neurons light up as the light source moves:

To make the artwork resemble a real brain as closely as possible, the artists used actual MRI scans and human brain maps, but the datasets were not detailed enough. “There were a lot of holes to fill in,” Dunn said. Several students working with the duo explored scientific literature to figure out what types of neurons are in a given brain region, what they look like and what they are connected to. Then the artists drew each neuron.

A close-up of the cerebellum in the finished work. (Photo: Will Drinker and Greg Dunn)

A close-up of the motor cortex in the finished work. (Photo: Will Drinker and Greg Dunn)

 

Dunn and Edwards then used data from DTI scans — a special type of imaging that maps bundles of white matter connecting different regions of the brain. This completed the picture, and the results were scanned into a computer.

Using photolithography, the artists etched the image onto a panel covered with gold leaf. Then, they switched on the lights:

This is what “Self Reflected” looks like when it’s illuminated with all white light. (Photo: Will Drinker and Greg Dunn)

 

“A lot of times in science and engineering, we take a complex object and distill it down to its bare essential components, and study that component really well,” Edwards said. But when it comes to the brain, understanding one neuron is very different from understanding how billions of neurons work together and give rise to consciousness.

“Of course, we can’t explain consciousness through an art piece, but we can give a sense of the fact that it is more complicated than just a few neurons,” he added.

The artists hope their work will inspire people, even professional neuroscientists, “to take a moment and remember that our brains are absolutely insanely beautiful and they are buzzing with activity every instant of our lives,” Dunn said. “Everybody takes it for granted, but we have, at the very core of our being, the most complex machine in the entire universe.”

http://www.huffingtonpost.com/entry/brain-art-franklin institute_us_576d65b3e4b017b379f5cb68

Why Haven’t We Met Aliens Yet? Because They’ve Evolved into AI

June 18, 2016


While traveling in Western Samoa many years ago, I met a young Harvard University graduate student researching ants. He invited me on a hike into the jungles to assist with his search for the tiny insect. He told me his goal was to discover a new species of ant, in hopes it might be named after him one day.

Whenever I look up at the stars at night pondering the cosmos, I think of my ant collector friend, kneeling in the jungle with a magnifying glass, scouring the earth. I think of him, because I believe in aliens—and I’ve often wondered if aliens are doing the same to us.

Believing in aliens—or insanely smart artificial intelligences existing in the universe—has become very fashionable in the last 10 years. And discussing its central dilemma, the Fermi paradox, has become even more so. The Fermi paradox states that the universe is very big—with maybe a trillion galaxies that might contain 500 billion stars and planets each—and that out of that insanely large number, it would take only a tiny fraction of them to have habitable planets capable of bringing forth life.
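To get a feel for why those numbers are so overwhelming, here is a back-of-envelope sketch of the arithmetic the paradox rests on. Every figure in it, from the habitable fraction to the fraction of habitable worlds where life appears, is a purely illustrative assumption, not a measurement.

```python
# Fermi-style back-of-envelope estimate. All inputs are illustrative
# assumptions, not measured values.
galaxies = 1e12            # assumed galaxies in the observable universe
stars_per_galaxy = 5e11    # assumed stars (and planetary systems) per galaxy
habitable_fraction = 1e-6  # assumed "tiny fraction" with habitable planets
life_fraction = 1e-3       # assumed fraction of habitable worlds where life arises

planetary_systems = galaxies * stars_per_galaxy
living_worlds = planetary_systems * habitable_fraction * life_fraction

print(f"Planetary systems considered: {planetary_systems:.0e}")
print(f"Worlds with life under these assumptions: {living_worlds:.0e}")
```

Even with fractions chosen to be absurdly pessimistic, the sheer size of the first two numbers leaves hundreds of trillions of candidate worlds.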

Whatever you think, the numbers point to the insane fact that aliens don’t just exist, but probably billions of species of aliens exist. And the Fermi paradox asks: With so many alien civilizations out there, why haven’t we found them? Or why haven’t they found us?

The Fermi paradox’s Wikipedia page has dozens of answers about why we haven’t heard from superintelligent aliens, ranging from “it is too expensive to spread physically throughout the galaxy” to “intelligent civilizations are too far apart in space or time” to crazy talk like “it is the nature of intelligent life to destroy itself.”

Given that our planet is only 4.5 billion years old in a universe that many experts think is pushing 14 billion years, it’s safe to say most aliens are way smarter than us. After all, there is a massive divide between the qualities of intelligence. There’s ant-level intelligence. There’s human intelligence. And then there’s the hypothetical intelligence of aliens—presumably ones who have reached the singularity.

The singularity, Kevin Kelly, co-founder of Wired Magazine, says, is the point at which “all the change in the last million years will be superseded by the change in the next five minutes.”

If Kelly is correct about how fast the singularity accelerates change—and I think he is—in all probability, many alien species will be trillions of times more intelligent than people.

Put yourself in the shoes of extraterrestrial intelligence and consider what that means. If you were a trillion times smarter than a human being, would you notice the human race at all? Or if you did, would you care? After all, do you notice the 100 trillion or more microbes in your body? No, not unless they happen to cause you health problems, as E. coli and other pathogens do. More on that later.

One of the big problems with our understandings of aliens has to do with Hollywood. Movies and television have led us to think of aliens as green, slimy creatures traveling around in flying saucers. Nonsense. I think if advanced aliens have just 250 years more evolution than us, they almost certainly won’t be static physical beings anymore—at least not in the molecular sense. They also won’t be artificial intelligences living in machines either, which is what I believe humans are evolving into this century. No, becoming machine intelligence is just another passing phase of evolution—one that might only last a few decades for humans, if that.

Truly advanced intelligence will likely be organized on the atomic scale, and likely even on scales far smaller. Aliens will evolve until they are pure, willful conscious energy—and maybe even something beyond that. They long ago realized that biology, and ones and zeroes in machines, were simply too rudimentary to be very functional. Truly advanced intelligence will be spirit-like—maybe even on par with some people’s ideas of ghosts.

On a long enough time horizon, every biological species would at some point evolve into machines, and then evolve into intelligent energy with a consciousness. Such brilliant life might have the ability to span millions of light-years nearly instantaneously throughout the universe, morphing into whatever form it wanted.

For all evolving life, the key to attaining the highest possible form of being and intelligence is to intimately become, and gain control over, the universal elements best suited to such goals, especially personal power over nature. Everything else in advanced alien evolution is discarded as nonfunctional and nonessential.

All intelligence in the universe, like all matter and energy, follows patterns based on the rules of physics. We engage—and often battle—those patterns and rules until we understand them and utilize them as best we can. Such is evolution. And the universe seems primed for life to arise and evolve, as MIT physicist Jeremy England points out in this Quanta Magazine article titled A New Physics Theory of Life.

Back to my ant collector friend in Western Samoa. It would be nice to believe that the gap between the ant collector’s intelligence and the ant’s is the same as that between humans and very sophisticated aliens. Sadly, that is not the case. Not even close.

Given the acceleration of intelligence, the gap between us and a species with just 100 more years of evolution could be a billion times the gap between an ant and a human. Now consider an added billion years of evolution. This is way beyond comparing apples and oranges.

The crux of the problem with aliens and humans is we’re not hearing or seeing them because we don’t have ways to understand their language. It’s simply beyond our comprehension and physical abilities. Millions of singularities have already happened, but we’re similar to blind bacteria in our bodies running around cluelessly.

The good news, though, is we’re about to make contact with the best of the aliens out there. Or rather they’re about to school us. The reason: The universe is precious, and in approximately a century’s time, humans may be able to conduct physics experiments that could level the entire universe—such as building massive particle accelerators that make the God particle swallow the cosmos whole.

Like a grumpy landlord at the door, alien intelligence will make contact and let us know what we can and can’t do when it comes to messing with the real estate of the universe. Knock. Knock.

Zoltan Istvan is a futurist, journalist, and author of the novel The Transhumanist Wager. He writes an occasional column for Motherboard in which he ruminates on the future beyond natural human ability.

http://motherboard.vice.com/read/why-havent-we-met-aliens-yet-because-theyve-evolved-into-ai

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells

June 18, 2016


Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the organ shortage and decrease the chance that a patient’s body will reject a transplant, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

Ideally, scientists would be able to grow working hearts from patients’ own tissues, but they’re not quite there yet. That’s because organs have a particular architecture. It’s easier to grow them in the lab if they have a scaffolding on which the cells can build, like building a house with the frame already constructed.

In their previous work, the scientists created a technique in which they use a detergent solution to strip a donor organ of cells that might set off an immune response in the recipient. They did that in mouse hearts, but for this study, the researchers used it on human hearts. They stripped away many of the cells on 73 donor hearts that were deemed unfit for transplantation. Then the researchers took adult skin cells and used a new technique with messenger RNA to turn them into pluripotent stem cells, the cells that can become specialized to any type of cell in the human body, and then induced them to become two different types of cardiac cells.

After making sure the remaining matrix would provide a strong foundation for new cells, the researchers put the induced cells into them. For two weeks they infused the hearts with a nutrient solution and allowed them to grow under similar forces to those a heart would be subject to inside the human body. After those two weeks, the hearts contained well-structured tissue that looked similar to immature hearts; when the researchers gave the hearts a shock of electricity, they started beating.

While this isn’t the first time heart tissue has been grown in the lab, it’s the closest researchers have come to their end goal: growing an entire working human heart. But the researchers admit that they’re not quite ready to do that. They next plan to improve their yield of pluripotent stem cells (a whole heart would take tens of billions, one researcher said in a press release), find a way to help the cells mature more quickly, and perfect the body-like conditions in which the heart develops. In the end, the researchers hope that they can create individualized hearts for their patients so that transplant rejection will no longer be a likely side effect.

http://www.popsci.com/scientists-grow-transplantable-hearts-with-stem-cells

How Artificial Superintelligence Will Give Birth To Itself

June 18, 2016


There’s a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here’s how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it’s critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what’s called “recursive self-improvement.” As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It’s an advantage that we biological humans simply don’t have.
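A toy model makes the difference vivid. The sketch below is not drawn from any real AI system; it simply assumes that each improvement cycle adds capability in proportion to the capability already present, and contrasts that with a system improved by a fixed outside effort.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each cycle, the system gains capability proportional to its
# current capability -- the better it is, the better it gets at improving itself.

def recursive_improvement(capability: float, gain_per_cycle: float, cycles: int) -> float:
    for _ in range(cycles):
        capability += gain_per_cycle * capability  # compounding gains
    return capability

def external_improvement(capability: float, fixed_gain: float, cycles: int) -> float:
    for _ in range(cycles):
        capability += fixed_gain  # linear gains from outside effort
    return capability

if __name__ == "__main__":
    cycles = 20
    print(f"Recursive: {recursive_improvement(1.0, 0.5, cycles):.0f}x baseline")
    print(f"External:  {external_improvement(1.0, 0.5, cycles):.0f}x baseline")
```

With the same per-cycle effort, the compounding loop ends up thousands of times ahead of the linear one after twenty cycles, which is the intuition behind the cascading series of improvements described above.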


As AI theorist Eliezer Yudkowsky notes in his essay, "Artificial Intelligence as a Positive and Negative Factor in Global Risk":

An artificial intelligence could rewrite its code from scratch — it could change the underlying dynamics of optimization. Such an optimization process would wrap around much more strongly than either evolution accumulating adaptations or humans accumulating knowledge. The key implication for our purposes is that AI might make a huge jump in intelligence after reaching some threshold of criticality.

When it comes to the speed of these improvements, Yudkowsky says it’s important not to confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What’s more, there’s no reason to believe that an AI won’t show a sudden huge leap in intelligence, resulting in an ensuing “intelligence explosion” (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; “we went from caves to skyscrapers in the blink of an evolutionary eye.”

The Path to Self-Modifying AI

Code that’s capable of altering its own instructions while it’s still executing has been around for a while. Typically, it’s done to reduce the instruction path length and improve performance, or simply to cut down on repetitive code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.
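For a flavor of what ordinary, non-AI self-modification looks like in a high-level language, here is a minimal Python sketch in which a function replaces its own definition at runtime to skip a one-time setup cost; the function names are invented purely for illustration.

```python
# Minimal self-modifying-code sketch (illustrative, and nothing to do with AI):
# a function that rebinds its own name to a faster version after the first call.

def expensive_lookup(x):
    # Pretend this setup is costly and only needs to happen once.
    table = {i: i * i for i in range(1000)}

    def fast_lookup(x):
        return table[x]

    # Rewrite this function's global binding so that every later call
    # goes straight to the cheap version.
    globals()["expensive_lookup"] = fast_lookup
    return fast_lookup(x)

print(expensive_lookup(12))  # first call: builds the table, then replaces itself
print(expensive_lookup(30))  # later calls dispatch to fast_lookup directly
```

The same idea, applied at the level of machine instructions rather than Python name bindings, is the classic path-length trick the paragraph above refers to.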

But as Our Final Invention author James Barrat told me, we do have software that can write software.

“Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve,” he told io9. “It’s also used to write innovative, high-powered software.”


For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They have chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing “Hello World!” with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.
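For readers who want to see the flavor of this kind of evolutionary search without learning brainfuck, here is a deliberately simplified Python sketch that mutates candidate strings toward "Hello World!". It is not the Primary Objects project, and it evolves data rather than programs, but it shows the same mutate-select-repeat loop in miniature.

```python
import random
import string

# Toy evolutionary search (illustrative only): mutate candidate strings and
# keep the ones that score closer to the target each generation.

TARGET = "Hello World!"
ALPHABET = string.ascii_letters + string.punctuation + " "

def fitness(candidate: str) -> int:
    # Number of characters that already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.1) -> str:
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

def evolve(population_size: int = 200, generations: int = 1000) -> str:
    population = [
        "".join(random.choice(ALPHABET) for _ in TARGET)
        for _ in range(population_size)
    ]
    best = population[0]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if best == TARGET:
            print(f"Reached the target in generation {gen}: {best!r}")
            return best
        # Elitism: carry the current best forward unchanged, and fill the rest
        # of the next generation with mutated copies of the fittest candidates.
        parents = population[: population_size // 10]
        population = [best] + [
            mutate(random.choice(parents)) for _ in range(population_size - 1)
        ]
    print(f"Best after {generations} generations: {best!r}")
    return best

if __name__ == "__main__":
    evolve()
```

Nothing here "understands" English; random variation plus selection pressure is enough, which is both the power and the brute-force limitation the article points out.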

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term “machine learning.”

The Pentagon is particularly interested in this game. Through DARPA, it’s hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming.

In conjunction with this kind of research, cognitive approaches to brain emulation could also lead to human-like AI. Given that they’d be computer-based, and assuming they could have access to their own source code, these agents could embark upon self-modification. More realistically, however, it’s likely that a superintelligence will emerge from an expert system set with the task of improving its own intelligence. Alternatively, specialised expert systems could design other artificial intelligences, and through their cumulative efforts, develop a system that eventually becomes greater than the sum of its parts.

Oh, No You Don’t

Given that ASI poses an existential risk, it’s important to consider the ways in which we might be able to prevent an AI from improving itself beyond our capacity to control. That said, limitations or provisions may exist that will preclude an AI from embarking on the path towards self-engineering. James D. Miller, author of Singularity Rising, provided me with a list of four reasons why an AI might not be able to do so:

1. It might have source code that causes it to not want to modify itself.

2. The first human equivalent AI might require massive amounts of hardware and so for a short time it would not be possible to get the extra hardware needed to modify itself.

3. The first human equivalent AI might be a brain emulation (as suggested by Robin Hanson) and this would be as hard to modify as it is for me to modify, say, the copy of Minecraft that my son constantly uses. This might happen if we’re able to copy the brain before we really understand it. But still you would think we could at least speed up everything.

4. If it has terminal values, it wouldn’t want to modify these values because doing so would make it less likely to achieve its terminal values.

And by terminal values Miller is referring to an ultimate goal, or an end-in-itself. Yudkowsky describes it as a “supergoal.” A major concern is that an amoral ASI will sweep humanity aside as it works to accomplish its terminal value, or that its ultimate goal is the re-engineering of humanity in a grossly undesirable way (at least from our perspective).

Miller says it could get faster simply by running on faster processors.

“It could also make changes to its software to get more efficient, or design or steal better hardware. It would do this so it could better achieve its terminal values,” he says. “An AI that mastered nanotechnology would probably expand at almost the speed of light, incorporating everything into itself.”

But we may not be completely helpless. According to Barrat, once scientists have achieved Artificial General Intelligence — a human-like AI — they could restrict its access to networks, hardware, and software, in order to prevent an intelligence explosion.

“However, as I propose in my book, an AI approaching AGI may develop survival skills like deceiving its makers about its rate of development. It could play dumb until it comprehended its environment well enough to escape it.”

In terms of being able to control this process, Miller says that the best way would be to create an AI that only wanted to modify itself in ways we would approve.

“So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity,” he says. “This way as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us.”

Fast or Slow?

As noted earlier, a recursively improving AI could increase its intelligence extremely quickly. Or the process could take time, for various reasons such as technological complexity or limited access to resources. It’s an open question whether we can expect a fast or a slow take-off event.

“I’m a believer in the fast take-off version of the intelligence explosion,” says Barrat. “Once a self-aware, self-improving AI of human-level or better intelligence exists, it’s hard to know how quickly it will be able to improve itself. Its rate of improvement will depend on its software, hardware, and networking capabilities.”

But to be safe, Barrat says we should assume that the recursive self-improvement of an AGI will occur very rapidly. As a computer it will wield computer superpowers — the ability to run 24/7 without pause, rapidly access vast databases, conduct complex experiments, perhaps even clone itself to swarm computational problems, and more.

“From there, the AGI would be interested in pursuing whatever goals it was programmed with — such as research, exploration, or finance. According to AI theorist Steve Omohundro’s Basic Drives analysis, self-improvement would be a sure-fire way to improve its chances of success,” says Barrat. “So would self-protection, resource acquisition, creativity, and efficiency. Without a provably reliable ethical system, its drives would conflict with ours, and it would pose an existential threat.”

Miller agrees.

“I think shortly after an AI achieves human-level intelligence it will upgrade itself to superintelligence,” he told me. “At the very least the AI could make lots of copies of itself, each with a minor change, and then see if any of the new versions of itself were better. Then it could make this the new ‘official’ version of itself and keep doing this. Any AI would have to fear that if it doesn’t quickly upgrade, another AI would, and take all of the resources of the universe for itself.”

Which brings up a point that’s not often discussed in AI circles — the potential for AGIs to compete with other AGIs. If even a modicum of self-preservation is coded into a strong artificial intelligence (and that sense of self-preservation could be the detection of an obstruction to its terminal value), it could enter into a lightning-fast arms race along those verticals designed to ensure its ongoing existence and future freedom-of-action. And in fact, while many people fear a so-called “robot apocalypse” aimed directly at extinguishing our civilisation, I personally feel that the real danger to our ongoing existence lies in the potential for us to be collateral damage as advanced AGIs battle it out for supremacy; we may find ourselves in the line of fire. Indeed, building a safe AI will be a monumental — if not intractable — task.

http://www.gizmodo.com.au/2016/06/how-artificial-superintelligence-will-give-birth-to-itself/

How Jellyfish, Nanobots, and Naked Mole Rats Could Make Humans Immortal

June 18, 2016


Dr. Chris Faulkes is standing in his laboratory, tenderly caressing what looks like a penis. It’s not his penis, nor mine, and it’s definitely not that of the only other man in the room, VICE photographer Chris Bethell. But at four inches long with shrivelled skin that’s veiny and loose, it looks very penis-y. Then, with a sudden squeak, it squirms in his hand as if trying to break free, revealing an enormous set of Bugs Bunny teeth protruding from the tip.

“This,” says Faulkes, “is a naked mole rat, though she does look like a penis with teeth, doesn’t she? Or a saber-tooth sausage. But don’t let her looks fool you—the naked mole rat is the superhero of the animal kingdom.”

I’m with Faulkes in his lab at Queen Mary, University of London. Faulkes is an affable guy with a ponytail, telltale tattoos half-hidden under his T-shirt sleeve, and a couple of silver goth rings on his fingers. A spaghetti-mess of tubes weaves about the room, like a giant gerbil maze, through which 12 separate colonies of 200 naked mole rats scurry, scratch, and squeak. What he just said is not hyperbole. In fact, the naked mole rat shares more than just its looks with a penis: Where you might say the penis is nature’s key to creating life, this ugly phallus of a creature could be mankind’s key to eternal life.

“Their extreme and bizarre lifestyle never ceases to amaze and baffle biologists, making them one of the most intriguing animals to study,” says Faulkes, who has devoted the past 30 years of his life to trying to understand how the naked mole rat has evolved into one of the most well-adapted, finely tuned creatures on Earth. “All aspects of their biology seem to inform us about other animals, including humans, particularly when it comes to healthy aging and cancer resistance.”

Similarly sized rodents usually live for about five years. The naked mole rat lives for 30. Even into their late 20s, they hardly seem to age, remaining fit and healthy with robust heartbeats, strong bones, sharp minds, and high fertility. They don’t seem to feel pain, and, unlike other mammals, they almost never get cancer.

In other words, if humans lived as long, relative to body size, as naked mole rats, we would last for 500 years in a 25-year-old’s body. “It’s not a ridiculous exaggeration to suggest we can one day manipulate our own biochemical and metabolic pathways with drugs or gene therapies to emulate those that keep the naked mole rat alive and healthy for so long,” says Faulkes, stroking his animal. “In fact, the naked mole rat provides us the perfect model for human aging research across the board, from the way it resists cancer to the way its social systems prolong its life.”

Over the centuries, a long line of optimists, alchemists, hawkers, and pop stars have hunted various methods of postponing death, from drinking elixirs of youth to sleeping in hyperbaric chambers. The one thing those people have in common is that all of them are dead. Still, the anti-aging industry is bigger than ever. In 2013, its global market generated more than $216 billion. By 2018, it will hit $311 billion, thanks mostly to huge investment from Silicon Valley billionaires and Russian oligarchs who’ve realized the only way they could possibly spend all their money is by living forever. Even Google wants in on the action, with Calico, its $1.5 billion life-extension research center whose brief is to reverse-engineer the biology that makes us old or, as Time magazine put it, to “cure death.” It’s a snowballing market that some are branding “the internet of healthcare.” But on whom are these savvy entrepreneurs placing their bets? After all, the race for immortality has a wide field.

In an office not far from Google’s headquarters in Mountain View, with a beard to his belt buckle and a ponytail to match, British biomedical gerontologist Aubrey De Grey is enjoying the growing clamor about conquering aging, or “senescence,” as he calls it. His charity, the SENS Research Foundation, has enjoyed a bumper few years thanks to a $600,000-a-year investment from Paypal co-founder and immortality motormouth Peter Thiel (“Probably the most extreme form of inequality is between people who are alive and people who are dead”). Though he says the foundation’s $5.75 million annual budget can still “struggle” to support its growing workload.

According to the Cambridge-educated scientist, the fundamental knowledge needed to develop effective anti-aging therapies already exists. He argues that the seven biochemical processes that cause the damage that accumulates during old age have been discovered, and if we can counter them we can, in theory, halt the aging process. Indeed, he not only sees aging as a medical condition that can be cured, but believes that the “first person to live to 1,000 is alive today.” If that sounds like the ramblings of a crackpot weird-beard, hear him out; Dr. De Grey’s run the numbers.

“If you look at the math, it is very straightforward,” he says. “All we are saying here is that it’s quite likely that within the next twenty or thirty years, we will develop medicines that can rejuvenate people faster than time is passing. It’s not perfect yet, but soon we’ll take someone aged sixty and fix them up well enough that they won’t be sixty again, biologically, for another thirty years. In that period, therapies will improve such that we’ll be able to rejuvenate them again, so they won’t be sixty for a third time until they are chronologically one hundred fifty, and so on. If we can stay one step ahead of the problem, people won’t die of aging anymore.”
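The arithmetic in that quote can be turned into a toy simulation. The numbers below, a first therapy at age 60 that rolls back 30 biological years, with each later generation of therapy assumed to be twice as effective, are chosen only to mirror De Grey's example; they are not real projections.

```python
# Toy model of De Grey's "stay one step ahead" arithmetic (illustrative only).
# Assumptions: the first therapy arrives at age 60 and rejuvenates by 30
# biological years; each later therapy generation is twice as effective;
# therapy is applied whenever biological age climbs back up to 60.

def simulate(horizon: int = 200) -> None:
    biological_age = 60.0
    gain = 30.0  # years of rejuvenation delivered by the current therapy
    for chronological_age in range(60, horizon + 1):
        if biological_age >= 60:
            biological_age = max(biological_age - gain, 0.0)
            print(f"Chronological age {chronological_age}: therapy brings "
                  f"biological age down to {biological_age:.0f}")
            gain *= 2  # assume the next therapy generation is better
        biological_age += 1  # one year of ordinary aging per calendar year

simulate()
```

Under these made-up inputs the subject is treated at 60, 90 and 150, the same milestones as in the quote; the point is simply that if therapies improve faster than damage accrues, biological age stops tracking chronological age.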

“Like immortality?” I ask. Dr. De Grey sighs: “That word is the bane of my life. People who use that word are essentially making fun of what we do, as if to maintain an emotional distance from it so as not to get their hopes up. I don’t work on ‘curing death.’ I work on keeping people healthy. And, yes, I understand that success in my work could translate into an important side effect of people living longer. But to ‘cure death’ implies the elimination of all causes, including, say, dying in car accidents. And I don’t think there’s much we could do to survive an asteroid apocalypse.”

So instead, De Grey focuses on the things we can avoid dying from, like hypertension, cancer, Alzheimer’s, and other age-related illnesses. His goal is not immortality but “radical life extension.” He says traditional medicines won’t wind back the hands of our body clocks—we need to manipulate our makeup on a cellular level, like using bacterial enzymes to flush out molecular “garbage” that accumulates in the body, or tinkering with our genetic coding to prevent the growth of cancers, or any other disease.

Chris Faulkes knows of one magic bullet to kill cancer. And, back at Queen Mary, he is making his point by pulling at the skin of a naked mole rat in his hand. “It’s the naked mole rat’s elasticky skin that’s made it cancer-proof,” he says. “The theory—first discovered by a lab in America—is that, as an adaptation to living underground in tight tunnels, they’ve developed a really loose skin so they don’t get stuck or snagged. That elasticity is a result of it producing this gloopy sugar [polysaccharide], high-molecular-weight hyaluronan (HMW-HA).”

While humans already have a version of hyaluronan in our bodies that helps heal wounds by encouraging cell division (and, ironically, can assist tumor growth), that of the naked mole rat does the opposite. “The hyaluronan in naked mole rats is about six times larger than ours,” says Faulkes. “It interacts with a metabolic pathway, which helps prevent cells from coming together to make tumors.”

But that’s not all: It is believed it may also act to help keep their blood vessels elastic, which, in turn, relieves high blood pressure (hypertension)—a condition that affects one in three people and is known in medical circles as “the silent killer” because most patients don’t even know they have it. “I see no reason why we can’t use this to inform human anti-cancer and aging therapies by manipulating our own hyaluronan system,” says Faulkes.

Then there are the naked mole rat’s cells themselves, which seem to make proteins—the molecular machines that make bodies work—more accurately than ours, preventing age-related illnesses like Alzheimer’s. And the way they handle glucose doesn’t change with age either, reducing their susceptibility to things like diabetes. “Most of the age-related declines you see in the physiology in mammals do not occur in naked mole rats,” adds Faulkes. “We’ve only just begun on the naked mole rat story, and already a whole universe is opening up that could have a major downstream effect on human health. It’s very exciting.”

Of course, the naked mole rat isn’t the only animal scientists are probing to pick the lock of long life. “With a heart rate of 1,000 beats a minute, the tiny hummingbird should be riddled with rogue free radicals [the oxygen-based chemicals that basically make mammals old by gradually destroying DNA, proteins and fat molecules]… but it’s not,” says zoologist Jules Howard, author of Death on Earth: Adventures in Evolution and Mortality. “Then there are pearl mussel larvae that live in the gills of Atlantic salmon and mop up free radicals, and lobsters, which seem to have evolved a protein which repairs the tips of DNA [telomeres], allowing for more cell divisions than most animals are capable of. And we mustn’t forget the 2mm-long C. elegans roundworm. Within these 2mm-long nematodes are genetic mechanisms that can be picked apart like cogs and springs in an attempt to better understand the causes of aging and ultimately death.”

But there is one animal on Earth that may hold the master key to immortality: the Turritopsis dohrnii, or Immortal Jellyfish. Most jellyfish, when they reach the end of life, die and melt into the sea. Not the Turritopsis dohrnii. Instead, the 4mm sea creature sinks to the bottom of the ocean floor, where its body folds in on itself—assuming the jellyfish equivalent of the fetal position—and regenerates back into a baby jellyfish, or polyp, in a rare biological process called transdifferentiation, in which its old cells essentially transform into young cells.

There is just one scientist who has been culturing Turritopsis polyps in his lab consistently. He works alone, without major financing or a staff, in a poky office in Shirahama, a sleepy beach town near Kyoto. Yet professor Shin Kubota has managed to rejuvenate one of his charges 14 times, before a typhoon washed it away. “The Turritopsis dohrnii is a miracle of nature,” he says over the phone. “My ultimate purpose is to understand exactly how they regenerate so we can apply its mechanisms to human beings. You see, very surprisingly, the Turritopsis’s genome is very similar to humans’—much more so than worms. I believe we will have the technology to begin applying this immortal genome to humans very soon.”

How soon? “In 20 years,” he says, a little mischievously. “That is my guess.”

If Kubota really believes his own claim, then he’s got a race on his hands; he’s not the only scientist with a “20-year” prophecy. The acclaimed futurist and computer scientist Ray Kurzweil believes that by the 2030s we’ll have microscopic machines traveling through our bodies, repairing damaged cells and organs, effectively wiping out diseases and making us biologically immortal anyway. “The full realization of nanobots will basically eliminate biological disease and aging,” he told the world a few years back.

It’s a blossoming industry. And, in a state-of-the-art lab at the Bristol Robotics Laboratory, at Bristol University, Dr. Sabine Hauert is at its coalface. She designs swarms of nanobots—each a thousand times smaller than the width of a hair—that can be injected into the bloodstream with a payload of drugs to infiltrate the pores of cancer cells, like millions of tiny Trojan Horses, and destroy them from within. “We can engineer nanoparticles to basically do what we want them to do,” she tells me. “We can change their size, shape, charge, or material and load them with molecules or drugs that they can release in a controlled fashion.”

While she says the technology can be used to combat a whole gamut of different illnesses, Dr. Hauert has trained her crosshairs on cancer. What’s the most effective nano-weapon against malignant tumors? Gold. Millions of swarming golden nanobots that can be dispatched into the bloodstream, where they will seep into the tumor through little holes in its rapidly-growing vessels and lie in wait. “Then,” she says, “if you heat them with an infrared laser they vibrate violently, degrading the tumour’s cells. We can then send in another swarm of nanoparticles decorated with a molecule that’s loaded with a chemotherapy drug, giving a 40-fold increase in the amount of drugs we can deliver. This is very exciting technology that is already having a huge impact on the way we treat cancer, and will do on other diseases in the future.”

The next logical step, as Kurzweil claims, is that we will soon have nanobots permanently circulating in our veins, cleaning and maintaining our bodies indefinitely. They may even replace our organs when they fail. Clinical trials of such technology are already beginning on mice.

The naked mole rat colony in Chris Faulkes’s lab

The oldest mouse ever to live was called Yoda. He lived to the age of four. The oldest ever dog, Bluey, was 29. The oldest flamingo was 83. The oldest human was 122. The oldest clam was 507. The point is, evolution has rewarded species who’ve worked out ways to not get eaten by bigger species—be it learning to fly, developing big brains or forming protective shells. Naked mole rats went underground and learned to work together.

“A mouse is never going to worry about cancer as much as it will about cats,” says Faulkes. “Naked mole rats have no such concerns because they built vast networks of tunnels, developed hierarchies and took up different social roles to streamline productivity. They bought themselves time to evolve into biological marvels.”

At the top of every colony is a queen. Second in rank are her chosen harem of catamites with whom she mates for life. Beneath them are the soldiers and defenders of the realm, the biggest animals around, and at the bottom are the workers who dig tunnels with their teeth or search for tubers, their main food source. They have a toilet chamber, a sleeping chamber, a nursing chamber and a chamber for disposing of the dead. They rarely go above ground and almost never mix with other colonies. “It’s a whole mosaic of different characteristics that have come about through adapting to living in this very extreme ecological niche,” says Faulkes. “All of the weird and wonderful things that contribute to their healthy aging have come about through that. Even their extreme xenophobia helps prevent them being wiped out by infectious diseases.”

Still, the naked mole rat is not perfect. Dr. Faulkes learned this the hard way one morning in March last year, when he turned the light on in his lab to a grisly scene. “Blood was smeared about the perspex walls of a tunnel in colony N,” he says, “and the mangled corpse of one of my mole rats lay lifeless inside.” There was one explanation: A queen had been murdered. “There had been a coup,” he recalls. “Her daughter had decided she wanted to run the colony so she savaged her mother to death to take over. You see, naked mole rats may be immune to death by aging, but they can still be killed, just like you and me.”

That’s the one issue that true immortalists have with the concept of radical life extension: we can still get hit by a bus or murdered. But what if the entire contents of your brain—your memories, beliefs, hopes, and dreams—could be scanned and uploaded onto a mainframe, so when You 1.0 finally does fall down a lift shaft or is killed by a friend, You 2.0 could be fed into a humanoid avatar and rolled out of an immortality factory to pick up where you left off?

Dr. Randall Koene insists You 2.0 would still be you. “What if I were to add an artificial neuron next to every real neuron in your brain and connect it with the same connections that your normal neurons have so that it operates in exactly the same way?” he says. “Then, once I’ve put all these neurons in place, I remove the connections to all the old neurons, one by one, would you disappear?”

More at: https://www.vice.com/read/quest-for-immortality-what-will-win-tech-animals

Will technology allow us to transcend the human condition?

June 18, 2016


While it may sound like something straight out of a sci-fi film, the U.S. intelligence community is considering “human augmentation” and its possible implications for national security.

As described in the National Intelligence Council’s 2012 long-term strategic analysis document — the fifth report of its kind — human augmentation is seen as a “game-changer.” The report detailed the potential benefits of brain-machine interfaces and neuro-enhancements, noting that “moral and ethical challenges . . . are inevitable.”

The NIC analysts aren’t the only ones following the rapid growth of technology. Today there is an entire movement, called transhumanism, dedicated to promoting the use of technological advancements to enhance our physical, intellectual and psychological capabilities, ultimately transcending the limitations of the human condition. Its proponents claim that within the next several decades, living well beyond the age of 100 will be an achievable goal.

Coined by biologist and eugenicist Julian Huxley (brother of author Aldous Huxley) in 1957, transhumanism remained the terrain of science fiction authors and fringe philosophers for the better part of the 20th century. The movement gained broader interest as science advanced, leaping forward in credibility in the 1990s with the invention of the World Wide Web, the sequencing of the human genome and the exponential growth of computing power.

New technologies continue to push the limits of life. CRISPR enables scientists to alter specific genes in an organism and make those changes heritable, but the advancement is so recent that regulation is still up for debate. Meanwhile, participants in the “body-hacking” movement are implanting RFID microchips and magnets into their bodies to better take advantage of potentially life-enhancing technology. (Some claim, not unfairly, that these modifications aren’t so different from much more accepted technologies such as pacemakers and intrauterine devices). Just last week, in a closed-door meeting at Harvard University, a group of nearly 150 scientists and futurists discussed a project to synthesize the human genome, potentially making it possible to create humans with certain kinds of predetermined traits.

Transhumanism, in its most extreme manifestation, is reflective of an increasingly pervasive and influential school of thought: that all problems can and should be solved with the right combination of invention, entrepreneurship and resource allocation. The movement has its critics. Techno-utopianism is often described as the religion of Silicon Valley, in no small part because tech moguls are often the only ones with the resources to pursue it, and the only ones who stand to benefit in the near term.

As the solutions that transhumanists champion slowly enter the market, high prices leave them far out of reach for the typical consumer. Even today, the ability to make use of neuro-enhancing drugs and genetic screening for embryos greatly depends on whether one can afford them. If the benefits of human enhancement accrue only to the upper classes, it seems likely that inequality will be entrenched in ways deeper than just wealth, fundamentally challenging our egalitarian ideals.

And for many religious and philosophical opponents, transhumanism appears at its core to be an anti-human movement. Rather than seeking to improve the human condition through engagement with each other, transhumanists see qualities that make up the human identity as incidental inconveniences — things to override as soon as possible.

But for all the misgivings, transhumanism is making its way from the world of speculative technology into the mainstream. Google recently hired Ray Kurzweil, the inventor best known for his predictions of “the singularity” — simply put, the moment at which artificial intelligence surpasses human intelligence — and his assertions that medical technology will soon allow humans to transcend death, as its chief futurist. At the same time, the Transhumanist Party is floating Zoltan Istvan as its own third-party candidate for president.

The transhumanist movement is growing in followers and gaining media attention, but it’s unclear whether its particular preoccupations are inevitable enough to concern us today. Yet as technology continues to provide tools to manipulate the world around us, it becomes more and more likely that we will reach to manipulate ourselves. What could be the ramifications of a new wave of human enhancement? And what does our increasing fascination with technological futurism say about our priorities today?

https://www.washingtonpost.com/news/in-theory/wp/2016/05/16/will-technology-allow-us-to-transcend-the-human-condition/

The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near

June 04, 2016


In this blog post I will delve into the brain and explain its basic information processing machinery and compare it to deep learning. I do this by moving step-by-step along the brain’s electrochemical and biological information processing pipeline and relating it directly to the architecture of convolutional nets. Thereby we will see that a neuron and a convolutional net are very similar information processing machines. While performing this comparison, I will also discuss the computational complexity of these processes and thus derive an estimate for the brain’s overall computational power. I will use these estimates, along with knowledge from high performance computing, to show that it is unlikely that there will be a technological singularity in this century.
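To give a sense of the kind of estimate the post goes on to build, here is a crude back-of-envelope version. The neuron count, synapse count, firing rate and operations-per-event figures are common ballpark assumptions used only for illustration; they are not the author's actual numbers, which come out far higher once realistic neuron models are included.

```python
# Back-of-envelope estimate of the brain's computational throughput.
# All inputs are rough, commonly cited ballpark figures -- assumptions for
# illustration, not the estimates derived later in this post.

neurons = 86e9               # neurons in the human brain
synapses_per_neuron = 1e4    # assumed average synapses per neuron
avg_firing_rate_hz = 1.0     # assumed average spikes per second per neuron
ops_per_synaptic_event = 10  # assumed operations to model one synaptic event

synapses = neurons * synapses_per_neuron
ops_per_second = synapses * avg_firing_rate_hz * ops_per_synaptic_event

print(f"Synapses: {synapses:.1e}")
print(f"Estimated operations per second: {ops_per_second:.1e}")
```

This simple multiplication lands near 10^16 operations per second, petascale territory; the argument of the post is that once the real electrochemical machinery of a neuron is modeled, the number grows by many orders of magnitude, which is what pushes singularity timelines out.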

This blog post is complex as it arcs over multiple topics in order to unify them into a coherent framework of thought. I have tried to make this article as readable as possible, but I might have not succeeded in all places. Thus, if you find yourself in an unclear passage it might become clearer a few paragraphs down the road where I pick up the thought again and integrate it with another discipline.

First I will give a brief overview about the predictions for a technological singularity and topics which are aligned with that. Then I will start the integration of ideas between the brain and deep learning. I finish with discussing high performance computing and how this all relates to predictions about a technological singularity.

The part which compares the brain’s information processing steps to deep learning is self-contained, and readers who are not interested in predictions for a technological singularity may skip ahead to that part.

Part I: Evaluating current predictions of a technological singularity

There were a lot of headlines recently about predictions that artificial intelligence will reach super-human intelligence as early as 2030 and that this might herald the beginning of human extinction, or at least dramatically altering everyday life. How was this prediction made?

More at: http://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/

Why We Should Teach Kids to Code Biology, Not Just Software

June 04, 2016


Almost ten years ago, Freeman Dyson ventured a wild forecast:

“I predict that the domestication of biotechnology will dominate our lives during the next fifty years at least as much as the domestication of computers has dominated our lives during the previous fifty years.”

Just recently, MIT researchers created a programming language for living cells that can be used even by those with no previous genetic engineering knowledge. This is part of a growing body of evidence pointing to an undeniable trend—Dyson’s vision is starting to come true.

Over the next several decades we will develop tools that will make biotechnology affordable and accessible to anyone—not just in a university or even biohacking lab—but literally at home.

“Domesticating” Computers

To appreciate the power of Dyson’s forecast, let’s first go back in time. Not so long ago, the only computers around were massive things that took up entire rooms or even floors of a building. They were complicated to use and required multiple university degrees just to make them do simple tasks.

Over the last 50 years, humans have collectively engineered countless tools—from programming languages to hardware and software—that allow anyone to operate a computer with no prior knowledge. Everyone from the age of 3 to 95 can pick up an iPad and intuitively begin using it.

The personal computer brought an explosion of business, art, music, movies, writing, and connectivity between people the likes of which we had never seen before.

Given accessible and affordable tools, the average person found lots of uses for her personal computer—uses that several decades ago we couldn’t even have imagined.

Now, we’re seeing a similar “domestication” happening in biotechnology. And likewise, we have no idea what our children will create with the biotech equivalent of a personal computer.

“Domesticating” Biotechnology

Since 2003, when the human genome was sequenced and the cost of sequencing began to plummet, scientists and a rising number of citizen scientists have been building upon this accomplishment to create new tools that read, write, and edit DNA.

A lot of these tools have been built with “serious” science in mind, but many are also built for the casual tinkerer and biotech novice.

Today, just about anyone (even high school students) can…

  • Have their DNA sequenced
    You can learn about your ancestry composition and predisposition to certain inherited conditions like cystic fibrosis and sickle cell anemia at 23andMe.
  • Read BioBuilder
    Biobuilder is a recent book designed to teach high school and college students the fundamentals of biodesign and DNA engineering, complete with instructions on how to make your own glowing bacteria and other experiments.
  • Learn to use CRISPR
    Take a class on how to use CRISPR for your own experiments at Genspace, a citizen science lab in NYC (or a similar class in many community science labs across the world). No experience necessary.
  • Join iGEM
    iGEM is a worldwide synthetic biology organization initially created for college students and now open to entrepreneurs, community labs, and high schools.
  • Get started with “drag-and-drop” genetic engineering for free
    Download the Genome Compiler software for free and experiment with “drag-and-drop” genetic engineering (a toy code sketch of the idea follows this list).
  • Order synthetic DNA built to design or from the registry of standard biological parts online
  • Buy equipment for your home biotech lab, like Open qPCR or Opentrons
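
To make the “code for biology” analogy a bit more concrete, here is a minimal, purely illustrative Python sketch of composing standard biological parts in software. It is not tied to Genome Compiler or any real tool, and the part names and sequences are made-up placeholders rather than real registry entries.

```python
# Toy "drag-and-drop" style assembly of standard biological parts.
# All sequences below are hypothetical placeholders, not real registry parts.

PARTS = {
    "promoter":   "TTGACAATTAATCATCCGGCTCGTATAATG",
    "rbs":        "AGGAGG",
    "gfp_cds":    "ATGAGTAAAGGAGAAGAACTTTTCACTGGA",  # truncated placeholder CDS
    "terminator": "CCAGGCATCAAATAAAACGAAAGGCTCAG",
}

def assemble(*part_names: str) -> str:
    """Concatenate named parts into a single construct sequence."""
    return "".join(PARTS[name] for name in part_names)

def gc_content(seq: str) -> float:
    """Fraction of G and C bases, a basic sanity check on a design."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def reverse_complement(seq: str) -> str:
    """Reverse complement, useful when placing a part on the opposite strand."""
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in reversed(seq))

construct = assemble("promoter", "rbs", "gfp_cds", "terminator")
print(f"Construct length: {len(construct)} bp, GC content: {gc_content(construct):.2f}")
print("Reverse complement of promoter:", reverse_complement(PARTS["promoter"]))
```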

The Next Generation of Biohackers

For most people, the words genetic engineering and biotechnology do not bring to mind a vision of a new generation of artists designing a new variety of flower or a new breed of pet.

If this trend of biotechnology “domestication” continues, however, the next generation of engineers might be writing code not just for apps, but also new species of plants and animals.

And the potential here is much larger and more important than tinkering with the color of bacteria and flowers or designing new pets.

Last year, an iGEM team from Israel proposed a project to “develop cancer therapy that is both highly specific for cancer cells, efficient, and personalized for each tumor and patient genetics.” Another team proposed to upcycle methanol into a universal carbon source. And last year’s first prize winner at the high school level set out to prevent tissue damage from chronic inflammation in the human body.

To be clear, these are lofty goals, but the point is that young people are already working toward them. And if they are tackling huge challenges with synthetic biology today, imagine what they will be able to achieve as adults, with far better tools.

Not only are teenagers already rewriting the code of life, their interest in doing more and learning more is quickly growing. So far, 18,000 people have participated in iGEM. The competition has grown from 5 teams in 2004 to 245 teams in more than 32 countries in 2014.

What Could Go Wrong?

If Dyson’s prediction proves to be correct, we are already raising a generation of designers, engineers, and artists who will use amazing new toolsets to create on a new canvas—life itself.

So, what could possibly go wrong?

In his 2007 essay “Our Biotech Future,” published in The New York Review of Books, Dyson questions the ethics of domesticating biology. He asks: Can it, or should it, be stopped? If we are not going to stop it, what limits should be imposed, by whom, and how should those limits be enforced?

The comparison to computers is useful to a point, but biology is obviously much more complicated and there were fewer ethical questions when we were building the first microchips. 

Domesticating biotechnology means bringing it to the masses, and that means we’d have even less control over it than when it was limited to university or government-funded labs.

The answer to Dyson’s first question seems clear: This trend is not going to stop. There’s too much momentum. We have learned too much about how to control our own biology to turn back.

And this is all the more reason to teach the next generation early on about the power and ethics of rewriting the code of life.

http://singularityhub.com/2016/04/07/we-should-be-teaching-kids-to-code-biology-not-just-software/

IBM scientists achieve storage memory breakthrough

June 04, 2016


For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell using a relatively new memory technology known as phase-change memory (PCM).

The current landscape spans from venerable DRAM to hard disk drives to ubiquitous flash. But in the last several years PCM has attracted the industry’s attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility and density. For example, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles.

This research breakthrough provides fast and easy storage to capture the exponential growth of data from mobile devices and the Internet of Things.

Applications

IBM scientists envision standalone PCM as well as hybrid applications, which combine PCM and flash storage together, with PCM as an extremely fast cache. For example, a mobile phone’s operating system could be stored in PCM, enabling the phone to launch in a few seconds. In the enterprise space, entire databases could be stored in PCM for blazing fast query processing for time-critical online applications, such as financial transactions.

Machine learning algorithms that work over large datasets will also see a speed boost from the reduced latency when reading data between iterations.

How PCM Works

PCM materials exhibit two stable states, the amorphous (without a clearly defined structure) and crystalline (with structure) phases, of low and high electrical conductivity, respectively.

To store a ‘0’ or a ‘1’, known as bits, on a PCM cell, a high or medium electrical current is applied to the material. A ‘0’ can be written as the amorphous phase and a ‘1’ as the crystalline phase, or vice versa. To read the bit back, a low voltage is applied. This is how rewritable Blu-ray Discs store videos.
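
As a conceptual illustration only (not IBM’s circuitry), a single-bit PCM cell can be modeled as a two-state device whose resistance is thresholded on read. The resistance values and threshold in this Python sketch are arbitrary placeholders.

```python
# Conceptual model of a single-bit PCM cell: two phases with different resistance.
# Resistance values and the read threshold are arbitrary illustrative numbers.

AMORPHOUS_RESISTANCE_OHM = 1e6    # high resistance / low conductivity
CRYSTALLINE_RESISTANCE_OHM = 1e4  # low resistance / high conductivity
READ_THRESHOLD_OHM = 1e5          # threshold applied when reading the bit back

class PCMCell:
    def __init__(self):
        self.resistance = AMORPHOUS_RESISTANCE_OHM  # start in the amorphous phase

    def write(self, bit: int) -> None:
        """Program the cell: here '0' -> amorphous (reset), '1' -> crystalline (set)."""
        self.resistance = CRYSTALLINE_RESISTANCE_OHM if bit else AMORPHOUS_RESISTANCE_OHM

    def read(self) -> int:
        """Apply a (notional) low voltage and threshold the resulting resistance."""
        return 1 if self.resistance < READ_THRESHOLD_OHM else 0

cell = PCMCell()
cell.write(1)
assert cell.read() == 1
cell.write(0)
assert cell.read() == 0
```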

Previously, scientists at IBM and other institutions demonstrated the ability to store 1 bit per cell in PCM, but today at the IEEE International Memory Workshop in Paris, IBM scientists are presenting, for the first time, reliable storage of 3 bits per cell in a 64k-cell array at elevated temperatures and after 1 million endurance cycles.

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” said Dr. Haris Pozidis, an author of the paper and the manager of non-volatile memory research at IBM Research – Zurich. “Reaching three bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

To achieve multi-bit storage, IBM scientists have developed two enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes.

More specifically, the new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, the gradual change in the cell’s electrical conductivity after programming. To make the stored data robust against ambient temperature fluctuations as well, a novel coding and detection scheme is employed. This scheme adaptively modifies the level thresholds used to detect the cell’s stored data so that they track variations caused by temperature change. As a result, the cell state can be read reliably long after the memory is programmed, thus offering non-volatility.
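
The paper’s actual metrics and codes are not detailed in this announcement; the Python sketch below only illustrates the general idea of adaptive level detection, under the assumption that reference cells with known programmed levels are re-read alongside the data so the thresholds can be re-derived to track a common drift. The level values, drift, and noise model are all illustrative.

```python
import numpy as np

# Minimal sketch of adaptive threshold detection for a 3-bit (8-level) cell.
# Level values, drift model, and noise are illustrative, not IBM's scheme.

rng = np.random.default_rng(0)
NOMINAL_LEVELS = np.linspace(0.0, 7.0, 8)   # 8 programmed levels for 3 bits/cell

def measure(levels, drift, noise=0.05):
    """Simulate reading cells: every level shifts by a common drift plus noise."""
    return levels + drift + rng.normal(0.0, noise, size=len(levels))

# Reference cells with known programmed levels are read alongside the data...
drift_now = 0.8
reference_readback = measure(NOMINAL_LEVELS, drift_now)

# ...and thresholds are re-derived as midpoints between adjacent reference levels,
# so they follow whatever shift drift and temperature have introduced.
thresholds = (reference_readback[:-1] + reference_readback[1:]) / 2.0

def detect(readout, thresholds):
    """Map an analog readout to a level index using the adaptive thresholds."""
    return int(np.searchsorted(thresholds, readout))

# A data cell programmed to level 5 is still detected correctly despite the drift.
data_readout = measure(np.array([5.0]), drift_now)[0]
print("Detected level:", detect(data_readout, thresholds))
```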

“Combined, these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity and endurance cycling,” said Dr. Evangelos Eleftheriou, IBM Fellow.

The experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board. The chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on a doped-chalcogenide alloy and were integrated into the prototype chip, serving as a characterization vehicle, in 90 nm CMOS baseline technology.

More information: A. Athmanathan et al., “Multilevel-Cell Phase-Change Memory: A Viable Technology,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2016). DOI: 10.1109/JETCAS.2016.2528598

M. Stanisavljevic, H. Pozidis, A. Athmanathan, N. Papandreou, T. Mittelholzer, and E. Eleftheriou, “Demonstration of Reliable Triple-Level-Cell (TLC) Phase-Change Memory,” in Proc. International Memory Workshop, Paris, France, May 16-18, 2016.

Read more at: http://phys.org/news/2016-05-ibm-scientists-storage-memory-breakthrough.html#jCp