Why the “You” in an Afterlife Wouldn’t Really Be You

July 23, 2017

The Discovery is a 2017 Netflix film in which Robert Redford plays a scientist who proves that the afterlife is real. “Once the body dies, some part of our consciousness leaves us and travels to a new plane,” the scientist explains, evidenced by his machine that measures, as another character puts it, “brain wavelengths on a subatomic level leaving the body after death.”

This idea is not too far afield from a real theory called quantum consciousness, proffered by a wide range of people, from physicist Roger Penrose to physician Deepak Chopra. Some versions hold that our mind is not strictly the product of our brain and that consciousness exists separately from material substance, so the death of your physical body is not the end of your conscious existence. Because this is the topic of my next book, Heavens on Earth: The Scientific Search for the Afterlife, Immortality, and Utopia (Henry Holt, 2018), the film triggered a number of problems I have identified with all such concepts, both scientific and religious.

First, there is the assumption that our identity is located in our memories, which are presumed to be permanently recorded in the brain: if they could be copied and pasted into a computer or duplicated and implanted into a resurrected body or soul, we would be restored. But that is not how memory works. Memory is not like a DVR that can play back the past on a screen in your mind. Memory is a continually edited and fluid process that utterly depends on the neurons in your brain being functional. It is true that when you go to sleep and wake up the next morning or go under anesthesia for surgery and come back hours later, your memories return, as they do even after so-called profound hypothermia and circulatory arrest. Under this procedure, a patient’s brain is cooled to as low as 50 degrees Fahrenheit, which causes electrical activity in neurons to stop—suggesting that long-term memories are stored statically. But that cannot happen if your brain dies. That is why CPR has to be done so soon after a heart attack or drowning—because if the brain is starved of oxygen-rich blood, the neurons die, along with the memories stored therein.

Second, there is the supposition that copying your brain’s connectome—the diagram of its neural connections—uploading it into a computer (as some scientists suggest) or resurrecting your physical self in an afterlife (as many religions envision) will result in you waking up as if from a long sleep either in a lab or in heaven. But a copy of your memories, your mind or even your soul is not you. It is a copy of you, no different than a twin, and no twin looks at his or her sibling and thinks, “There I am.” Neither duplication nor resurrection can instantiate you in another plane of existence.

Third, your unique identity is more than just your intact memories; it is also your personal point of view. Neuroscientist Kenneth Hayworth, a senior scientist at the Howard Hughes Medical Institute and president of the Brain Preservation Foundation, divided this entity into the MEMself and the POVself. He believes that if a complete MEMself is transferred into a computer (or, presumably, resurrected in heaven), the POVself will awaken. I disagree. If this were done without the death of the person, there would be two memory selves, each with its own POVself looking out at the world through its unique eyes. At that moment, each would take a different path in life, thereby recording different memories based on different experiences. “You” would not suddenly have two POVs. If you died, there is no known mechanism by which your POVself would be transported from your brain into a computer (or a resurrected body). A POV depends entirely on the continuity of self from one moment to the next, even if that continuity is broken by sleep or anesthesia. Death is a permanent break in continuity, and your personal POV cannot be moved from your brain into some other medium, here or in the hereafter.
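
The copy-versus-original point can be made concrete with a programming analogy. The sketch below is purely illustrative (the MemorySelf class is hypothetical, not anything from Hayworth's work): a perfect copy starts with identical memories yet is a distinct instance whose experiences immediately diverge.

    import copy

    class MemorySelf:
        def __init__(self, memories):
            self.memories = list(memories)

        def experience(self, event):
            self.memories.append(event)

    original = MemorySelf(["childhood home", "first day of school"])
    duplicate = copy.deepcopy(original)

    print(duplicate.memories == original.memories)  # True: identical recorded past
    print(duplicate is original)                    # False: two distinct instances

    # From the moment of duplication, each accumulates its own point of view.
    original.experience("stayed in the lab")
    duplicate.experience("woke up in the machine")
    print(original.memories == duplicate.memories)  # False: the two selves diverge

The duplicate passes every memory test, yet at no point does the original's point of view transfer into it.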

If this sounds dispiriting, it is just the opposite. Awareness of our mortality is uplifting because it means that every moment, every day and every relationship matters. Engaging deeply with the world and with other sentient beings brings meaning and purpose. We are each of us unique in the world and in history, geographically and chronologically. Our genomes and connectomes cannot be duplicated, so we are individuals vouchsafed with awareness of our mortality and self-awareness of what that means. What does it mean? Life is not some temporary staging before the big show hereafter—it is our personal proscenium in the drama of the cosmos here and now.

This article was originally published with the title “Who Are You?”

ABOUT THE AUTHOR(S)

Michael Shermer is publisher of Skeptic magazine (www.skeptic.com) and a Presidential Fellow at Chapman University. His next book is Heavens on Earth. Follow him on Twitter @michaelshermer

https://www.scientificamerican.com/article/why-the-ldquo-you-rdquo-in-an-afterlife-wouldnt-really-be-you/


Exponential Growth Will Transform Humanity in the Next 30 Years

February 25, 2017


By Peter Diamandis

As we close out 2016, if you’ll allow me, I’d like to take a risk and venture into a topic I’m personally compelled to think about… a topic that will seem far out to most readers.

Today’s extraordinary rate of exponential growth may do much more than just disrupt industries. It may actually give birth to a new species, reinventing humanity over the next 30 years.

I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a “Meta-Intelligence,” a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions. In this post, I’m investigating the driving forces behind such an evolutionary step, the historical pattern we are about to repeat, and the implications thereof. Again, I acknowledge that this topic seems far-out, but the forces at play are huge and the implications are vast. Let’s dive in…

A Quick Recap: Evolution of Life on Earth in 4 Steps

About 4.6 billion years ago, our solar system, the sun and the Earth were formed.

Step 1: 3.5 billion years ago, the first simple life forms, called “prokaryotes,” came into existence. These prokaryotes were super-simple, microscopic single-celled organisms, basically a bag of cytoplasm with free-floating DNA. They had neither a distinct nucleus nor specialized organelles.

Step 2: Fast-forwarding one billion years to 2.5 billion years ago, the next step in evolution created what we call “eukaryotes”—life forms that distinguished themselves by incorporating biological ‘technology’ into themselves: technology that allowed them to manipulate energy (via mitochondria) and information (via chromosomes) far more efficiently. Fast forward another billion years for the next step.

Step 3: 1.5 billion years ago, these early eukaryotes began working collaboratively and formed the first “multi-cellular life,” of which you and I are the ultimate examples (a human is a multicellular creature of 10 trillion cells).

Step 4: The final step I want to highlight happened some 400 million years ago, when lungfish crawled out of the oceans onto the shores, and life evolved from the oceans onto land.

The Next Stages of Human Evolution: 4 Steps

Today, at a massively accelerated rate—some 100 million times faster than the steps I outlined above—life is undergoing a similar evolution. In this next stage of evolution, we are going from evolution by natural selection (Darwinism) to evolution by intelligent direction. Allow me to draw the analogy for you:

Step 1: Simple humans today are analogous to prokaryotes. Simple life, each life form independent of the others, competing and sometimes collaborating.

Step 2: Just as eukaryotes were created by ingesting technology, humans will incorporate technology into our bodies and brains that will allow us to make vastly more efficient use of information (via brain-computer interfaces, or BCI) and energy.

Step 3: Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.

Step 4: Finally, humanity is about to crawl out of the gravity well of Earth to become a multiplanetary species. Our journey to the moon, Mars, asteroids and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago.

The 4 Forces Driving the Evolution and Transformation of Humanity

Four primary driving forces are leading us towards the transformation of humanity into a meta-intelligence, both on and off the Earth:

  1. We’re wiring our planet
  2. Emergence of brain-computer interface
  3. Emergence of AI
  4. Opening of the space frontier

Let’s take a look.

1. Wiring the Planet: Today, there are 2.9 billion people connected online. Within the next six to eight years, that number is expected to increase to nearly 8 billion, with each individual on the planet having access to a megabit-per-second connection or better. The wiring is taking place through the deployment of 5G on the ground, plus networks being deployed by Facebook, Google, Qualcomm, Samsung, Virgin, SpaceX and many others. Within a decade, every single human on the planet will have access to multi-megabit connectivity, the world’s information, and massive computational power on the cloud.
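
As a rough check on those figures (my own back-of-envelope arithmetic, not Diamandis’s), here is the compound annual growth rate implied by going from 2.9 billion to roughly 8 billion connected people over six to eight years:

    start, target = 2.9e9, 8.0e9

    for years in (6, 7, 8):
        cagr = (target / start) ** (1 / years) - 1
        print(f"{years} years -> {cagr:.1%} per year")
    # Roughly 13-18% compound annual growth, depending on the horizon assumed.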

2. Brain-Computer Interface: A multitude of labs and entrepreneurs are working to create lasting, high-bandwidth connections between the digital world and the human neocortex (I wrote about that in detail here). Ray Kurzweil predicts we’ll see human-cloud connection by the mid-2030s, just 18 years from now. In addition, entrepreneurs like Bryan Johnson (and his company Kernel) are committing hundreds of millions of dollars towards this vision. The end results of connecting your neocortex with the cloud are twofold: first, you’ll have the ability to increase your memory capacity and/or cognitive function millions of fold; second, via a global mesh network, you’ll have the ability to connect your brain to anyone else’s brain and to emerging AIs, just like our cell phones, servers, watches, cars and all devices are becoming connected via the Internet of Things.

3. Artificial Intelligence/Human Intelligence: Next, and perhaps most significantly, we are on the cusp of an AI revolution. Artificial intelligence, powered by deep learning and funded by companies such as Google, Facebook, IBM, Samsung and Alibaba, will continue to rapidly accelerate and drive breakthroughs. Cumulative “intelligence” (both artificial and human) is the single greatest predictor of success for both companies and nations. For this reason, besides the emerging AI “arms race,” we will soon see a race focused on increasing overall human intelligence. Whatever challenges we might have in creating a vibrant brain-computer interface (e.g., designing long-term biocompatible sensors or nanobots that interface with your neocortex), those challenges will fall quickly over the next couple of decades as AI power tools give us ever-increasing problem-solving capability. It is an exponential atop an exponential. More intelligence gives us the tools to solve connectivity and mesh problems and in turn create greater intelligence.
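
To illustrate the phrase “an exponential atop an exponential,” here is a toy calculation (the rates are arbitrary and chosen only to show the shape of the curve): a series whose growth rate itself grows quickly dwarfs a plain exponential.

    steps = 10
    plain, compounding = 1.0, 1.0
    rate, rate_growth = 0.5, 0.5     # illustrative rates only

    for t in range(1, steps + 1):
        plain *= 1.5                 # fixed 50% growth per step
        compounding *= (1 + rate)    # growth rate that itself grows each step
        rate *= (1 + rate_growth)
        print(t, round(plain, 1), round(compounding, 1))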

4. Opening the Space Frontier: Finally, it’s important to note that the human race is on the verge of becoming a multiplanetary species. Thousands of years from now, whatever we’ve evolved into, we will look back at these next few decades as the moment in time when the human race moved off Earth irreversibly. Today, billions of dollars are being invested privately into the commercial space industry. Efforts led by SpaceX are targeting humans on Mars, while efforts by Blue Origin are looking at taking humanity back to the moon, and plans by my own company, Planetary Resources, strive to unlock near-infinite resources from the asteroids.

In Conclusion

The rate of human evolution is accelerating as we transition from the slow and random process of “Darwinian natural selection” to a hyper-accelerated and precisely-directed period of “evolution by intelligent direction.” In this post, I chose not to discuss the power being unleashed by such gene-editing techniques as CRISPR-Cas9. Consider this yet another tool able to accelerate evolution by our own hand.

The bottom line is that change is coming, faster than ever considered possible. All of us leaders, entrepreneurs and parents have a huge responsibility to inspire and guide the transformation of humanity on and off the Earth. What we do over the next 30 years—the bridges we build to abundance—will impact the future of the human race for millennia to come. We truly live during the most exciting time ever in human history.

https://singularityhub.com/2016/12/21/exponential-growth-will-transform-humanity-in-the-next-30-years/

New AI Mental Health Tools Beat Human Doctors at Assessing Patients

December 18, 2016


About 20 percent of youth in the United States live with a mental health condition, according to the National Institute of Mental Health.

That’s the bad news.

The good news is that mental health professionals have smarter tools than ever before, with artificial intelligence-related technology coming to the forefront to help diagnose patients, often with much greater accuracy than humans.

A new study published in the journal Suicide and Life-Threatening Behavior, for example, showed that machine learning is up to 93 percent accurate in identifying a suicidal person. The research, led by John Pestian, a professor at Cincinnati Children’s Hospital Medical Center, involved 379 teenage patients from three area hospitals.

Each patient completed standardized behavioral rating scales and participated in a semi-structured interview, answering five open-ended questions such as “Are you angry?” to stimulate conversation, according to a press release from the university.

The researchers analyzed both verbal and non-verbal language from the data, then sent the information through a machine-learning algorithm that was able to determine with remarkable accuracy whether the person was suicidal, mentally ill but not suicidal, or neither.
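
The article does not describe the group's algorithm, but the general shape of such a text classifier is familiar. The sketch below is a generic stand-in (TF-IDF features plus a linear model, with made-up transcripts and labels), not the Pestian group's actual method:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder transcripts and labels, for illustration only.
    transcripts = [
        "I feel like nothing will ever get better",
        "school was okay, I hung out with friends after",
        "I am angry all the time and I can't sleep",
        "we watched a movie and talked about the weekend",
    ]
    labels = ["suicidal", "neither", "mentally ill", "neither"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(transcripts, labels)
    print(model.predict(["nothing matters anymore"]))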

“These computational approaches provide novel opportunities to apply technological innovations in suicide care and prevention, and it surely is needed,” Pestian says in the press release.

In 2014, suicide was ranked as the tenth leading cause of death in the United States, but the No. 2 cause of death for people age 15 to 24, according to the American Association of Suicidology.

A study just published in the journal Psychological Bulletin further punctuated the need for better tools to help with suicide prevention. A meta-analysis of 365 studies conducted over the last 50 years found that the ability of mental health experts to predict if someone will attempt suicide is “no better than chance.”

“One of the major reasons for this is that researchers have almost always tried to use a single factor (e.g., a depression diagnosis) to predict these things,” says lead author Joseph Franklin of Harvard University in an email exchange with Singularity Hub.

Franklin says that the complex nature behind such thoughts and behaviors requires consideration of tens if not hundreds of factors to make accurate predictions. He and others argue in a correspondence piece published earlier this year in Psychological Medicine that machine learning and related techniques are an ideal option. A search engine using only one factor would be ineffective at returning results; the same is true of today’s attempts to predict suicidal behavior.
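
Franklin's point about single-factor prediction is easy to demonstrate on synthetic data: a model given one factor barely beats chance, while the same model given many weakly informative factors does far better. This is an illustration of the statistical argument only, not a reanalysis of any clinical dataset.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=2000, n_features=50,
                               n_informative=30, random_state=0)

    clf = LogisticRegression(max_iter=1000)
    one_factor = cross_val_score(clf, X[:, :1], y, scoring="roc_auc", cv=5).mean()
    many_factors = cross_val_score(clf, X, y, scoring="roc_auc", cv=5).mean()
    print(f"AUC with one factor:   {one_factor:.2f}")
    print(f"AUC with many factors: {many_factors:.2f}")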

He notes that researchers in Boston, including colleague Matthew K. Nock at Harvard, have already used machine learning to predict suicidal behaviors with 70 to 85 percent accuracy. Calling the work “amazing,” Franklin notes that the research is still in the preliminary stages, with small sample sizes.

“The work by the Pestian group is also interesting, with their use of vocal patterns/natural language processing being unique from most other work in this area so far,” Franklin says, adding that there are also limits as to what can be drawn from their findings at this point. “Nevertheless, this is a very interesting line of work that also represents a sharp and promising departure from what the field has been doing for the past 50 years.”

Machine learning has yet to be used in therapy, according to Franklin, while most conventional treatments for suicide fall short.

“So even though several groups are on the verge of being able to accurately predict suicidality on the scale of entire healthcare systems [with AI], it’s unclear what we should do with these at-risk people to reduce their risk,” Franklin says.

To that end, Franklin and colleagues have developed a free app called Tec-Tec that appears effective at “reducing self-cutting, suicide plans, and suicidal behaviors.”

The app is based on a psychological technique called evaluative conditioning. By continually pairing certain words and images, it changes associations with certain objects and concepts, according to the website; within a game-like design, Tec-Tec seeks to shift associations with factors that may increase the risk of self-injurious behaviors.

“We’re working on [additional] trials and soon hope to use machine learning to tailor the app to each individual over time,” Franklin says, “and to connect the people most in need with the app.”

Catching schizophrenic speech

Last year, researchers in a study published in the journal Schizophrenia also had promising results in using machine-learning algorithms to predict later psychosis onset in high-risk youths.

Thirty-four participants were interviewed and assessed quarterly for two and a half years. Using automated analysis, transcripts of the interviews were evaluated for coherence and two syntactic markers of speech complexity—the length of a sentence and the number of clauses it contained.

The speech features analyzed by the computer predicted later psychosis development with 100 percent accuracy, outperforming classification from clinical interviews, according to the researchers.
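
As a rough illustration of the kinds of features described (per-sentence length, a clause count, and a coherence measure), here is a crude stand-in based on simple regular expressions and word overlap; it is not the study's actual natural language processing pipeline:

    import re

    def speech_features(transcript):
        sentences = [s.strip() for s in re.split(r"[.!?]+", transcript) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        # Very rough clause proxy: count common clause markers, plus one.
        markers = re.compile(r"\b(and|but|because|that|which|when|while|so)\b", re.I)
        clauses = [len(markers.findall(s)) + 1 for s in sentences]
        # Naive coherence proxy: word overlap between adjacent sentences.
        coherence = []
        for a, b in zip(sentences, sentences[1:]):
            wa, wb = set(a.lower().split()), set(b.lower().split())
            coherence.append(len(wa & wb) / max(1, len(wa | wb)))
        return lengths, clauses, coherence

    print(speech_features("I went to the store because I was hungry. "
                          "The store was closed. Closed stores make me sad."))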

“Recent developments in computer science, including natural language processing, could provide the foundation for future development of objective clinical tests for psychiatry,” they wrote.

Diagnosing ADHD early

In a project now under way, scientists at the University of Texas at Arlington and Yale University will combine computing power and psychiatric expertise to design an AI system that can assess a common disorder among youth: attention-deficit/hyperactivity disorder (ADHD), which the Centers for Disease Control and Prevention (CDC) says affects 8.5 percent of children ages 8 to 15.

The research uses “the latest methods in computer vision, machine learning and data mining” to assess children while they are performing certain physical and computer exercises, according to a press release from UTA. The exercises test a child’s attention, decision-making and ability to manage emotions. The data are then analyzed to determine the best type of intervention.

“We believe that the proposed computational methods will help provide quantifiable early diagnosis and allow us to monitor progress over time. In particular, it will help children overcome learning difficulties and lead them to healthy and productive lives,” says Fillia Makedon, a professor in UTA’s Department of Computer Science and Engineering.

Keeping an eye out for autism

Meanwhile, a group at the University of Buffalo has developed a mobile app that can detect autism spectrum disorder (ASD) in children as young as two years old with nearly 94 percent accuracy. The results were recently presented at the IEEE Wireless Health conference at the National Institutes of Health.

The app tracks eye movements of a child looking at pictures of social scenes, such as those showing multiple people, according to a press release from the university. The eye movements of someone with ASD are often different from those of a person without autism.
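
The press release does not detail the app's algorithm, but a much-simplified version of the idea is to measure how much of a child's gaze lands on the faces in a scene. The regions, gaze points and interpretation below are invented for illustration:

    # Hypothetical face bounding boxes in a social-scene image: (x1, y1, x2, y2).
    face_regions = [(100, 100, 180, 200), (300, 120, 380, 220)]

    def fraction_on_faces(gaze_points):
        def on_face(x, y):
            return any(x1 <= x <= x2 and y1 <= y <= y2
                       for x1, y1, x2, y2 in face_regions)
        return sum(on_face(x, y) for x, y in gaze_points) / len(gaze_points)

    gaze = [(110, 150), (320, 160), (50, 400), (350, 180), (120, 130)]
    print(f"{fraction_on_faces(gaze):.0%} of gaze samples fall on faces")
    # In a real screening tool, an unusually low score might prompt referral
    # for professional evaluation.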

About one in 68 children in the United States has been diagnosed with ASD, according to the CDC. The UB study included 32 children ranging in age from two to 10. A larger study is planned for the future.

It takes less than a minute to administer the test, which can be done by a parent at home to determine if a child requires professional evaluation.

“This technology fills the gap between someone suffering from autism to diagnosis and treatment,” says Wenyao Xu, an assistant professor in UB’s School of Engineering and Applied Sciences.

Technology that helps treat our most vulnerable populations? Turns out, there is an app for that.

https://singularityhub.com/2016/12/02/new-ai-mental-health-tools-beat-human-doctors-at-assessing-patients/

We might live in a computer program, but it may not matter

December 18, 2016


By Philip Ball

Are you real? What about me?

These used to be questions that only philosophers worried about. Scientists just got on with figuring out how the world is, and why. But some of the current best guesses about how the world is seem to leave the question hanging over science too.

Several physicists, cosmologists and technologists are now happy to entertain the idea that we are all living inside a gigantic computer simulation, experiencing a Matrix-style virtual world that we mistakenly think is real.

Our instincts rebel, of course. It all feels too real to be a simulation. The weight of the cup in my hand, the rich aroma of the coffee it contains, the sounds all around me – how can such richness of experience be faked?

But then consider the extraordinary progress in computer and information technologies over the past few decades. Computers have given us games of uncanny realism – with autonomous characters responding to our choices – as well as virtual-reality simulators of tremendous persuasive power.

It is enough to make you paranoid.

The Matrix formulated the narrative with unprecedented clarity. In that story, humans are locked by a malignant power into a virtual world that they accept unquestioningly as “real”. But the science-fiction nightmare of being trapped in a universe manufactured within our minds can be traced back further, for instance to David Cronenberg’s Videodrome (1983) and Terry Gilliam’s Brazil (1985).

Over all these dystopian visions, there loom two questions. How would we know? And would it matter anyway?

Elon Musk, CEO of Tesla and SpaceX (Credit: Kristoffer Tripplaar/Alamy)

The idea that we live in a simulation has some high-profile advocates.

In June 2016, technology entrepreneur Elon Musk asserted that the odds are “a billion to one” against us living in “base reality”.

Similarly, Google’s machine-intelligence guru Ray Kurzweil has suggested that “maybe our whole universe is a science experiment of some junior high-school student in another universe.”

What’s more, some physicists are willing to entertain the possibility. In April 2016, several of them debated the issue at the American Museum of Natural History in New York, US.

None of these people are proposing that we are physical beings held in some gloopy vat and wired up to believe in the world around us, as in The Matrix.

Instead, there are at least two other ways that the Universe around us might not be the real one.

Cosmologist Alan Guth of the Massachusetts Institute of Technology, US has suggested that our entire Universe might be real yet still a kind of lab experiment. The idea is that our Universe was created by some super-intelligence, much as biologists breed colonies of micro-organisms.

More at: http://www.bbc.com/earth/story/20160901-we-might-live-in-a-computer-program-but-it-may-not-matter

IBM is one step closer to mimicking the human brain

September 24, 2016


Scientists at IBM have claimed a computational breakthrough after imitating large populations of neurons for the first time.

Neurons are electrically excitable cells that process and transmit information in our brains through electrical and chemical signals. These signals are passed over synapses, specialised connections with other cells.

It’s this set-up that inspired scientists at IBM to try and mirror the way the biological brain functions using phase-change materials for memory applications.

Using computers to mimic the human brain has been theorised about for decades, but the challenge of recreating the brain’s density and power efficiency has kept it out of reach. Now, for the first time, scientists have created their own “randomly spiking” artificial neurons that can store and process data.

“The breakthrough marks a significant step forward in the development of energy-efficient, ultra-dense integrated neuromorphic technologies for applications in cognitive computing,” the scientists said.

The artificial neurons consist of phase-change materials, including germanium antimony telluride, which exhibit two stable states: an amorphous one (without a clearly defined structure) and a crystalline one (with structure). These materials are also the basis of re-writable Blu-ray discs, but in this system the artificial neurons do not store digital information; they are analogue, just like the synapses and neurons in a biological brain.

The beauty of these powerful phase-change-based artificial neurons, which can perform various computational primitives such as data-correlation detection and unsupervised learning at high speeds, is that they use very little energy – just like the human brain.

In a demonstration published in the journal Nature Nanotechnology, the team applied a series of electrical pulses to the artificial neurons, which resulted in the progressive crystallisation of the phase-change material, ultimately causing the neuron to fire.

In neuroscience, this function is known as the integrate-and-fire property of biological neurons. This is the foundation for event-based computation and, in principle, is quite similar to how a biological brain triggers a response when an animal touches something hot, for instance.
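
For readers unfamiliar with the term, a minimal leaky integrate-and-fire model looks like the sketch below. It captures the behaviour being described (inputs accumulate until a threshold is crossed, then the neuron fires and resets), not IBM's device physics; all parameters are arbitrary.

    def integrate_and_fire(pulses, threshold=1.0, leak=0.95):
        state = 0.0          # loosely analogous to the degree of crystallisation
        spikes = []
        for pulse in pulses:
            state = state * leak + pulse   # each pulse nudges the state upward
            if state >= threshold:
                spikes.append(1)           # threshold crossed: the neuron fires
                state = 0.0                # firing resets (re-amorphises) the cell
            else:
                spikes.append(0)
        return spikes

    print(integrate_and_fire([0.3, 0.3, 0.3, 0.3, 0.1, 0.6, 0.6]))
    # -> [0, 0, 0, 1, 0, 0, 1]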

As part of the study, the researchers organised hundreds of artificial neurons into populations and used them to represent fast and complex signals. When tested, the artificial neurons were able to sustain billions of switching cycles, which would correspond to multiple years of operation at an update frequency of 100Hz.

The energy required for each neuron update was less than five picojoules and the average power less than 120 microwatts — for comparison, 60 million microwatts power a 60-watt light bulb, IBM’s research paper said.
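
Two quick unit checks on those figures (my arithmetic, not IBM's): billions of updates at 100 Hz really do correspond to years of operation, and 120 microwatts is a tiny fraction of a light bulb's draw.

    SECONDS_PER_YEAR = 365 * 24 * 3600            # about 3.15e7 seconds

    updates_per_year = 100 * SECONDS_PER_YEAR     # at a 100 Hz update frequency
    print(f"{updates_per_year:.2e} updates per year")   # ~3.2 billion per year

    bulb_microwatts = 60 * 1_000_000              # a 60-watt bulb in microwatts
    print(bulb_microwatts / 120)   # the bulb draws as much as ~500,000 such neurons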

When exploiting this integrate-and-fire property, even a single neuron can be used to detect patterns and discover correlations in real-time streams of event-based data. “This will significantly reduce the area and power consumption as it will be using tiny nanoscale devices that act as neurons,” IBM scientist and study author Dr. Abu Sebastian told WIRED.

This, IBM believes, could be helpful in the further development of internet of things technologies, especially when developing tiny sensors.

“Populations of stochastic phase-change neurons, combined with other nanoscale computational elements such as artificial synapses, could be a key enabler for the creation of a new generation of extremely dense neuromorphic computing systems,” said Tomas Tuma, co-author of the paper.

This could be useful, said Sebastian, in sensors that collect and analyse large volumes of weather data at the edge, in remote locations, enabling faster and more accurate weather forecasts.

The artificial neurons could also detect patterns in financial transactions to find discrepancies, or use data from social media to discover new cultural trends in real time. Large populations of these high-speed, low-energy nanoscale neurons could also be used in neuromorphic co-processors with co-located memory and processing units.

http://www.wired.co.uk/article/scientists-mimicking-human-brain-computation

Pre and post testing show reversal of memory loss from Alzheimer’s disease in 10 patients

June 28, 2016


Results from quantitative MRI and neuropsychological testing show unprecedented improvements in ten patients with early Alzheimer’s disease (AD) or its precursors following treatment with a programmatic and personalized therapy. Results from an approach dubbed metabolic enhancement for neurodegeneration are now available online in the journal Aging.

The study, which comes jointly from the Buck Institute for Research on Aging and the UCLA Easton Laboratories for Neurodegenerative Disease Research, is the first to objectively show that memory loss in patients can be reversed, and improvement sustained, using a complex, 36-point therapeutic personalized program that involves comprehensive changes in diet, brain stimulation, exercise, optimization of sleep, specific pharmaceuticals and vitamins, and multiple additional steps that affect brain chemistry.

“All of these patients had either well-defined mild cognitive impairment (MCI), subjective cognitive impairment (SCI) or had been diagnosed with AD before beginning the program,” said author Dale Bredesen, MD, a professor at the Buck Institute and professor at the Easton Laboratories for Neurodegenerative Disease Research at UCLA, who noted that patients who had had to discontinue work were able to return to work and those struggling at their jobs were able to improve their performance. “Follow up testing showed some of the patients going from abnormal to normal.”

One of the more striking cases involved a 66-year-old professional man whose neuropsychological testing was compatible with a diagnosis of MCI and whose PET scan showed reduced glucose utilization indicative of AD. An MRI showed hippocampal volume at only the 17th percentile for his age. After 10 months on the protocol, a follow-up MRI showed a dramatic increase of his hippocampal volume to the 75th percentile, with an associated absolute increase in volume of nearly 12 percent.

In another instance, a 69-year old professional man and entrepreneur, who was in the process of shutting down his business, went on the protocol after 11 years of progressive memory loss. After six months, his wife, co-workers and he noted improvement in memory. A life-long ability to add columns of numbers rapidly in his head returned and he reported an ability to remember his schedule and recognize faces at work. After 22 months on the protocol he returned for follow-up quantitative neuropsychological testing; results showed marked improvements in all categories with his long-term recall increasing from the 3rd to 84th percentile. He is expanding his business.

Another patient, a 49-year old woman who noted progressive difficulty with word finding and facial recognition went on the protocol after undergoing quantitative neuropsychological testing at a major university. She had been told she was in the early stages of cognitive decline and was therefore ineligible for an Alzheimer’s prevention program. After several months on the protocol she noted a clear improvement in recall, reading, navigating, vocabulary, mental clarity and facial recognition. Her foreign language ability had returned. Nine months after beginning the program she did a repeat of the neuropsychological testing at the same university site. She no longer showed evidence of cognitive decline.

All but one of the ten patients included in the study are at genetic risk for AD, carrying at least one copy of the APOE4 allele. Five of the patients carry two copies of APOE4, which gives them a 10- to 12-fold increased risk of developing AD. “We’re entering a new era,” said Bredesen. “The old advice was to avoid testing for APOE because there was nothing that could be done about it. Now we’re recommending that people find out their genetic status as early as possible so they can go on prevention.” Sixty-five percent of the Alzheimer’s cases in this country involve APOE4, with seven million people carrying two copies of the allele.

Bredesen’s systems-based approach to reverse memory loss follows the abject failure of monotherapies designed to treat AD and the success of combination therapies to treat other chronic illnesses such as cardiovascular disease, cancer and HIV. Bredesen says decades of biomedical research, both in his and other labs, have revealed that an extensive network of molecular interactions is involved in AD pathogenesis, suggesting that a broader-based therapeutic approach may be more effective. “Imagine having a roof with 36 holes in it, and your drug patched one hole very well–the drug may have worked, a single ‘hole’ may have been fixed, but you still have 35 other leaks, and so the underlying process may not be affected much,” Bredesen said. “We think addressing multiple targets within the molecular network may be additive, or even synergistic, and that such a combinatorial approach may enhance drug candidate performance, as well.”

While encouraged by the results of the study, Bredesen admits more needs to be done. “The magnitude of improvement in these ten patients is unprecedented, providing additional objective evidence that this programmatic approach to cognitive decline is highly effective,” Bredesen said. “Even though we see the far-reaching implications of this success, we also realize that this is a very small study that needs to be replicated in larger numbers at various sites.” Plans for larger studies are underway.

Cognitive decline is often listed as the major concern of older adults. Already, Alzheimer’s disease affects approximately 5.4 million Americans and 30 million people globally. Without effective prevention and treatment, the prospects for the future are bleak. By 2050, it’s estimated that 160 million people globally will have the disease, including 13 million Americans, leading to potential bankruptcy of the Medicare system. Unlike several other chronic illnesses, Alzheimer’s disease is on the rise–recent estimates suggest that AD has become the third leading cause of death in the United States behind cardiovascular disease and cancer.

Story Source:

The above post is reprinted from materials provided by Buck Institute for Research on Aging. Note: Materials may be edited for content and length.


Journal Reference:

  1. Dale E. Bredesen et al. Reversal of cognitive decline in Alzheimer’s disease. Aging, June 2016 [link]

https://www.sciencedaily.com/releases/2016/06/160616071933.htm

Giant Artwork Reflects The Gorgeous Complexity of The Human Brain

June 28, 2016


Your brain has approximately 86 billion neurons joined together through some 100 trillion connections, giving rise to a complex biological machine capable of pulling off amazing feats. Yet it’s difficult to truly grasp the sophistication of this interconnected web of cells.

Now, a new work of art based on actual scientific data provides a glimpse into this complexity.

The 8-by-12-foot gold panel, depicting a sagittal slice of the human brain, blends hand drawing and multiple human brain datasets from several universities. The work was created by Greg Dunn, a neuroscientist-turned-artist, and Brian Edwards, a physicist at the University of Pennsylvania, and goes on display Saturday at The Franklin Institute in Philadelphia. There will be a public unveiling and a lecture by the artists at 3 p.m.

“The human brain is insanely complicated,” Dunn said. “Rather than being told that your brain has 80 billion neurons, you can see with your own eyes what the activity of 500,000 of them looks like, and that has a much greater capacity to make an emotional impact than does a factoid in a book someplace.”

Artists Greg Dunn and Brian Edwards present their work at the Franklin Institute in Philadelphia. (Photo: Will Drinker)

 

To reflect the neural activity within the brain, Dunn and Edwards have developed a technique called micro-etching: They paint the neurons by making microscopic ridges on a reflective sheet in such a way that they catch and reflect light from certain angles. When the light source moves in relation to the gold panel, the image appears to be animated, as if waves of activity are sweeping through it.

First, the visual cortex at the back of the brain lights up, then light propagates to the rest of the brain, gleaming and dimming in various regions — just as neurons would signal inside a real brain when you look at a piece of art.
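
A toy model of that effect (my simplification, not the artists' actual fabrication process): give each etched region a ridge angle, make its brightness peak when the light arrives from that angle, and sweeping the light then 'plays' the regions in sequence.

    import math

    # Hypothetical ridge angles (degrees) assigned to three etched regions.
    regions = {"visual cortex": 10, "thalamus": 30, "motor cortex": 50}

    def brightness(ridge_angle, light_angle, width=10.0):
        # Simple bell-shaped response centred on the ridge angle.
        return math.exp(-((light_angle - ridge_angle) / width) ** 2)

    for light_angle in (10, 30, 50):
        frame = {name: round(brightness(a, light_angle), 2)
                 for name, a in regions.items()}
        print(f"light at {light_angle} degrees -> {frame}")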

That’s the idea behind the name of Dunn and Edwards’ piece: “Self Reflected.” It’s basically an animated painting of your brain perceiving itself in an animated painting.

Here’s a video to give you an idea of how the etched neurons light up as the light source moves:

To make the artwork resemble a real brain as closely as possible, the artists used actual MRI scans and human brain maps, but the datasets were not detailed enough. “There were a lot of holes to fill in,” Dunn said. Several students working with the duo explored scientific literature to figure out what types of neurons are in a given brain region, what they look like and what they are connected to. Then the artists drew each neuron.

A close-up of the cerebellum in the finished work. (Photo: Will Drinker and Greg Dunn)

A close-up of the motor cortex in the finished work. (Photo: Will Drinker and Greg Dunn)

 

Dunn and Edwards then used data from DTI scans — a special type of imaging that maps bundles of white matter connecting different regions of the brain. This completed the picture, and the results were scanned into a computer.

Using photolithography, the artists etched the image onto a panel covered with gold leaf. Then, they switched on the lights:

This is what “Self Reflected” looks like when it’s illuminated with all white light. (Photo: Will Drinker and Greg Dunn)

 

“A lot of times in science and engineering, we take a complex object and distill it down to its bare essential components, and study that component really well,” Edwards said. But when it comes to the brain, understanding one neuron is very different from understanding how billions of neurons work together and give rise to consciousness.

“Of course, we can’t explain consciousness through an art piece, but we can give a sense of the fact that it is more complicated than just a few neurons,” he added.

The artists hope their work will inspire people, even professional neuroscientists, “to take a moment and remember that our brains are absolutely insanely beautiful and they are buzzing with activity every instant of our lives,” Dunn said. “Everybody takes it for granted, but we have, at the very core of our being, the most complex machine in the entire universe.”

http://www.huffingtonpost.com/entry/brain-art-franklin-institute_us_576d65b3e4b017b379f5cb68

The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near

June 04, 2016


In this blog post I will delve into the brain and explain its basic information processing machinery and compare it to deep learning. I do this by moving step-by-step along the brain’s electrochemical and biological information processing pipeline and relating it directly to the architecture of convolutional nets. Thereby we will see that a neuron and a convolutional net are very similar information processing machines. While performing this comparison, I will also discuss the computational complexity of these processes and thus derive an estimate for the brain’s overall computational power. I will use these estimates, along with knowledge from high performance computing, to show that it is unlikely that there will be a technological singularity in this century.

This blog post is complex as it arcs over multiple topics in order to unify them into a coherent framework of thought. I have tried to make this article as readable as possible, but I might not have succeeded in all places. Thus, if you find yourself in an unclear passage, it might become clearer a few paragraphs down the road where I pick up the thought again and integrate it with another discipline.

First I will give a brief overview about the predictions for a technological singularity and topics which are aligned with that. Then I will start the integration of ideas between the brain and deep learning. I finish with discussing high performance computing and how this all relates to predictions about a technological singularity.

The part that compares the brain’s information processing steps to deep learning is self-contained, and readers who are not interested in predictions for a technological singularity may skip ahead to it.

Part I: Evaluating current predictions of a technological singularity

There were a lot of headlines recently about predictions that artificial intelligence will reach super-human intelligence as early as 2030 and that this might herald the beginning of human extinction, or at least dramatically alter everyday life. How was this prediction made?

More at: http://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/

IBM’s resistive computing could massively accelerate AI — and get us closer to Asimov’s Positronic Brain

April 23, 2016


With the recent rapid advances in machine learning has come a renaissance for neural networks — computer software that solves problems a little bit like a human brain, by employing a complex process of pattern-matching distributed across many virtual nodes, or “neurons.” Modern compute power has enabled neural networks to recognize images, speech, and faces, as well as to pilot self-driving cars, and win at Go and Jeopardy. Most computer scientists think that is only the beginning of what will ultimately be possible. Unfortunately, the hardware we use to train and run neural networks looks almost nothing like their architecture. That means it can take days or even weeks to train a neural network to solve a problem — even on a compute cluster — and the trained network then requires a large amount of power to solve the problem.

Neuromorphic computing may be key to advancing AI

Researchers at IBM aim to change all that, by perfecting another technology that, like neural networks, first appeared decades ago. Loosely called resistive computing, the concept is to have compute units that are analog in nature, small in substance, and can retain their history so they can learn during the training process. Accelerating neural networks with hardware isn’t new to IBM. It recently announced the sale of some of its TrueNorth chips to Lawrence Livermore National Laboratory for AI research. TrueNorth’s design is neuromorphic, meaning that the chips roughly approximate the brain’s architecture of neurons and synapses. Despite its slow clock rate of 1 kHz, TrueNorth can run neural networks very efficiently because of its million tiny processing units that each emulate a neuron.

Until now, though, neural network accelerators like TrueNorth have been limited to the problem-solving portion of deploying a neural network. Training — the painstaking process of letting the system grade itself on a test data set, and then tweaking parameters (called weights) until it achieves success — still needs to be done on traditional computers. Moving from CPUs to GPUs and custom silicon has increased performance and reduced the power consumption required, but the process is still expensive and time consuming. That is where new work by IBM researchers Tayfun Gokmen and Yuri Vlasov comes in. They propose a new chip architecture, using resistive computing to create tiles of millions of Resistive Processing Units (RPUs), which can be used for both training and running neural networks.

Using Resistive Computing to break the neural network training bottleneck

Deep neural networks have at least one hidden layer, and often hundreds. That makes them expensive to emulate on traditional hardware. Resistive computing is a large topic, but roughly speaking, in the IBM design each small processing unit (RPU) mimics a synapse in the brain. It receives a variety of analog inputs — in the form of voltages — and based on its past “experience” uses a weighted function of them to decide what result to pass along to the next set of compute elements. Synapses have a bewildering, and not yet totally understood, layout in the brain, but chips with resistive elements tend to have them neatly organized in two-dimensional arrays. For example, IBM’s recent work shows how it is possible to organize them in 4,096-by-4,096 arrays.

Because resistive compute units are specialized (compared with a CPU or GPU core), and don’t need to either convert analog to digital information, or access memory other than their own, they can be fast and consume little power. So, in theory, a complex neural network — like the ones used to recognize road signs in a self-driving car, for example — can be directly modeled by dedicating a resistive compute element to each of the software-described nodes. However, because RPUs are imprecise — due to their analog nature and a certain amount of noise in their circuitry — any algorithm run on them needs to be made resistant to the imprecision inherent in resistive computing elements.
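
In linear-algebra terms, what such a crossbar tile computes is a vector-matrix multiply in which each cross-point's conductance acts as a weight, with some analog error on top. The sketch below is schematic; the sizes and noise level are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(4, 4))   # conductances at each cross-point
    voltages = rng.normal(size=4)       # analog input voltages on the rows

    ideal = weights.T @ voltages        # currents summed along each column
    noisy = ideal + rng.normal(scale=0.05, size=ideal.shape)  # analog read noise

    print("ideal:", np.round(ideal, 3))
    print("noisy:", np.round(noisy, 3))  # training must tolerate this imprecision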

Traditional neural network algorithms — both for execution and training — have been written assuming high-precision digital processing units that could easily call on any needed memory values. Rewriting them so that each local node can execute largely on its own, and be imprecise, but produce a result that is still sufficiently accurate, required a lot of software innovation.

For these new software algorithms to work at scale, advances were also needed in hardware. Existing technologies weren’t adequate to create “synapses” that could be packed together closely enough, and operate with low power in a noisy environment, to make resistive processing a practical alternative to existing approaches. Runtime execution happened first, with the logic for training a neural net on a hybrid resistive computer not developed until 2014. At the time, researchers at the University of Pittsburgh and Tsinghua University claimed that such a solution could result in a 3-to-4-order-of-magnitude gain in power efficiency at the cost of only about 5% in accuracy.

IBM researchers claim an RPU-based design will be massively more efficient for neural network applications, as shown in a table from their paper.

Moving from execution to training

This new work from IBM pushes the use of resistive computing even further, postulating a system where almost all computation is done on RPUs, with traditional circuitry only needed for support functions and input and output. This innovation relies on combining a version of a neural network training algorithm that can run on an RPU-based architecture with a hardware specification for an RPU that could run it.

As far as putting the ideas into practice, so far resistive compute has been mostly a theoretical construct. The first resistive memory (RRAM) became available for prototyping in 2012, and isn’t expected to be a mainstream product for several more years. And those chips, while they will help scale memory systems, and show the viability of using resistive technology in computing, don’t address the issue of synapse-like processing.

If RPUs can be built, the sky is the limit

The RPU design proposed is expected to accommodate a variety of deep neural network (DNN) architectures, including fully-connected and convolutional, which makes them potentially useful across nearly the entire spectrum of neural network applications. Using existing CMOS technology, and assuming RPUs in 4,096-by-4,096-element tiles with an 80-nanosecond cycle time, one of these tiles would be able to execute about 51 GigaOps per second, using a minuscule amount of power. A chip with 100 tiles and a single complementary CPU core could handle a network with up to 16 billion weights while consuming only 22 watts (only two of which are actually from the RPUs — the rest is from the CPU core needed to help get data in and out of the chip and provide overall control).

That is a staggering number compared to what is possible when chugging data through the relatively small number of cores in even a GPU (think about 16 million compute elements, compared with a few thousand). Using chips densely packed with these RPU tiles, the researchers claim that, once built, a resistive-computing-based AI system can achieve performance improvements of up to 30,000 times compared with current architectures, all with a power efficiency of 84,000 GigaOps per second per watt. If this becomes a reality, we could be on our way to realizing Isaac Asimov’s fantasy vision of the robotic Positronic brain.


In a future brave new world will it be possible to live forever?

April 23, 2016


January is a month for renewal and for change. Many of us have been gifted shiny new fitness trackers, treated ourselves to some new gadget or other, or upgraded to the latest smartphone. As we huff and puff our way out of the season of excess we find ourselves wishing we could trade in our overindulged bodies for the latest model.

The reality is that, even with the best of care, the human body eventually ceases to function but if I can upgrade my smartphone, why can’t I upgrade myself? Using technology, is it not possible to live forever(ish)?

After all, humans have been “upgrading” themselves in various ways for centuries. The invention of writing allowed us to offload memories, suits of armour made the body invincible to spears, eyeglasses gave us perfect 20/20 vision, the list goes on.

This is something that designer and author Natasha Vita-More has been thinking about for a long time. In 1983 she wrote The Transhumanist Manifesto, setting out her vision for a future where technology can lead to “radical life extension” – if not living forever, then living for a lot longer than is currently possible.

Vita-More has also designed a prototype whole body prosthetic she calls Primo PostHuman. This is a hypothetical artificial body that could replace our own and into which we could, in theory, upload our consciousness. This is more in the realm of living forever but is a concept as distant to us as Leonardo da Vinci’s sketch of a flying machine was to 15th century Europeans.

Even so, while the replacement body seems much closer to science fiction than science, recent advances in robotics and prosthetics have not only given us artificial arms that can detect pressure and temperature but limbs that can be controlled by thoughts using a brain-computer interface.

As a transhumanist, Vita-More is excited by these scientific developments. She defines a transhumanist to be “a person who wants to engage with technology, extend the human lifespan, intervene with the disease of aging, and wants to look critically at all of these things”.

Transhumanism, she explains, looks at not just augmenting or bypassing the frailties of the human body but also improving intelligence, eradicating diseases and disabilities, and even equipping us with greater empathy.

“The goal is to stay alive as long as possible, as healthy as possible, with greater consciousness or humaneness. No-one wants to stay alive drooling in a wheelchair,” she adds.

Who wouldn’t want to be smarter, stronger, healthier and kinder? What could possibly go wrong?

A lot, says Dr Fiachra O’Brolcháin, a Marie Curie/Assistid Research Fellow at the Institute of Ethics, Dublin City University whose research involves the ethics of technology.

Take for example being taller than average: this correlates with above average income so it is a desirable trait. But if medical technology allowed for parents to choose a taller than average child, then this could lead to a “height race”, where each generation becomes taller and taller, he explains.

“Similarly, depending on the society, even non-homophobic people might select against having gay children (assuming this were possible) if they thought this would be a disadvantage. We might find ourselves inaugurating an era of ‘liberal eugenics’, in which future generations are created according to consumer choice.”

Then there is the problem of affordability. Most of us do not have the financial means to acquire the latest cutting-edge tech until prices drop and it becomes mainstream. Imagine a future where only the rich could access human enhancements, live long lives and avoid health problems.

Elysium, starring Matt Damon, takes this idea to its most extreme, leading to a scenario similar to what O’Brolcháin describes as “an unbridgeable divide between the enhanced and the unenhanced”.

Despite the hyper focus on these technological enhancements that come with real risks and ethical dilemmas, the transhumanist movement also seems to be about kicking back against – or at least questioning – what society expects of you.

“There’s a certain parameter of what is normal or natural. There’s a certain parameter of what one is supposed to be,” says Vita-More.

“You’re supposed to go to school at a certain age, get married at a certain age, produce children, retire and grow old. You’re supposed to live until you are 80, be happy, die and make way for the young.”

Vita-More sees technology as freeing us from these societal and biological constraints. Why can’t we choose who we are beyond the body we were born with? Scholars on the sociology of the early Web showed that Cyberspace became a place for this precise form of expression. Maybe technology will continue to provide a platform for this reinvention of what it is to be human.

Maybe, where we’re going, we won’t need bodies.

Digital heaven

Nell Watson’s job is to think about the future and she says: “I often wonder if, since we could be digitised from the inside out – not in the next 10 years but sometime in this century – we could create a kind of digital heaven or playground where our minds will be uploaded and we could live with our friends and family away from the perils of the physical world.

“It wouldn’t really matter if our bodies suddenly stopped functioning, it wouldn’t be the end of the world. What really matters is that we could still live on.”

In other words you could simply upload to a new, perhaps synthetic, body.

As a futurist with Singularity University (SU), a Silicon Valley-based corporation that is part university, part business incubator, Watson, in her own words, is “someone who looks at the world today and projects into the future; who tries to figure out what current trends mean in terms of the future of technology, society and how these two things intermingle”.

She talks about existing technologies that are already changing our bodies and our minds: “There are experiments using DNA origami. It’s a new technique that came out a few years ago and uses the natural folding abilities of DNA to create little Lego blocks out of DNA on a tiny, tiny scale. You can create logic gates – the basic components of computers – out of these things.

“These are being used experimentally today to create nanobots that can go inside the bloodstream and destroy leukaemia cells, and in trials they have already cured two people of leukaemia. It is not science fiction: it is fact.”

Nanobots are also able to carry out distributed computing (i.e., communicate with each other) inside living things, she says, explaining that this has been done successfully with cockroaches.

Recording everything

“The cockroach essentially has an on-board computer and if you scale this up to humans and optimise it there is no reason why we can’t have our smartphones inside our bodies instead of carrying them around,” she says.

This on-board AI travelling around our bloodstream would act as a co-pilot: seeing what you see, experiencing what you experience, recording everything and maybe even mapping every single neuron in your brain while it’s at it. And with a digitised copy of your brain you (whatever ‘you’ is) could, in theory, be uploaded to the cloud.

Does this mean that we could never be disconnected from the web, ever again? What if your ‘internal smartphone’ is hacked? Could our thoughts be monitored?

Humans have become so dependent on our smartphones and so used to sharing our data with third parties, that this ‘co-pilot’ inside us might be all too readily accepted without deeper consideration.

Already, novel technologies are undermining privacy to an alarming degree, says O’Brolcháin.

“In a world without privacy, there is a great risk of censorship and self-censorship. Ultimately, this affects people’s autonomy – their ability to decide what sort of life they want to lead for themselves, to develop their own conception of the good life.

“This is one of the great ironies of the current wave of technologies – they are born of individualistic societies and often defended in the name of individual rights but might create a society that can no longer protect individual autonomy,” he warns.

Okay, so an invincible body and a super brain have their downsides but what about technology that expands our consciousness, making us wiser, nicer, all-round better folks? Could world peace be possible if we enhanced our morality?

“If you take a look at humanity you can see fighting, wars, terrorism, anger. Television shows are full of violence, society places an emphasis on wealth and greed. I think part of the transhumanist scope is [to offset this with] intentional acts of kindness,” says Vita-More, who several times during our interview makes the point that technology alone cannot evolve to make a better world unless humanity evolves alongside.

Vita-More dismisses the notion of enhancement for enhancement’s sake, a nod to the grinder movement of DIY body-hacking, driven mostly by curiosity.

Examples include implanting magnets into the fingertips to detect magnetic waves or sticking an RFID chip into your arm as UK professor Kevin Warwick did, allowing him to pass through security doors with a wave of his hand.

Moral enhancements

Along the same lines as Vita-More’s thinking, O’Brolcháin says “some philosophers argue that moral enhancements will be necessary if enhancements are not to be used for malevolent ends”.

“Moral enhancement may result in people who are less greedy, less aggressive, more concerned with addressing serious global issues like climate change,” he muses.

But the difficulty is deciding on what is moral. After all, he says, the ‘good’ that groups like Isis want to promote is vastly at odds with the values of Ireland. So who gets to decide what moral enhancements are developed? Perhaps they will come with the latest internal smartphone upgrade or be installed at birth by government.

Technology does make life better and it is an exciting time for robotics, artificial intelligence and nanotechnology. But humans have a long way to go before we work out how we can co-exist with the future we are building right now.

http://www.irishtimes.com/business/in-a-future-brave-new-world-will-it-be-possible-to-live-forever-1.2498427