How Artificial Superintelligence Will Give Birth To Itself

June 18, 2016


There’s a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here’s how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it’s critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what’s called “recursive self-improvement.” As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It’s an advantage that we biological humans simply don’t have.


As AI theorist Eliezer Yudkowsky notes in his essay “Artificial Intelligence as a Positive and Negative Factor in Global Risk”:

An artificial intelligence could rewrite its code from scratch — it could change the underlying dynamics of optimization. Such an optimization process would wrap around much more strongly than either evolution accumulating adaptations or humans accumulating knowledge. The key implication for our purposes is that AI might make a huge jump in intelligence after reaching some threshold of criticality.

When it comes to the speed of these improvements, Yudkowsky says it’s important not to confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What’s more, there’s no reason to believe that an AI won’t show a sudden, huge leap in intelligence, resulting in an ensuing “intelligence explosion” (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; “we went from caves to skyscrapers in the blink of an evolutionary eye.”

The Path to Self-Modifying AI

Code that’s capable of altering its own instructions while it’s still executing has been around for a while. Typically, it’s done to reduce the instruction path length and improve performance, or to simply reduce repetitively similar code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.
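For a flavor of what self-modification for performance looks like in practice, here is a minimal sketch in Python. It is not machine-level self-modifying code of the kind described above; it merely captures the spirit by having a function overwrite its own binding with a faster, specialized version after its first call. The function names and the lookup-table trick are illustrative choices, not drawn from any particular system.

```python
import math

def slow_sin(x):
    """On its first call, this function replaces itself with a faster version."""
    global slow_sin
    # Expensive one-time setup: precompute a 1024-entry sine lookup table.
    table = [math.sin(i * 2 * math.pi / 1024) for i in range(1024)]

    def fast_sin(x):
        # Specialized replacement: a single table lookup instead of a full sin().
        return table[int(x / (2 * math.pi) * 1024) % 1024]

    slow_sin = fast_sin      # the function overwrites its own binding
    return fast_sin(x)

print(slow_sin(1.0))   # first call builds the table, then swaps itself out
print(slow_sin(1.0))   # later calls go straight to the specialized version
```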

But as Our Final Invention author James Barrat told me, we do have software that can write software.

“Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve,” he told io9. “It’s also used to write innovative, high-powered software.”


For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They have chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing “Hello World!” with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.
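The Primary Objects code itself isn’t reproduced here, but the core genetic-algorithm loop it relies on — generate a population, score it against a fitness function, keep the best, mutate, repeat — is easy to sketch. The hypothetical Python version below evolves a plain string toward “Hello World!” by mutation and selection rather than evolving brainfuck programs, so it is a simplified stand-in for the technique, not the project’s actual approach.

```python
import random
import string

TARGET = "Hello World!"
ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "
POP_SIZE, MUTATION_RATE = 200, 0.02

def fitness(candidate):
    """Higher is better: number of characters matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Randomly replace each character with small probability."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else ch
        for ch in candidate
    )

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)]
generation = 0
while True:
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        print(f"Reached {population[0]!r} after {generation} generations")
        break
    # Keep the fittest half, refill the rest with mutated copies of survivors.
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    generation += 1
```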

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term “machine learning.”

The Pentagon is particularly interested in this area of research. Through DARPA, it’s hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming.
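Of the capabilities listed above, unsupervised learning is perhaps the easiest to illustrate in a few lines. The sketch below is a generic k-means clustering routine in plain Python — a textbook example of a program finding structure in data without being told the answer, and in no way a depiction of DARPA’s systems.

```python
import random

def kmeans(points, k, iterations=50):
    """Minimal k-means clustering for 2-D points (a basic unsupervised method)."""
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            d = [(x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centroids]
            clusters[d.index(min(d))].append((x, y))
        # Update step: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster))
    return centroids

data = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)] +
        [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)])
print(kmeans(data, k=2))   # two centroids, one near (0, 0) and one near (5, 5)
```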

In conjunction with this kind of research, cognitive approaches to brain emulation could also lead to human-like AI. Given that they’d be computer-based, and assuming they could have access to their own source code, these agents could embark upon self-modification. More realistically, however, it’s likely that a superintelligence will emerge from an expert system set with the task of improving its own intelligence. Alternatively, specialised expert systems could design other artificial intelligences, and through their cumulative efforts, develop a system that eventually becomes greater than the sum of its parts.

Oh, No You Don’t

Given that ASI poses an existential risk, it’s important to consider the ways in which we might be able to prevent an AI from improving itself beyond our capacity to control it. That said, limitations or provisions may exist that will preclude an AI from embarking on the path towards self-engineering. James D. Miller, author of Singularity Rising, provided me with a list of four reasons why an AI might not be able to do so:

1. It might have source code that causes it to not want to modify itself.

2. The first human equivalent AI might require massive amounts of hardware and so for a short time it would not be possible to get the extra hardware needed to modify itself.

3. The first human equivalent AI might be a brain emulation (as suggested by Robin Hanson) and this would be as hard to modify as it is for me to modify, say, the copy of Minecraft that my son constantly uses. This might happen if we’re able to copy the brain before we really understand it. But still you would think we could at least speed up everything.

4. If it has terminal values, it wouldn’t want to modify these values because doing so would make it less likely to achieve its terminal values.

And by terminal values Miller is referring to an ultimate goal, or an end-in-itself. Yudkowsky describes it as a “supergoal.” A major concern is that an amoral ASI will sweep humanity aside as it works to accomplish its terminal value, or that its ultimate goal is the re-engineering of humanity in a grossly undesirable way (at least from our perspective).

Miller says it could get faster simply by running on faster processors.

“It could also make changes to its software to get more efficient, or design or steal better hardware. It would do this so it could better achieve its terminal values,” he says. “An AI that mastered nanotechnology would probably expand at almost the speed of light, incorporating everything into itself.”

But we may not be completely helpless. According to Barrat, once scientists have achieved Artificial General Intelligence — a human-like AI — they could restrict its access to networks, hardware, and software, in order to prevent an intelligence explosion.


“However, as I propose in my book, an AI approaching AGI may develop survival skills like deceiving its makers about its rate of development. It could play dumb until it comprehended its environment well enough to escape it.”

In terms of being able to control this process, Miller says that the best way would be to create an AI that only wanted to modify itself in ways we would approve.

“So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity,” he says. “This way as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us.”

Fast or Slow?

As noted earlier, a recursively improving AI could increase its intelligence extremely quickly. Alternatively, the process could take considerable time, owing to factors such as technological complexity or limited access to resources. It’s an open question whether we should expect a fast or a slow take-off event.


“I’m a believer in the fast take-off version of the intelligence explosion,” says Barrat. “Once a self-aware, self-improving AI of human-level or better intelligence exists, it’s hard to know how quickly it will be able to improve itself. Its rate of improvement will depend on its software, hardware, and networking capabilities.”

But to be safe, Barrat says we should assume that the recursive self-improvement of an AGI will occur very rapidly. As a computer it will wield computer superpowers — the ability to run 24/7 without pause, rapidly access vast databases, conduct complex experiments, perhaps even clone itself to swarm computational problems, and more.

“From there, the AGI would be interested in pursuing whatever goals it was programmed with — such as research, exploration, or finance. According to AI theorist Steve Omohundro’s Basic Drives analysis, self-improvement would be a sure-fire way to improve its chances of success,” says Barrat. “So would self-protection, resource acquisition, creativity, and efficiency. Without a provably reliable ethical system, its drives would conflict with ours, and it would pose an existential threat.”

Miller agrees.

“I think shortly after an AI achieves human level intelligence it will upgrade itself to super intelligence,” he told me. “At the very least the AI could make lots of copies of itself each with a minor different change and then see if any of the new versions of itself were better. Then it could make this the new ‘official’ version of itself and keep doing this. Any AI would have to fear that if it doesn’t quickly upgrade another AI would and take all of the resources of the universe for itself.”
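Miller’s “make copies with minor changes and keep the best one” strategy is, at bottom, a simple evolutionary search applied by the system to itself. A toy, hypothetical Python sketch of that loop is below; the “benchmark” is a made-up stand-in score, not anything resembling a measure of intelligence.

```python
import random

def benchmark(params):
    """Made-up stand-in for 'how capable is this version?' (peak at all 3.0s)."""
    return -sum((p - 3.0) ** 2 for p in params)

def tweaked_copy(params, step=0.1):
    """Copy the current version and apply one small random change."""
    copy = list(params)
    i = random.randrange(len(copy))
    copy[i] += random.uniform(-step, step)
    return copy

official_version = [0.0, 0.0, 0.0]
for _ in range(20_000):
    # Spawn several slightly different copies and test each of them.
    candidates = [tweaked_copy(official_version) for _ in range(8)]
    best = max(candidates, key=benchmark)
    # Promote a copy to the new 'official' version only if it scores better.
    if benchmark(best) > benchmark(official_version):
        official_version = best

print(official_version)   # drifts toward [3.0, 3.0, 3.0], the benchmark's peak
```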

Which brings up a point that’s not often discussed in AI circles — the potential for AGIs to compete with other AGIs. If even a modicum of self-preservation is coded into a strong artificial intelligence (and that sense of self-preservation could be as simple as detecting an obstruction to its terminal value), it could enter into a lightning-fast arms race to ensure its ongoing existence and future freedom of action. And in fact, while many people fear a so-called “robot apocalypse” aimed directly at extinguishing our civilisation, I personally feel that the real danger to our ongoing existence lies in the potential for us to be collateral damage as advanced AGIs battle it out for supremacy; we may find ourselves in the line of fire. Indeed, building a safe AI will be a monumental — if not intractable — task.

http://www.gizmodo.com.au/2016/06/how-artificial-superintelligence-will-give-birth-to-itself/


What the world will be like in 30 years, according to the US government’s top scientists

March 03, 2016


The world is going to be a very different place in 2045.

Predicting the future is fraught with challenges, but when it comes to technological advances and forward thinking, experts working at the Pentagon’s research agency may be the best people to ask.

Launched in 1958, the Defense Advanced Research Projects Agency is behind some of the biggest innovations in the military — many of which have crossed over to the civilian technology market. These include things like advanced robotics, global positioning systems, and the Internet.

So what’s going to happen in 2045?

It’s pretty likely that robots and artificial intelligence are going to transform a bunch of industries, drone aircraft will continue their leap from the military to the civilian market, and self-driving cars will make your commute a lot more bearable.

But DARPA scientists have even bigger ideas. In a video series from October called “Forward to the Future,” three researchers predict what they imagine will be a reality 30 years from now.

Dr. Justin Sanchez, a neuroscientist and program manager in DARPA’s Biological Technologies Office, believes we’ll be at a point where we can control things simply by using our mind.

“Imagine a world where you could just use your thoughts to control your environment,” Sanchez said. “Think about controlling different aspects of your home just using your brain signals, or maybe communicating with your friends and your family just using neural activity from your brain.”

According to Sanchez, DARPA is currently working on neurotechnologies that can enable this to happen. There are already some examples of these kinds of futuristic breakthroughs in action, like brain implants controlling prosthetic arms.

Stefanie Tompkins, a geologist and director of DARPA’s Defense Sciences Office, thinks we’ll be able to build things that are incredibly strong but also very lightweight. Think of a skyscraper using materials that are strong as steel, but light as carbon fiber. That’s a simple explanation for what Tompkins envisions, which gets a little bit more complicated down at the molecular level.


“I think in 2045 we’re going to find that we have a very different relationship with the machines around us,” says Pam Melroy, aerospace engineer, former astronaut, and deputy director of DARPA’s Tactical Technologies Office. “I think that we will begin to see a time when we’re able to simply just talk or even press a button” to interact with a machine to get things done more intelligently, instead of using keyboards or rudimentary voice recognition systems.

She continues: “For example, right now to prepare for landing in an aircraft there’s multiple steps that have to be taken to prepare yourself, from navigation, get out of the cruise mode, begin to set up the throttles … put the gear down. All of these steps have to happen in the right sequence.”

Instead, Melroy envisions an aircraft landing in the future being as simple as what an airline pilot currently tells the flight attendants: “Prepare for landing.” In 2045, a pilot may just say those three words and the computer knows the series of complex steps it needs to do in order to make that happen.

Or perhaps, with artificial intelligence, a pilot won’t even be necessary.

“Our world will be full of those kinds of examples where we can communicate directly our intent and have very complex outcomes by working together.”

http://www.techinsider.io/darpa-world-predictions-2015-12

Forward to the Future: Visions of 2045

December 20, 2015


DARPA asked the world and our own researchers what technologies they expect to see 30 years from now—and received insightful, sometimes funny predictions

Today—October 21, 2015—is famous in popular culture as the date 30 years in the future when Marty McFly and Doc Brown arrive in their time-traveling DeLorean in the movie “Back to the Future Part II.” The film got some things right about 2015, including in-home videoconferencing and devices that recognize people by their voices and fingerprints. But it also predicted trunk-sized fusion reactors, hoverboards and flying cars—game-changing technologies that, despite the advances we’ve seen in so many fields over the past three decades, still exist only in our imaginations.

A big part of DARPA’s mission is to envision the future and make the impossible possible. So ten days ago, as the “Back to the Future” day approached, we turned to social media and asked the world to predict: What technologies might actually surround us 30 years from now? We pointed people to presentations from DARPA’s Future Technologies Forum, held last month in St. Louis, for inspiration and a reality check before submitting their predictions.

Well, you rose to the challenge and the results are in. So in honor of Marty and Doc (little known fact: he is a DARPA alum) and all of the world’s innovators past and future, we present here some highlights from your responses, in roughly descending order by number of mentions for each class of futuristic capability:

  • Space: Interplanetary and interstellar travel, including faster-than-light travel; missions and permanent settlements on the Moon, Mars and the asteroid belt; space elevators
  • Transportation & Energy: Self-driving and electric vehicles; improved mass transit systems and intercontinental travel; flying cars and hoverboards; high-efficiency solar and other sustainable energy sources
  • Medicine & Health: Neurological devices for memory augmentation, storage and transfer, and perhaps to read people’s thoughts; life extension, including virtual immortality via uploading brains into computers; artificial cells and organs; “Star Trek”-style tricorder for home diagnostics and treatment; wearable technology, such as exoskeletons and augmented-reality glasses and contact lenses
  • Materials & Robotics: Ubiquitous nanotechnology, 3-D printing and robotics; invisibility and cloaking devices; energy shields; anti-gravity devices
  • Cyber & Big Data: Improved artificial intelligence; optical and quantum computing; faster, more secure Internet; better use of data analytics to improve use of resources

A few predictions inspired us to respond directly:

  • “Pizza delivery via teleportation”—DARPA took a close look at this a few years ago and decided there is plenty of incentive for the private sector to handle this challenge.
  • “Time travel technology will be close, but will be closely guarded by the military as a matter of national security”—We already did this tomorrow.
  • “Systems for controlling the weather”—Meteorologists told us it would be a job killer and we didn’t want to rain on their parade.
  • “Space colonies…and unlimited cellular data plans that won’t be slowed by your carrier when you go over a limit”—We appreciate the idea that these are equally difficult, but they are not. We think likable cell-phone data plans are beyond even DARPA and a total non-starter.

So seriously, as an adjunct to this crowd-sourced view of the future, we asked three DARPA researchers from various fields to share their visions of 2045, and why getting there will require a group effort with players not only from academia and industry but from forward-looking government laboratories and agencies:

  • Pam Melroy, an aerospace engineer, former astronaut and current deputy director of DARPA’s Tactical Technologies Office (TTO), foresees technologies that would enable machines to collaborate with humans as partners on tasks far more complex than those we can tackle today.

  • Justin Sanchez, a neuroscientist and program manager in DARPA’s Biological Technologies Office (BTO), imagines a world where neurotechnologies could enable users to interact with their environment and other people by thought alone.

  • Stefanie Tompkins, a geologist and director of DARPA’s Defense Sciences Office (DSO), envisions building substances from the atomic or molecular level up to create “impossible” materials with previously unattainable capabilities.

Check back with us in 2045—or sooner, if that time machine stuff works out—for an assessment of how things really turned out in 30 years.

http://www.darpa.mil/news-events/2015-10-21


MIT’s Robotic Cheetah Can Now Run And Jump While Untethered

September 15, 2014

Well, we knew it had to happen someday. A DARPA-funded robotic cheetah has been released into the wild, so to speak. A new algorithm developed by MIT researchers now allows their quadruped to run and jump — while untethered — across a field of grass.

The Pentagon, in an effort to investigate technologies that allow machines to traverse terrain in unique ways (well, at least that’s what they tell us), has been funding (via DARPA) the development of a robotic cheetah. Back in 2012, Boston Dynamics’ version smashed the land-speed record for the fastest mechanical mammal on Earth, reaching a top speed of 28.3 miles (45.5 km) per hour.

Researchers at MIT have their own version of robo-cheetah, and they’ve taken the concept in a new direction by imbuing it with the ability to run and bound while completely untethered.

MIT News reports:

The key to the bounding algorithm is in programming each of the robot’s legs to exert a certain amount of force in the split second during which it hits the ground, in order to maintain a given speed: In general, the faster the desired speed, the more force must be applied to propel the robot forward. Sangbae Kim, an associate professor of mechanical engineering at MIT, hypothesizes that this force-control approach to robotic running is similar, in principle, to the way world-class sprinters race.

“Many sprinters, like Usain Bolt, don’t cycle their legs really fast,” Kim says. “They actually increase their stride length by pushing downward harder and increasing their ground force, so they can fly more while keeping the same frequency.”

Kim says that by adapting a force-based approach, the cheetah-bot is able to handle rougher terrain, such as bounding across a grassy field. In treadmill experiments, the team found that the robot handled slight bumps in its path, maintaining its speed even as it ran over a foam obstacle.

“Most robots are sluggish and heavy, and thus they cannot control force in high-speed situations,” Kim says. “That’s what makes the MIT cheetah so special: You can actually control the force profile for a very short period of time, followed by a hefty impact with the ground, which makes it more stable, agile, and dynamic.”
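Kim’s description boils down to a stance-phase force controller: the faster you want to go, the harder each leg pushes against the ground during its brief moment of contact. The sketch below is a deliberately crude, hypothetical rendering of that proportional relationship; the gains and contact time are invented placeholders, not values from the MIT controller.

```python
STANCE_TIME = 0.08       # seconds a foot stays on the ground (assumed)
BASE_FORCE = 60.0        # newtons needed just to support the body (assumed)
FORCE_PER_SPEED = 25.0   # extra newtons of push per m/s of desired speed (assumed)

def stance_force(desired_speed_mps):
    """Ground force to command while a foot is in contact with the ground."""
    return BASE_FORCE + FORCE_PER_SPEED * desired_speed_mps

def stride_impulse(desired_speed_mps):
    """Impulse (force x contact time) delivered during each stance phase."""
    return stance_force(desired_speed_mps) * STANCE_TIME

for speed in (1.0, 2.5, 4.5):   # 4.5 m/s is roughly the 10 mph figure quoted below
    print(f"{speed:.1f} m/s -> push {stance_force(speed):.0f} N for {STANCE_TIME}s "
          f"({stride_impulse(speed):.1f} N*s per stride)")
```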

This particular model, which weighs just as much as a real cheetah, can reach speeds of up to 10 mph (16 km/h) in the lab, even after clearing a 13-inch (33 cm) high hurdle. The MIT researchers estimate that their current version may eventually reach speeds of up to 30 mph (48 km/h).

It’s an impressive achievement, but Boston Dynamics’ WildCat is still the scariest free-running bot on the planet.

http://io9.com/mits-robotic-cheetah-can-now-run-and-jump-while-untethe-1634799433

DARPA Project Starts Building Human Memory Prosthetics

September 6, 2014


Photo: Lawrence Livermore National Laboratory

The first memory-enhancing devices could be implanted within four years

“They’re trying to do 20 years of research in 4 years,” says Michael Kahana in a tone that’s a mixture of excitement and disbelief. Kahana, director of the Computational Memory Lab at the University of Pennsylvania, is mulling over the tall order from the U.S. Defense Advanced Research Projects Agency (DARPA). In the next four years, he and other researchers are charged with understanding the neuroscience of memory and then building a prosthetic memory device that’s ready for implantation in a human brain.

DARPA’s first contracts under its Restoring Active Memory (RAM) program challenge two research groups to construct implants for veterans with traumatic brain injuries that have impaired their memories. Over 270,000 U.S. military service members have suffered such injuries since 2000, according to DARPA, and there are no truly effective drug treatments. This program builds on an earlier DARPA initiative focused on building a memory prosthesis, under which a different group of researchers had dramatic success in improving recall in mice and monkeys.

Kahana’s team will start by searching for biological markers of memory formation and retrieval. For this early research, the test subjects will be hospitalized epilepsy patients who have already had electrodes implanted to allow doctors to study their seizures. Kahana will record the electrical activity in these patients’ brains while they take memory tests.

“The memory is like a search engine,” Kahana says. “In the initial memory encoding, each event has to be tagged. Then in retrieval, you need to be able to search effectively using those tags.” He hopes to find the electric signals associated with these two operations.

Once they’ve found the signals, researchers will try amplifying them using sophisticated neural stimulation devices. Here Kahana is working with the medical device maker Medtronic, in Minneapolis, which has already developed one experimental implant that can both record neural activity and stimulate the brain. Researchers have long wanted such a “closed-loop” device, as it can use real-time signals from the brain to define the stimulation parameters.

Kahana notes that designing such closed-loop systems poses a major engineering challenge. Recording natural neural activity is difficult when stimulation introduces new electrical signals, so the device must have special circuitry that allows it to quickly switch between the two functions. What’s more, the recorded information must be interpreted with blistering speed so it can be translated into a stimulation command. “We need to take analyses that used to occupy a personal computer for several hours and boil them down to a 10-millisecond algorithm,” he says.
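To make the constraint concrete: the device must alternate between recording and stimulating, and whatever analysis it runs on the recorded signal has to fit inside roughly a 10-millisecond budget before the stimulation decision is made. The sketch below is a hypothetical mock-up of such a closed loop in Python; the device class, biomarker score and threshold are invented for illustration and bear no relation to the actual Medtronic hardware.

```python
import random
import time

ANALYSIS_BUDGET_S = 0.010    # the ~10-millisecond window Kahana describes
ENCODING_THRESHOLD = 0.5     # invented placeholder for a "memory biomarker" score

class MockImplant:
    """Stand-in for a closed-loop recording/stimulation device."""
    def record(self, n_samples=500):
        return [random.random() for _ in range(n_samples)]
    def stimulate(self, duration_s):
        print(f"stimulating for {duration_s * 1000:.0f} ms")

def biomarker_score(signal):
    """Placeholder for the fast analysis that must finish within the budget."""
    return sum(signal) / len(signal)

def closed_loop_step(device):
    signal = device.record()                  # recording phase
    start = time.perf_counter()
    score = biomarker_score(signal)           # analysis, under the time budget
    elapsed = time.perf_counter() - start
    assert elapsed < ANALYSIS_BUDGET_S, "analysis blew the real-time budget"
    # Switch to stimulation only when the biomarker suggests the memory
    # is unlikely to be encoded well on its own.
    if score < ENCODING_THRESHOLD:
        device.stimulate(duration_s=0.100)    # stimulation phase

closed_loop_step(MockImplant())
```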

In four years’ time, Kahana hopes his team can show that such systems reliably improve memory in patients who are already undergoing brain surgery for epilepsy or Parkinson’s. That, he says, will lay the groundwork for future experiments in which medical researchers can try out the hardware in people with traumatic brain injuries—people who would not normally receive invasive neurosurgery.

The second research team is led by Itzhak Fried, director of the Cognitive Neurophysiology Laboratory at the University of California, Los Angeles. Fried’s team will focus on a part of the brain called the entorhinal cortex, which is the gateway to the hippocampus, the primary brain region associated with memory formation and storage. “Our approach to the RAM program is homing in on this circuit, which is really the golden circuit of memory,” Fried says. In a 2012 experiment, he showed that stimulating the entorhinal regions of patients while they were learning memory tasks improved their performance.

Fried’s group is working with Lawrence Livermore National Laboratory, in California, to develop more closed-loop hardware. At Livermore’s Center for Bioengineering, researchers are leveraging semiconductor manufacturing techniques to make tiny implantable systems. They first print microelectrodes on a polymer that sits atop a silicon wafer, then peel the polymer off and mold it into flexible cylinders about 1 millimeter in diameter. The memory prosthesis will have two of these cylindrical arrays, each studded with up to 64 hair-thin electrodes, which will be capable of both recording the activity of individual neurons and stimulating them. Fried believes his team’s device will be ready for tryout in patients with traumatic brain injuries within the four-year span of the RAM program.

Outside observers say the program’s goals are remarkably ambitious. Yet Steven Hyman, director of psychiatric research at the Broad Institute of MIT and Harvard, applauds its reach. “The kind of hardware that DARPA is interested in developing would be an extraordinary advance for the whole field,” he says. Hyman says DARPA’s funding for device development fills a gap in existing research. Pharmaceutical companies have found few new approaches to treating psychiatric and neurodegenerative disorders in recent years, he notes, and have therefore scaled back drug discovery efforts. “I think that approaches that involve devices and neuromodulation have greater near-term promise,” he says.

This article originally appeared in print as “Making a Human Memory Chip.”

http://spectrum.ieee.org/biomedical/bionics/darpa-project-starts-building-human-memory-prosthetics

Chip with Brain-inspired Non-Von Neumann Architecture has 1M Neurons, 256M Synapses

August 28, 2014


San Jose, CA: Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70 mW — orders of magnitude less power than a modern microprocessor. Effectively a neurosynaptic supercomputer the size of a postage stamp that runs on the energy equivalent of a hearing-aid battery, this technology could transform science, technology, business, government and society by enabling vision, audition and multi-sensory applications.

The breakthrough, published in Science in collaboration with Cornell Tech, is a significant step toward bringing cognitive computers to society.

There is a huge disparity between the human brain’s cognitive capability and ultra-low power consumption when compared to today’s computers. To bridge the divide, IBM scientists created something that didn’t previously exist — an entirely new neuroscience-inspired scalable and efficient computer architecture that breaks path with the prevailing von Neumann architecture used almost universally since 1946.

This second-generation chip is the culmination of almost a decade of research and development, including the initial single core hardware prototype in 2011 and software ecosystem with a new programming language and chip simulator in 2013.

The new cognitive chip architecture has an on-chip two-dimensional mesh network of 4096 digital, distributed neurosynaptic cores, where each core module integrates memory, computation and communication, and operates in an event-driven, parallel and fault-tolerant fashion. To enable system scaling beyond single-chip boundaries, adjacent chips, when tiled, can seamlessly connect to each other — building a foundation for future neurosynaptic supercomputers. To demonstrate scalability, IBM also revealed a 16-chip system with sixteen million programmable neurons and four billion programmable synapses.
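“Event-driven” here means the cores compute only when spikes arrive, rather than stepping every neuron on a global clock. The sketch below is a generic leaky integrate-and-fire neuron updated by incoming spike events — the textbook style of computation such cores perform, not IBM’s actual TrueNorth core logic, and all parameters are arbitrary.

```python
THRESHOLD = 1.0   # membrane potential at which a neuron fires
LEAK = 0.05       # potential lost by every neuron per processed event

class Neuron:
    def __init__(self):
        self.potential = 0.0
        self.out_synapses = []            # (target_neuron, weight) pairs

    def receive(self, weight, spike_queue):
        """Integrate an incoming spike; fire and reset if the threshold is crossed."""
        self.potential += weight
        if self.potential >= THRESHOLD:
            self.potential = 0.0
            spike_queue.extend(self.out_synapses)

    def leak(self):
        self.potential = max(0.0, self.potential - LEAK)

# Tiny two-neuron chain: n1 drives n2 through a synapse of weight 0.6.
n1, n2 = Neuron(), Neuron()
n1.out_synapses.append((n2, 0.6))

events = [(n1, 0.7), (n1, 0.7)]           # external input spikes arriving at n1
while events:
    neuron, weight = events.pop(0)
    neuron.receive(weight, events)        # may enqueue downstream spikes
    for n in (n1, n2):
        n.leak()
print(f"n2 membrane potential: {n2.potential:.2f}")
```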

“IBM has broken new ground in the field of brain-inspired computers, in terms of a radically new architecture, unprecedented scale, unparalleled power/area/speed efficiency, boundless scalability, and innovative design techniques. We foresee new generations of information technology systems — that complement today’s von Neumann machines — powered by an evolving ecosystem of systems, software and services,” said Dr. Dharmendra S. Modha, IBM Fellow and IBM Chief Scientist, Brain-Inspired Computing, IBM Research. “These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi. This achievement underscores IBM’s leadership role at pivotal transformational moments in the history of computing via long-term investment in organic innovation.”

The Defense Advanced Research Projects Agency (DARPA) has funded the project since 2008 with approximately $53M via Phase 0, Phase 1, Phase 2, and Phase 3 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program. Current collaborators include Cornell Tech and iniLabs.

Building the Chip

The chip was fabricated using Samsung’s 28nm process technology that has a dense on-chip memory and low-leakage transistors.

“It is an astonishing achievement to leverage a process traditionally used for commercially available, low-power mobile devices to deliver a chip that emulates the human brain by processing extreme amounts of sensory information with very little power,” said Shawn Han, vice president of Foundry Marketing, Samsung Electronics. “This is a huge architectural breakthrough that is essential as the industry moves toward the next-generation cloud and big-data processing. It’s a pleasure to be part of technical progress for next-generation through Samsung’s 28nm technology.”

The event-driven circuit elements of the chip used the asynchronous design methodology developed at Cornell Tech and refined with IBM since 2008.

“After years of collaboration with IBM, we are now a step closer to building a computer similar to our brain,” said Professor Rajit Manohar, Cornell Tech.

The combination of cutting-edge process technology, hybrid asynchronous-synchronous design methodology, and new architecture has led to a power density of 20 mW/cm², which is nearly four orders of magnitude less than that of today’s microprocessors.

Advancing the SyNAPSE Ecosystem

The new chip is a component of a complete end-to-end vertically integrated ecosystem spanning a chip simulator, neuroscience data, supercomputing, neuron specification, programming paradigm, algorithms and applications, and prototype design models. The ecosystem supports all aspects of the programming cycle from design through development, debugging, and deployment.

To bring forth this fundamentally different technological capability to society, IBM has designed a novel teaching curriculum for universities, customers, partners and IBM employees.

Applications and Vision

This ecosystem signals a shift in moving computation closer to the data, taking in vastly varied kinds of sensory data, analyzing and integrating real-time information in a context-dependent way, and dealing with the ambiguity found in complex, real-world environments.

Looking to the future, IBM is working on integrating multi-sensory neurosynaptic processing into mobile devices constrained by power, volume and speed; integrating novel event-driven sensors with the chip; real-time multimedia cloud services accelerated by neurosynaptic systems; and neurosynaptic supercomputers by tiling multiple chips on a board, creating systems that would eventually scale to one hundred trillion synapses and beyond.

Building on previously demonstrated neurosynaptic cores with on-chip, online learning, IBM envisions building learning systems that adapt in real world settings. While today’s hardware is fabricated using a modern CMOS process, the underlying architecture is poised to exploit advances in future memory, 3-D integration, logic, and sensor technologies to deliver even lower power, denser package, and faster speed.

http://www.scientificcomputing.com/news/2014/08/chip-brain-inspired-non-von-neumann-architecture-has-1m-neurons-256m-synapses

DARPA taps Lawrence Livermore to develop world’s first neural device to restore memory

July 16, 2014

The Department of Defense’s Defense Advanced Research Projects Agency (DARPA) awarded Lawrence Livermore National Laboratory (LLNL) up to $2.5 million to develop an implantable neural device with the ability to record and stimulate neurons within the brain to help restore memory, DARPA officials announced this week.

The research builds on the understanding that memory is a process in which neurons in certain regions of the brain encode information, store it and retrieve it. Certain types of illnesses and injuries, including Traumatic Brain Injury (TBI), Alzheimer’s disease and epilepsy, disrupt this process and cause memory loss. TBI, in particular, has affected 270,000 military service members since 2000.

The goal of LLNL’s work — driven by LLNL’s Neural Technology group and undertaken in collaboration with the University of California, Los Angeles (UCLA) and Medtronic — is to develop a device that uses real-time recording and closed-loop stimulation of neural tissues to bridge gaps in the injured brain and restore individuals’ ability to form new memories and access previously formed ones.

The research is funded by DARPA’s Restoring Active Memory (RAM) program.

Specifically, the Neural Technology group will seek to develop a neuromodulation system — a sophisticated electronics system to modulate neurons — that will investigate areas of the brain associated with memory to understand how new memories are formed. The device will be developed at LLNL’s Center for Bioengineering.

“Currently, there is no effective treatment for memory loss resulting from conditions like TBI,” said LLNL’s project leader Satinderpall Pannu, director of the LLNL’s Center for Bioengineering, a unique facility dedicated to fabricating biocompatible neural interfaces. “This is a tremendous opportunity from DARPA to leverage Lawrence Livermore’s advanced capabilities to develop cutting-edge medical devices that will change the health care landscape.”

LLNL will develop a miniature, wireless and chronically implantable neural device that will incorporate both single neuron and local field potential recordings into a closed-loop system to implant into TBI patients’ brains. The device — implanted into the entorhinal cortex and hippocampus — will allow for stimulation and recording from 64 channels located on a pair of high-density electrode arrays. The entorhinal cortex and hippocampus are regions of the brain associated with memory.

The arrays will connect to an implantable electronics package capable of wireless data and power telemetry. An external electronic system worn around the ear will store digital information associated with memory storage and retrieval and provide power telemetry to the implantable package using a custom RF-coil system.

Designed to last throughout the duration of treatment, the device’s electrodes will be integrated with electronics using advanced LLNL integration and 3D packaging technologies. The microelectrodes that are the heart of this device are embedded in a biocompatible, flexible polymer.

Using the Center for Bioengineering’s capabilities, Pannu and his team of engineers have achieved 25 patents and many publications during the last decade. The team’s goal is to build the new prototype device for clinical testing by 2017.

Lawrence Livermore’s collaborators, UCLA and Medtronic, will focus on conducting clinical trials and fabricating parts and components, respectively.

“The RAM program poses a formidable challenge reaching across multiple disciplines from basic brain research to medicine, computing and engineering,” said Itzhak Fried, lead investigator for UCLA on this project and professor of neurosurgery and psychiatry and biobehavioral sciences at the David Geffen School of Medicine at UCLA and the Semel Institute for Neuroscience and Human Behavior. “But at the end of the day, it is the suffering individual, whether an injured member of the armed forces or a patient with Alzheimer’s disease, who is at the center of our thoughts and efforts.”

LLNL’s work on the Restoring Active Memory program supports President Obama’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative.

“Our years of experience developing implantable microdevices, through projects funded by the Department of Energy (DOE), prepared us to respond to DARPA’s challenge,” said Lawrence Livermore Engineer Kedar Shah, a project leader in the Neural Technology group.


Story Source:

The above story is based on materials provided by DOE/Lawrence Livermore National Laboratory. Note: Materials may be edited for content and length.

Engineered red blood cells could carry precious therapeutic cargo

July 3, 2014


Whitehead Institute scientists have genetically and enzymatically modified red blood cells to carry a range of valuable payloads — from drugs, to vaccines, to imaging agents — for delivery to specific sites throughout the body.

“We wanted to create high-value red cells that do more than simply carry oxygen,” says Whitehead Founding Member Harvey Lodish, who collaborated with Whitehead Member Hidde Ploegh in this pursuit. “Here we’ve laid out the technology to make mouse and human red blood cells in culture that can express what we want and potentially be used for therapeutic or diagnostic purposes.”

The work, published in the Proceedings of the National Academy of Sciences (PNAS), combines Lodish’s expertise in the biology of red blood cells (RBCs) with biochemical methods developed in Ploegh’s lab.

RBCs are an attractive vehicle for potential therapeutic applications for a variety of reasons, including their abundance — they are more numerous than any other cell type in the body — and their long lifespan (up to 120 days in circulation). Perhaps most importantly, during RBC production, the progenitor cells that eventually mature to become RBCs jettison their nuclei and all DNA therein. Without a nucleus, a mature RBC lacks any genetic material or any signs of earlier genetic manipulation that could result in tumor formation or other adverse effects.

Exploiting this characteristic, Lodish and his lab introduced genes coding for specific slightly modified normal red cell surface proteins into early-stage RBC progenitors. As the RBCs approach maturity and enucleate, the proteins remain on the cell surface, where they are modified by Ploegh’s protein-labeling technique. Referred to as “sortagging,” the approach relies on the bacterial enzyme sortase A to establish a strong chemical bond between the surface protein and a substance of choice, be it a small-molecule therapeutic or an antibody capable of binding a toxin. The modifications leave the cells and their surfaces unharmed.

“Because the modified human red blood cells can circulate in the body for up to four months, one could envision a scenario in which the cells are used to introduce antibodies that neutralize a toxin,” says Ploegh. “The result would be long-lasting reserves of antitoxin antibodies.”

The approach has captured the attention of the U.S. military and its Defense Advanced Research Projects Agency (DARPA), which is supporting the research at Whitehead in the interest of developing treatments or vaccines effective against biological weapons.

Lodish believes the applications are potentially vast and may include RBCs modified to bind and remove bad cholesterol from the bloodstream, carry clot-busting proteins to treat ischemic strokes or deep-vein thrombosis, or deliver anti-inflammatory antibodies to alleviate chronic inflammation. Further, Ploegh notes there is evidence to suggest that modified RBCs could be used to suppress the unwanted immune response that often accompanies treatment with protein-based therapies. Ploegh is exploring whether these RBCs could be used to prime the immune system to allow patients to better tolerate treatment with such therapies.


Story Source:

The above story is based on materials provided by Whitehead Institute for Biomedical Research. The original article was written by Matt Fearer. Note: Materials may be edited for content and length.

A brain implant to restore memory


The Defense Advanced Research Projects Agency (DARPA) is forging ahead with a four-year plan to build a sophisticated memory stimulator, as part of President Barack Obama’s $100 million initiative to better understand the human brain.

The science has never been done before, and raises ethical questions about whether the human mind should be manipulated in the name of staving off war injuries or managing the aging brain.

Some say those who could benefit include the five million Americans with Alzheimer’s disease and the nearly 300,000 US military men and women who have sustained traumatic brain injuries in Iraq and Afghanistan.

“If you have been injured in the line of duty and you can’t remember your family, we want to be able to restore those kinds of functions,” DARPA program manager Justin Sanchez said this week at a conference in the US capital convened by the Center for Brain Health at the University of Texas.

“We think that we can develop neuroprosthetic devices that can directly interface with the hippocampus, and can restore the first type of memories we are looking at, the declarative memories,” he said.

Declarative memories are recollections of people, events, facts and figures, and no research has ever shown they can be put back once they are lost.

Early days

What researchers have been able to do so far is help reduce tremors in people with Parkinson’s disease, cut back on seizures among epileptics and even boost memory in some Alzheimer’s patients through a process called deep brain stimulation.

Those devices were inspired by cardiac pacemakers, and pulse electricity into the brain much like a steady drum beat, but they don’t work for everyone.

Experts say a much more nuanced approach is needed when it comes to restoring memory.

“Memory is patterns and connections,” explained Robert Hampson, an associate professor at Wake Forest University.

“For us to come up with a memory prosthetic, we would actually have to have something that delivers specific patterns,” said Hampson, adding that he could not comment specifically on DARPA’s plans.

Hampson’s research on rodents and monkeys has shown that neurons in the hippocampus—the part of the brain that processes memory—fire differently when they see red or blue, or a picture of a face versus a type of food.

Equipped with this knowledge, Hampson and colleagues have been able to extend the animals’ short-term, working memory using brain prosthetics to stimulate the hippocampus.

They could coax a drugged monkey into performing closer to normal at a memory task, and confuse it by manipulating the signal so that it would choose the opposite image of what it remembered.

According to Hampson, to restore a human’s specific memory, scientists would have to know the precise pattern for that memory.

Instead, scientists in the field think they could improve a person’s memory by simply helping the brain work more like it used to before the injury.

“The idea is to restore a function back to normal or near normal of the memory processing areas of the brain so that the person can access their formed memories, and so that they can form new memories as needed,” Hampson said.

Ethical concerns

It’s easy to see how manipulating memories in people could open up an ethical minefield, said Arthur Caplan, a medical ethicist at New York University’s Langone Medical Center.

“When you fool around with the brain you are fooling around with personal identity,” said Caplan, who advises DARPA on matters of synthetic biology but not neuroscience.

“The cost of altering the mind is you risk losing sense of self, and that is a new kind of risk we never faced.”

When it comes to soldiers, the potential for erasing memories or inserting new ones could interfere with combat techniques, make warriors more violent and less conscientious, or even thwart investigations into war crimes, he said.

“If I could take a pill or put a helmet on and have some memories wiped out, maybe I don’t have to live with the consequences of what I do,” Caplan said.

DARPA’s website says that because its “programs push the leading edge of science,” the agency “periodically convenes scholars with expertise in these issues to discuss relevant ethical, legal, and social issues.”

Just who might be first in line for the experiments is another of the many unknowns.

Sanchez said the path forward will be formally announced in the next few months.

“We have got some of the most talented scientists in our country that will be working on this project. So stay tuned. Lots of exciting things will be coming in the very near future.”

Story Source:

The above story is based on materials provided by AFP, Kerry Sheridan.

http://bioengineer.org/coming-soon-brain-implant-restore-memory/