Scientists create circuit board modeled on the human brain

April 29, 2014


Human brain and circuits illustration (stock image). Stanford scientists have developed faster, more energy-efficient microchips based on the human brain — 9,000 times faster and using significantly less power than a typical PC. Credit: © agsandrew / Fotolia



Stanford scientists have developed a new circuit board modeled on the human brain, possibly opening up new frontiers in robotics and computing.

For all their sophistication, computers pale in comparison to the brain. The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions.

Not only is the PC slower, it takes 40,000 times more power to run, writes Kwabena Boahen, associate professor of bioengineering at Stanford, in an article for the Proceedings of the IEEE.
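As a back-of-envelope sketch, the two ratios compound when you ask how much energy the PC spends to simulate one second of brain time. The 9,000x and 40,000x figures come from the article; the combined number is an inference, not one Boahen quotes:

```python
# Ratios quoted in the article: a PC runs the mouse-cortex simulation
# 9,000x slower than the real cortex, while drawing 40,000x more power.
speed_ratio = 9_000    # real-time speed / PC simulation speed
power_ratio = 40_000   # PC power draw / brain power draw

# Energy = power x time, so the PC's energy cost per simulated second
# is the product of the two ratios.
energy_ratio = speed_ratio * power_ratio
print(f"Energy per simulated second: {energy_ratio:,}x the brain's")
# prints "Energy per simulated second: 360,000,000x the brain's"
```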

“From a pure energy perspective, the brain is hard to match,” says Boahen, whose article surveys how “neuromorphic” researchers in the United States and Europe are using silicon and software to build electronic systems that mimic neurons and synapses.

Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed “Neurocore” chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. The result was Neurogrid — a device about the size of an iPad that can simulate orders of magnitude more neurons and synapses than other brain mimics on the power it takes to run a tablet computer.

The National Institutes of Health funded development of this million-neuron prototype with a five-year Pioneer Award. Now Boahen stands ready for the next steps — lowering costs and creating compiler software that would enable engineers and computer scientists with no knowledge of neuroscience to solve problems — such as controlling a humanoid robot — using Neurogrid.

Its speed and low power characteristics make Neurogrid ideal for more than just modeling the human brain. Boahen is working with other Stanford scientists to develop prosthetic limbs for paralyzed people that would be controlled by a Neurocore-like chip.

“Right now, you have to know how the brain works to program one of these,” said Boahen, gesturing at the $40,000 prototype board on the desk of his Stanford office. “We want to create a neurocompiler so that you would not need to know anything about synapses and neurons to be able to use one of these.”

Brain ferment

In his article, Boahen notes the larger context of neuromorphic research, including the European Union’s Human Brain Project, which aims to simulate a human brain on a supercomputer. By contrast, the U.S. BRAIN Project — short for Brain Research through Advancing Innovative Neurotechnologies — has taken a tool-building approach by challenging scientists, including many at Stanford, to develop new kinds of tools that can read out the activity of thousands or even millions of neurons in the brain as well as write in complex patterns of activity.

Zooming from the big picture, Boahen’s article focuses on two projects comparable to Neurogrid that attempt to model brain functions in silicon and/or software.

One of these efforts is IBM’s SyNAPSE Project — short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. As the name implies, SyNAPSE involves a bid to redesign chips, code-named Golden Gate, to emulate the ability of neurons to make a great many synaptic connections — a feature that helps the brain solve problems on the fly. At present a Golden Gate chip consists of 256 digital neurons each equipped with 1,024 digital synaptic circuits, with IBM on track to greatly increase the numbers of neurons in the system.

Heidelberg University’s BrainScales project has the ambitious goal of developing analog chips to mimic the behaviors of neurons and synapses. Their HICANN chip — short for High Input Count Analog Neural Network — would be the core of a system designed to accelerate brain simulations, to enable researchers to model drug interactions that might take months to play out in a compressed time frame. At present, the HICANN system can emulate 512 neurons each equipped with 224 synaptic circuits, with a roadmap to greatly expand that hardware base.

Each of these research teams has made different technical choices, such as whether to dedicate each hardware circuit to modeling a single neural element (e.g., a single synapse) or several (e.g., by activating the hardware circuit twice to model the effect of two active synapses). These choices have resulted in different trade-offs in terms of capability and performance.

In his analysis, Boahen creates a single metric to account for total system cost — including the size of the chip, how many neurons it simulates and the power it consumes.

Neurogrid was by far the most cost-effective way to simulate neurons, in keeping with Boahen’s goal of creating a system affordable enough to be widely used in research.

Speed and efficiency

But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards costs about $40,000. Boahen believes dramatic cost reductions are possible. Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies.

By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore’s cost 100-fold — suggesting a million-neuron board for $400 a copy. With that cheaper hardware and compiler software to make it easy to configure, these neuromorphic systems could find numerous applications.
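The cost arithmetic above can be checked directly; all figures are taken from the article:

```python
# Neurogrid's headline numbers, as reported.
neurocores = 16
neurons_per_core = 65_536
total_neurons = neurocores * neurons_per_core
print(f"{total_neurons:,} neurons")  # 1,048,576 -- the "million-neuron" board

# Projected price after the 100-fold cost cut expected from modern,
# high-volume fabrication.
board_cost_usd = 40_000
projected_cost_usd = board_cost_usd // 100
print(f"${projected_cost_usd} per board")  # $400
```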

For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions — but without being tethered to a power source. Krishna Shenoy, an electrical engineering professor at Stanford and Boahen’s neighbor at the interdisciplinary Bio-X center, is developing ways of reading brain signals to understand movement. Boahen envisions a Neurocore-like chip that could be implanted in a paralyzed person’s brain, interpreting those intended movements and translating them to commands for prosthetic limbs without overheating the brain.

A small prosthetic arm in Boahen’s lab is currently controlled by Neurogrid to execute movement commands in real time. For now it doesn’t look like much, but its simple levers and joints hold hope for robotic limbs of the future.

Of course, all of these neuromorphic efforts are beggared by the complexity and efficiency of the human brain.

In his article, Boahen notes that Neurogrid is about 100,000 times more energy efficient than a personal computer simulation of 1 million neurons. Yet it is an energy hog compared to our biological CPU.

“The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power,” Boahen writes. “Achieving this level of energy efficiency while offering greater configurability and scale is the ultimate challenge neuromorphic engineers face.”
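Boahen's closing comparison implies a per-neuron efficiency gap; the sketch below derives it from his two quoted ratios (the ~27,000x result is an inference, not a figure from the article):

```python
neuron_ratio = 80_000   # brain neurons / Neurogrid neurons (quoted)
power_ratio = 3         # brain power / Neurogrid power (quoted)

# Per neuron, the brain is therefore roughly 80,000 / 3 times more frugal.
per_neuron_advantage = neuron_ratio / power_ratio
print(f"~{per_neuron_advantage:,.0f}x more power-efficient per neuron")
# prints "~26,667x more power-efficient per neuron"
```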

Story Source:

The above story is based on materials provided by Stanford University. The original article was written by Tom Abate. Note: Materials may be edited for content and length.

Journal Reference:

  1. Ben Varkey Benjamin, Peiran Gao, Emmett McQuinn, Swadesh Choudhary, Anand R. Chandrasekaran, Jean-Marie Bussat, Rodrigo Alvarez-Icaza, John V. Arthur, Paul A. Merolla, Kwabena Boahen. Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations. Proceedings of the IEEE, 2014. DOI: 10.1109/JPROC.2014.2313565

The new technologies that will change human civilization as we know it

April 29, 2014


Where are technologies heading in the next 30 years? How will they affect our lifestyle and human society?

Most adults alive today grew up without the Internet or mobile phones, let alone smartphones and tablets with voice commands and apps for everything. These new technologies have altered our lifestyle in a way few of us could have imagined a few decades ago. But have we reached the end of the line? What else could turn up that could make our lives so much more different? Faster computers? More gadgets? It is in fact so much more than that. Technologies have embarked on an exponential growth curve and we are just getting started. In 10 years we will look back on our life today and wonder how we could have lived with such primitive technology. The gap will be bigger than between today and the 1980s. Get ready because you are in for a rough ride.

Artificial Intelligence (AI), Supercomputers & Robotics

Ray Kurzweil, Google’s director of engineering, predicts that by 2029 computers will exhibit intelligent behaviour equivalent to that of a human, and that by 2045 computers will be a billion times more powerful than all of the human brains on Earth. Once computers can fully simulate a human brain and surpass it, it will cause an “intelligence explosion” that will radically change civilization. The rate of innovation will progress exponentially, so much so that it will become impossible to foresee the future course of human history. This point in time is called the singularity. Experts believe that it will happen in the middle of the 21st century, perhaps as early as 2030, but the median value of predictions is 2040.

Let’s start with cognitive computing. IBM’s Watson computer is already capable of reading a million books a second and answering questions posed in natural language. In 2011 Watson easily defeated former champions Brad Rutter and Ken Jennings at the TV game show Jeopardy!, reputedly one of the most difficult quiz competitions in the world. Watson’s abilities are not limited to finding relevant facts and answers. It can also make jokes and clever puns. Most remarkably, Watson can provide better medical diagnostics than any human doctor, give financial advice, and generate or evaluate all kinds of scientific hypotheses based on huge amounts of data. Computer power increases on average 100-fold every 10 years, which means 10,000-fold after 20 years and 1-million-fold after 30 years. Imagine what computers will be able to do by then.
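The compounding in that claim is plain exponential arithmetic, sketched here with the article's assumed 100x-per-decade rate:

```python
growth_per_decade = 100  # the article's assumed rate of improvement

# Each decade multiplies the previous total, so growth compounds.
for decades in (1, 2, 3):
    print(f"{decades * 10} years: {growth_per_decade ** decades:,}x")
# 10 years: 100x
# 20 years: 10,000x
# 30 years: 1,000,000x
```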

The X Prize Foundation, chaired by Peter Diamandis, co-founder of Singularity University in the Silicon Valley, manages incentivized competitions to bring about radical breakthroughs for the benefit of humanity. One of the current competitions, the Nokia Sensing XCHALLENGE, aims at developing a smartphone-like device that can test vitals like cholesterol, blood pressure, heart rate or allergies, analyse your DNA for genetic risks, diagnose medical conditions, and predict potential diseases or the likelihood of a stroke. All this without seeing a doctor. The device could be used by you or your relatives anywhere, anytime. All this is possible thanks to highly sensitive electronic sensors and powerful AI.

Google is working on an AI that will be able to read and understand any document, and learn the content of all books in the world. It will be able to answer any question asked by any user. This omniscient AI will eventually become people’s first source of knowledge, replacing schools, books and even human interactions. Just wonder about anything and the computer will provide you with the answer and explain it to you in a way you can easily understand, based on your current knowledge.

Once AI reaches the same level of intelligence as a human brain, or exceeds it, intelligent robots will be able to do a majority of human jobs. Robots already manufacture most products. Soon they will also build roads and houses, replace human staff in supermarkets and shops, serve and perhaps even cook food in restaurants, take care of the sick and the elderly. The best doctors, even surgeons, will be robots.

It might still be a decade or two before human-like androids start walking the streets among us and working for us. But driverless cars, pioneered by Google and Tesla, could be introduced as early as 2016, and could become the dominant form of vehicle in developed countries by 2025. The advantages of autonomous cars are so overwhelming (less stress and exhaustion, fewer accidents, smoother traffic) that very few people will want to keep traditional cars. That is why the transition could happen as fast as, if not faster than, the shift from analog phones to smartphones. Robo-taxis are coming soon and could in time replace human taxi drivers. All cars and trains will eventually be entirely driven by computers.

AI will translate documents, answer customer support questions, complete administrative tasks, and teach kids and adults alike. It is estimated that 40 to 50% of service jobs will be done by AI by 2025. Creative jobs aren’t immune either, as computers will soon surpass humans in creativity too. There could still be human artists, but artistic value will drop to zero when any design or artwork can be produced on demand and made to measure by AI in a few seconds.

Once computer graphics and AI simulation of human behaviour become so realistic that we can’t tell if a person in a video is real or not, Hollywood won’t need to use real actors anymore, but will be able to create movie stars that don’t exist – and the crazy thing is no one will notice the difference!

3-D Printing

3D printers are the biggest upheaval in manufacturing since the industrial revolution. Not only can we print objects in three dimensions, we can now print them in practically any material: not just plastics, but also metals, concrete, fabrics, and even food. Better still, objects can be printed in multiple materials at once. High-quality 3D printers can copy an electronic chip in the tiniest detail and produce a functional replica. High-tech vehicles like Koenigsegg’s One:1 (the world’s fastest car) or EDAG’s Genesis are already being made by 3D printing. Even houses will be 3D-printed, for a fraction of the cost of traditional construction.

In the near future we won’t need to go shopping to buy new products. We will just select them online, perhaps tweak their design, size or colour a bit to our tastes and needs, then 3D print them at home. More jobs going down the drain? Not really. Retail jobs were already going to be taken by intelligent robots anyway. The good news is that it will considerably reduce our carbon footprint by cutting unnecessary transport from distant factories in China or other parts of the world. Everything will be “home-made”, literally. Since any material can be re-used, or ‘recycled’, in a 3D printer, it will also dramatically reduce waste.

3D printing is also good news for medicine. Doctors can now make customized prosthetics, joint replacements, dental work and hearing aids.


The other advances in robotics, AI, 3-D printing and nanotechnologies all converge in the field of bioengineering. Human cyborgs aren’t science-fiction anymore. It’s already happening.

  • There are artificial hands with real feeling, controlled directly by the brain thanks to a nerve interface that converts electric impulses in the nervous system into electronic signals for the robotic prosthesis. From that point on, any improvement is possible, like the drummer who got an extra bionic arm.
  • Electronic membranes can keep the heart beating forever.
  • Microchips implanted into the brain can restore vision in blind people and hearing in deaf people. Soon such chips will allow bionic humans to see and hear better than humans in their natural state. Equipped with one of these, humans will be able to see ultraviolet and infrared light, hear ultrasounds like dogs, echolocate like bats, and perhaps eventually even understand animal languages, including whale vocalizations. The potential for improvement is unlimited.
  • We are on the verge of developing telepathic abilities. By placing microchips on the brains of two individuals and connecting them to one another through the internet, one person can hear what the other hears directly in their brain. Studies with rats went further: microchips implanted in their motor cortices effectively allowed one rat to remotely control the movements of another rat in a separate room.
  • Neural prostheses have been used to repair a damaged hippocampus inside a monkey’s brain, and could be used in the near future to repair various types of brain damage in human beings too.
  • Robotic exoskeletons like Iron Man’s will augment our physical capacities tremendously. The advantage of these exoskeletons is that they can be easily removed and don’t require permanent changes to our body. Researchers at Stanford University are currently working on Stickybot, a gecko robot capable of climbing smooth surfaces, such as glass, acrylic and whiteboard, using directional adhesives. It’s only a matter of time (years, not decades) before a gecko suit enables humans to climb buildings like Spiderman. And what next?

Stem cells & Bioprinting

Regenerative medicine offers even more promise than artificial limbs and body parts. What if, instead of having a robotic arm, you could completely regrow your original arm? Sounds impossible? It isn’t. Lizards regrow their tails. Axolotls regrow severed legs. We now understand how they do it: stem cells. These pluripotent undifferentiated cells have the power to repair any body part. Using organ culture, stem cells can regrow any organ, as good as new. In the future it will be possible to regrow limbs or organs directly on a person, as if the body were simply healing itself.

Combining 3-D printing and stem cell regeneration paves the way to the printing of human organs, a field known as bioprinting (read articles on the topic in New Scientist and The Economist).


Genetics has also progressed tremendously over the last 15 years. From the sequencing of the first full human genome in 2003, we have now entered the era of personal genomics, gene therapy and synthetic life, and could be approaching the age of genetically enhanced humans.

Gene therapy is perhaps the most revolutionary of all the medical advances, as it will effectively make it possible to fix any disease-causing gene and to engineer humans better adapted to modern nutrition, the pace of modern life, and a technology-dominated lifestyle. Not only will all diseases and neuropsychological problems with a genetic cause disappear, but humans will also become more resistant to stress, fatigue and allergens, and could choose to boost their mental faculties and physical abilities, creating “superhumans”. This is known as transhumanism.

Gene therapy also permits genetic modifications for purely cosmetic reasons, such as changing one’s skin, hair or eye pigmentation. Gene therapy can be done over and over again, switching back or refining earlier modifications if necessary, just as one would edit text on a computer. Once the human genome is fully understood, we could even imagine applications that let people customize the physical appearance of a virtual avatar of themselves, then transcribe those changes to their DNA. This is the age of customizable humans, or rather the age of customizable life forms.

Vertical farming

Ecologist Dickson Despommier of Columbia University came up with the idea of using skyscrapers in New York for agricultural production, eventually founding the Vertical Farm Project. The virtues of vertical farming are manifold. Food can be produced in optimal conditions inside purposely-built skyscrapers, maximizing the amount of sunlight for photosynthesis. By controlling the inside temperature, and the amount of water and nutrients each plant receives, indoor farming can produce crops year-round, multiplying productivity by a factor of 4 to 6 compared to traditional farming. What’s more, all this is possible without using pesticides, since skyscrapers are a closed ecosystem of their own, free of insects or rodents. Additionally, vertical farms free up agricultural land, which in turn prevents deforestation and allows for reforestation and the safeguarding of the environment.

The end of the capitalist economy

Ironically it is the extreme success of the capitalist economy that will lead to its demise. The very nature of competitive markets that drives productivity up and brings marginal costs down, eventually to near zero, will make goods and services nearly free much sooner than we think. Accelerating factors include Moore’s law of exponential growth in digital technologies and the fast development of 3-D printing. The Internet alone has already had a huge impact in providing billions of people around the world with an amazing range of free services, including for example online higher education such as Khan Academy.


The first step is providing free ultrafast Internet to the whole world. Google and Facebook are both working on different ways of achieving this, starting with developing countries where Internet connections are extremely sparse today, notably in Africa. Google’s Project Loon plans to achieve this by launching high-altitude balloons into the stratosphere, while Facebook wants to build flying drones and satellites to beam Internet around the world. 5G mobile networks (coming around 2020) will be so fast (downloading a full HD movie in one second) that cable Internet connections will disappear. The merger under way between TV, computers, tablets, smartphones and game consoles will very soon result in a single universal type of device being used everywhere, all connected via 5G networks. In other words, telephone, cable TV and Internet service providers will all go out of business, as all TVs and phones will be connected through free mobile networks.

By 2035, humanity is likely to have achieved free electricity for the whole world, mostly thanks to the exponentially improving efficiency and falling cost of solar energy, but also thanks to 4th-generation nuclear reactors and, later, fusion power.

The Internet of Things will connect all the electric and electronic devices in the world and optimally manage energy supply through a smart-grid known as the Enernet, expected to become a reality around 2030.

Over the coming decades the economy is going to be transformed by the rise of the Collaborative Commons, i.e. peer production coordinated (usually with the aid of the Internet) into large, meaningful projects mostly without traditional hierarchical organization. Almost any consumer product will be downloadable online and 3-D printed at extremely low cost at home, which ultimately will lead to the end of capitalism and the start of an unprecedented era of abundance, as Peter Diamandis of Singularity University convincingly explains in his remarkable book.

Toward the Singularity

As amazing as all this seems, keep in mind that these advances in bioengineering, genetics, robotics and 3-D printing are merely what is being developed now and will become available to us within the next decade (horizon 2025). This isn’t the singularity yet. Once the singularity is reached, in 25 to 40 years, everything will change beyond our wildest dreams (or nightmares).

This article was originally posted on


TED talk: “When creative machines overtake man: Jürgen Schmidhuber at TEDxLausanne”

Machine intelligence is improving rapidly, to the point that the scientist of the future may not even be human! In fact, in more and more fields, learning machines are already outperforming humans.

Artificial intelligence expert Jürgen Schmidhuber isn’t able to predict the future accurately, but he explains how machines are getting creative, why 40,000 years of Homo sapiens-dominated history are about to end soon, and how we can try to make the best of what lies ahead.

IBM invents ‘3D nanoprinter’ for microscopic objects

April 26, 2014


IBM scientists have invented a tiny “chisel” with a nano-sized heatable silicon tip that creates patterns and structures on a microscopic scale.

The tip, similar to the kind used in atomic force microscopes, is attached to a bendable cantilever that scans the surface of the substrate material with the accuracy of one nanometer.

Unlike conventional 3D printers, by applying heat and force, the nanosized tip can remove (rather than add) material based on predefined patterns, thus operating like a “nanomilling” machine with ultra-high precision.

By the end of 2014, IBM hopes to begin exploring the use of this technology for its research with graphene.

“To create more energy-efficient clouds and crunch Big Data faster we need a new generation of technologies including new transistors, but before going into mass production, new techniques are needed for prototyping below 30 nanometers,” said Dr. Armin Knoll, a physicist at IBM Research – Zurich.

“With our new technique, we achieve very high resolution at 10 nanometers at greatly reduced cost and complexity. In particular, by controlling the amount of material evaporated, 3D relief patterns can also be produced at the unprecedented accuracy of merely one nanometer in a vertical direction. Now it’s up to the imagination of scientists and engineers.”

Other applications include nano-sized security tags to prevent the forgery of documents like currency, passports and priceless works of art, and quantum computing and communications (the nano-sized tip could be used to create high quality patterns to control and manipulate light at unprecedented precision).

The NanoFrazor

IBM has licensed this technology to a startup based in Switzerland called SwissLitho, which is bringing the technology to market under the name NanoFrazor.

Several weeks ago the firm shipped its first NanoFrazor to McGill University’s Nanotools Microfab, where scientists and students will use the tool’s unique fabrication capabilities to experiment with ideas for designing novel nano-devices.

To promote the new technology, scientists etched a microscopic National Geographic Kids magazine cover in 10 minutes onto a polymer. The resulting magazine cover is so small at 11 x 14 micrometers that 2,000 can fit on a grain of salt.

Today (April 25), IBM claimed its ninth GUINNESS WORLD RECORDS title for the Smallest Magazine Cover at the USA Science & Engineering Festival in Washington, D.C. Visible through a Zeiss microscope, the cover will be on display there on April 26 and 27.


Warp Drive Research Key to Interstellar Travel

April 25, 2014


As any avid Star Trek fan can tell you, the eccentric physicist Zefram Cochrane invented the warp-drive engine in the year 2063. It wasn’t easy. Cochrane had to contend with evil time-traveling aliens who were determined to stop him from building the faster-than-light propulsion system (see the 1996 movie Star Trek: First Contact for details). But in the end he succeeded, and centuries later his warp drive powered the interstellar voyages of the starship Enterprise.

What Star Trek fans may not know is that a physicist in the real world—specifically, at NASA’s Johnson Space Center in Houston—is investigating the feasibility of building a real warp-drive engine. Harold “Sonny” White, head of the center’s advanced propulsion program, has assembled a tabletop experiment designed to create tiny distortions in spacetime, the malleable fabric of the universe. If the experiment is successful, it may eventually lead to the development of a system that could generate a bubble of warped spacetime around a spacecraft. Instead of increasing the craft’s speed, the warp drive would distort the spacetime along its path, allowing it to sidestep the laws of physics that prohibit faster-than-light travel. Such a spacecraft could cross the vast distances between stars in just a matter of weeks.

For readers and writers of science fiction, this is extraordinary news. It doesn’t really matter that other physicists scoff at White’s idea, arguing that it’s impossible to alter spacetime in this way. Nor does it matter that NASA has allocated only $50,000, a mere smidgeon of the space agency’s $18 billion budget, to the warp-drive research. What makes White’s project so exciting is the immensity of the challenge. It’s heartening to know that even in this era of fiscal belt-tightening, the federal government is willing to place a small bet on the big dream of interstellar travel.

A surprising number of scientists, engineers and amateur space enthusiasts fervently believe in this dream. They’ve shared their hopes and hypotheses at academic conferences. They’ve founded organizations—the 100 Year Starship project, the Tau Zero Foundation, Icarus Interstellar—that seek to lay the groundwork for an unmanned interstellar mission that could be launched by the end of the century. Their ardor has grown in recent years as astronomers have detected a slew of Earthlike planets orbiting stars that are relatively near our sun. A few dozen of these worlds occupy the so-called “Goldilocks zone” around their stars—they’re neither too hot nor too cold to support life. If further observations confirm the existence of a habitable, idyllic planet in our corner of the galaxy, how could we resist sending an interstellar probe to explore this strange new world?

The problem is getting the spacecraft there in a reasonable amount of time. Believe it or not, NASA already has a probe that’s crossing the space between stars: Voyager 1, the plucky 1,600-pound craft launched in 1977 to investigate Jupiter, Saturn and their moons. After completing its primary mission the probe zipped past the outer planets, and in 2012 it left the solar system and entered interstellar space. Voyager has traveled almost 12 billion miles since its launch and is now zooming away from us at 38,610 miles per hour. But even at that blistering speed it would take at least 70,000 years to reach any of the nearby stars that might harbor habitable planets. Researchers need to make some serious breakthroughs in spacecraft propulsion to get there faster.
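A rough sanity check of the 70,000-year figure; only Voyager's speed comes from the article, while the target (Proxima Centauri, the nearest star) and the miles-per-light-year conversion are assumptions:

```python
speed_mph = 38_610               # Voyager 1's speed, from the article
miles_per_light_year = 5.879e12  # standard conversion (assumed)
distance_ly = 4.24               # Proxima Centauri (assumed target)

# Travel time = distance / speed, converting hours to years.
hours_per_year = 24 * 365.25
years = distance_ly * miles_per_light_year / (speed_mph * hours_per_year)
print(f"~{years:,.0f} years")    # on the order of 70,000 years
```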

Although White and a few other scientists are tantalized by the possibility of warp drive, most of the interstellar enthusiasts have focused their attention on technologies that are less hypothetical. Icarus Interstellar, for example, is coordinating a study of a proposed mission that would use fusion power—the energy produced by slamming atomic nuclei together—to propel the spacecraft. Nuclear fusion is what gives the hydrogen bomb its bang, and if the energy is properly controlled and harnessed it could accelerate a probe to phenomenal speeds, thousands of times faster than Voyager 1. But researchers have been trying to build a fusion power plant for the past fifty years without much success. The technology hasn’t proved itself on Earth yet, and it’s certainly not ready to be installed in a spacecraft.

Another big problem is interstellar dust. Although the dust grains in deep space are microscopic, they’ll cause plenty of damage to a probe that’s barreling into them at millions of miles per hour. The spacecraft would have to be equipped with heavy shielding, which would increase the amount of fuel needed to accelerate the craft. And then there’s the need to decelerate the probe before it reaches its destination. There’s no point in sending a spacecraft on a hundred-year journey to a nearby star if it’s going to whiz right past the star’s habitable planets. During the later stages of its voyage the probe would have to turn its engines around and fire them in the opposite direction to slow itself down. But then the spacecraft would need to carry an even heavier load of fuel.
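The fuel penalty for braking is worse than it sounds, because propellant mass grows exponentially with the total velocity change. A minimal sketch using the standard Tsiolkovsky rocket equation (the exhaust velocity and cruise speed below are invented round numbers, not figures from any proposed mission):

```python
import math

def mass_ratio(delta_v: float, exhaust_v: float) -> float:
    """Tsiolkovsky rocket equation: initial mass / final mass
    needed to achieve a total velocity change of delta_v."""
    return math.exp(delta_v / exhaust_v)

# Hypothetical numbers: a fusion drive with 10,000 km/s exhaust velocity,
# cruising at 15,000 km/s (about 5 percent of light speed).
v_e = 10_000.0   # km/s, assumed exhaust velocity
dv = 15_000.0    # km/s, cruise speed

accel_only = mass_ratio(dv, v_e)            # accelerate, then fly past the target
accel_and_brake = mass_ratio(2 * dv, v_e)   # accelerate, then decelerate to stop

print(accel_only, accel_and_brake)
```

Because the exponent doubles, stopping at the destination squares the mass ratio rather than merely doubling it: in this sketch, roughly 4.5 becomes roughly 20. Every kilogram of braking fuel must itself be accelerated at the start of the trip.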

The complications seem as endless as space itself. The tremendous difficulty of interstellar flight may help explain the famous paradox first noted by physicist Enrico Fermi in 1950: if intelligent life is common in the universe, where are all the aliens? Perhaps extraterrestrials have never visited Earth because it’s just too hard to get here.

Nevertheless, the dream of interstellar travel remains stubbornly alive. Last September the 100 Year Starship project held a symposium on the topic just a month after Icarus Interstellar hosted its own conference. At a time when NASA is struggling to fund all its priorities—building a new launch system for its astronauts, sending new probes to Mars—planning for an interstellar mission may seem absurdly premature. But advocates such as Jill Tarter, who pioneered the effort to hunt for radio signals from extraterrestrial civilizations, argue that exploring other star systems is essential to humanity’s long-term survival. As long as the human race is confined to Earth we’re at high risk of extinction from a planetary catastrophe—a nuclear war, a pandemic, an asteroid impact and so on. The only other world in our solar system that comes even close to being habitable is Mars, and it would take hundreds of years of climate engineering to make the Red Planet livable for humans.

So the ultimate fate of our species may lie among the stars. Perhaps in a thousand years or so our civilization will look something like Star Trek’s United Federation of Planets. To reach that point, though, we need to adopt the motto of the starship Enterprise. We have “to boldly go where no man has gone before.”

About the Author: Mark Alpert is the author of The Furies, a new science thriller from Thomas Dunne Books/St. Martin’s Press. His earlier thrillers—Final Theory, The Omega Theory and Extinction—have been published in 23 languages. Follow on Twitter @AlpertMark.


Revealed: Scientists ‘edit’ DNA to correct adult genes and cure diseases



A genetic disease has been cured in living, adult animals for the first time using a revolutionary genome-editing technique that can make the smallest changes to the vast database of the DNA molecule with pinpoint accuracy.

Scientists have used the genome-editing technology to cure adult laboratory mice of an inherited liver disease by correcting a single “letter” of the genetic alphabet which had been mutated in a vital gene involved in liver metabolism.

A similar mutation in the same gene causes the equivalent inherited liver disease in humans – and the successful repair of the genetic defect in laboratory mice raises hopes that the first clinical trials on patients could begin within a few years, scientists said.

The success is the latest achievement in the field of genome editing. This has been transformed by the discovery of Crispr, a technology that allows scientists to make almost any DNA changes at precisely defined points on the chromosomes of animals or plants. Crispr – pronounced “crisper” – was initially discovered in 1987 as an immune defence used by bacteria against invading viruses. Its powerful genome-editing potential in higher animals, including humans, was only fully realised in 2012 and 2013 when scientists showed that it can be combined with a DNA-snipping enzyme called Cas9 and used to edit the human genome.

Correcting genetic code graphic

Since then there has been an explosion of interest in the technology because it is such a simple method of changing the individual letters of the human genome – the 3 billion “base pairs” of the DNA molecule – with an accuracy equivalent to correcting a single misspelt word in a 23-volume encyclopaedia.

In the latest study, scientists at the Massachusetts Institute of Technology (MIT) used Crispr to locate and correct the single mutated DNA base pair in a liver gene known as FAH. In humans, the equivalent mutation can lead to a fatal build-up of the amino acid tyrosine, which has to be treated with drugs and a special diet.
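As a toy illustration of what correcting a single “letter” amounts to, the fix can be modelled as a one-character string edit. The sequence below is invented, not the real FAH gene, and this models only the bookkeeping of a point correction, not the Cas9 chemistry:

```python
# Toy model of a single-base ("point") correction. The sequences are
# made up for illustration; they are not the real FAH gene.
reference = "ATGGCTTTCAGAGTTGTC"
mutant    = "ATGGCTTTCAGGGTTGTC"   # a single A -> G substitution

# Find the one position where the mutant differs from the reference.
diffs = [i for i, (a, b) in enumerate(zip(reference, mutant)) if a != b]
assert len(diffs) == 1, "a point mutation differs at exactly one base"
pos = diffs[0]

# "Editing" amounts to restoring the reference base at that position.
corrected = mutant[:pos] + reference[pos] + mutant[pos + 1:]
print(pos, corrected == reference)
```

The hard part of the real technique is not this bookkeeping, of course, but delivering the molecular machinery that performs the swap inside living cells.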

The researchers effectively cured mice suffering from the disease by altering the genetic make-up of about a third of their liver cells using the Crispr technique, which was delivered by high-pressure intravenous injections.

“We basically showed you could use the Crispr system in an animal to cure a genetic disease, and the one we picked was a disease in the liver which is very similar to one found in humans,” said Professor Daniel Anderson of MIT, who led the study.

“The disease is caused by a single point mutation and we showed that the Crispr system can be delivered in an adult animal and result in a cure. We think it’s an important proof of principle that this technology can be applied to animals to cure disease,” Professor Anderson told The Independent. “The fundamental advantage is that you are repairing the defect, you are actually correcting the DNA itself,” he said. “What is exciting about this approach is that we can actually correct a defective gene in a living adult animal.”

Jennifer Doudna, of the University of California, Berkeley, who was one of the co-discoverers of the Crispr technique, said Professor Anderson’s study is a “fantastic advance” because it demonstrates that it is possible to cure adult animals living with a genetic disorder.

“Obviously there would be numerous hurdles before such an approach could be used in people, but the simplicity of the approach, and the fact that it worked, really are very exciting,” Professor Doudna said.

“I think there will be a lot of progress made in the coming one to two years in using this approach for therapeutics and other real-world applications,” she added.

Delivering Crispr safely and efficiently to affected human cells is seen as one of the biggest obstacles to its widespread use in medicine.

Feng Zhang, of the Broad Institute at MIT, said that high-pressure injections are probably too dangerous to be used clinically, which is why he is working on ways of using Crispr to correct genetic faults in human patients with the help of adeno-associated viruses, which are known to be harmless.

Other researchers are also working on viruses to carry the Crispr technology to diseased cells – similar viral delivery of genes has already had limited success in conventional gene therapy.

Dr Zhang said that Crispr can also be used to create better experimental models of human diseases by altering the genomes of experimental animals as well as human cells growing in the laboratory.

Professor Craig Mello of the University of Massachusetts Medical School said that delivering Crispr to the cells of the human brain and other vital organs will be difficult. “Crispr therapies will no doubt be limited for the foreseeable future,” he said.


Anthony Atala: Printing a human kidney


Surgeon Anthony Atala demonstrates an early-stage experiment that could someday solve the organ-donor problem: a 3D printer that uses living cells to output a transplantable kidney. Using similar technology, Dr. Atala’s young patient Luke Massella received an engineered bladder 10 years ago; we meet him onstage.

Cloaked DNA nanodevices survive pilot mission

April 23, 2014



It’s a familiar trope in science fiction: In enemy territory, activate your cloaking device. And real-world viruses use similar tactics to make themselves invisible to the immune system. Now scientists at Harvard’s Wyss Institute for Biologically Inspired Engineering have mimicked these viral tactics to build the first DNA nanodevices that survive the body’s immune defenses.

The results pave the way for smart DNA nanorobots that could use logic to diagnose cancer earlier and more accurately than doctors can today, target drugs to tumors, or even manufacture drugs on the spot to cripple cancer, the researchers report in the April 22 online issue of ACS Nano.

“We’re mimicking virus functionality to eventually build therapeutics that specifically target cells,” said Wyss Institute Core Faculty member William Shih, Ph.D., the paper’s senior author. Shih is also an Associate Professor of Biological Chemistry and Molecular Pharmacology at Harvard Medical School and Associate Professor of Cancer Biology at the Dana-Farber Cancer Institute.

The same cloaking strategy could also be used to make artificial microscopic containers called protocells that could act as biosensors to detect pathogens in food or toxic chemicals in drinking water.

DNA is well known for carrying genetic information, but Shih and other bioengineers are using it instead as a building material. To do this, they use DNA origami — a method Shih helped extend from 2D to 3D. In this method, scientists take a long strand of DNA and program it to fold into specific shapes, much as a single sheet of paper is folded to create various shapes in the traditional Japanese art.

Shih’s team assembles these shapes to build DNA nanoscale devices that might one day be as complex as the molecular machinery found in cells. For example, they are developing methods to build DNA into tiny robots that sense their environment, calculate how to respond, then carry out a useful task, such as performing a chemical reaction or generating mechanical force or movement.

Such DNA nanorobots may themselves sound like science fiction, but they already exist. In 2012 Wyss Institute researchers reported in Science that they had built a nanorobot that uses logic to detect a target cell, then reveals an antibody that activates a “suicide switch” in leukemia or lymphoma cells.

For a DNA nanodevice to successfully diagnose or treat disease, it must survive the body’s defenses long enough to do its job. But Shih’s team discovered that DNA nanodevices injected into the bloodstream of mice are quickly digested.

“That led us to ask, ‘How could we protect our particles from getting chewed up?'” Shih said.

Nature inspired the solution. The scientists designed their nanodevices to mimic a type of virus that protects its genome by enclosing it in a solid protein case, then layering on an oily coating identical to that in membranes that surround living cells. That coating, or envelope, contains a double layer (bilayer) of phospholipid that helps the viruses evade the immune system and delivers them to the cell interior.

“We suspected that a virus-like envelope around our particles could solve our problem,” Shih said.

To coat DNA nanodevices with phospholipid, Steve Perrault, Ph.D., a Wyss Institute Technology Development fellow in Shih’s group and the paper’s lead author, first folded DNA into a virus-sized octahedron. Then, he took advantage of the precision-design capabilities of DNA nanotechnology, building in handles to hang lipids, which in turn directed the assembly of a single bilayer membrane surrounding the octahedron.

Under an electron microscope, the coated nanodevices closely resembled an enveloped virus.

Perrault then demonstrated that the new nanodevices survived in the body. He did that by loading them with fluorescent dye, injecting them into mice, and using whole-body imaging to see what parts of the mouse glowed. Just the bladder glowed in mice that received uncoated nanodevices, which meant that the animals broke them down quickly and were ready to excrete their contents. But the animals’ entire body glowed for hours when they received the new, coated nanodevices. This showed that nanodevices remained in the bloodstream as long as effective drugs do. The coated devices also evade the immune system. Levels of two immune-activating molecules were at least 100-fold lower in mice treated with coated nanodevices as opposed to uncoated nanodevices.

In the future, cloaked nanorobots could activate the immune system to fight cancer or suppress the immune system to help transplanted tissue become established.

“Activating the immune response could be useful clinically or it might be something to avoid,” Perrault said. “The main point is that we can control it.”

“Patients with cancer and other diseases would benefit enormously from precise, molecular-scale tools to simultaneously diagnose and treat diseased tissues, and making DNA nanoparticles last in the body is a huge step in that direction,” said Wyss Institute Founding Director Don Ingber, M.D., Ph.D.

This work was funded by the National Institutes of Health, the U.S. Army Research Laboratory’s Army Research Office, and the Wyss Institute at Harvard University.



NASA Leading the Path to Mars

April 22, 2014


Artist’s Concept of a Solar Electric Propulsion System


Engineers and scientists around the country are working hard to develop the technologies astronauts will use to one day live and work on Mars and safely return home, and the Humans to Mars Summit this week is bringing together the best minds to share ideas about the path ahead. NASA will be leading the charge.

Last week, our solar system put on quite a show. An alignment of Earth, moon and sun produced a rare and spectacular blood moon lunar eclipse. In addition, Mars made its closest approach to Earth since 2007. And even as Mars drew tantalizingly close to Earth, NASA is drawing nearer to our goal of a human mission to the Red Planet. This week, April 22-24, NASA joins with the non-profit group Explore Mars and more than 1,500 leaders from government, academia, and business at the Humans to Mars (H2M) Summit 2014 at George Washington University to discuss the value, challenges and status of America’s path to Mars.

While NASA has been on a path to Mars for decades with our earlier Mars rovers and orbiters, a critical national policy statement in support of our strategy was made on April 15, 2010, during a visit by President Obama to Kennedy Space Center, where he challenged the nation to send humans to an asteroid by 2025 and to Mars in the 2030s. Since then, NASA has been developing the capabilities to meet those goals through a bipartisan space exploration plan agreed to by the administration and Congress and embraced by the international space community. While humans have been fascinated with Mars since the beginning of time, there are a number of very tangible reasons why we need to learn more about our closest planetary neighbor. For one thing, Mars’ formation and evolution are comparable to Earth’s, and we know that at one time Mars had conditions suitable for life. What we learn about the Red Planet may tell us more about our own home planet’s history and future and help us answer a fundamental human question – does life exist beyond Earth?

While robotic explorers have studied Mars for more than 40 years, NASA’s path for the human exploration of Mars begins in low-Earth orbit aboard the International Space Station (ISS), our springboard to the exploration of deep space. Astronauts aboard the ISS are helping us learn how to safely execute extended missions deeper into space. We are guaranteed this unique orbiting outpost for at least another decade by the Administration’s commitment to extend the ISS until at least 2024. This means an expanded market for private space companies, more groundbreaking research and science discovery in microgravity, and opportunities to live, work and learn in space over longer periods of time.

Our next step is deep space, where NASA will send the first mission to capture and redirect an asteroid to orbit the moon.  Astronauts aboard the Orion spacecraft will explore the asteroid in the 2020s, returning to Earth with samples. This experience in human spaceflight beyond low-Earth orbit will help NASA test new systems and capabilities – such as Solar Electric Propulsion – we’ll need to support a human mission to Mars.  Beginning in 2017, NASA’s powerful Space Launch System (SLS) rocket will enable these “proving ground” missions to test new capabilities.  Human missions to Mars will rely on Orion and an evolved version of SLS that will be the most powerful launch vehicle ever flown.

A fleet of robotic spacecraft and rovers already are on and around Mars, dramatically increasing our knowledge about the Red Planet and paving the way for future human explorers.  The Mars Science Laboratory Curiosity rover measured radiation on the way to Mars and is sending back radiation data from the surface.  This data will help us plan how to protect the astronauts who will explore Mars.  Future missions like the Mars 2020 rover, seeking the signs of past life, also will demonstrate new technologies that could help astronauts survive on Mars.


It is important to remember that NASA sent humans to the moon by setting a goal that seemed beyond our reach. In that same spirit, we have made a human mission to Mars the centerpiece of our next big leap into the unknown. The challenge is huge, but we are making real progress today: a radiation monitor on the Curiosity rover is recording the Martian radiation environment that our crews will experience; advanced entry, descent and landing technologies needed for landing on Mars are ready for entry-speed testing high above the waters of the Pacific Ocean in June; Orion is finishing preparation for a heat shield test in December; and flight hardware for the heavy-lift rocket necessary for Mars missions is beginning manufacture in New Orleans. The future of space exploration is bright, and we are counting on the support of Congress, the scientific community and the American people to help us realize our goals.

‘Chaperone’ compounds offer new approach to Alzheimer’s treatment

April 21, 2014



A team of researchers from Columbia University Medical Center (CUMC), Weill Cornell Medical College, and Brandeis University has devised a wholly new approach to the treatment of Alzheimer’s disease involving the so-called retromer protein complex. Retromer plays a vital role in neurons, steering amyloid precursor protein (APP) away from a region of the cell where APP is cleaved, creating the potentially toxic byproduct amyloid-beta, which is thought to contribute to the development of Alzheimer’s.

Using computer-based virtual screening, the researchers identified a new class of compounds, called pharmacologic chaperones, that can significantly increase retromer levels and decrease amyloid-beta levels in cultured hippocampal neurons, without apparent cell toxicity. The study was published today in the online edition of the journal Nature Chemical Biology.

“Our findings identify a novel class of pharmacologic agents that are designed to treat neurologic disease by targeting a defect in cell biology, rather than a defect in molecular biology,” said Scott Small, MD, the Boris and Rose Katz Professor of Neurology, Director of the Alzheimer’s Disease Research Center in the Taub Institute for Research on Alzheimer’s Disease and the Aging Brain at CUMC, and a senior author of the paper. “This approach may prove to be safer and more effective than conventional treatments for neurologic disease, which typically target single proteins.”

In 2005, Dr. Small and his colleagues showed that retromer is deficient in the brains of patients with Alzheimer’s disease. In cultured neurons, they showed that reducing retromer levels raised amyloid-beta levels, while increasing retromer levels had the opposite effect. Three years later, he showed that reducing retromer had the same effect in animal models, and that these changes led to Alzheimer’s-like symptoms. Retromer abnormalities have also been observed in Parkinson’s disease.

In discussions at a scientific meeting, Dr. Small and co-senior authors Gregory A. Petsko, DPhil, Arthur J. Mahon Professor of Neurology and Neuroscience in the Feil Family Brain and Mind Research Institute and Director of the Helen and Robert Appel Alzheimer’s Disease Research Institute at Weill Cornell Medical College, and Dagmar Ringe, PhD, Harold and Bernice Davis Professor in the Departments of Biochemistry and Chemistry and in the Rosenstiel Basic Medical Sciences Research Center at Brandeis University, began wondering if there was a way to stabilize retromer (that is, prevent it from degrading) and bolster its function. “The idea that it would be beneficial to protect a protein’s structure is one that nature figured out a long time ago,” said Dr. Petsko. “We’re just learning how to do that pharmacologically.”

Other researchers had already determined retromer’s three-dimensional structure. “Our challenge was to find small molecules—or pharmacologic chaperones—that could bind to retromer’s weak point and stabilize the whole protein complex,” said Dr. Ringe.

This was accomplished through computerized virtual, or in silico, screening of known chemical compounds, simulating how the compounds might dock with the retromer protein complex. (In conventional screening, compounds are physically tested to see whether they interact with the intended target, a costlier and lengthier process.) The screening identified 100 potential retromer-stabilizing candidates, 24 of which showed particular promise. Of those, one compound, called R55, was found to significantly increase the stability of retromer when the complex was subjected to heat stress.
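The screening funnel described here — score many compounds against the target, keep the promising ones, then pick the best — can be sketched as a simple filter-and-rank step. The compound names other than R55, the scores and the cutoff below are all invented for illustration; R55 is given the top score purely to mirror the article's outcome:

```python
# Sketch of a virtual-screening funnel: rank candidate compounds by a
# (hypothetical) docking score, keep those past a cutoff, pick the best.
# All names except R55, and every number, are invented for illustration.
compounds = {
    "R55": -9.2, "R12": -7.1, "R03": -5.4, "R77": -8.8, "R41": -6.0,
}

DOCKING_CUTOFF = -7.0   # assumed threshold; more negative = tighter binding

# Keep only compounds that score at or below the cutoff.
hits = {name: score for name, score in compounds.items() if score <= DOCKING_CUTOFF}

# The lead candidate is the best (lowest) scorer among the hits.
lead = min(hits, key=hits.get)
print(sorted(hits), lead)
```

In the real study the funnel was far larger — 100 candidates narrowed to 24, with R55 emerging from heat-stress testing rather than from the docking score alone — but the shape of the computation is the same.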

The researchers then looked at how R55 affected neurons of the hippocampus, a key brain structure involved in learning and memory. “One concern was that this compound would be toxic,” said Dr. Diego Berman, assistant professor of clinical pathology and cell biology at CUMC and a lead author. “But R55 was found to be relatively non-toxic in mouse neurons in cell culture.”

More important, a subsequent experiment showed that the compound significantly increased retromer levels and decreased amyloid-beta levels in cultured neurons taken from healthy mice and from a mouse model of Alzheimer’s. The researchers are currently testing the clinical effects of R55 in the mouse model itself.

“The odds that this particular compound will pan out are low, but the paper provides a proof of principle for the efficacy of retromer pharmacologic chaperones,” said Dr. Petsko. “While we’re testing R55, we will be developing chemical analogs in the hope of finding compounds that are more effective.”

Story Source:

The above story is based on materials provided by Columbia University Medical Center. Note: Materials may be edited for content and length.

Journal Reference:

  1. Vincent J Mecozzi, Diego E Berman, Sabrina Simoes, Chris Vetanovetz, Mehraj R Awal, Vivek M Patel, Remy T Schneider, Gregory A Petsko, Dagmar Ringe, Scott A Small. Pharmacological chaperones stabilize retromer to limit APP processing. Nature Chemical Biology, 2014; DOI: 10.1038/nchembio.1508