MIT’s AlterEgo headset can read words you say in your head

May 20, 2018

I don’t want to alarm you, but robots can now read your mind. Kind of.

AlterEgo is a new headset developed by MIT Media Lab. You strap it to your face. You talk to it. It talks to you. But no words are said. You say things in your head, like “what street am I on,” and it reads the signals your brain sends to your mouth and jaw, and answers the question for you.

Check out the handy explainer video MIT Media Lab made that shows some of the potential of AlterEgo.

So yes, according to MIT Media Lab, you may soon be able to control your TV with your mind.

The institution explained in its announcement that AlterEgo communicates with you through bone-conduction headphones, which circumvent the ear canal by transmitting sound vibrations through your face bones. Freaky. This, MIT Media Lab said, makes it easier for AlterEgo to talk to you while you’re talking to someone else.

Plus, in trials involving 15 people, AlterEgo transcribed internally spoken words with an average accuracy of 92 percent.

Arnav Kapur, the graduate student who led AlterEgo’s development, describes it as an “intelligence-augmentation device.”

“We basically can’t live without our cellphones, our digital devices,” said Pattie Maes, Kapur’s thesis advisor at MIT Media Lab. “But at the moment, the use of those devices is very disruptive.

“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

Mapping connections of single neurons using a holographic light beam

November 18, 2017

Controlling single neurons using optogenetics (credit: the researchers)

Researchers at MIT and Paris Descartes University have developed a technique for precisely mapping connections of individual neurons for the first time by triggering them with holographic laser light.

The technique is based on optogenetics (using light to stimulate or silence light-sensitive genetically modified protein molecules called “opsins” that are embedded in specific neurons). Current optogenetics techniques can’t isolate individual neurons (and their connections) because the light strikes a relatively large area — stimulating axons and dendrites of other neurons simultaneously (and these neurons may have different functions, even when nearby).

The new technique stimulates only the soma (body) of the neuron, not its connections. To achieve that, the researchers combined two new advances: an optimized holographic light-shaping microscope* and a localized, more powerful opsin protein called CoChR.

Two-photon computer-generated holography (CGH) was used to create three-dimensional sculptures of light that envelop only a target cell, using a conventional pulsed laser coupled with a widefield epifluorescence imaging system. (credit: Or A. Shemesh et al./Nature Neuroscience)

The researchers used an opsin protein called CoChR, which generates a very strong electric current in response to light, and fused it to a small protein that directs the opsin into the cell bodies of neurons and away from the axons and dendrites that extend from the neuron body, forming “somatic channelrhodopsin” (soCoChR). This new opsin enabled photostimulation of individual cells in mouse cortical brain slices with single-cell resolution and with less than 1-millisecond temporal precision — achieving connectivity mapping on intact cortical circuits without crosstalk between neurons. (credit: Or A. Shemesh et al./Nature Neuroscience)

In the new study, by combining this approach with the new “somatic channelrhodopsin” opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also great control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability of less than one millisecond, even when the cell is stimulated many times in a row.

“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research. Boyden is co-senior author with Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, of a study that appears in the Nov. 13 issue of Nature Neuroscience.

Mapping neural connections in real time

Using this technique, the researchers were able to stimulate single neurons in brain slices and then measure the responses from cells that are connected to that cell. This may pave the way for more precise diagramming of the connections of the brain, and analyzing how those connections change in real time as the brain performs a task or learns a new skill.

Optogenetics was co-developed in 2005 by Ed Boyden (credit: MIT)

One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.

“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”

As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.

The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.

* Traditional holography is based on reproducing, with light, the shape of a specific object, in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer without the need of any original object. Combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.

Abstract of Temporally precise single-cell-resolution optogenetics

Optogenetic control of individual neurons with high temporal precision within intact mammalian brain circuitry would enable powerful explorations of how neural circuits operate. Two-photon computer-generated holography enables precise sculpting of light and could in principle enable simultaneous illumination of many neurons in a network, with the requisite temporal precision to simulate accurate neural codes. We designed a high-efficacy soma-targeted opsin, finding that fusing the N-terminal 150 residues of kainate receptor subunit 2 (KA2) to the recently discovered high-photocurrent channelrhodopsin CoChR restricted expression of this opsin primarily to the cell body of mammalian cortical neurons. In combination with two-photon holographic stimulation, we found that this somatic CoChR (soCoChR) enabled photostimulation of individual cells in mouse cortical brain slices with single-cell resolution and <1-ms temporal precision. We used soCoChR to perform connectivity mapping on intact cortical circuits.

10 Breakthrough Technologies for 2015 – MIT Technology Review

January 23, 2016


Not all breakthroughs are created equal. Some arrive more or less as usable things; others mainly set the stage for innovations that emerge later, and we have to estimate when that will be. But we’d bet that every one of the milestones on this list will be worth following in the coming years.

Researchers coax human stem cells to form complex tissues

January 23, 2016


A new technique for programming human stem cells to produce different types of tissue on demand may ultimately allow personalized organs to be grown for transplant patients.

The technique, which also has near-term implications for growing organ-like tissues on a chip, was developed by researchers at MIT and is unveiled in a study published today in the journal Nature Communications.

Growing organs on demand, using cells derived from patients themselves, could eliminate the lengthy wait that people in need of a transplant are often forced to endure before one becomes available.

It could also reduce the risk of a patient’s immune system rejecting the transplant, since the tissue would be grown from the patient’s own cells, according to Ron Weiss, professor of biological engineering at MIT, who led the research.

“Imagine that there is a patient with liver complications,” Weiss says. “We could take skin cells from that person and then [convert] them into stem cells, and then genetically program them to make the liver tissue, and transplant that into the patient.”

A rudimentary organ

The researchers developed the new technique while investigating whether they could use stem cells to produce pancreatic beta cells for treating patients with diabetes.

In order to do this, the researchers needed to devise a means to convert stem cells into pancreatic beta cells on demand.

As a first step in this process, they took human induced pluripotent stem (IPS) cells—stem cells generated from adult fibroblasts, or skin cells—and converted them into “endoderm,” one of the three primary cell types in a developing organism. Endoderm, mesoderm, and ectoderm make up the three so-called germ layers that contribute to nearly all of the different cell types in the body. “They are the first real step of [cell] differentiation,” Weiss says.

The researchers developed a method that uses a small molecule called doxycycline (dox) to induce the IPS cells to express a protein known as GATA6. This protein can convert IPS cells into endoderm.

Rather than immediately attempting to convert these endoderm cells into beta cells, though, the paper’s lead author, Patrick Guye, a former postdoc in Weiss’ lab and currently a laboratory head with Sanofi-Aventis in Frankfurt, Germany, decided to allow the cells to continue growing and monitor their progress.

After two weeks, the researchers found that the endoderm, and some mesoderm also present in the cell culture, had matured further, to form a liver “bud,” or small, rudimentary liver.

“We observed the development of many cell types found in the fetal liver, including the development of blood vessel-like networks, various mesenchymal precursors, and the formation of early red and white blood cells within our liver-like tissue,” Guye says. “This is especially exciting, as the process looks very similar if not identical to what is happening in the early liver bud in vivo, that is, in our own development.”

What’s more, the researchers discovered that only those IPS cells that had been exposed to more of the genetic programming, and had therefore gone on to produce more GATA6, became endoderm. Alongside these were IPS cells that did not make much GATA6, which went on to form ectoderm instead, and then matured further to become early telencephalon, or forebrain.

By controlling how much GATA6 the cells expressed, the researchers were able to determine how much liver bud and how much forebrain tissue was generated, Weiss says.
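The logic of that dose-dependent fate switch can be pictured with a toy model (the threshold and expression levels below are purely illustrative, invented for this sketch, not numbers from the paper): cells expressing GATA6 above some threshold commit to endoderm, the rest default to ectoderm, and the endoderm-to-ectoderm ratio tracks the dose.

```python
def fate(gata6_level, threshold=0.5):
    """Toy fate rule: above the (hypothetical) threshold -> endoderm
    (liver bud); below it -> ectoderm (forebrain precursor)."""
    return "endoderm" if gata6_level >= threshold else "ectoderm"

# Illustrative per-cell GATA6 expression levels after dox induction
levels = [0.9, 0.8, 0.6, 0.4, 0.2, 0.1]
fates = [fate(x) for x in levels]

# Tuning the dox dose shifts these levels up or down, which in this
# toy model directly changes the fraction of liver-fated cells.
liver_fraction = fates.count("endoderm") / len(fates)
```

Raising the simulated dose (shifting every level upward) increases `liver_fraction`, which is the qualitative behavior Weiss describes.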

This suggests that the technique could be used to produce not just individual tissue types, but different combinations of tissue, he says.

“The fact that we are able to produce endoderm, mesoderm, and ectoderm gives us great hope that we can take each of these germ layers and hopefully grow any kind of tissue we want,” he says.


While it is likely to be some time before the technique can be used to generate transplant organs, it could be used almost immediately to grow different human tissue on which to test new drugs, Weiss says.

Using human stem cell-derived organ tissue to test new treatments could be far more reliable than testing on animals, since different species may react differently to a drug, he says.

The technique could also allow clinicians to carry out patient-specific drug testing. “If you are not sure whether you will have complications from taking a particular drug, then before you take it you could try it out on your own liver-on-a-chip,” Weiss says.

Similarly, the organ-on-a-chip could be used to monitor the interaction between different drugs that people may be taking.

“As people age, some are taking 10, 15, or 20 drugs together, and it’s impossible for the pharmaceutical companies to test all of these combinations for every individual. But we would be able to test that out,” he says. “That is something that can be done now.”

In addition to these therapeutic applications, the technique could allow researchers to gain a better understanding of the development of different types of tissue, such as the liver and neurons.

The paper reveals some intrinsic mechanisms underlying the interactions of stem cells during liver development, and provides a useful model that sheds light on the complex process of embryogenesis, says Bing Song, a professor of engineering at Cardiff University in the UK, who was not involved in the research.

“In my field, which is combining genetically modified stem cells and physical stimulation (electrical and magnetic) to cure spinal cord injuries and degenerative disease, the paper has given me some very useful ideas,” he says.

The researchers now hope to investigate whether they can use the technique to grow other organs on demand, such as a pancreas.

5 Awesome Inventions That Came out of MIT This Year

December 20, 2015


We’re coming down the homestretch for 2015, and now is the time when most folks like to reflect on all of the things they’re thankful for. In the campus innovation space, that basically means MIT. Being one of the most technologically progressive universities in the world, MIT has a longstanding reputation for churning out life-changing innovations as if it were a cakewalk.

This past year has been no different. Here are our picks of 5 inventions coming out of MIT that are sure to impact the world, making us very thankful to have the Cambridge-based university in our corner.

Personalized heart models

If everyone’s body matched textbook anatomy, doctors’ jobs would be a breeze. Unfortunately, that’s not the case: every patient going in for heart surgery has slightly different nuances to their cardiac makeup. As you can imagine, these procedures carry high stakes, so it’s not great for surgeons to be surprised in the OR.

Taking part of the guesswork out of cardiac surgeries is a system developed by MIT and Boston Children’s Hospital. Doctors can now scan an individual patient’s heart and 3D print a personalized model within a matter of hours. As a result, surgeons can plan for specific surgeries, knowing exactly what they’ll be looking at. This project is still in the works, but once it’s further along, researchers maintain doctors will be able to do simulated surgeries before the fact, significantly reducing risks on the operating table.

A microbot that swims and self-destructs

What’s better than robots? Tiny robots. A team at MIT has developed an “origami” robot that measures only a centimeter in length. But just because this invention is small, don’t think the technology behind it won’t make a big impact.

MIT researchers’ miniature origami robot. Photo credit: Christine Daniloff/MIT.

This robot folds itself from a sheet of plastic when exposed to heat. It can then move about, almost like a super insect. It can swim, climb inclines, and even carry objects twice its weight (which, granted, is not that much). It’s all powered by innovative external controls that the MIT researchers created using a magnetic field.

If you don’t think any of that’s impressive, check this out: The origami robot can self-destruct by deteriorating all on its own.

Ingestible sensors to take your vitals

Every time you go to the doctor, the first thing they do is hook you up to a multitude of machines. If you’ve ever sat there thinking that there has to be a better way to take your vitals, you’re right. A group at MIT developed an ingestible device that accurately takes all of the essential readings – heart and breathing rates, for example – from the comfort of your GI tract.

While health care providers might not be handing out these devices like candy, they’ll probably soon be using them for certain people. Patients with chronic illnesses who need regular monitoring, as well as soldiers and athletes, could benefit from the technology in the future.

A robot with human-like reflexes

Robots aren’t known for their grace and poise. In fact, they’re generally considered clunky and clumsy. That is, until now.

While several teams at MIT are working on different ways to make robots more suitable to navigate our fragile, human world, HERMES is one of the most remarkable. It’s an upright and bipedal device, but that’s not what makes it most impressive. With the development of a balance feedback interface, the robot is able to make note of shifting weight and adjust accordingly. As a result, HERMES has human-like reflexes. It’s not autonomous, though, as it requires someone to operate it, acting as a marionette.

Microchip-delivered drugs

While this is technically a spinout of MIT – developed by two professors who founded the startup Microchips Biotech – it still counts. Rather than subjecting people with chronic illnesses to continuous rounds of medication, shots and treatments, this company has come up with a way to deliver crucial drugs via a microchip.

These devices are implanted in the body and are able to release medications over extended periods of time. For example, patients with cancer, MS and diabetes can have one of these microchips put in and receive automatic treatment for as long as 6 years. That means no more pills or in-hospital treatments.

Injected into the body, self-healing nanogel acts as customized long-term drug supply

February 24, 2015

These scanning electron microscopy images, taken at different magnifications, show the structure of new hydrogels made of nanoparticles interacting with long polymer chains (credit: Eric A. Appel et al./Nature Communications)

MIT chemical engineers have designed a new type of self-healing hydrogel that can be injected through a syringe to supply one or two different drugs at a time.

In theory, gels could be useful for delivering drugs for treating cancer, macular degeneration, or heart disease because they can be molded into specific shapes and designed to release their payload in a specific location over a specified time period. However, current gels are not very practical because they must be implanted surgically.

In contrast, the new gel consists of a mesh network of nanoparticles made of polymers entwined within strands of another polymer, such as cellulose.  “Now you have a gel that can change shape when you apply stress to it, and then, importantly, it can re-heal when you relax those forces. That allows you to squeeze it through a syringe or a needle and get it into the body without surgery,” says Mark Tibbitt, a postdoc at MIT’s Koch Institute for Integrative Cancer Research and one of the lead authors of a paper describing the gel in Nature Communications on Thursday Feb. 19.

Koch Institute postdoc Eric Appel is also a lead author of the paper, and the paper’s senior author is Robert Langer, the David H. Koch Institute Professor at MIT.

Another limitation of hydrogels for biomedical uses — such as making soft contact lenses — is that they are traditionally formed by irreversible chemical linkages between polymers, so their shape cannot easily be altered.

How to create a self-assembling gel

So the MIT team set out to create a gel that could survive strong mechanical forces, known as shear forces, yet capable of reforming itself. Other researchers have created such gels by engineering proteins that self-assemble into hydrogels, but this approach requires complex biochemical processes. The MIT team wanted to design something simpler.

The MIT approach relies on a combination of two readily available components. One is a type of nanoparticle formed of PEG-PLA copolymers, first developed in Langer’s lab decades ago and now commonly used to package and deliver drugs. To form the new hydrogel, the researchers mixed these particles with a polymer — in this case, cellulose.

Each polymer chain forms weak bonds with many nanoparticles, producing a loosely woven lattice or network of polymers and nanoparticles. Because each attachment point is fairly weak, the bonds are able to break apart under mechanical stress, such as when injected through a syringe. When these shear forces are over, the polymers and nanoparticles reassemble, forming new attachments with different partners and healing the gel.

Using two components to form the gel also gives the researchers the opportunity to deliver two different drugs at the same time. PEG-PLA nanoparticles have an inner core that is ideally suited to carry hydrophobic (water-incompatible) small-molecule drugs, which include many chemotherapy drugs. Meanwhile, the polymers, which exist in a watery solution, can carry hydrophilic (water-compatible) molecules such as proteins, including antibodies and growth factors.

Long-term drug delivery

In this study, the researchers showed that the gels survived injection under the skin of mice and successfully released two drugs, one hydrophobic and one hydrophilic, over several days.

This type of gel offers an important advantage over injecting a liquid solution of drug-delivery nanoparticles: Such a solution will immediately disperse throughout the body, while the gel stays in place after injection, allowing the drug to be targeted to a specific tissue and avoiding toxic reactions elsewhere. Furthermore, the properties of each gel component can be tuned so the drugs they carry are released at different rates, allowing them to be tailored for different uses.

Treating eye, heart, and cancer issues

The researchers are now looking into using the gel to deliver anti-angiogenesis (anti-blood-vessel-forming) drugs to treat macular degeneration. Currently, patients receive these drugs, which cut off the growth of blood vessels that interfere with sight, as an injection into the eye once a month (try not to visualize that). The MIT team envisions that the new gel could be programmed to deliver these drugs over several months, reducing the frequency of injections.

Another potential application for the gels is delivering drugs, such as growth factors, that could help repair damaged heart tissue after a heart attack.

The researchers are also pursuing the possibility of using this gel to deliver cancer drugs to kill tumor cells that get left behind after surgery. In that case, the gel would be loaded with a chemical that lures cancer cells toward the gel, as well as a chemotherapy drug that would kill them. This could help eliminate the residual cancer cells that often form new tumors following surgery.

“Removing the tumor leaves behind a cavity that you could fill with our material, which would provide some therapeutic benefit over the long term in recruiting and killing those cells,” Appel says. “We can tailor the materials to provide us with the drug-release profile that makes it the most effective at actually recruiting the cells.”

The research was funded by the Wellcome Trust, the Misrock Foundation, the Department of Defense, and the National Institutes of Health.

Abstract of Self-assembled hydrogels utilizing polymer–nanoparticle interactions

Mouldable hydrogels that flow on applied stress and rapidly self-heal are increasingly utilized as they afford minimally invasive delivery and conformal application. Here we report a new paradigm for the fabrication of self-assembled hydrogels with shear-thinning and self-healing properties employing rationally engineered polymer–nanoparticle (NP) interactions. Biopolymer derivatives are linked together by selective adsorption to NPs. The transient and reversible interactions between biopolymers and NPs enable flow under applied shear stress, followed by rapid self-healing when the stress is relaxed. We develop a physical description of polymer–NP gel formation that is utilized to design biocompatible gels for drug delivery. Owing to the hierarchical structure of the gel, both hydrophilic and hydrophobic drugs can be entrapped and delivered with differential release profiles, both in vitro and in vivo. The work introduces a facile and generalizable class of mouldable hydrogels amenable to a range of biomedical and industrial applications.

High-speed drug screening

October 11, 2014


MIT engineers have devised a way to rapidly test hundreds of different drug-delivery vehicles in living animals, making it easier to discover promising new ways to deliver a class of drugs called biologics, which includes antibodies, peptides, RNA, and DNA, to human patients.

In a study appearing in the journal Integrative Biology, the researchers used this technology to identify materials that can efficiently deliver RNA to zebrafish and also to rodents.

This type of high-speed screen could help overcome one of the major bottlenecks in developing disease treatments based on biologics: how to find safe and effective ways to deliver them.

“Biologics is the fastest growing field in biotech, because it gives you the ability to do highly predictive designs with unique targeting capabilities,” says senior author Mehmet Fatih Yanik, an associate professor of electrical engineering and computer science and biological engineering. “However, delivery of biologics to diseased tissues is challenging, because they are significantly larger and more complex than conventional drugs.”

Automating large-scale studies 

Zebrafish are commonly used to model human diseases, in part because their larvae are transparent, making it easy to see the effects of genetic mutations or drugs.

In 2010, Yanik’s team developed a technology for rapidly moving zebrafish larvae to an imaging platform, orienting them correctly, and imaging them. This kind of automated system makes it possible to do large-scale studies because analyzing each larva takes less than 20 seconds, compared with the several minutes it would take for a scientist to evaluate the larvae by hand.
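The throughput gain is easy to put in back-of-envelope terms. Assuming roughly three minutes per larva by hand (the article says only "several minutes," so that figure is an assumption for illustration):

```python
def larvae_per_hour(seconds_per_larva):
    """Rough screening throughput for a given per-larva handling time."""
    return 3600 // seconds_per_larva

automated = larvae_per_hour(20)   # platform: under 20 s per larva -> 180/hour
manual = larvae_per_hour(180)     # assumed ~3 min by hand -> 20/hour
```

A nine-fold throughput difference is what turns a screen of a hundred delivery vehicles from weeks of bench work into something feasible in days.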

For this new study, Yanik’s team developed a new technology to inject RNA carried by nanoparticles called lipidoids. These fatty molecules have shown promise as delivery vehicles for RNA interference, a process that allows disease-causing genes to be turned off with small strands of RNA.

Yanik’s group tested about 100 lipidoids that had not performed well in tests of RNA delivery in cells grown in a lab dish. They designed each lipidoid to carry RNA expressing a fluorescent protein, allowing them to easily track RNA delivery, and injected the lipidoids into the spinal fluid of the zebrafish.

To automate that process, the zebrafish were oriented either laterally or dorsally once they arrived on the viewing platform. Once the larvae were properly aligned, they were immobilized by a hydrogel. Then, the lipidoid-RNA complex was automatically injected, guided by a computer vision algorithm. The system can be adapted to target any organ, and the process takes about 14 seconds per fish.

A few hours after injection, the researchers imaged the zebrafish to see if they displayed any fluorescent protein in the brain, indicating whether the RNA successfully entered the brain tissue, was taken up by the cells, and expressed the desired protein.

The researchers found that several lipidoids that had not performed well in cultured cells did deliver RNA efficiently in the zebrafish model. They next tested six randomly selected best- and worst-performing lipidoids in rats and found that the correlation between performance in rats and in zebrafish was 97 percent, suggesting that zebrafish are a good model for predicting drug-delivery success in mammals.
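That 97 percent figure is a correlation between delivery performance in the two species. As a reminder of what the statistic measures, here is a minimal Pearson correlation over made-up efficiency scores (the numbers are illustrative, not the study's data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient: covariance of the two series
    divided by the product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-lipidoid RNA-delivery efficiencies in each species
zebrafish = [0.90, 0.70, 0.60, 0.30, 0.20, 0.10]
rat       = [0.85, 0.75, 0.55, 0.35, 0.15, 0.12]

r = pearson(zebrafish, rat)  # close to 1: rankings agree across species
```

A value of r near 1 means lipidoids that worked well in zebrafish also worked well in rats, which is exactly the predictive-model claim.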

The idea is to identify useful drug delivery nanoparticles using this miniaturized system.

New leads

The researchers are now using what they learned about the most successful lipidoids identified in this study to try to design even better possibilities. “If we can pick up certain design features from the screens, it can guide us to design larger combinatorial libraries based on these leads,” Yanik says.

Yanik’s lab is currently using this technology to find delivery vehicles that can carry biologics across the blood-brain barrier — a very selective barrier that makes it difficult for drugs or other large molecules to enter the brain through the bloodstream.

The research was funded by the National Institutes of Health, the Packard Award in Science and Engineering, Sanofi Pharmaceuticals, Foxconn Technology Group, and the Hertz Foundation.

Abstract of Organ-targeted high-throughput in vivo biologics screen identifies materials for RNA delivery

Therapies based on biologics involving delivery of proteins, DNA, and RNA are currently among the most promising approaches. However, although large combinatorial libraries of biologics and delivery vehicles can be readily synthesized, there are currently no means to rapidly characterize them in vivo using animal models. Here, we demonstrate high-throughput in vivo screening of biologics and delivery vehicles by automated delivery into target tissues of small vertebrates with developed organs. Individual zebrafish larvae are automatically oriented and immobilized within hydrogel droplets in an array format using a microfluidic system, and delivery vehicles are automatically microinjected to target organs with high repeatability and precision. We screened a library of lipid-like delivery vehicles for their ability to facilitate the expression of protein-encoding RNAs in the central nervous system. We discovered delivery vehicles that are effective in both larval zebrafish and rats. Our results showed that the in vivo zebrafish model can be significantly more predictive of both false positives and false negatives in mammals than in vitro mammalian cell culture assays. Our screening results also suggest certain structure–activity relationships, which can potentially be applied to design novel delivery vehicles.


MIT’s Robotic Cheetah Can Now Run And Jump While Untethered

September 15, 2014

Well, we knew it had to happen someday. A DARPA-funded robotic cheetah has been released into the wild, so to speak. A new algorithm developed by MIT researchers now allows their quadruped to run and jump — while untethered — across a field of grass.

The Pentagon, in an effort to investigate technologies that allow machines to traverse terrain in unique ways (well, at least that’s what they tell us), has been funding (via DARPA) the development of a robotic cheetah. Back in 2012, Boston Dynamics’ version smashed the land-speed record for the fastest mechanical mammal on Earth, reaching a top speed of 28.3 miles (45.5 km) per hour.

Researchers at MIT have their own version of robo-cheetah, and they’ve taken the concept in a new direction by imbuing it with the ability to run and bound while completely untethered.

MIT News reports:

The key to the bounding algorithm is in programming each of the robot’s legs to exert a certain amount of force in the split second during which it hits the ground, in order to maintain a given speed: In general, the faster the desired speed, the more force must be applied to propel the robot forward. Sangbae Kim, an associate professor of mechanical engineering at MIT, hypothesizes that this force-control approach to robotic running is similar, in principle, to the way world-class sprinters race.

“Many sprinters, like Usain Bolt, don’t cycle their legs really fast,” Kim says. “They actually increase their stride length by pushing downward harder and increasing their ground force, so they can fly more while keeping the same frequency.”

Kim says that by adapting a force-based approach, the cheetah-bot is able to handle rougher terrain, such as bounding across a grassy field. In treadmill experiments, the team found that the robot handled slight bumps in its path, maintaining its speed even as it ran over a foam obstacle.
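The force-control idea described above can be sketched in a few lines: during each leg's brief stance phase, command a ground force that scales with the desired running speed, and command nothing while the leg swings. This is an illustrative sketch only; the function names, gains, and units are hypothetical and are not taken from the MIT controller.

```python
# Hypothetical sketch of the bounding algorithm's core idea: in the split
# second a leg is on the ground, it exerts a force that grows with the
# desired speed. All constants here are illustrative, not MIT's values.

def stance_force(desired_speed, base_force=80.0, gain=25.0):
    """Ground force (N) a leg should exert while in contact.

    Faster target speeds demand larger propulsive forces, letting the
    robot spend more of each stride airborne at the same stride frequency,
    much like the sprinter analogy in Kim's quote.
    """
    return base_force + gain * desired_speed

def leg_command(in_stance, desired_speed):
    """Force command for one leg at this control tick."""
    if in_stance:
        return stance_force(desired_speed)
    return 0.0  # leg is swinging; no ground force to apply

# A faster target speed yields a larger commanded stance force.
assert leg_command(True, 3.0) > leg_command(True, 1.0)
```

The point of the sketch is that speed is regulated through force during contact rather than by cycling the legs faster, which is what lets the robot absorb bumps like the foam obstacle without losing speed.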

“Most robots are sluggish and heavy, and thus they cannot control force in high-speed situations,” Kim says. “That’s what makes the MIT cheetah so special: You can actually control the force profile for a very short period of time, followed by a hefty impact with the ground, which makes it more stable, agile, and dynamic.”

This particular model, which weighs about as much as a real cheetah, can reach speeds of up to 10 mph (16 km/h) in the lab, even after clearing a 13-inch (33 cm) high hurdle. The MIT researchers estimate that their current version may eventually reach speeds of up to 30 mph (48 km/h).

It’s an impressive achievement, but Boston Dynamics’ WildCat is still the scariest free-running bot on the planet.

MRI sensor allows neuroscientists to map neural activity with molecular precision

May 4, 2014


Anne Trafton | MIT News Office

Launched in 2013, the national BRAIN Initiative aims to revolutionize our understanding of cognition by mapping the activity of every neuron in the human brain, revealing how brain circuits interact to create memories, learn new skills, and interpret the world around us.

Before that can happen, neuroscientists need new tools that will let them probe the brain more deeply and in greater detail, says Alan Jasanoff, an MIT associate professor of biological engineering. “There’s a general recognition that in order to understand the brain’s processes in comprehensive detail, we need ways to monitor neural function deep in the brain with spatial, temporal, and functional precision,” he says.

Jasanoff and colleagues have now taken a step toward that goal: They have established a technique that allows them to track neural communication in the brain over time, using magnetic resonance imaging (MRI) along with a specialized molecular sensor. This is the first time anyone has been able to map neural signals with high precision over large brain regions in living animals, offering a new window on brain function, says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research.

His team used this molecular imaging approach, described in the May 1 online edition of Science, to study the neurotransmitter dopamine in a region called the ventral striatum, which is involved in motivation, reward, and reinforcement of behavior. In future studies, Jasanoff plans to combine dopamine imaging with functional MRI techniques that measure overall brain activity to gain a better understanding of how dopamine levels influence neural circuitry.

“We want to be able to relate dopamine signaling to other neural processes that are going on,” Jasanoff says. “We can look at different types of stimuli and try to understand what dopamine is doing in different brain regions and relate it to other measures of brain function.”

Tracking dopamine

Dopamine is one of many neurotransmitters that help neurons to communicate with each other over short distances. Much of the brain’s dopamine is produced by a structure called the ventral tegmental area (VTA). This dopamine travels through the mesolimbic pathway to the ventral striatum, where it combines with sensory information from other parts of the brain to reinforce behavior and help the brain learn new tasks and motor functions. This circuit also plays a major role in addiction.

To track dopamine’s role in neural communication, the researchers used an MRI sensor they had previously designed, consisting of an iron-containing protein that acts as a weak magnet. When the sensor binds to dopamine, its magnetic interactions with the surrounding tissue weaken, which dims the tissue’s MRI signal. This allows the researchers to see where in the brain dopamine is being released. The researchers also developed an algorithm that lets them calculate the precise amount of dopamine present in each fraction of a cubic millimeter of the ventral striatum.
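The logic of the measurement can be sketched as follows: the sensor dims the MRI signal where dopamine binds, so the fractional signal drop in each voxel can be inverted into a concentration estimate. This is a minimal illustration of that inversion only, assuming a simple linear response; the calibration constant and function names are made up and do not represent the researchers' published algorithm.

```python
import numpy as np

# Illustrative sketch: map per-voxel MRI signal dimming back to an
# estimated dopamine level. The `sensitivity` constant (fractional signal
# loss per unit dopamine) is hypothetical.

def dopamine_map(signal, baseline, sensitivity=0.02):
    """Estimate dopamine (arbitrary units) in each voxel.

    signal, baseline: arrays of MRI intensities with and without
    dopamine release. Dimmer voxels imply more bound dopamine.
    """
    fractional_drop = (baseline - signal) / baseline
    # Clip noise-driven negative drops to zero before inverting.
    return np.clip(fractional_drop, 0.0, None) / sensitivity

baseline = np.array([100.0, 100.0, 100.0])
signal = np.array([100.0, 98.0, 90.0])   # the third voxel dimmed most
print(dopamine_map(signal, baseline))    # → [0. 1. 5.]
```

A real analysis would fit a calibrated binding curve per voxel rather than a single linear constant, but the direction of the mapping (more dimming, more dopamine) is the same.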

After delivering the MRI sensor to the ventral striatum of rats, Jasanoff’s team electrically stimulated the mesolimbic pathway and was able to detect exactly where in the ventral striatum dopamine was released. An area known as the nucleus accumbens core, known to be one of the main targets of dopamine from the VTA, showed the highest levels. The researchers also saw that some dopamine is released in neighboring regions such as the ventral pallidum, which regulates motivation and emotions, and parts of the thalamus, which relays sensory and motor signals in the brain.

Each dopamine stimulation lasted for 16 seconds and the researchers took an MRI image every eight seconds, allowing them to track how dopamine levels changed as the neurotransmitter was released from cells and then disappeared. “We could divide up the map into different regions of interest and determine dynamics separately for each of those regions,” Jasanoff says.
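The region-by-region bookkeeping Jasanoff describes amounts to grouping voxels by a region-of-interest label map and averaging each region's signal at every time point (here, one volume every eight seconds). A minimal sketch, assuming a 4-D image series and an integer ROI map whose shapes are illustrative:

```python
import numpy as np

# Sketch of per-region dynamics: average the signal within each labeled
# region at every time point, yielding one time course per region.

def roi_time_courses(volumes, labels):
    """volumes: (T, X, Y, Z) image series; labels: (X, Y, Z) ROI map.

    Returns {roi_id: 1-D array of mean signal over the T time points}.
    """
    return {
        roi: volumes[:, labels == roi].mean(axis=1)
        for roi in np.unique(labels)
        if roi != 0  # label 0 reserved for background voxels
    }
```

Applied to the dopamine maps, each returned time course would show the rise during the 16-second stimulation and the decay as the neurotransmitter is cleared, separately for regions such as the nucleus accumbens core or ventral pallidum.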

He and his colleagues plan to build on this work by expanding their studies to other parts of the brain, including the areas most affected by Parkinson’s disease, which is caused by the death of dopamine-generating cells. Jasanoff’s lab is also working on sensors to track other neurotransmitters, allowing them to study interactions between neurotransmitters during different tasks.

The paper’s lead author is postdoc Taekwan Lee. Technical assistant Lili Cai and postdocs Victor Lelyveld and Aviad Hai also contributed to the research, which was funded by the National Institutes of Health and the Defense Advanced Research Projects Agency.