RoboBrain: The World’s First Knowledge Engine For Robots

December 28, 2014

One of the most exciting changes influencing modern life is the ability to search and interact with information on a scale that has never been possible before. All this is thanks to a convergence of technologies that have resulted in services such as Google Now, Siri, Wikipedia and IBM’s Watson supercomputer.

This gives us answers to a wide range of questions on almost any topic simply by whispering a few words into a smart phone or typing a few characters into a laptop. Part of what makes this possible is that humans are good at coping with ambiguity. So the answer to a simple question such as “how to make cheese on toast” can result in very general instructions that an ordinary person can easily follow.

For robots, the challenge is quite different. These machines require detailed instructions even for the simplest task. For example, a robot asking a search engine “how to bring sweet tea from the kitchen” is unlikely to get the detail it needs to carry out the task since it requires all kinds of incidental knowledge such as the idea that cups can hold liquid (but not when held upside down), that water comes from taps and can be heated in a kettle or microwave, and so on.

The truth is that if robots are ever to get useful knowledge from search engines, these databases will have to contain a much more detailed description of every task that they might need to carry out.

Enter Ashutosh Saxena at Stanford University in Palo Alto and a number of pals, who have set themselves the task of building such a knowledge engine for robots.

These guys have already begun creating a kind of Google for robots that can be freely accessed by any device wishing to carry out a task. At the same time, the database gathers new information about these tasks as robots perform them, thereby learning as it goes.  They call their new knowledge engine RoboBrain.

The team have taken on a number of challenges in designing RoboBrain. For a start, robots have many different types of sensors and designs so the information has to be stored in a way that is useful for any kind of machine. The knowledge engine should be able to respond to a variety of different types of questions posed by robots in different ways. And it should be able to gather knowledge from different sources, such as the World Wide Web and by crawling existing knowledge bases such as WordNet, ImageNet, Freebase and OpenCyc.

What’s more, Saxena and co want RoboBrain to be a collaborative effort that links up with existing services. To that end, the team has already partnered with services such as Tell Me Dave, a start-up aiming to allow robots to understand natural language instructions, and PlanIt, a way for robots to plan paths using crowdsourced information.

“As more and more researchers contribute knowledge to RoboBrain, it will not only make their robots perform better but we also believe this will be beneficial for the robotics community at large,” say Saxena and co. They have set up a website called RoboBrain.me to act as a gateway and to promote the idea.

Setting up a knowledge engine of this kind is no easy task. Saxena and co have approached it as a problem of network theory in which the knowledge is represented as a directed graph. The nodes in this graph can be a variety of different things such as an image, text, video, haptic data or a learned concept, such as a “container”.

RoboBrain then accepts new information in the form of a set of edges that link a subset of nodes together. For example, the idea that a “sitting human can use a mug” might link the nodes for mug, cup and sitting human with concepts such as “being able to use”.

Any robot that queries the database for this term, or something like it, can then download the set of edges and nodes that represent it.
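The graph structure described above can be illustrated with a minimal Python sketch. The class, node names, and edge labels here are hypothetical, not RoboBrain’s actual schema; the point is just that knowledge arrives as labeled edges between typed nodes, and a query returns the edges touching a node.

```python
# Minimal sketch of a RoboBrain-style knowledge graph (names are illustrative,
# not the actual RoboBrain schema): typed nodes connected by labeled edges.

class KnowledgeGraph:
    def __init__(self):
        self.nodes = {}   # node id -> node type ("concept", "image", ...)
        self.edges = []   # (source, label, target) triples

    def add_node(self, node_id, node_type):
        self.nodes[node_id] = node_type

    def add_edge(self, source, label, target):
        # New knowledge arrives as a set of edges linking existing nodes.
        self.edges.append((source, label, target))

    def query(self, node_id):
        # A robot querying a term downloads the edges touching that node.
        return [e for e in self.edges if node_id in (e[0], e[2])]

kg = KnowledgeGraph()
kg.add_node("mug", "concept")
kg.add_node("sitting_human", "concept")
kg.add_edge("sitting_human", "can_use", "mug")

print(kg.query("mug"))  # [('sitting_human', 'can_use', 'mug')]
```

A real system would attach images, video, and learned models to the nodes, but the download-a-subgraph query pattern is the same.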

This is more than just a neat idea. Saxena and co have already begun to build the database and use it to allow robots to plan certain actions, like navigating indoors or moving cooking ingredients around.

They show how one of their own robots uses RoboBrain to move an egg carton to the other end of a table. Since eggs are fragile, they have to be handled carefully, something that the robot can learn by querying RoboBrain.

An important part of the project is to apply knowledge learned in one situation to other situations. So the same technique for handling eggs could also be used for handling other fragile objects, such as light bulbs.

The team have big plans for the future. For instance, they would like to expand the knowledge base to include even larger knowledge sources, such as online videos. A robot that could query online “how-to” videos could then learn how to perform a wide variety of household tasks.

That’s interesting work with important potential to change the way that robots learn on a grand scale. Online knowledge bases have had a remarkable impact on the way humans think about the world around them and how they interact with it.

It is certainly not beyond belief that RoboBrain might have a similar impact for our electronic cousins.

New pulsed-magnetic method uses nanorods to deliver drugs deeply in the body

December 15, 2014

A new technique to magnetically deliver drug-carrying nanorods to deep targets in the body using fast-pulsed magnetic fields could transform the way deep-tissue tumors and other diseases are treated, say researchers at the University of Maryland (UMD) and Bethesda-based Weinberg Medical Physics LLC (WMP).

Instead of surgery or systemically administered treatments (such as chemotherapy), the use of magnetic nanoparticles as drug carriers could potentially allow clinicians to use external magnets to focus therapy to the precise locations of a disease within a patient, such as inoperable deep tumors or sections of the brain that have been damaged by trauma, vascular, or degenerative diseases.

So for years, researchers have worked with magnetic nanoparticles loaded with drugs or genes to develop noninvasive techniques to direct therapies and diagnostics to targets in the body.

However, due to the physics of magnetic forces, unaided particles could only be attracted to a magnet, not concentrated at points distant from the magnet face. So in clinical trials, magnets held outside the body have only been able to concentrate treatment on targets at or just below the skin surface, the researchers say.

“What we have shown experimentally is that by exploiting the physics of nanorods we can use fast-pulsed magnetic fields to focus the particles to a deep target between the magnets,” said UMD Institute for Systems Research Professor Benjamin Shapiro.

Pulsed magnetic fields 

These pulsed magnetic fields allowed the team to reverse the usual behavior of magnetic nanoparticles. Instead of a magnet attracting the particles, they showed that an initial magnetic pulse can orient the rod-shaped particles without pulling them, and then a subsequent pulse can push the particles before the particles can reorient. By repeating the pulses in sequence, the particles were focused to locations between the electromagnets.
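The orient-then-push cycle can be caricatured in a few lines of Python. This is a deliberately crude one-dimensional toy (the step size and starting positions are invented, and no real magnetostatics is computed); it captures only the qualitative claim that each pulse pair nudges a rod toward the focal point between the magnets, wherever it starts.

```python
# Crude 1-D toy of the pulse sequence: magnets sit on either side of a target
# at x = 0. The orient pulse aligns a rod without moving it; the push pulse
# then translates the still-aligned rod toward the target before the rod can
# reorient. The restoring step below stands in for the real magnetic forces.

def run_pulse_cycles(x0, cycles=50, gain=0.1):
    x = x0
    for _ in range(cycles):
        # Orient pulse: sets the rod's orientation, no net translation.
        # Push pulse: net displacement toward the focal point at x = 0.
        x -= gain * x
    return x

# Rods starting on either side of the target both converge toward x = 0,
# which a single static magnet (pure attraction) cannot achieve.
print(run_pulse_cycles(-0.8))
print(run_pulse_cycles(0.5))
```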

The study, published last week in Nano Letters, shows that using this method, ferromagnetic nanorods carrying drugs or molecules could be concentrated to arbitrary deep locations between magnets.

https://www.youtube.com/watch?v=q-hC3kaDdoo

“This technology could enable a new therapeutic modality that combines the spatial precision of traditional image-guided radiation with the biochemical specificity of molecular medicine,” said Dr. John R. Adler, Vice President and Chief of New Clinical Applications for Varian Medical Systems.

The researchers are now working to demonstrate this method in vivo to prove its therapeutic potential and have launched IronFocus Medical, Inc., a startup company established to commercialize their invention.

The fast magnetic fields were developed with funding from Small Business Innovation Research grants awarded by the National Cancer Institute; National Heart, Lung and Blood Institute; and the National Institute for Neurological Disorders and Stroke. Funding to develop the correct sequence of magnetic pulses was provided by the National Science Foundation.

Baby steps towards molecular robots

December 13, 2014

A walking molecule, so small that it cannot be observed directly with a microscope, has been recorded taking its first nanometre-sized steps.

It’s the first time that anyone has shown in real time that such a tiny object – termed a ‘small molecule walker’ – has taken a series of steps. The breakthrough, made by Oxford University chemists, is a significant milestone on the long road towards developing ‘nanorobots’.

‘In the future we can imagine tiny machines that could fetch and carry cargo the size of individual molecules, which can be used as building blocks of more complicated molecular machines; imagine tiny tweezers operating inside cells,’ said Dr Gokce Su Pulcu of Oxford University’s Department of Chemistry. ‘The ultimate goal is to use molecular walkers to form nanotransport networks,’ she says.

However, before nanorobots can run they first have to walk. As Su explains, proving this is no easy task.

For years now researchers have shown that moving machines and walkers can be built out of DNA. But, relatively speaking, DNA is much larger than small molecule walkers and DNA machines only work in water.

The big problem is that microscopes can only detect moving objects down to the level of 10–20 nanometres. This means that small molecule walkers, whose strides are 1 nanometre long, can only be detected after taking around 10 or 15 steps. It would therefore be impossible to tell with a microscope whether a walker had ‘jumped’ or ‘floated’ to a new location rather than taken all the intermediate steps.

As they report in this week’s Nature Nanotechnology, Su and her colleagues at Oxford’s Bayley Group took a new approach to detecting a walker’s every step. Their solution? To build a walker from an arsenic-containing molecule and detect its motion on a track built inside a nanopore.

Nanopores are already the foundation of pioneering DNA sequencing technology developed by the Bayley Group and spinout company Oxford Nanopore Technologies. Here, tiny protein pores detect molecules passing through them. Each base disrupts an electric current passed through the nanopore by a different amount so that the DNA base ‘letters’ (A, C, G or T) can be read.

In this new research, they used a nanopore containing a track formed of five ‘footholds’ to detect how a walker was moving across it.

‘We can’t ‘see’ the walker moving, but by mapping changes in the ionic current flowing through the pore as the molecule moves from foothold to foothold we are able to chart how it is stepping from one to the other and back again,’ Su explains.
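The mapping Su describes can be illustrated with a small decoding sketch. The current levels below are invented for illustration; the real experiment maps measured ionic-current blockade levels to the five footholds on the track.

```python
# Illustrative decoding of walker position from nanopore current readings.
# Characteristic currents per foothold are hypothetical values in picoamps.

FOOTHOLD_CURRENT = {1: 52.0, 2: 48.5, 3: 45.0, 4: 41.5, 5: 38.0}

def nearest_foothold(current_pa):
    # Pick the foothold whose characteristic current is closest to the reading.
    return min(FOOTHOLD_CURRENT, key=lambda f: abs(FOOTHOLD_CURRENT[f] - current_pa))

def decode_steps(trace):
    # Collapse a noisy current trace into the sequence of footholds visited.
    path = []
    for reading in trace:
        foothold = nearest_foothold(reading)
        if not path or path[-1] != foothold:
            path.append(foothold)
    return path

trace = [52.1, 51.8, 48.3, 48.6, 45.2, 48.4]  # walker steps 1 -> 2 -> 3 -> 2
print(decode_steps(trace))  # [1, 2, 3, 2]
```

Because every intermediate foothold shows up in the trace, stepping can be distinguished from a walker that ‘jumped’ or ‘floated’ to a new location.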

To ensure that the walker doesn’t float away, they designed it to have ‘feet’ that stick to the track by making and breaking chemical bonds. Su says: ‘It’s a bit like stepping on a carpet with glue under your shoes: with each step the walker’s foot sticks and then unsticks so that it can move to the next foothold.’ This approach could make it possible to design a machine that can walk on a variety of surfaces.

It’s quite an achievement for such a tiny machine but, as Su is the first to admit, there are many more challenges to be overcome before programmable molecular machines are a reality.

‘At the moment we don’t have much control over which direction the walker moves in; it moves pretty randomly,’ Su tells me. ‘The protein track is a bit like a mountain slope; there’s a direction that’s easier to walk in so walkers will tend to go this way. We hope to be able to harness this preference to build tracks that direct a walker where we want it to go.’

The next challenge after that will be for a walker to make itself useful by, for instance, carrying a cargo: there’s already space for it to carry a molecule on its ‘head’ that it could then take to a desired location to accomplish a task.

Su comments: ‘We should be able to engineer a surface where we can control the movement of these walkers and observe them under a microscope through the way they interact with a very thin fluorescent layer. This would make it possible to design chips with different stations with walkers ferrying cargo between these stations; so the beginnings of a nanotransport system.’

These are the first tentative baby steps of a new technology, but they promise that there could be much bigger strides to come.

http://phys.org/news/2014-12-baby-molecular-robots.html#jCp

Software equal to or better than humans at cataloging published science data

December 10, 2014

Computer-generated genus-level diversity curves (credit: Shanan E. Peters et al./PLOS ONE)

A computer system called PaleoDeepDive has equaled (or bested in some categories) scientists at the complex task of extracting data from scientific publications and placing it in a database that catalogs the results of tens of thousands of individual studies.

The development, described in the current issue of PLOS ONE, marks a milestone in the quest to rapidly and precisely summarize, collate, and index the vast output of scientists around the globe, says first author Shanan Peters, a professor of geoscience at UW-Madison.

Peters and colleagues set up the face-off between their new machine reading system and the human scientists who had manually entered data into the Paleobiology Database. This repository, compiled by hundreds of researchers, is the destination for data from paleontology studies funded by the National Science Foundation and other agencies internationally.

A probabilistic approach

The knowledge produced by paleontologists is fragmented into hundreds of thousands of publications. Yet many research questions require what Peters calls a “synthetic approach”: for example, how many species were on the planet at any given time?

PaleoDeepDive mimics the human activities needed to assemble the Paleobiology Database. “We extracted the same data from the same documents and put it into the exact same structure as the human researchers, allowing us to rigorously evaluate the quality of our system, and the humans,” Peters says.

Instead of trying to divine the single correct meaning, the tactic was to “to look at the entire problem of extraction as a probabilistic problem,” says Christopher Ré, who guided the software development while a UW professor of computer sciences.

Computers often have trouble deciphering even simple-sounding statements, Ré says. Ré imagines a study containing the terms “Tyrannosaurus rex” and “Alberta, Canada.” Is Alberta where the fossil was found, or where it is stored? “We take a more relaxed approach: There is some chance that these two are related in this manner, and some chance they are related in that manner.”
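Ré’s “relaxed approach” can be sketched as follows. This is only an illustration of the probabilistic bookkeeping, not the actual DeepDive factor-graph machinery, and the probabilities and relation names are made up.

```python
# Sketch of probabilistic extraction: instead of committing to one reading of
# an ambiguous statement, every candidate relation is stored with a probability.

candidates = [
    # (entity, location, relation, probability) -- values are hypothetical
    ("Tyrannosaurus rex", "Alberta, Canada", "found_in",  0.7),
    ("Tyrannosaurus rex", "Alberta, Canada", "stored_in", 0.3),
]

def best_reading(entity, location):
    # Query-time resolution: pick the highest-probability relation. The
    # alternatives stay in the database and can be revised as evidence grows,
    # which is what lets the system improve "on the fly" as data is added.
    matches = [c for c in candidates if c[0] == entity and c[1] == location]
    return max(matches, key=lambda c: c[3])

print(best_reading("Tyrannosaurus rex", "Alberta, Canada"))
# ('Tyrannosaurus rex', 'Alberta, Canada', 'found_in', 0.7)
```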

Schematic representation of the PaleoDeepDive workflow (credit: Shanan E. Peters et al./PLOS ONE)

In these large-data tasks, PaleoDeepDive has a major advantage, Peters says. “Information that was manually entered into the Paleobiology Database by humans cannot be assessed or enhanced without going back to the library and re-examining original documents. Our machine system, on the other hand, can extend and improve results essentially on the fly as new information is added.”

Further advantages can result from improvements in the computer tools. “As we get more feedback and data, it will do a better job across the board.”

The project required a million hours of computer time. It also required access to tens of thousands of articles, says Jacquelyn Crinion, assistant director of licensing and acquisitions services at the UW-Madison General Library System. And the download volume threatened logjams in document delivery. Eventually, Elsevier gave the UW-Madison team broad access to 10,000 downloads per week.

As text- and data-mining takes off, Crinion says the library system and publishers will adapt. “The challenge for all of us is to provide specialized services for researchers while continuing to meet the core needs of the vast majority of our customers.”

The Paleobiology Database has already generated hundreds of studies about the history of life, Peters says. “Ultimately, we hope to have the ability to create a computer system that can do almost immediately what many geologists and paleontologists try to do on a smaller scale over a lifetime: read a bunch of papers, arrange a bunch of facts, and relate them to one another in order to address big questions.”

Abstract of A Machine Reading System for Assembling Synthetic Paleontological Databases

Many aspects of macroevolutionary theory and our understanding of biotic responses to global environmental change derive from literature-based compilations of paleontological data. Existing manually assembled databases are, however, incomplete and difficult to assess and enhance with new data types. Here, we develop and validate the quality of a machine reading system, PaleoDeepDive, that automatically locates and extracts data from heterogeneous text, tables, and figures in publications. PaleoDeepDive performs comparably to humans in several complex data extraction and inference tasks and generates congruent synthetic results that describe the geological history of taxonomic diversity and genus-level rates of origination and extinction. Unlike traditional databases, PaleoDeepDive produces a probabilistic database that systematically improves as information is added. We show that the system can readily accommodate sophisticated data types, such as morphological data in biological illustrations and associated textual descriptions. Our machine reading approach to scientific data integration and synthesis brings within reach many questions that are currently underdetermined and does so in ways that may stimulate entirely new modes of inquiry.

Australian researchers set new world record in solar-energy efficiency

December 8, 2014

University of New South Wales (UNSW) solar researchers have converted more than 40% of the sunlight hitting a solar system into electricity, “the highest efficiency ever reported for sunlight conversion into electricity,” UNSW Scientia Professor and Director of the Australian Centre for Advanced Photovoltaics (ACAP) Professor Martin Green said.

“We used commercial solar cells, but in a new way, so these efficiency improvements are readily accessible to the solar industry,” added Mark Keevers, PhD, the UNSW solar scientist who managed the project.

The researchers used concentrated photovoltaic (CPV) technology, which uses optics such as lenses or curved mirrors to concentrate a large amount of sunlight in solar power towers. A key part of the prototype’s design is the use of a custom optical bandpass filter to capture sunlight that is normally wasted by commercial solar cells on towers and convert it to electricity at a higher efficiency than the solar cells themselves ever could. Such filters reflect particular wavelengths of light while transmitting others.

The 40% efficiency milestone is the latest in a line of achievements by UNSW solar researchers spanning four decades. These include the first photovoltaic system to convert sunlight to electricity with more than 20% efficiency in 1989; the new result doubles that performance.

The PS10 Solar Power Plant in Spain concentrates sunlight from a field of heliostats onto a central solar power tower, generating 11 megawatts (credit: Abengoa Solar, S.A.)
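A back-of-the-envelope calculation shows why routing otherwise-wasted light to a better converter raises system efficiency. All numbers here are hypothetical, chosen only to illustrate the arithmetic, not the UNSW prototype’s actual figures.

```python
# Illustrative arithmetic for a bandpass-filter CPV system. The spectrum is
# split into two bands; the filter reflects the poorly-converted band to a
# second converter that handles it better. All values are hypothetical.

bands = {
    # band: (fraction of incident power, bare-cell efficiency, filtered-path efficiency)
    "well_used": (0.70, 0.45, 0.45),  # transmitted straight to the cells
    "wasted":    (0.30, 0.05, 0.35),  # reflected by the filter to a better converter
}

def system_efficiency(use_filter):
    total = 0.0
    for frac, cell_eff, filtered_eff in bands.values():
        total += frac * (filtered_eff if use_filter else cell_eff)
    return total

print(round(system_efficiency(False), 3))  # 0.33 without the filter
print(round(system_efficiency(True), 3))   # 0.42 with the filter
```

The gain comes entirely from the “wasted” band: the cells themselves are unchanged, which is why the approach is accessible to the existing solar industry.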

“The new results are based on the use of focused sunlight, and are particularly relevant to photovoltaic power towers being developed in Australia,” Professor Green said. Compact power towers are being developed by Australian company RayGen Resources, which provided design and technical support for the high-efficiency prototype. Another partner in the research was Spectrolab, a US-based company that provided some of the cells used in the project.

The 40% efficiency achievement is outlined in a paper expected to be published soon by the journal Progress in Photovoltaics. It will also be presented at the Australian PV Institute’s Asia-Pacific Solar Research Conference, which begins at UNSW today (Monday December 8). The result was achieved in outdoor tests in Sydney and independently confirmed by the National Renewable Energy Laboratory (NREL) at their outdoor test facility in the U.S.

The work was funded by the Australian Renewable Energy Agency (ARENA) and supported by the Australia–US Institute for Advanced Photovoltaics (AUSIAPV).

http://www.kurzweilai.net/australian-researchers-set-new-world-record-in-solar-energy-efficiency

Barrier-breaking drug may lead to spinal cord injury treatments

December 7, 2014

The results demonstrate how fundamental laboratory research may lead to new therapies.

“We’re very excited at the possibility that millions of people could, one day, regain movements lost during spinal cord injuries,” said Jerry Silver, Ph.D., professor of neurosciences, Case Western Reserve University School of Medicine, Cleveland, and a senior investigator of the study published in Nature.

Every year, tens of thousands of people are paralyzed by spinal cord injuries. The injuries crush and sever the long axons of spinal cord nerve cells, blocking communication between the brain and the body and resulting in paralysis below the injury.

On a hunch, Bradley Lang, Ph.D., the lead author of the study and a graduate student in Dr. Silver’s lab, came up with the idea of designing a drug that would help axons regenerate without having to touch the healing spinal cord, as current treatments may require.

“Originally this was just a side project we brainstormed in the lab,” said Dr. Lang.

After spinal cord injury, axons try to cross the injury site and reconnect with other cells but are stymied by scarring that forms after the injury. Previous studies suggested their movements are blocked when the protein tyrosine phosphatase sigma (PTP sigma), an enzyme found in axons, interacts with chondroitin sulfate proteoglycans, a class of sugary proteins that fill the scars.

Dr. Lang and his colleagues designed a drug called ISP to block the enzyme, and designed it so that it could enter the brain and spinal cord. Injections of the drug under the skin of paralyzed rats near the injury site partially restored axon growth and improved movement and bladder function.

“There are currently no drug therapies available that improve the very limited natural recovery from spinal cord injuries that patients experience,” said Lyn Jakeman, Ph.D., a program director at the NIH’s National Institute of Neurological Disorders and Stroke, Bethesda, MD. “This is a great step towards identifying a novel agent for helping people recover.”

Initially, the goal of the study was to understand how interactions between PTP sigma and chondroitin sulfate proteoglycans prevent axon growth. Drugs were designed to mimic the shape of a critical part of PTP sigma, called the wedge. Different designs were tested on neurons grown in petri dishes alongside impenetrable barriers of proteoglycans. Treatment with ISP freed axon growth.

“It was amazing. The axons kept growing and growing,” said Dr. Silver.

Next the researchers tested the potential of the drug on a rat model of spinal cord injury. For seven weeks they injected rats with the drug or a placebo near the site of injury. A few weeks later the rats that received the drug showed improvements in walking and urinating while the placebo treatments had no effect. The results suggested the drug passed into the brain and spinal cord.

When the researchers looked at the spinal cords under a microscope they found that the drug induced sprouting of axons that use the neurochemical serotonin to communicate. The sprouting axons were seen below the injury site. Treating some rats with a blocker of serotonin communication partially reversed the beneficial effects of ISP injections, suggesting the newly sprouting axons helped the rats recover.

The ISP drug did not cause spinal cord axons known to control movements to cross the scar and reconnect with brain neurons above the injury site. Dr. Silver and his colleagues think this means the ISP-induced sprouting helped the rats recover by increasing the signal sent by the few remaining intact axons.

“This is very promising. We now have an agent that may work alone or in combination with other treatments to improve the lives of many,” said Dr. Silver. He and his colleagues are seeking to test the ISP drug in preclinical trials.


Story Source:

The above story is based on materials provided by NIH/National Institute of Neurological Disorders and Stroke. Note: Materials may be edited for content and length.

http://www.sciencedaily.com/releases/2014/12/141203142540.htm

U.S. spy agency predicts a very transhuman future by 2030

December 6, 2014

The National Intelligence Council has just released its much anticipated forecasting report, a 140-page document that outlines major trends and technological developments we should expect in the next 20 years. Among their many predictions, the NIC foresees the end of U.S. global dominance, the rising power of individuals against states, a growing middle class that will increasingly challenge governments, and ongoing shortages of water, food and energy. But they also envision a future in which humans have been significantly modified by their technologies, which will herald the dawn of the transhuman era.

This work brings to mind the National Science Foundation’s groundbreaking 2003 report, Converging Technologies for Improving Human Performance — a relatively early attempt to understand and predict how advanced biotechnologies would impact the human experience. The NIC’s new report, Global Trends 2030: Alternative Worlds, follows in the same tradition — namely one that doesn’t ignore the potential for enhancement technologies.

In the new report, the NIC describes how implants, prosthetics, and powered exoskeletons will become regular fixtures of human life, which could result in substantial improvements to innate human capacities. By 2030, the authors predict, prosthetics should reach the point where they’re just as good as — or even better than — organic limbs. By this stage, the military will increasingly rely on exoskeletons to help soldiers carry heavy loads. Servicemen will also be administered psychostimulants to help them remain active for longer periods.

Many of these same technologies will also be used by the elderly, both as a way to maintain more youthful levels of strength and energy, and as a part of their life extension strategies.

Brain implants will also allow for advanced neural interface devices, which will bridge the gap between minds and machines. These technologies will allow for brain-controlled prosthetics, some of which may be able to provide “superhuman” abilities like enhanced strength, speed — and completely new functionality altogether.

Other mods will include retinal eye implants to enable night vision and other previously inaccessible light spectrums. Advanced neuropharmaceuticals will allow for vastly improved working memory, attention, and speed of thought.

“Augmented reality systems can provide enhanced experiences of real-world situations,” the report notes. “Combined with advances in robotics, avatars could provide feedback in the form of sensors providing touch and smell as well as aural and visual information to the operator.”

But as the report notes, many of these technologies will only be available to those who are able to afford them. The authors warn that it could result in a two-tiered society comprising enhanced and nonenhanced persons, a dynamic that would likely require government oversight and regulation.

Smartly, the report also cautions that these technologies will need to be secure. Developers will be increasingly challenged to prevent hackers from interfering with these devices.

Lastly, other technologies and scientific disciplines will have to keep pace to make much of this work. For example, longer-lasting batteries will improve the practicality of exoskeletons. Progress in the neurosciences will be critical for the development of future brain-machine interfaces. And advances in flexible biocompatible electronics will enable improved integration with cybernetic implants.

The entire report can be read here.

http://io9.com/5967896/us-spy-agency-predicts-a-very-transhuman-future-by-2030?utm_content=buffer2cc20&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer

Robotic-leg prosthetic allows amputees to walk normally

December 5, 2014

Wearers of a new robotic leg can walk on a moving treadmill almost as fast as an able-bodied person, said inventor Robert Gregg, PhD, a University of Texas at Dallas professor, who applied robot control theory to enable powered prosthetics to dynamically respond to the wearer’s environment and help amputees walk.

“We borrowed from robot control theory to create a simple, effective new way to analyze the human gait cycle,” said Dr. Robert Gregg, a faculty member in the Erik Jonsson School of Engineering and Computer Science and lead author of the paper. “Our approach resulted in a method for controlling powered prostheses for amputees to help them move in a more stable, natural way than current prostheses.”

Humanoid robots can walk, run, jump and climb stairs autonomously, but modern prosthetics limit similar actions in humans. While prosthetics have been made lighter and more flexible, they fail to mimic the power generated from human muscles in able-bodied individuals.

In contrast, powered prostheses, or robotic legs, have motors to generate force but lack the intelligence to respond stably to disturbances or changing terrain.

Simplifying body motion to one variable

Gregg, an assistant professor of bioengineering and mechanical engineering, proposed a new way to view and study human walking: measure a single variable that represents the motion of the body, the center of pressure on the foot, which moves from heel to toe through the gait cycle.

“The gait cycle is a complicated phenomenon with lots of joints and muscles working together,” Gregg said. “We used advanced mathematical theorems to simplify the entire gait cycle down to one variable. If you measure that variable, you know exactly where you are in the gait cycle and exactly what you should be doing.”
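The single-variable idea can be sketched in a few lines. Everything numeric here is hypothetical (the foot’s center-of-pressure travel, the knee-angle keyframes); the point is only that one measured quantity indexes the whole gait cycle, from which the controller can look up what each joint should be doing.

```python
# Sketch of a phase variable derived from the center of pressure (all
# parameter values are hypothetical, not the actual controller's).

HEEL, TOE = 0.0, 0.25  # center-of-pressure travel along the foot, in meters

def gait_phase(cop):
    # Map center of pressure to a phase in [0, 1] over the stance period.
    cop = min(max(cop, HEEL), TOE)
    return (cop - HEEL) / (TOE - HEEL)

def knee_target(phase):
    # Toy lookup: desired knee angle (degrees) as a function of gait phase,
    # linearly interpolated between keyframes. A real controller enforces
    # full joint trajectories as functions of the phase variable.
    keyframes = [(0.0, 5.0), (0.5, 15.0), (1.0, 35.0)]
    for (p0, a0), (p1, a1) in zip(keyframes, keyframes[1:]):
        if p0 <= phase <= p1:
            t = (phase - p0) / (p1 - p0)
            return a0 + t * (a1 - a0)
    return keyframes[-1][1]

phase = gait_phase(0.125)  # center of pressure at mid-foot
print(phase)               # 0.5
print(knee_target(phase))  # 15.0
```

Because the phase comes from the wearer’s own foot loading rather than a clock, the controller tracks the wearer’s pace automatically, which is consistent with the treadmill result described below.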

Gregg first tested his theory on computer models, and then with three above-knee amputee participants at the Rehabilitation Institute of Chicago, an affiliate of Northwestern University. He implemented his algorithms with sensors measuring the center of pressure on a powered prosthesis.

With only the user’s height, weight and residual-thigh dimensions as inputs to the algorithm, the prosthesis was configured for each subject in about 15 minutes. Subjects then walked on the ground and on a treadmill moving at increasing speeds.

“We did not tell the prosthesis that the treadmill speed was increasing. The prosthesis responded naturally just as the biological leg would do,” Gregg said.

The participants were able to move at speeds of more than 1 meter per second; the typical walking speed of fully able-bodied people is about 1.3 meters per second, Gregg said. The participants also reported exerting less energy than with their traditional prostheses.

Gregg said current powered prosthetic devices require a team of physical rehabilitation specialists spending significant amounts of time tuning hundreds of knobs and training each powered leg to the individual wearer. “Our approach unified multiple modes of operation into one and resulted in technology that could help people in the future,” he said. “That and the feedback from participants were very rewarding.”

Gregg said the next step in the research will be to compare results of experiments with robotic legs using both the time paradigm and center of pressure paradigm.

Researchers from the Rehabilitation Institute of Chicago, Northwestern University and the University of New Brunswick were also involved in the study.

The research is available online and in an upcoming print issue of IEEE Transactions on Robotics. The work was funded by the United States Army Medical Research Acquisition Activity, the Burroughs Wellcome Fund, and the National Institutes of Health through the National Institute of Child Health and Human Development.

https://www.youtube.com/watch?v=sl1IXs0j4Ww

Abstract of Virtual Constraint Control of a Powered Prosthetic Leg: From Simulation to Experiments With Transfemoral Amputees

Recent powered (or robotic) prosthetic legs independently control different joints and time periods of the gait cycle, resulting in control parameters and switching rules that can be difficult to tune by clinicians. This challenge might be addressed by a unifying control model used by recent bipedal robots, in which virtual constraints define joint patterns as functions of a monotonic variable that continuously represents the gait cycle phase. In the first application of virtual constraints to amputee locomotion, this paper derives exact and approximate control laws for a partial feedback linearization to enforce virtual constraints on a prosthetic leg. We then encode a human-inspired invariance property called effective shape into virtual constraints for the stance period. After simulating the robustness of the partial feedback linearization to clinically meaningful conditions, we experimentally implement this control strategy on a powered transfemoral leg. We report the results of three amputee subjects walking overground and at variable cadences on a treadmill, demonstrating the clinical viability of this novel control approach.

http://www.kurzweilai.net/robotic-leg-prosthetic-allows-amputees-to-walk-normally

Tool to edit DNA revolutionizing research in Boston area

December 3, 2014

George Church, a genetics professor at Harvard Medical School

It is a fascinating quirk of nature: Simple bacteria have an immune system with a memory, which allows them to destroy invading viruses they have encountered in the past.

The phenomenon is more than just a scientific curiosity. In just two years, scientists have discovered how to repurpose the simple virus-shredding technique used by bacteria in more complicated creatures, a feat that is now revolutionizing research across the Boston area and beyond.

Using the procedure, which allows them to slice the genome with great precision — to edit it, in effect — scientists can cut and paste DNA synthesized in the laboratory to create animal versions of human cancers or blood cells resistant to HIV.

Local researchers have founded two companies aimed at turning the technique into a therapy to fix errant genes that cause a range of illnesses. Meanwhile, almost every week the method seems to be used for a new purpose: MIT researchers have used it to rapidly engineer mice with liver tumors; Harvard researchers have used it to disrupt genes and lower cholesterol levels in mice; and Children’s Hospital Boston scientists have, using human stem cells, corrected gene mutations that cause a rare blood disorder.

“I’m a technology junkie — an early adopter of all kinds of technology, not just biology, and it’s very rare to see something like this,” said George Church, a professor of genetics at Harvard Medical School whose laboratory did early work showing the technique could be used to “edit” genes in human cells. “As soon as we started playing with it in the lab, it became evident that not only was it something that worked well and worked efficiently, it was super easy to do.”

The technique, called CRISPR — short for “clustered regularly interspaced short palindromic repeats” — was developed in 2012 by scientists in California and in Europe who already are considered shoo-ins for a Nobel Prize. But the rapid progress and wide adoption of the technology have depended critically on the work of local leaders, who have pushed the process forward by proving that it worked in cells of all types, ranging from zebrafish embryos to mouse and human cells.


The technique has become so ubiquitous that it has entered the casual vernacular of science as a verb, with people talking about “CRISPRing” genes they want to tweak or delete.

“It was cool that the bacteria had invented a way for doing this a billion years ago, but it was not at all clear that would be something you could reengineer for the human genome and the mouse genome. There were a lot of reasons it might not work,” said Eric Lander, founding director of the Broad Institute in Cambridge. “It’s extraordinarily powerful as a research tool. Less than 24 months . . . from an idea that might or might not work, to an idea every graduate student knows about and it’s a routine thing.”

For several decades, biologists have known about spots in the DNA of bacteria where short sequences repeat that seemed to serve no purpose. During the past decade, it became clear that this assemblage of genetic junk was actually useful, providing a simple way for bacteria to remember and slash an invading virus with molecular scissors.

Jennifer Doudna of the University of California Berkeley and Emmanuelle Charpentier, who now works at the Helmholtz Centre for Infection Research in Germany, worked to show the technology could be customized to snip DNA strands at specific spots in a laboratory dish.
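The targeting rule behind that customization is simple enough to sketch. The following toy (not code from any of the researchers) uses the standard, well-documented Cas9 rule: the guide RNA matches a stretch of DNA that must sit directly upstream of an “NGG” PAM motif, and the cut lands about 3 bases before the PAM. The DNA string and guide below are invented for illustration, and the guide is shortened to 16 letters for readability (real guides are about 20).

```python
import re

def find_cut_sites(dna: str, guide: str) -> list:
    """Return positions where Cas9 guided by `guide` would cut `dna`.

    A match requires the guide sequence immediately followed by an
    NGG PAM; Cas9 typically cleaves ~3 bp upstream of the PAM.
    """
    sites = []
    for m in re.finditer(re.escape(guide) + "(?=[ACGT]GG)", dna):
        sites.append(m.end() - 3)
    return sites

dna = "TTTGACGTTACCGATCGGATCGATATGGCTT"
guide = "ACCGATCGGATCGATA"  # hypothetical 16-mer guide
cut_positions = find_cut_sites(dna, guide)  # one site found in this toy DNA
```

The same guide sequence with no adjacent PAM produces no cut, which is what makes the system programmable: researchers choose the guide, and the PAM requirement constrains where in the genome it can act.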

“When I first saw that paper, my thought was: If this works, it will really change the game again,” said Dr. J. Keith Joung, associate chief of pathology for research at Massachusetts General Hospital. The question he and others had was whether it would work outside of bacteria.

Within months, Church and Feng Zhang at the Broad Institute had begun to show the power of the technique, using it to cut genes in mouse and human cells. Joung showed that it could work in a living organism, engineering the genomes of zebrafish embryos.

A Cambridge-based company, Editas Medicine, was founded to work toward developing a therapy based on gene-editing. Local scientists also helped found a company called CRISPR Therapeutics with similar aims.

Meanwhile, academic scientists in a wide range of fields have adopted the simple-to-use technique, each using it in service of their own interests.

Cancer researchers, for example, have found that the tool can be used to rapidly create animals with multiple mutations that drive cancers, which might have taken months or years to engineer using older techniques.

“The method is so powerful that virtually every project in my lab is now using this technique,” said Tyler Jacks, director of the Koch Institute for Integrative Cancer Research at the Massachusetts Institute of Technology.

At the Harvard Stem Cell Institute, scientists working on blood stem cells showed earlier this month that it was possible to edit blood-forming stem cells. They were able to delete a gene called CCR5 that the HIV virus needs to infect blood cells.

Derrick Rossi, an associate professor of stem cell and regenerative biology at Harvard who co-led the blood stem cell work, said that previous gene-editing techniques had not seemed versatile or efficient enough to have significant potential for human patients.

“Data with those technologies never suggested to me it was going to be able to reach the type of efficacy that would be clinically translatable, and that’s actually why I got so excited and that’s what drew me to CRISPR,” Rossi said.

Daniel Anderson, a professor of applied biology at MIT’s Koch Institute, showed earlier this year that CRISPR could be used to correct a genetic liver disease in a mouse. That is an early demonstration of CRISPR’s potential to treat diseases caused by genetic mutations.

“Lots of things like that [which] weren’t possible before now are doable,” said Craig Mello, a UMass Medical School biologist and Nobel laureate. Mello sees the technology’s power to transform medicine and to solve some of science’s most enduring mysteries.

“It’s still a huge mystery how we work and just basic biology,” Mello said. “We’re just trying to figure out this amazingly complicated and sophisticated mechanism we call life.”


http://www.bostonglobe.com/metro/2014/11/30/tool-easily-edit-dna-transforms-research-holds-potential-for-medicine/X7srGGFardarsWBfCqEL2H/story.html