Google X is working on nanoparticles that swim through your blood, identifying cancer and other diseases

October 30, 2014

http://www.extremetech.com/extreme/193083-google-x-is-working-on-nanoparticles-that-swim-through-your-blood-identifying-cancer-and-other-diseases

Virtual reality, the death of morality, and the perils of making the virtual ever more real

October 12, 2014


As the technology that underpins virtual reality develops and the experiences become increasingly real, I’ve been pondering a particularly morbid thought: When will we have the first VR-induced death? Will a realistic rocket launcher blast in Team Fortress 2 or a VR version of Silent Hill give you a heart attack? Will watching the chase sequence in Casino Royale in full VR 3D pump enough adrenaline into your system that your heartbeat becomes arrhythmic, eventually leading to death? Will a VR experience be so realistic that you get swept up in the moment and run into a wall or jump out a window?

I’ve always been fascinated by the interrelationship of real and virtual worlds, and how technological advancement has brought them steadily closer and closer together until it can be very hard to discern the virtual from the real. The simplest virtual worlds — those created in your head with your imagination, perhaps with the aid of a good book — are very easily differentiated from reality (by most humans, anyway). Early digital virtual worlds, like EverQuest or Discworld MUD, started to blur the lines with persistence, graphics, and other interactive elements that trigger very real-world reactions (both physical and psychosomatic). And now, as we move into an era of ultra-high-resolution displays, 3D audio, and advanced AI, it’s possible to create some very real virtual worlds indeed.

I don’t think we’ve yet seen someone actually scared to death by a modern 3D/VR setup, but it’s only a matter of time. The precedent has certainly been set over the last few years, especially when it comes to MMOs and other “grindy” games — there have been a handful of cases of people dying of exhaustion because they neglected their basic needs (food, sleep, exercise). In some cases, these people had some kind of underlying condition that made such physically and emotionally intensive experiences more likely to cause death — but as the technology becomes ever more immersive, and designers and architects create games and virtual worlds that are indiscernible from the real thing, I think VR death will become a somewhat regular occurrence.

Even if you don’t agree that VR will scare people to death, at the very least I think we can agree that full VR experiences will be incredibly absorbing. If an MMO like World of Warcraft or Lineage can keep people sitting down for days on end, VR will up the ante considerably. I’m not saying that people will start dropping like flies as soon as the first immersive VR experiences become readily available, but there will definitely be more deaths from exhaustion and users generally not looking after their physical and emotional needs.

Kil'jaeden kill shot (Iron Edge, Delling)

This is before we consider the other inevitable VR-related problems that will be caused by misuse of the technology, irresponsible developers, and dozens of other indirect issues. If an iPod and some headphones can distract someone enough that they walk into the path of traffic or an oncoming train, imagine the perils of using VR outside the safety of your room; even wandering around your house could be dangerous. Despite the relatively low-quality VR produced by the Oculus Rift, there are already reports of people experiencing the odd sensation of a fraying, blurring divide between real and virtual that persists for a few minutes after detaching from a VR device. A curious and/or malevolent game developer, after getting a taste for the immersion provided by VR, could easily craft an experience that’s intended to cause mental or physical harm.

Indirectly, but still significantly, a whole host of issues might arise if a large proportion of the populace is constantly strapped into a VR setup. There have already been a few sad cases of parents being so engrossed by a virtual world that their baby or child died from neglect — or worse — and I’m sure such cases will only become more common as advanced VR tech matures.

http://www.extremetech.com/extreme/190612-virtual-reality-the-death-of-morality-and-the-perils-of-making-the-virtual-more-real

 

Amputees discern familiar sensations across prosthetic hand

October 11, 2014


Even before he lost his right hand to an industrial accident 4 years ago, Igor Spetic had family open his medicine bottles. Cotton balls give him goose bumps.

Now, blindfolded during an experiment, he feels his arm hairs rise when a researcher brushes the back of his prosthetic hand with a cotton ball.

Spetic, of course, can’t feel the ball. But patterns of electric signals are sent by a computer into nerves in his arm and to his brain, which tells him different. “I knew immediately it was cotton,” he said.

That’s one of several types of sensation Spetic, of Madison, Ohio, can feel with the prosthetic system being developed by Case Western Reserve University and the Louis Stokes Cleveland Veterans Affairs Medical Center.

Spetic was excited just to “feel” again, and quickly received an unexpected benefit. The phantom pain he’d suffered, which he’s described as a vise crushing his closed fist, subsided almost completely. A second patient, who had less phantom pain after losing his right hand and much of his forearm in an accident, said his, too, is nearly gone.

Even with their phantom pain, both men said that the first time they were connected to the system and received the electrical stimulation was the first time they’d felt their hands since their accidents. In the ensuing months, they began feeling sensations that were familiar and were able to control their prosthetic hands with more — well — dexterity.

A video of the research is available at http://youtu.be/l7jht5vvzR4.

“The sense of touch is one of the ways we interact with objects around us,” said Dustin Tyler, an associate professor of biomedical engineering at Case Western Reserve and director of the research. “Our goal is not just to restore function, but to build a reconnection to the world. This is long-lasting, chronic restoration of sensation over multiple points across the hand.”

“The work reactivates areas of the brain that produce the sense of touch,” said Tyler, who is also associate director of the Advanced Platform Technology Center at the Cleveland VA. “When the hand is lost, the inputs that switched on these areas were lost.”

How the system works and the results will be published online in the journal Science Translational Medicine Oct. 8.

“The sense of touch actually gets better,” said Keith Vonderhuevel, of Sidney, Ohio, who lost his hand in 2005 and had the system implanted in January 2013. “They change things on the computer to change the sensation.

“One time,” he said, “it felt like water running across the back of my hand.”

The system, which is limited to the lab at this point, uses electrical stimulation to give the sense of feeling. But there are key differences from other reported efforts.

First, the nerves that used to relay the sense of touch to the brain are stimulated by contact points on cuffs that encircle major nerve bundles in the arm, not by electrodes inserted through the protective nerve membranes.

Surgeons Michael W. Keith, MD, and J. Robert Anderson, MD, from Case Western Reserve School of Medicine and the Cleveland VA implanted three electrode cuffs in Spetic’s forearm, enabling him to feel 19 distinct points, and two cuffs in Vonderhuevel’s upper arm, enabling him to feel 16 distinct locations.

Second, when they began the study, the sensation Spetic felt when a sensor was touched was a tingle. To provide more natural sensations, the research team has developed algorithms that convert the input from sensors taped to a patient’s hand into varying patterns and intensities of electrical signals. The sensors themselves aren’t sophisticated enough to discern textures; they detect only pressure.
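The conversion described above, from raw pressure readings to patterned electrical stimulation, can be sketched in a few lines. The function name, the pressure range, and the current and frequency ranges below are all illustrative assumptions, not values from the Case Western study.

```python
# Illustrative sketch only: map a fingertip pressure reading onto a
# stimulation pattern (pulse intensity and pulse-train frequency).
# All constants here are hypothetical, not from the study.

def pressure_to_stimulation(pressure_kpa, max_pressure_kpa=50.0):
    """Return a (current_ma, frequency_hz) pair for a pressure reading."""
    # Clamp the reading to the sensor's range and normalize to [0, 1].
    level = max(0.0, min(pressure_kpa, max_pressure_kpa)) / max_pressure_kpa
    # Firmer touches map to stronger, faster pulse trains.
    current_ma = 0.2 + 1.8 * level    # hypothetical intensity range
    frequency_hz = 10 + 90 * level    # hypothetical frequency range
    return round(current_ma, 2), round(frequency_hz, 1)

print(pressure_to_stimulation(25.0))  # a mid-range touch
```

Varying these two parameters over time is one simple way to produce the distinct signal patterns the brain then reads as different stimuli.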

The different signal patterns, passed through the cuffs, are read as different stimuli by the brain. The scientists continue to fine-tune the patterns, and Spetic and Vonderhuevel appear to be becoming more attuned to them.

Third, the system has worked for 2½ years in Spetic and 1½ years in Vonderhuevel. Other research has reported sensation lasting only a month and, in some cases, the ability to feel beginning to fade within weeks.

A blindfolded Vonderhuevel has held grapes or cherries in his prosthetic hand — the signals enabling him to gauge how tightly he’s squeezing — and pulled out the stems.

“When the sensation’s on, it’s not too hard,” he said. “When it’s off, you make a lot of grape juice.”

Different signal patterns interpreted as sandpaper, a smooth surface and a ridged surface enabled a blindfolded Spetic to discern each as they were applied to his hand. And when researchers touched two different locations with two different textures at the same time, he could discern the type and location of each.

Tyler believes that everyone creates a map of sensations from their life history that enables them to correlate an input to a given sensation.

“I don’t presume the stimuli we’re giving is hitting the spots on the map exactly, but they’re familiar enough that the brain identifies what it is,” he said.

Because of Vonderhuevel’s and Spetic’s continuing progress, Tyler is hopeful the method can lead to a lifetime of use. He’s optimistic his team can develop, within five years, a system a patient could use at home.

In addition to hand prosthetics, Tyler believes the technology can be used to help those using prosthetic legs receive input from the ground and adjust to gravel or uneven surfaces. Beyond that, the neural interfacing and new stimulation techniques may be useful in controlling tremors, deep brain stimulation and more.


Story Source:

The above story is based on materials provided by Case Western Reserve University. Note: Materials may be edited for content and length.


Journal Reference:

  1. D. W. Tan, M. A. Schiefer, M. W. Keith, J. R. Anderson, J. Tyler, D. J. Tyler. A neural interface provides long-term stable natural touch perception. Science Translational Medicine, 2014; 6 (257): 257ra138 DOI: 10.1126/scitranslmed.3008669

New ‘lab-on-a-chip’ could revolutionize early diagnosis of cancer

October 11, 2014


A new miniaturized biomedical “lab-on-a-chip” testing device for exosomes — molecular messengers between cells — promises faster, earlier, less-invasive diagnosis of cancer, according to its developers at the University of Kansas Medical Center and the University of Kansas Cancer Center.

“A lab-on-a-chip shrinks the pipettes, test tubes and analysis instruments of a modern chemistry lab onto a microchip-sized wafer,” explained Yong Zeng, assistant professor of chemistry at the University of Kansas.

Zeng and his fellow researchers developed the lab-on-a-chip initially for early detection of lung cancer — the number-one cancer killer in the U.S.

Lung cancer is currently detected mostly with an invasive biopsy, after tumors are larger than 3 centimeters in diameter and even metastatic. Using the lab-on-a-chip, lung cancer could be detected much earlier, using only a small drop of a patient’s blood, according to Zeng.

How it works

The prototype lab-on-a-chip is made of a widely used silicone rubber called polydimethylsiloxane and uses a technique called “on-chip immunoisolation.”

“We used magnetic beads of 3 micrometers in diameter to pull down the exosomes in plasma samples,” Zeng said. “To avoid other interfering species present in plasma, the bead surface was chemically modified with an antibody that recognizes and binds with a specific target protein — for example, a protein receptor — present on the exosome membrane. The plasma containing magnetic beads then flows through the microchannels on the diagnostic chip in which the beads can be readily collected using a magnet to extract circulating exosomes from the plasma.”

“Our technique provides a general platform for detecting tumor-derived exosomes for cancer diagnosis,” he said. “We’ve also tested for ovarian cancer in this work. In theory, it should be applicable to other types of cancer. Our long-term goal is to translate this technology into clinical investigation of the pathological implication of exosomes in tumor development. Such knowledge would help develop better predictive biomarkers and more efficient targeted therapy to improve the clinical outcome.”

The research by Zeng and his KU colleagues was described in a paper published in a Royal Society of Chemistry journal, and the team has been awarded a $640,000 grant from the National Cancer Institute at the National Institutes of Health to further develop the lab-on-a-chip technology.

http://www.kurzweilai.net/new-lab-on-a-chip-could-revolutionize-early-diagnosis-of-cancer

High-speed drug screening

October 11, 2014


MIT engineers have devised a way to rapidly test hundreds of different drug-delivery vehicles in living animals, making it easier to discover promising new ways to deliver a class of drugs called biologics, which includes antibodies, peptides, RNA, and DNA, to human patients.

In a study appearing in the journal Integrative Biology, the researchers used this technology to identify materials that can efficiently deliver RNA to zebrafish and also to rodents.

This type of high-speed screen could help overcome one of the major bottlenecks in developing disease treatments based on biologics: how to find safe and effective ways to deliver them.

“Biologics is the fastest growing field in biotech, because it gives you the ability to do highly predictive designs with unique targeting capabilities,” says senior author Mehmet Fatih Yanik, an associate professor of electrical engineering and computer science and biological engineering. “However, delivery of biologics to diseased tissues is challenging, because they are significantly larger and more complex than conventional drugs.”

Automating large-scale studies 

Zebrafish are commonly used to model human diseases, in part because their larvae are transparent, making it easy to see the effects of genetic mutations or drugs.

In 2010, Yanik’s team developed a technology for rapidly moving zebrafish larvae to an imaging platform, orienting them correctly, and imaging them. This kind of automated system makes it possible to do large-scale studies because analyzing each larva takes less than 20 seconds, compared with the several minutes it would take for a scientist to evaluate the larvae by hand.
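A back-of-the-envelope calculation shows what that speedup means for throughput. The 5-minute manual figure below is an assumed stand-in for the article’s “several minutes.”

```python
# Throughput comparison based on the figures above: under 20 seconds
# per larva automated, versus an assumed 5 minutes per larva by hand.

automated_s_per_larva = 20
manual_s_per_larva = 5 * 60  # assumption for "several minutes"

larvae_per_hour_automated = 3600 // automated_s_per_larva
larvae_per_hour_manual = 3600 // manual_s_per_larva

print(larvae_per_hour_automated)  # 180 larvae per hour
print(larvae_per_hour_manual)     # 12 larvae per hour
```

Under these assumptions, automation buys roughly a 15-fold increase in larvae screened per hour, which is what makes library-scale in vivo studies feasible.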

For this new study, Yanik’s team developed a new technology to inject RNA carried by nanoparticles called lipidoids. These fatty molecules have shown promise as delivery vehicles for RNA interference, a process that allows disease-causing genes to be turned off with small strands of RNA.

Yanik’s group tested about 100 lipidoids that had not performed well in tests of RNA delivery in cells grown in a lab dish. They designed each lipidoid to carry RNA expressing a fluorescent protein, allowing them to easily track RNA delivery, and injected the lipidoids into the spinal fluid of the zebrafish.

To automate that process, the zebrafish were oriented either laterally or dorsally once they arrived on the viewing platform. Once the larvae were properly aligned, they were immobilized by a hydrogel. Then, the lipidoid-RNA complex was automatically injected, guided by a computer vision algorithm. The system can be adapted to target any organ, and the process takes about 14 seconds per fish.

A few hours after injection, the researchers imaged the zebrafish to see if they displayed any fluorescent protein in the brain, indicating whether the RNA successfully entered the brain tissue, was taken up by the cells, and expressed the desired protein.

The researchers found that several lipidoids that had not performed well in cultured cells did deliver RNA efficiently in the zebrafish model. They next tested six randomly selected best- and worst-performing lipidoids in rats and found that the correlation between performance in rats and in zebrafish was 97 percent, suggesting that zebrafish are a good model for predicting drug-delivery success in mammals.
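The 97 percent figure refers to a correlation between each lipidoid’s delivery performance in the two species. A minimal sketch of how such a correlation is computed, using made-up scores rather than the study’s data:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical delivery-efficiency scores for six lipidoids.
zebrafish_scores = [0.10, 0.30, 0.50, 0.70, 0.90, 0.95]
rat_scores = [0.12, 0.28, 0.55, 0.65, 0.88, 0.97]
print(round(pearson(zebrafish_scores, rat_scores), 3))
```

A coefficient near 1.0 means lipidoids that ranked well in zebrafish also ranked well in rats, which is the sense in which zebrafish predict mammalian results.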

The idea is to identify useful drug delivery nanoparticles using this miniaturized system.

New leads

The researchers are now using what they learned about the most successful lipidoids identified in this study to try to design even better possibilities. “If we can pick up certain design features from the screens, it can guide us to design larger combinatorial libraries based on these leads,” Yanik says.

Yanik’s lab is currently using this technology to find delivery vehicles that can carry biologics across the blood-brain barrier — a very selective barrier that makes it difficult for drugs or other large molecules to enter the brain through the bloodstream.

The research was funded by the National Institutes of Health, the Packard Award in Science and Engineering, Sanofi Pharmaceuticals, Foxconn Technology Group, and the Hertz Foundation.


Abstract of Organ-targeted high-throughput in vivo biologics screen identifies materials for RNA delivery

Therapies based on biologics involving delivery of proteins, DNA, and RNA are currently among the most promising approaches. However, although large combinatorial libraries of biologics and delivery vehicles can be readily synthesized, there are currently no means to rapidly characterize them in vivo using animal models. Here, we demonstrate high-throughput in vivo screening of biologics and delivery vehicles by automated delivery into target tissues of small vertebrates with developed organs. Individual zebrafish larvae are automatically oriented and immobilized within hydrogel droplets in an array format using a microfluidic system, and delivery vehicles are automatically microinjected to target organs with high repeatability and precision. We screened a library of lipid-like delivery vehicles for their ability to facilitate the expression of protein-encoding RNAs in the central nervous system. We discovered delivery vehicles that are effective in both larval zebrafish and rats. Our results showed that the in vivo zebrafish model can be significantly more predictive of both false positives and false negatives in mammals than in vitro mammalian cell culture assays. Our screening results also suggest certain structure–activity relationships, which can potentially be applied to design novel delivery vehicles.

Heated nanoparticles trigger immune systems deactivated by cancer

October 10, 2014


Researchers at Dartmouth-Hitchcock Norris Cotton Cancer Center have developed a method to use heat with nanoparticles to wake up the immune system so it recognizes and attacks invading cancer cells, according to Steve Fiering, PhD, Norris Cotton Cancer Center researcher and professor of Microbiology and Immunology, and of Genetics at the Geisel School of Medicine at Dartmouth.

The innovation builds on a well-known method of killing cancer cells: metallic nanoparticles containing iron, silver, or gold are injected into the cancer cells and then heated externally, using magnetic energy, infrared light, or radio waves.

But that method can’t kill all of the resilient cancer cells. What’s new is the use of heat to trigger the immune system to attack cancer cells — overcoming a tactic used by cancer cells to protect themselves by tricking the immune system into accepting everything as normal, even while cancer cells are dividing and spreading.

Nanoparticles and tumor immunology

This is one of an expanding array of nanoparticle types discussed in a review article on the confluence of two rapidly developing areas of cancer therapy, nanoparticles and tumor immunology, published in Wiley’s WIREs Nanomedicine and Nanobiotechnology. (KurzweilAI has also covered this topic in earlier news posts.)

Nanoparticles’ small size makes them stealthy enough to penetrate cancer cells with therapeutic agents such as antibodies, drugs, vaccine type viruses, or metallic particles, the authors explain. But nanoparticles can also pack large payloads of a variety of agents that have different effects that activate and strengthen the body’s immune-system response against tumors.

These approaches are still early in development in the laboratory or clinical trials. But “now that efforts to stimulate anti-tumor immune responses are moving from the lab to the clinic, the potential for nanoparticles to be utilized to improve an immune-based therapy approach is attracting a lot of attention from both scientists and clinicians. And clinical usage does not appear too distant,” said Fiering.


Abstract of WIREs Nanomedicine and Nanobiotechnology paper

A variety of strategies have been applied to cancer treatment, and the most recent one to become prominent is immunotherapy. This interest has been fostered by the demonstration that the immune system does recognize and often eliminate small tumors, but tumors that become clinical problems block antitumor immune responses with immunosuppression orchestrated by the tumor cells. Methods to reverse this tumor-mediated immunosuppression will improve cancer immunotherapy outcomes. The immunostimulatory potential of nanoparticles (NPs) holds promise for cancer treatment. Phagocytes of various types are an important component of both immunosuppression and immunostimulation, and phagocytes actively take up NPs of various sorts, so NPs are a natural system to manipulate these key immune regulatory cells. NPs can be engineered with multiple useful therapeutic features, such as payloads of antigens and/or immunomodulatory agents including cytokines, ligands for immunostimulatory receptors, or antagonists for immunosuppressive receptors. As more is learned about how tumors suppress antitumor immune responses, the payload options expand further. Here we review multiple approaches of NP-based cancer therapies to modify the tumor microenvironment and stimulate innate and adaptive immune systems to obtain effective antitumor immune responses.

Supersensitive nanodevice can detect extremely early cancers

October 5, 2014


From left, Taylor Bono, Dr. Yongbin Lin, Mollye Sanders and Savannah Kaye discuss the supersensitive nanoprobe sensor they have been developing in a lab in UAH’s Optics Building. Credit: Michael Mercier | UAH

Extremely early detection of cancers and other diseases is on the horizon with a supersensitive nanodevice being developed at The University of Alabama in Huntsville (UAH) in collaboration with The Joint School of Nanoscience and Nanoengineering (JSNN) in Greensboro, NC.

The device is ready for packaging into a lunchbox-size unit that ultimately may use a cellphone app to provide test results.

“We are submitting grant applications with our collaborator Dr. Jianjun Wei, an associate professor at the JSNN, to the National Institutes of Health to fund our future integration work,” says Dr. Yongbin Lin, a research scientist at UAH’s Nano and Micro Devices Center who has been working on the nanodevice at the core of the diagnostic unit for about five years. “In the future, we will do an integration of the system with everything inside a box. If we get funding support, I think that within three to five years it may be realized.”

The sensitivity of the equipment holds promise for finding cancer at a very early stage, even while it is at the small cluster of cells level, says Dr. Lin. “At that stage, it is easier to treat.”

One such test detects minute levels of Interleukin-6 (IL-6) in the bloodstream. IL-6 is secreted by the body’s T-cells and macrophages to stimulate inflammatory and immune responses.

“If you have a cancer, then your basic level of IL-6 will increase,” Dr. Lin says. “A lot of cancers have links to IL-6.” Heightened IL-6 also could signal inflammation indicating the presence of other conditions. The scientists are also developing tests for Prostate Specific Antigen, an indicator of prostate cancer, but the device could be calibrated to test for any protein antigen biomarkers.

Once packaged, the portable device will be ideal for point of care use, Dr. Lin says, providing quick results without the need for a testing laboratory.

“We don’t have to send your blood sample anywhere. We just bring this to your bedside.”

It especially would be a boon for countries that have limited medical facilities and budgets, he says, where the testing equipment could be valuable in fighting disease outbreaks like the Ebola virus in West Africa.

“This could work in that situation,” Dr. Lin says. “We’d just have to find a specific antigen for that virus.”

A nanoprobe that’s 125 microns in diameter with gold nanodots on a 4-micron fiber core is at the heart of the machine.

Each gold nanodot looks like a disc and is 160 nanometers in diameter, says Dr. Lin. That’s too small for the human eye to see — in fact, the nanoprobe has to be assembled using an electron microscope. The probe is coated with a biochemical link so that specific antibodies for the particular test will attach to it.

“We use each antibody because it has the ability to bond to its specific antigens. Once the antibody binds to it, we can test for the amount of antigens present,” Dr. Lin says. That test is based on light refraction from antigens bound to the antibodies on the nanoprobe.

“The properties of the nanoparticles will give you a resonance shift upon a biological binding reaction,” Dr. Lin says. The fiber optic strand on which the sensors are attached directs the resulting light waves to a spectrometer and a computer determines the test result.
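In other words, the pipeline is: binding shifts the resonance, the spectrometer measures the shift, and the computer converts the shift into a result via a calibration curve. The sketch below assumes a simple linear calibration; real LSPR responses follow a binding isotherm, and every number here is an illustrative assumption, not data from the UAH device.

```python
# Illustrative sketch: convert a measured LSPR resonance shift into an
# estimated antigen concentration using a least-squares calibration.

def fit_linear(shifts_nm, concentrations):
    """Least-squares slope and intercept for concentration vs. shift."""
    n = len(shifts_nm)
    mx = sum(shifts_nm) / n
    my = sum(concentrations) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(shifts_nm, concentrations))
             / sum((x - mx) ** 2 for x in shifts_nm))
    return slope, my - slope * mx

# Hypothetical calibration points: resonance shift (nm) vs. PSA (ng/mL).
calib_shifts = [0.5, 1.0, 2.0, 4.0]
calib_conc = [1.0, 2.0, 4.0, 8.0]
slope, intercept = fit_linear(calib_shifts, calib_conc)

measured_shift_nm = 1.5  # shift observed for a patient sample
estimate = slope * measured_shift_nm + intercept
print(round(estimate, 2))  # 3.0 ng/mL for a 1.5 nm shift
```

Swapping in a different antibody, as Dr. Lin describes for other biomarkers, would simply mean re-fitting the calibration for that antigen.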

“It’s personalized medicine but it’s also a form of preventative medicine,” says Taylor Bono, a UAH senior from Madison who is pursuing a medical career and has helped with the research.

Until the packaging and integration work is funded, the testing equipment resides on the corner of a workbench in a lab in the UAH Optics Building. Early tests involved identifying DNA profiles before the research evolved into antigens.

Bono did some of the early sensitivity testing work along with UAH senior Mollye Sanders of Huntsville, who is an undergraduate concurrently working on her master’s degree in biology as part of UAH’s Joint Undergraduate Master’s Program (JUMP).

“Even though I was unfamiliar with physics, I could help with the biological side of it. I never would have known anything about this if I hadn’t had this opportunity,” says Sanders. The two worked together in 2013 with Prostate Specific Antigen solutions to determine the device’s sensitivity. Sanders is the principal author and Bono an author of a paper about this research.

“The most significant aspect of the device medically is that it can detect trace levels of cancer biomarkers in the blood,” Sanders says.

The lab work is now the responsibility of UAH junior Savannah Kaye of Lebanon, Penn. “I test the nanoprobe tip in two different solutions to see if antibodies will stick to it,” she says.

UAH Associate Vice President for Research Dr. Robert Lindquist, the former director of the Center for Applied Optics, was an early contributor to the research, Dr. Lin says. “He is a strong supporter of this project.”


Story Source:

The above story is based on materials provided by University of Alabama Huntsville. The original article was written by Jim Steele. Note: Materials may be edited for content and length.


Journal Reference:

  1. Mollye Sanders, Yongbin Lin, Jianjun Wei, Taylor Bono, Robert G. Lindquist. An enhanced LSPR fiber-optic nanoprobe for ultrasensitive detection of protein biomarkers. Biosensors and Bioelectronics, 2014; 61: 95 DOI: 10.1016/j.bios.2014.05.009
Video

‘When There’s No Reason Something’s Impossible, It Ends Up Being Possible’

October 5, 2014


Stephen Wolfram explains why, technically, it will absolutely be possible for humans to live forever.

Physicist Stephen Wolfram, who invented Mathematica software and the Wolfram Alpha search engine, says his scientific and business mantras are the same.

Talk: http://www.inc.com/allison-fass/stephen-wolfram-immortality-humans-live-forever.html

 

Five ways the superintelligence revolution might happen

October 5, 2014


Biological brains are unlikely to be the final stage of intelligence. Machines already have superhuman strength, speed and stamina – and one day they will have superhuman intelligence. The only reason this may not occur is if we develop some other dangerous technology first that destroys us, or otherwise fall victim to some existential risk.

But assuming that scientific and technological progress continues, human-level machine intelligence is very likely to be developed. And shortly thereafter, superintelligence.

Predicting how long it will take to develop such intelligent machines is difficult. Contrary to what some reviewers of my book seem to believe, I don’t have any strong opinion about that matter. (It is as though the only two possible views somebody might hold about the future of artificial intelligence are “machines are stupid and will never live up to the hype!” and “machines are much further advanced than you imagined and true AI is just around the corner!”).

A survey of leading researchers in AI suggests that there is a 50% probability that human-level machine intelligence will have been attained by 2050 (defined here as “one that can carry out most human professions at least as well as a typical human”). This doesn’t seem entirely crazy. But one should place a lot of uncertainty on both sides of this: it could happen much sooner or very much later.

Exactly how we will get there is also still shrouded in mystery. There are several paths of development that should get there eventually, but we don’t know which of them will get there first.

Biological inspiration

We do have an actual example of a generally intelligent system – the human brain – and one obvious idea is to proceed by trying to work out how this system does the trick. A full understanding of the brain is a very long way off, but it might be possible to glean enough of the basic computational principles that the brain uses to enable programmers to adapt them for use in computers without undue worry about getting all the messy biological details right.

We already know a few things about the working of the human brain: it is a neural network, it learns through reinforcement learning, it has a hierarchical structure to deal with perceptions and so forth. Perhaps there are a few more basic principles that we still need to discover – and that would then enable somebody to cobble together some form of “neuromorphic AI”: one with elements cribbed from biology but implemented in a way that is not fully biologically realistic.

Pure mathematics

Another path is the more mathematical “top-down” approach, which makes little or no use of insights from biology and instead tries to work things out from first principles. This would be a more desirable development path than neuromorphic AI, because it would be more likely to force the programmers to understand what they are doing at a deep level – just as doing an exam by working out the answers yourself is likely to require more understanding than doing an exam by copying one of your classmates’ work.

In general, we want the developers of the first human-level machine intelligence, or the first seed AI that will grow up to be superintelligence, to know what they are doing. We would like to be able to prove mathematical theorems about the system and how it will behave as it rises through the ranks of intelligence.

Brute Force

One could also imagine paths that rely more on brute computational force, such as by making extensive use of genetic algorithms. Such a development path is undesirable for the same reason that the path of neuromorphic AI is undesirable – because it could more easily succeed with a less than full understanding of what is being built. Having massive amounts of hardware could, to a certain extent, substitute for having deep mathematical insight.

We already know of code that would, given sufficiently ridiculous amounts of computing power, instantiate a superintelligent agent. The AIXI model is an example. As best we can tell, it would destroy the world. Thankfully, the required amounts of computer power are physically impossible.
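For reference, AIXI’s action choice can be written as an expectimax over all programs consistent with the agent’s history, weighted by program length (this follows Hutter’s standard formulation; U is a universal Turing machine, the a, o, r are actions, observations and rewards, and ℓ(q) is the length of program q):

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_k + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum over all programs q is what makes the model incomputable in practice: it is a definition of optimal behavior, not a runnable algorithm.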

Plagiarising nature

The path of whole brain emulation, finally, would proceed by literally making a digital copy of a particular human mind. The idea would be to freeze or vitrify a brain, chop it into thin slices and feed those slices through an array of microscopes. Automated image recognition software would then extract the map of the neural connections of the original brain. This 3D map would be combined with neurocomputational models of the functionality of the various neuron types constituting the neuropil, and the whole computational structure would be run on some sufficiently capacious supercomputer. This approach would require very sophisticated technologies, but no new deep theoretical breakthrough.

In principle, one could imagine an emulation process of sufficiently high fidelity that the resulting digital mind would retain all the beliefs, desires, and personality of the uploaded individual. But I think it is likely that before the technology reached that level of perfection, it would enable a cruder form of emulation, yielding a distorted human-ish mind. And before efforts to achieve whole brain emulation achieved even that degree of success, they would probably spill over into neuromorphic AI.

Competent humans first, please

Perhaps the most attractive path to machine superintelligence would be an indirect one, on which we would first enhance humanity’s own biological cognition. This could be achieved through, say, genetic engineering along with institutional innovations to improve our collective intelligence and wisdom.

It is not that this would somehow enable us “to keep up with the machines” – the ultimate limits of information processing in machine substrate far exceed those of a biological cortex however far enhanced. The contrary is instead the case: human cognitive enhancement would hasten the day when machines overtake us, since smarter humans would make more rapid progress in computer science. However, it would seem on balance beneficial if the transition to the machine intelligence era were engineered and overseen by a more competent breed of human, even if that would result in the transition happening somewhat earlier than otherwise.

Meanwhile, we can make the most of the time available, be it long or short, by getting to work on the control problem, the problem of how to ensure that superintelligent agents would be safe and beneficial. This would be a suitable occupation for some of our generation’s best mathematical talent.


The Conversation organised a public question-and-answer session on Reddit in which Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, talked about developing artificial intelligence and related topics.

http://theconversation.com/five-ways-the-superintelligence-revolution-might-happen-32124