The Fourth Industrial Revolution Is Here

February 25, 2017

The Fourth Industrial Revolution is upon us and now is the time to act.

Everything is changing by the day, and the decisions humans make now will affect life for generations to come.

We have gone from steam engines to steel mills to computers, and now to the Fourth Industrial Revolution: a digital economy built on artificial intelligence and big data, a new system that introduces a new story of our future and enables different economic and human models.

Will the Fourth Industrial Revolution put humans first and empower technologies to give humans a better quality of life with cleaner air, water, food, health, a positive mindset and happiness? HOPE…

Intel Bets It Can Turn Everyday Silicon into Quantum Computing’s Wonder Material

December 18, 2016


Sometimes the solution to a problem is staring you in the face all along. Chip maker Intel is betting that will be true in the race to build quantum computers—machines that should offer immense processing power by exploiting the oddities of quantum mechanics.

Competitors IBM, Microsoft, and Google are all developing quantum components that are different from the ones crunching data in today’s computers. But Intel is trying to adapt the workhorse of existing computers, the silicon transistor, for the task.

Intel has a team of quantum hardware engineers in Portland, Oregon, who collaborate with researchers in the Netherlands, at TU Delft’s QuTech quantum research institute, under a $50 million grant established last year. Earlier this month Intel’s group reported that they can now layer the ultra-pure silicon needed for a quantum computer onto the standard wafers used in chip factories.

This strategy makes Intel an outlier among industry and academic groups working on qubits, as the basic components needed for quantum computers are known. Other companies can run code on prototype chips with several qubits made from superconducting circuits (see “Google’s Quantum Dream Machine”). No one has yet advanced silicon qubits that far.

A quantum computer would need to have thousands or millions of qubits to be broadly useful, though. And Jim Clarke, who leads Intel’s project as director of quantum hardware, argues that silicon qubits are more likely to get to that point (although Intel is also doing some research on superconducting qubits). One thing in silicon’s favor, he says: the expertise and equipment used to make conventional chips with billions of identical transistors should allow work on perfecting and scaling up silicon qubits to progress quickly.

Intel’s silicon qubits represent data in a quantum property called the “spin” of a single electron trapped inside a modified version of the transistors in its existing commercial chips. “The hope is that if we make the best transistors, then with a few material and design changes we can make the best qubits,” says Clarke.
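To make "representing data in a spin" concrete, a single qubit can be modelled as a two-component complex state vector, with gates acting as rotations. The following numpy sketch is purely illustrative: it models the abstract math, not Intel's device physics, and all names and parameters are mine.

```python
import numpy as np

# Toy model of a spin qubit: a 2-component complex state vector over the
# basis {spin-up, spin-down}. Illustrative only, not Intel's device physics.
up = np.array([1.0, 0.0], dtype=complex)  # electron spin up, i.e. |0>

def rx(theta):
    """Rotation about the X axis, the kind of single-qubit gate that a
    microwave pulse implements on a spin qubit."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

flipped = rx(np.pi) @ up         # a full pi rotation flips the spin
superposed = rx(np.pi / 2) @ up  # a pi/2 rotation gives an equal superposition

probs = np.abs(superposed) ** 2  # Born rule: measurement probabilities
print(probs)  # approximately [0.5, 0.5]
```

The power of a quantum computer comes from operating on many such amplitudes at once; scaling from one spin to thousands of identical, well-behaved ones is exactly the manufacturing problem Clarke describes.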

Another reason to work on silicon qubits is that they should be more reliable than the superconducting equivalents. Still, all qubits are error prone because they work on data using very weak quantum effects (see “Google Researchers Make Quantum Components More Reliable”).

The new process that helps Intel experiment with silicon qubits on standard chip wafers, developed with the materials companies Urenco and Air Liquide, should help speed up its research, says Andrew Dzurak, who works on silicon qubits at the University of New South Wales in Australia. “To get to hundreds of thousands of qubits, we will need incredible engineering reliability, and that is the hallmark of the semiconductor industry,” he says.

Companies developing superconducting qubits also make them using existing chip fabrication methods. But the resulting devices are larger than transistors, and there is no template for how to manufacture and package them up in large numbers, says Dzurak.

Chad Rigetti, founder and CEO of Rigetti Computing, a startup working on superconducting qubits similar to those Google and IBM are developing, agrees that this presents a challenge. But he argues that his chosen technology’s head start will afford ample time and resources to tackle the problem.

Google and Rigetti have both said that in just a few years they could build a quantum chip with tens or hundreds of qubits that dramatically outperforms conventional computers on certain problems, even doing useful work on problems in chemistry or machine learning.


We might live in a computer program, but it may not matter

December 18, 2016


By Philip Ball

Are you real? What about me?

These used to be questions that only philosophers worried about. Scientists just got on with figuring out how the world is, and why. But some of the current best guesses about how the world is seem to leave the question hanging over science too.

Several physicists, cosmologists and technologists are now happy to entertain the idea that we are all living inside a gigantic computer simulation, experiencing a Matrix-style virtual world that we mistakenly think is real.

Our instincts rebel, of course. It all feels too real to be a simulation. The weight of the cup in my hand, the rich aroma of the coffee it contains, the sounds all around me – how can such richness of experience be faked?

But then consider the extraordinary progress in computer and information technologies over the past few decades. Computers have given us games of uncanny realism – with autonomous characters responding to our choices – as well as virtual-reality simulators of tremendous persuasive power.

It is enough to make you paranoid.

The Matrix formulated the narrative with unprecedented clarity. In that story, humans are locked by a malignant power into a virtual world that they accept unquestioningly as “real”. But the science-fiction nightmare of being trapped in a universe manufactured within our minds can be traced back further, for instance to David Cronenberg’s Videodrome (1983) and Terry Gilliam’s Brazil (1985).

Over all these dystopian visions, there loom two questions. How would we know? And would it matter anyway?

Elon Musk, CEO of Tesla and SpaceX (Credit: Kristoffer Tripplaar/Alamy)


The idea that we live in a simulation has some high-profile advocates.

In June 2016, technology entrepreneur Elon Musk asserted that the odds are “a billion to one” against us living in “base reality”.

Similarly, Google’s machine-intelligence guru Ray Kurzweil has suggested that “maybe our whole universe is a science experiment of some junior high-school student in another universe”.

What’s more, some physicists are willing to entertain the possibility. In April 2016, several of them debated the issue at the American Museum of Natural History in New York, US.

None of these people are proposing that we are physical beings held in some gloopy vat and wired up to believe in the world around us, as in The Matrix.

Instead, there are at least two other ways that the Universe around us might not be the real one.

Cosmologist Alan Guth of the Massachusetts Institute of Technology, US, has suggested that our entire Universe might be real yet still a kind of lab experiment. The idea is that our Universe was created by some super-intelligence, much as biologists breed colonies of micro-organisms.


Microsoft will ‘solve’ cancer within 10 years by ‘reprogramming’ diseased cells

November 14, 2016


Microsoft has vowed to “solve the problem of cancer” within a decade by using ground-breaking computer science to crack the code of diseased cells so they can be reprogrammed back to a healthy state.

In a dramatic change of direction for the technology giant, the company has assembled a “small army” of the world’s best biologists, programmers and engineers who are tackling cancer as if it were a bug in a computer system.

This summer Microsoft opened its first wet laboratory where it will test out the findings of its computer scientists who are creating huge maps of the internal workings of cell networks.

Microsoft opened its first wet laboratory this summer

The researchers are even working on a computer made from DNA which could live inside cells and look for faults in bodily networks, like cancer. If it spotted cancerous changes it would reboot the system and clear out the diseased cells.

Chris Bishop, laboratory director at Microsoft Research, said: “I think it’s a very natural thing for Microsoft to be looking at because we have tremendous expertise in computer science and what is going on in cancer is a computational problem.

“It’s not just an analogy, it’s a deep mathematical insight. Biology and computing are disciplines which seem like chalk and cheese but which have very deep connections on the most fundamental level.”

The biological computation group at Microsoft is developing molecular computers built from DNA which act like a doctor to spot cancer cells and destroy them.

Andrew Philips, head of the group, said: “It’s long term, but… I think it will be technically possible in five to 10 years’ time to put in a smart molecular system that can detect disease.”

Andrew Philips, head of the group (Credit: Ed Miller)

The programming principles and tools group has already developed software that mimics the healthy behavior of a cell, so that it can be compared to that of a diseased cell, to work out where the problem occurred and how it can be fixed.

The Bio Model Analyser software is already being used to help researchers understand how to treat leukemia more effectively.

Dr Jasmin Fisher believes scientists may be able to control and regulate cancer ‘within a decade’

Dr Jasmin Fisher, senior researcher and an associate professor at Cambridge University, said: “If we are able to control and regulate cancer then it becomes like any chronic disease and then the problem is solved.”

“I think for some of the cancers five years, but definitely within a decade. Then we will probably have a century free of cancer.”

She believes that in the future smart devices will monitor health continually and compare it to how the human body should be operating, so that it can quickly detect problems.

“My own personal vision is that in the morning you wake up, you check your email and at the same time all of our genetic data, our pulse, our sleep patterns, how much we exercised, will be fed into a computer which will check your state of well-being and tell you how prone you are to getting flu, or some other horrible thing,” she added.

“In order to get there we need these kind of computer models which mimic and model the fundamental processes that are happening in our bodies.

“Under normal development cells divide and they die and there is a certain balance. The problems start when that balance is broken, and that’s when we get uncontrolled proliferation and tumours.

“If we could have all of that sitting on your personal computer and monitoring your health state then it will alert us when something is coming.”

Improved scanning technology offers hope

Patients undergoing radiotherapy could see treatment slashed from hours to just minutes with a new innovation to quickly map the size of a tumour.

Experienced radiologists can spot subtle signs of breast cancer in mammogram images in just half a second, a study has found (Credit: PA)

Currently radiologists must scan a tumour and then painstakingly draw the outline of the cancer on dozens of sections by hand to create a 3D map before treatment, a process which can take up to four hours.

They also must outline nearby important organs to make sure they are protected from the blast of radiation.

But Microsoft engineers have developed a programme which can delineate a tumour within minutes, meaning treatment can happen immediately.

The programme can also show doctors how effective each treatment has been, so the dose can be altered depending on how much the tumour has been shrunk.

“Eyeballing works very well for diagnosing,” said Antonio Criminisi, a machine learning and computer vision expert who heads radiomics research in Microsoft’s Cambridge, UK, lab.

“Expert radiologists can look at an image – say a scan of someone’s brain – and be able to say in two seconds, ‘Yes, there’s a tumour. No, there isn’t a tumour.’ But delineating a tumour by hand is not very accurate.”

The system could eventually evaluate 3D scans pixel by pixel to tell the radiologist exactly how much the tumor has grown, shrunk or changed shape since the last scan.

It also could provide information about things like tissue density, to give the radiologist a better sense of whether something is more likely a cyst or a tumor. And it could provide more fine-grained analysis of the health of cells surrounding a tumor.
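A hypothetical sketch of the voxel-level comparison described above: given two 3D segmentation masks from successive scans (True marks a tumour voxel), compute the volume change. All names and the voxel size here are illustrative assumptions, not Microsoft's actual software.

```python
import numpy as np

VOXEL_MM3 = 1.0  # assume 1 mm isotropic voxels (illustrative)

def tumour_volume_mm3(mask):
    """Volume of a boolean 3D segmentation mask, in cubic millimetres."""
    return int(mask.sum()) * VOXEL_MM3

# Two toy scans: a 2-voxel tumour grows to 5 voxels between visits.
scan_a = np.zeros((4, 4, 4), dtype=bool)
scan_b = np.zeros((4, 4, 4), dtype=bool)
scan_a[1, 1, 1:3] = True
scan_b[1, 1, 0:4] = True
scan_b[1, 2, 1] = True

change = tumour_volume_mm3(scan_b) - tumour_volume_mm3(scan_a)
print(f"volume change: {change:+.1f} mm^3")  # prints "volume change: +3.0 mm^3"
```

A real system would add registration between scans and per-voxel statistics (density, texture), but the core bookkeeping is this simple.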

“Doing all of that by eye is pretty much impossible,” added Dr Criminisi.

The images could also be 3D printed so that surgeons could practise a tricky operation, such as removing a hard-to-reach brain tumour, before surgery.

Tiny Flying Robots Are Being Built To Pollinate Crops Instead Of Real Bees

November 14, 2016


Honeybees, which pollinate nearly one-third of the food we eat, have been dying at unprecedented rates because of a mysterious phenomenon known as colony collapse disorder (CCD). The situation is so dire that in late June the White House gave a new task force just 180 days to devise a coping strategy to protect bees and other pollinators. The crisis is generally attributed to a mixture of disease, parasites, and pesticides.

Other scientists are pursuing a different tack: replacing bees. While there’s no perfect solution, modern technology offers hope.

Last year, Harvard University researchers led by engineering professor Robert Wood introduced the first RoboBees, bee-size robots with the ability to lift off the ground and hover midair when tethered to a power supply. The details were published in the journal Science. A coauthor of that report, Harvard graduate student and mechanical engineer Kevin Ma, tells Business Insider that the team is “on the eve of the next big development.” Says Ma: “The robot can now carry more weight.”

The project represents a breakthrough in the field of micro-aerial vehicles. It had previously been impossible to pack all the things needed to make a robot fly onto such a small structure and keep it lightweight.

A Bee-Placement?

The researchers believe that as soon as 10 years from now these RoboBees could artificially pollinate a field of crops, a critical development if the commercial pollination industry cannot recover from severe yearly losses over the past decade.

The White House underscored what’s at stake, noting that the loss of bees and other species “requires immediate attention to ensure the sustainability of our food production systems, avoid additional economic impact on the agricultural sector, and protect the health of the environment.” Honeybees alone contribute more than $15 billion in value to U.S. agricultural crops each year.

But RoboBees are not yet a viable technological solution. First, the tiny bots have to be able to fly on their own and “talk” to one another to carry out tasks like a real honeybee hive.

“RoboBees will work best when employed as swarms of thousands of individuals, coordinating their actions without relying on a single leader,” Wood and colleagues wrote in an article for Scientific American. “The hive must be resilient enough so that the group can complete its objectives even if many bees fail.”

Although Wood wrote that CCD and the threat it poses to agriculture were part of the original inspiration for creating a robotic bee, the devices aren’t meant to replace natural pollinators forever. We still need to focus on efforts to save these vital creatures. RoboBees would serve as a “stopgap measure while a solution to CCD is implemented,” the project’s website says.

Harvard’s Kevin Ma spoke to Business Insider about the team’s progress in building the bee-size robot since publishing its Science paper last year.

Following is an edited version of that interview.

Business Insider: Where are you a little over a year after it was announced that the first robotic insect took flight?

Kevin Ma: We’ve been continuing on the path to getting the robot to be completely autonomous, meaning it flies without being tethered and without the need for anyone to drive it. We’ve been building a larger version of the robot so that it can carry the battery, electronics, and all the other things necessary for autonomous flight.

BI: Last month, Greenpeace released a short video that imagines a future in which swarms of robotic bees have been deployed to save our planet after the real insects go extinct. It’s a cautionary story rather than one of technological adaptation. What is your reaction to that?

KM: Having a multitude of options to deal with future problems is important. It’s hard to predict what exact solution we would need in the future. Flexibility is key.

BI: Will robot bees eventually be able to operate like honeybee hives to pollinate commercial crops?

KM: Yes. You could replace a hive of honeybees that would otherwise be working on a field of flowers. They would be able to perform the same task of going from flower to flower picking up and putting down pollen. They wouldn’t have to collect nectar like real bees. They would just be transmitting pollen. But to do this the robots first need to fly on their own and fly very well. In theory, they would just have to come back to something to recharge their batteries. But we’re very early on in working this out.

BI: When can we see RoboBees pollinating flowers?

KM: With continued government funding and research we could see this thing functional in 10 to 15 years.

BI: What’s next?

KM: We’re on the eve of the next big development. Something will be published in the next few months. The robot can now carry more weight. That’s important for the battery and other electronics and sensors.

Once the robot can stay aloft on its own, we would be working on things like allowing it to perform tasks, increasing its battery life, and making it fly faster. Then there are a whole host of issues to work out dealing with wireless communications.

IBM is one step closer to mimicking the human brain

September 24, 2016


Scientists at IBM have claimed a computational breakthrough after imitating large populations of neurons for the first time.

Neurons are electrically excitable cells that process and transmit information in our brains through electrical and chemical signals. These signals are passed over synapses, specialised connections with other cells.

It’s this set-up that inspired scientists at IBM to try and mirror the way the biological brain functions using phase-change materials for memory applications.

Using computers to try to mimic the human brain is something that’s been theorised for decades due to the challenges of recreating the density and power. Now, for the first time, scientists have created their own “randomly spiking” artificial neurons that can store and process data.

“The breakthrough marks a significant step forward in the development of energy-efficient, ultra-dense integrated neuromorphic technologies for applications in cognitive computing,” the scientists said.

The artificial neurons consist of phase-change materials, including germanium antimony telluride, which exhibit two stable states, an amorphous one (without a clearly defined structure) and a crystalline one (with structure). These materials are also the basis of rewritable Blu-ray discs, but in this system the artificial neurons do not store digital information; they are analogue, just like the synapses and neurons in a biological brain.

The beauty of these powerful phase-change-based artificial neurons, which can perform various computational primitives such as data-correlation detection and unsupervised learning at high speeds, is that they use very little energy – just like the human brain.

In a demonstration published in the journal Nature Nanotechnology, the team applied a series of electrical pulses to the artificial neurons, which resulted in the progressive crystallisation of the phase-change material, ultimately causing the neuron to fire.

In neuroscience, this function is known as the integrate-and-fire property of biological neurons. This is the foundation for event-based computation and, in principle, is quite similar to how a biological brain triggers a response when an animal touches something hot, for instance.
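The integrate-and-fire behaviour described here is easy to sketch in code. This is the textbook leaky integrate-and-fire model with illustrative parameters, not IBM's phase-change device values:

```python
# Minimal leaky integrate-and-fire neuron (illustrative parameters only):
# input pulses accumulate on a leaky "membrane potential"; when it crosses
# the threshold the neuron fires and resets, mirroring the progressive
# crystallisation and reset of the phase-change cell.
def simulate_lif(inputs, threshold=1.0, leak=0.95):
    potential = 0.0
    spikes = []
    for pulse in inputs:
        potential = potential * leak + pulse  # integrate, with leakage
        if potential >= threshold:            # ...and fire
            spikes.append(1)
            potential = 0.0                   # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A steady train of sub-threshold pulses makes the neuron fire periodically.
train = simulate_lif([0.4] * 10)
print(train)
```

In the hardware, the "potential" is the crystalline fraction of the material and the stochastic variation between resets is what gives the neurons their useful "randomly spiking" character.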

Tomas Tuma, co-author of the paper, said the breakthrough could help create a new generation of extremely dense neuromorphic computing systems

As part of the study, the researchers organised hundreds of artificial neurons into populations and used them to represent fast and complex signals. When tested, the artificial neurons were able to sustain billions of switching cycles, which would correspond to multiple years of operation at an update frequency of 100Hz.

The energy required for each neuron update was less than five picojoules and the average power less than 120 microwatts; for comparison, a 60 watt light bulb draws 60 million microwatts, IBM’s research paper said.

When exploiting this integrate-and-fire property, even a single neuron can be used to detect patterns and discover correlations in real-time streams of event-based data. “This will significantly reduce the area and power consumption as it will be using tiny nanoscale devices that act as neurons,” IBM scientist and author, Dr. Abu Sebastian told WIRED.

This, IBM believes, could be helpful in the further development of internet of things technologies, especially when developing tiny sensors.

“Populations of stochastic phase-change neurons, combined with other nanoscale computational elements such as artificial synapses, could be a key enabler for the creation of a new generation of extremely dense neuromorphic computing systems,” said Tomas Tuma, co-author of the paper.

This could be useful in sensors that collect and analyse volumes of weather data at the edge, in remote locations, said Sebastian, enabling faster and more accurate weather forecasts.

The artificial neurons could also detect patterns in financial transactions to find discrepancies, or use data from social media to discover new cultural trends in real time. Large populations of these high-speed, low-energy nanoscale neurons could also be used in neuromorphic co-processors with co-located memory and processing units.

How Jellyfish, Nanobots, and Naked Mole Rats Could Make Humans Immortal

June 18, 2016


Dr. Chris Faulkes is standing in his laboratory, tenderly caressing what looks like a penis. It’s not his penis, nor mine, and it’s definitely not that of the only other man in the room, VICE photographer Chris Bethell. But at four inches long with shrivelled skin that’s veiny and loose, it looks very penis-y. Then, with a sudden squeak, it squirms in his hand as if trying to break free, revealing an enormous set of Bugs Bunny teeth protruding from the tip.

“This,” says Faulkes, “is a naked mole rat, though she does look like a penis with teeth, doesn’t she? Or a saber-tooth sausage. But don’t let her looks fool you—the naked mole rat is the superhero of the animal kingdom.”

I’m with Faulkes in his lab at Queen Mary, University of London. Faulkes is an affable guy with a ponytail, telltale tattoos half-hidden under his T-shirt sleeve, and a couple of silver goth rings on his fingers. A spaghetti-mess of tubes weaves about the room, like a giant gerbil maze, through which 12 separate colonies of 200 naked mole rats scurry, scratch, and squeak. What he just said is not hyperbole. In fact, the naked mole rat shares more than just its looks with a penis: Where you might say the penis is nature’s key to creating life, this ugly phallus of a creature could be mankind’s key to eternal life.

“Their extreme and bizarre lifestyle never ceases to amaze and baffle biologists, making them one of the most intriguing animals to study,” says Faulkes, who has devoted the past 30 years of his life to trying to understand how the naked mole rat has evolved into one of the most well-adapted, finely tuned creatures on Earth. “All aspects of their biology seem to inform us about other animals, including humans, particularly when it comes to healthy aging and cancer resistance.”

Similarly sized rodents usually live for about five years. The naked mole rat lives for 30. Even into their late 20s, they hardly seem to age, remaining fit and healthy with robust heartbeats, strong bones, sharp minds, and high fertility. They don’t seem to feel pain, and, unlike other mammals, they almost never get cancer.

In other words, if humans lived as long, relative to body size, as naked mole rats, we would last for 500 years in a 25-year-old’s body. “It’s not a ridiculous exaggeration to suggest we can one day manipulate our own biochemical and metabolic pathways with drugs or gene therapies to emulate those that keep the naked mole rat alive and healthy for so long,” says Faulkes, stroking his animal. “In fact, the naked mole rat provides us the perfect model for human aging research across the board, from the way it resists cancer to the way its social systems prolong its life.”

Over the centuries, a long line of optimists, alchemists, hawkers, and pop stars have hunted various methods of postponing death, from drinking elixirs of youth to sleeping in hyperbaric chambers. The one thing those people have in common is that all of them are dead. Still, the anti-aging industry is bigger than ever. In 2013, its global market generated more than $216 billion. By 2018, it will hit $311 billion, thanks mostly to huge investment from Silicon Valley billionaires and Russian oligarchs who’ve realized the only way they could possibly spend all their money is by living forever. Even Google wants in on the action, with Calico, its $1.5 billion life-extension research center whose brief is to reverse-engineer the biology that makes us old or, as Time magazine put it, to “cure death.” It’s a snowballing market that some are branding “the internet of healthcare.” But on whom are these savvy entrepreneurs placing their bets? After all, the race for immortality has a wide field.

In an office not far from Google’s headquarters in Mountain View, with a beard to his belt buckle and a ponytail to match, British biomedical gerontologist Aubrey De Grey is enjoying the growing clamor about conquering aging, or “senescence,” as he calls it. His charity, the SENS Research Foundation, has enjoyed a bumper few years thanks to a $600,000-a-year investment from PayPal co-founder and immortality motormouth Peter Thiel (“Probably the most extreme form of inequality is between people who are alive and people who are dead”), though he says the foundation’s $5.75 million annual budget can still “struggle” to support its growing workload.

According to the Cambridge-educated scientist, the fundamental knowledge needed to develop effective anti-aging therapies already exists. He argues that the seven biochemical processes that cause the damage that accumulates during old age have been discovered, and if we can counter them we can, in theory, halt the aging process. Indeed, he not only sees aging as a medical condition that can be cured, but believes that the “first person to live to 1,000 is alive today.” If that sounds like the ramblings of a crackpot weird-beard, hear him out; Dr. De Grey’s run the numbers.

“If you look at the math, it is very straightforward,” he says. “All we are saying here is that it’s quite likely that within the next twenty or thirty years, we will develop medicines that can rejuvenate people faster than time is passing. It’s not perfect yet, but soon we’ll take someone aged sixty and fix them up well enough that they won’t be sixty again, biologically, for another thirty years. In that period, therapies will improve such that we’ll be able to rejuvenate them again, so they won’t be sixty for a third time until they are chronologically one hundred fifty, and so on. If we can stay one step ahead of the problem, people won’t die of aging anymore.”
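De Grey's arithmetic can be rendered as a toy model. The numbers come from his quote; the assumption that each generation of therapy rolls biological age back twice as far as the last is mine, chosen only to reproduce his 60 → 90 → 150 sequence.

```python
# Toy "rejuvenation outpaces aging" arithmetic (illustrative assumptions):
# biological age rises one year per chronological year; each time it hits
# 60, a therapy rolls it back, and each new therapy generation is assumed
# to roll back twice as far as the previous one.
def years_until_third_sixty(rollback=30.0):
    chrono, bio = 60.0, 60.0  # the first time the patient is biologically 60
    for _ in range(2):        # the second and third times
        bio -= rollback       # apply the current therapy
        rollback *= 2         # assume the next generation is twice as good
        chrono += 60.0 - bio  # wait until biologically 60 again
        bio = 60.0
    return chrono

print(years_until_third_sixty())  # 150.0, matching the quoted figure
```

The point of the sketch is only that if the rollback grows faster than a fixed 30 years elapses, biological age never catches up: the "one step ahead" in the quote.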

“Like immortality?” I ask. Dr. De Grey sighs: “That word is the bane of my life. People who use that word are essentially making fun of what we do, as if to maintain an emotional distance from it so as not to get their hopes up. I don’t work on ‘curing death.’ I work on keeping people healthy. And, yes, I understand that success in my work could translate into an important side effect of people living longer. But to ‘cure death’ implies the elimination of all causes, including, say, dying in car accidents. And I don’t think there’s much we could do to survive an asteroid apocalypse.”

So instead, De Grey focuses on the things we can avoid dying from, like hypertension, cancer, Alzheimer’s, and other age-related illnesses. His goal is not immortality but “radical life extension.” He says traditional medicines won’t wind back the hands of our body clocks—we need to manipulate our makeup on a cellular level, like using bacterial enzymes to flush out molecular “garbage” that accumulates in the body, or tinkering with our genetic coding to prevent the growth of cancers, or any other disease.

Chris Faulkes knows of one magic bullet to kill cancer. And, back at Queen Mary, he is making his point by pulling at the skin of a naked mole rat in his hand. “It’s the naked mole rat’s elasticky skin that’s made it cancer-proof,” he says. “The theory—first discovered by a lab in America—is that, as an adaptation to living underground in tight tunnels, they’ve developed a really loose skin so they don’t get stuck or snagged. That elasticity is a result of it producing this gloopy sugar [polysaccharide], high-molecular-weight hyaluronan (HMW-HA).”

While humans already have a version of hyaluronan in our bodies that helps heal wounds by encouraging cell division (and, ironically, assists tumor growth), that of the naked mole rat does the opposite. “The hyaluronan in naked mole rats is about six times larger than ours,” says Faulkes. “It interacts with a metabolic pathway, which helps prevent cells from coming together to make tumors.”

But that’s not all: It is believed it may also act to help keep their blood vessels elastic, which, in turn, relieves high blood pressure (hypertension)—a condition that affects one in three people and is known in medical circles as “the silent killer” because most patients don’t even know they have it. “I see no reason why we can’t use this to inform human anti-cancer and aging therapies by manipulating our own hyaluronan system,” says Faulkes.

Then there are the naked mole rat’s cells themselves, which seem to make proteins (the molecular machines that make bodies work) more accurately than ours, preventing age-related illnesses like Alzheimer’s. And the way they handle glucose doesn’t change with age either, reducing their susceptibility to things like diabetes. “Most of the age-related declines you see in the physiology in mammals do not occur in naked mole rats,” adds Faulkes. “We’ve only just begun on the naked mole rat story, and already a whole universe is opening up that could have a major downstream effect on human health. It’s very exciting.”

Of course, the naked mole rat isn’t the only animal scientists are probing to pick the lock of long life. “With a heart rate of 1,000 beats a minute, the tiny hummingbird should be riddled with rogue free radicals [the oxygen-based chemicals that basically make mammals old by gradually destroying DNA, proteins and fat molecules]… but it’s not,” says zoologist Jules Howard, author of Death on Earth: Adventures in Evolution and Mortality. “Then there are pearl mussel larvae that live in the gills of Atlantic salmon and mop up free radicals, and lobsters, which seem to have evolved a protein which repairs the tips of DNA [telomeres], allowing for more cell divisions than most animals are capable of. And we mustn’t forget the 2mm-long C. elegans roundworm. Within these 2mm-long nematodes are genetic mechanisms that can be picked apart like cogs and springs in an attempt to better understand the causes of aging and ultimately death.”

But there is one animal on Earth that may hold the master key to immortality: the Turritopsis dohrnii, or Immortal Jellyfish. Most jellyfish, when they reach the end of life, die and melt into the sea. Not the Turritopsis dohrnii. Instead, the 4mm sea creature sinks to the ocean floor, where its body folds in on itself—assuming the jellyfish equivalent of the fetal position—and regenerates back into a baby jellyfish, or polyp, in a rare biological process called transdifferentiation, in which its old cells essentially transform into young cells.

Just one scientist has been consistently culturing Turritopsis polyps in his lab. He works alone, without major financing or a staff, in a poky office in Shirahama, a sleepy beach town near Kyoto. Yet Professor Shin Kubota has managed to rejuvenate one of his charges 14 times before a typhoon washed it away. “The Turritopsis dohrnii is a miracle of nature,” he says over the phone. “My ultimate purpose is to understand exactly how they regenerate so we can apply its mechanisms to human beings. You see, very surprisingly, the Turritopsis’s genome is very similar to humans’—much more so than worms. I believe we will have the technology to begin applying this immortal genome to humans very soon.”

How soon? “In 20 years,” he says, a little mischievously. “That is my guess.”

If Kubota really believes his own claim, then he’s got a race on his hands; he’s not the only scientist with a “20-year” prophecy. The acclaimed futurist and computer scientist Ray Kurzweil believes that by the 2030s we’ll have microscopic machines traveling through our bodies, repairing damaged cells and organs, effectively wiping out diseases and making us biologically immortal anyway. “The full realization of nanobots will basically eliminate biological disease and aging,” he told the world a few years back.

It’s a blossoming industry. And, in a state-of-the-art lab at the Bristol Robotics Laboratory, at Bristol University, Dr. Sabine Hauert is at its coalface. She designs swarms of nanobots—each a thousand times smaller than the width of a hair—that can be injected into the bloodstream with a payload of drugs to infiltrate the pores of cancer cells, like millions of tiny Trojan Horses, and destroy them from within. “We can engineer nanoparticles to basically do what we want them to do,” she tells me. “We can change their size, shape, charge, or material and load them with molecules or drugs that they can release in a controlled fashion.”

While she says the technology can be used to combat a whole gamut of different illnesses, Dr. Hauert has trained her crosshairs on cancer. What’s the most effective nano-weapon against malignant tumors? Gold. Millions of swarming golden nanobots that can be dispatched into the bloodstream, where they will seep into the tumor through little holes in its rapidly-growing vessels and lie in wait. “Then,” she says, “if you heat them with an infrared laser they vibrate violently, degrading the tumor’s cells. We can then send in another swarm of nanoparticles decorated with a molecule that’s loaded with a chemotherapy drug, giving a 40-fold increase in the amount of drugs we can deliver. This is very exciting technology that is already having a huge impact on the way we treat cancer, and will have on other diseases in the future.”

The next logical step, as Kurzweil claims, is that we will soon have nanobots permanently circulating in our veins, cleaning and maintaining our bodies indefinitely. They may even replace our organs when they fail. Trials of such technology are already beginning in mice.

The naked mole rat colony in Chris Faulkes’s lab

The oldest mouse ever to live was called Yoda. He lived to the age of four. The oldest ever dog, Bluey, was 29. The oldest flamingo was 83. The oldest human was 122. The oldest clam was 507. The point is, evolution has rewarded species that have worked out ways to not get eaten by bigger species—be it learning to fly, developing big brains or forming protective shells. Naked mole rats went underground and learned to work together.

“A mouse is never going to worry about cancer as much as it will about cats,” says Faulkes. “Naked mole rats have no such concerns because they built vast networks of tunnels, developed hierarchies and took up different social roles to streamline productivity. They bought themselves time to evolve into biological marvels.”

At the top of every colony is a queen. Second in rank are her chosen harem of consorts with whom she mates for life. Beneath them are the soldiers and defenders of the realm, the biggest animals around, and at the bottom are the workers who dig tunnels with their teeth or search for tubers, their main food source. They have a toilet chamber, a sleeping chamber, a nursing chamber and a chamber for disposing of the dead. They rarely go above ground and almost never mix with other colonies. “It’s a whole mosaic of different characteristics that have come about through adapting to living in this very extreme ecological niche,” says Faulkes. “All of the weird and wonderful things that contribute to their healthy aging have come about through that. Even their extreme xenophobia helps prevent them being wiped out by infectious diseases.”

Still, the naked mole rat is not perfect. Dr. Faulkes learned this the hard way one morning in March last year, when he turned the light on in his lab to a grisly scene. “Blood was smeared about the perspex walls of a tunnel in colony N,” he says, “and the mangled corpse of one of my mole rats lay lifeless inside.” There was one explanation: A queen had been murdered. “There had been a coup,” he recalls. “Her daughter had decided she wanted to run the colony so she savaged her mother to death to take over. You see, naked mole rats may be immune to death by aging, but they can still be killed, just like you and me.”

That’s the one issue that true immortalists have with the concept of radical life extension: we can still get hit by a bus or murdered. But what if the entire contents of your brain—your memories, beliefs, hopes, and dreams—could be scanned and uploaded onto a mainframe, so when You 1.0 finally does fall down a lift shaft or is killed by a friend, You 2.0 could be fed into a humanoid avatar and rolled out of an immortality factory to pick up where you left off?

Dr. Randall Koene insists You 2.0 would still be you. “What if I were to add an artificial neuron next to every real neuron in your brain and connect it with the same connections that your normal neurons have so that it operates in exactly the same way?” he says. “Then, once I’ve put all these neurons in place, I remove the connections to all the old neurons, one by one, would you disappear?”

More at:

The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near

June 04, 2016


In this blog post I will delve into the brain and explain its basic information processing machinery and compare it to deep learning. I do this by moving step-by-step along the brain’s electrochemical and biological information processing pipeline and relating it directly to the architecture of convolutional nets. Along the way we will see that a neuron and a convolutional net are very similar information processing machines. While performing this comparison, I will also discuss the computational complexity of these processes and thus derive an estimate for the brain’s overall computational power. I will use these estimates, along with knowledge from high performance computing, to show that it is unlikely that there will be a technological singularity in this century.

This blog post is complex as it arcs over multiple topics in order to unify them into a coherent framework of thought. I have tried to make this article as readable as possible, but I may not have succeeded in all places. Thus, if you find yourself in an unclear passage, it might become clearer a few paragraphs down the road, where I pick up the thought again and integrate it with another discipline.

First I will give a brief overview of the predictions for a technological singularity and the topics aligned with them. Then I will start the integration of ideas between the brain and deep learning. I finish by discussing high performance computing and how this all relates to predictions about a technological singularity.

The part which compares the brain’s information processing steps to deep learning is self-contained, and readers who are not interested in predictions for a technological singularity may skip to this part.

Part I: Evaluating current predictions of a technological singularity

There were a lot of headlines recently about predictions that artificial intelligence will reach super-human intelligence as early as 2030, and that this might herald the beginning of human extinction, or at least dramatically alter everyday life. How were these predictions made?

More at:

IBM scientists achieve storage memory breakthrough

June 04, 2016


For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell using a relatively new memory technology known as phase-change memory (PCM).

The current landscape spans from venerable DRAM to hard disk drives to ubiquitous flash. But in the last several years PCM has attracted the industry’s attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility and density. For example, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles.

This research breakthrough provides fast and easy storage to capture the exponential growth of data from mobile devices and the Internet of Things.


IBM scientists envision standalone PCM as well as hybrid applications, which combine PCM and flash storage together, with PCM as an extremely fast cache. For example, a mobile phone’s operating system could be stored in PCM, enabling the phone to launch in a few seconds. In the enterprise space, entire databases could be stored in PCM for blazing fast query processing for time-critical online applications, such as financial transactions.

Machine learning algorithms using large datasets will also see a speed boost by reducing the latency overhead when reading the data between iterations.

How PCM Works

PCM materials exhibit two stable states, the amorphous (without a clearly defined structure) and crystalline (with structure) phases, of low and high electrical conductivity, respectively.

To store a ‘0’ or a ‘1’, known as bits, on a PCM cell, a high or medium electrical current is applied to the material. A ‘0’ can be programmed to be written in the amorphous phase or a ‘1’ in the crystalline phase, or vice versa. Then to read the bit back, a low voltage is applied. This is how re-writable Blu-ray Discs store videos.
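The write/read cycle described above can be sketched as a toy model. This is a minimal illustration only, with made-up conductance numbers; real PCM programming involves carefully shaped current pulses, not the simple assignments used here:

```python
# Toy model of a single-bit PCM cell: a high current pulse quenches the
# material into the amorphous (low-conductivity) phase, a medium pulse
# anneals it into the crystalline (high-conductivity) phase, and a low
# read voltage senses which phase the cell is in.

AMORPHOUS_CONDUCTANCE = 0.01   # arbitrary units, stores '0'
CRYSTALLINE_CONDUCTANCE = 1.0  # arbitrary units, stores '1'
READ_THRESHOLD = 0.5

class PCMCell:
    def __init__(self):
        self.conductance = AMORPHOUS_CONDUCTANCE

    def write(self, bit):
        # High current -> amorphous ('0'); medium current -> crystalline ('1')
        if bit == 0:
            self.conductance = AMORPHOUS_CONDUCTANCE
        else:
            self.conductance = CRYSTALLINE_CONDUCTANCE

    def read(self):
        # Apply a low voltage and compare the sensed conductance
        # against a threshold to recover the stored bit.
        return 1 if self.conductance > READ_THRESHOLD else 0

cell = PCMCell()
cell.write(1)
assert cell.read() == 1
cell.write(0)
assert cell.read() == 0
```

Because the cell retains its phase without power, the stored bit survives power-off, which is the non-volatility the article contrasts with DRAM.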

Previously scientists at IBM and other institutes have successfully demonstrated the ability to store 1 bit per cell in PCM, but today at the IEEE International Memory Workshop in Paris, IBM scientists are presenting, for the first time, successfully storing 3 bits per cell in a 64k-cell array at elevated temperatures and after 1 million endurance cycles.

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” said Dr. Haris Pozidis, an author of the paper and the manager of non-volatile memory research at IBM Research – Zurich. “Reaching three bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

To achieve multi-bit storage IBM scientists have developed two innovative enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes.

More specifically, the new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity with time. To provide additional robustness of the stored data in a cell over ambient temperature fluctuations, a novel coding and detection scheme is employed. This scheme adaptively modifies the level thresholds that are used to detect the cell’s stored data so that they follow variations due to temperature change. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.
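To make the adaptive-threshold idea concrete, here is an illustrative sketch, not IBM’s actual scheme: three bits per cell are stored as one of eight conductance levels, all levels decay over time, and a handful of reference cells written with a known value let the detector rescale its decision thresholds before reading. The level values, the uniform decay model, and the reference-cell trick are all simplifying assumptions for illustration:

```python
import statistics

# Eight nominal conductance levels encode the 3-bit values 0..7
# (arbitrary units; real cells have non-uniform, noisy levels).
LEVELS = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]

def drift(conductance, factor):
    # Model drift as a uniform multiplicative decay over time.
    return conductance * factor

def detect(readout, scale):
    # Rescale the readout by the estimated drift factor, then pick
    # the nearest nominal level; the level index is the 3-bit value.
    rescaled = readout / scale
    return min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - rescaled))

# Data cells storing every 3-bit value, plus reference cells all
# written at a known level; every cell drifts by the same factor.
data = [drift(LEVELS[v], 0.6) for v in range(8)]
refs = [drift(LEVELS[4], 0.6) for _ in range(4)]

# Estimate the drift factor from the reference cells, then decode.
scale = statistics.mean(refs) / LEVELS[4]
decoded = [detect(r, scale) for r in data]
assert decoded == list(range(8))  # all 8 values recovered despite drift
```

A fixed-threshold detector would misread every drifted cell here; tracking the drift and moving the thresholds with it is what keeps multi-level readout reliable over time.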

“Combined, these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity and endurance cycling,” said Dr. Evangelos Eleftheriou, IBM Fellow.

The experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board. The chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on doped-chalcogenide alloy and were integrated into the prototype chip serving as a characterization vehicle in 90 nm CMOS baseline technology.

More information: Aravinthan Athmanathan et al. Multilevel-Cell Phase-Change Memory: A Viable Technology, IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2016). DOI: 10.1109/JETCAS.2016.2528598

M. Stanisavljevic, H. Pozidis, A. Athmanathan, N. Papandreou, T. Mittelholzer, and E. Eleftheriou, “Demonstration of Reliable Triple-Level-Cell (TLC) Phase-Change Memory,” in Proc. International Memory Workshop, Paris, France, May 16-18, 2016.

Read more at: