Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
Musk is just one of the people in Silicon Valley to take a keen interest in the “simulation hypothesis”, which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence. If it sounds a lot like The Matrix, that’s because it is.
One popular argument for the simulation hypothesis, outside of acid trips, came from Oxford University’s Nick Bostrom in 2003 (although the idea dates back as far as the 17th-century philosopher René Descartes). In a paper titled “Are You Living in a Computer Simulation?”, Bostrom suggested that members of an advanced “posthuman” civilization with vast computing power might choose to run simulations of their ancestors in the universe.
This argument is extrapolated from observing current trends in technology, including the rise of virtual reality and efforts to map the human brain.
If we believe that there is nothing supernatural about what causes consciousness and it’s merely the product of a very complex architecture in the human brain, we’ll be able to reproduce it. “Soon there will be nothing technical standing in the way to making machines that have their own consciousness,” said Rich Terrile, a scientist at Nasa’s Jet Propulsion Laboratory.
At the same time, videogames are becoming more and more sophisticated, and in the future we’ll be able to run simulations of conscious entities inside them.
“Forty years ago we had Pong – two rectangles and a dot. That’s where we were. Now 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality,” said Musk. “If you assume any rate of improvement at all, then the games will become indistinguishable from reality.”
It’s a view shared by Terrile. “If one progresses at the current rate of technology a few decades into the future, very quickly we will be a society where there are artificial entities living in simulations that are much more abundant than human beings.”
If there are many more simulated minds than organic ones, then the chances of us being among the real minds start to look slimmer and slimmer. As Terrile puts it: “If in the future there are more digital people living in simulated environments than there are today, then what is to say we are not part of that already?”
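Terrile’s counting argument can be made concrete with a toy calculation (the population figures below are invented for illustration, not drawn from the article):

```python
# Toy version of the counting argument: if simulated minds vastly outnumber
# organic ones, a mind picked at random is almost certainly simulated.
def probability_organic(organic_minds: int, simulated_minds: int) -> float:
    """Chance that a randomly chosen mind is one of the organic ones."""
    return organic_minds / (organic_minds + simulated_minds)

print(probability_organic(1, 1))            # no simulations yet: 0.5
print(probability_organic(10**10, 10**16))  # simulations abound: ~1e-06
```

With a million simulated minds for every organic one, the odds of being organic drop to roughly one in a million, which is the entire force of the argument.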
Reasons to believe that the universe is a simulation include the fact that it behaves mathematically and is broken up into pieces (subatomic particles) like a pixelated video game. “Even things that we think of as continuous – time, energy, space, volume – all have a finite limit to their size. If that’s the case, then our universe is both computable and finite. Those properties allow the universe to be simulated,” Terrile said.
“Quite frankly, if we are not living in a simulation, it is an extraordinarily unlikely circumstance,” he added.
So who has created this simulation? “Our future selves,” said Terrile.
Not everyone is so convinced by the hypothesis. “Is it logically possible that we are in a simulation? Yes. Are we probably in a simulation? I would say no,” said Max Tegmark, a professor of physics at MIT.
“In order to make the argument in the first place, we need to know what the fundamental laws of physics are where the simulations are being made. And if we are in a simulation then we have no clue what the laws of physics are. What I teach at MIT would be the simulated laws of physics,” he said.
Harvard theoretical physicist Lisa Randall is even more skeptical. “I don’t see that there’s really an argument for it,” she said. “There’s no real evidence.”
“It’s also a lot of hubris to think we would be what ended up being simulated.”
Terrile believes that recognizing that we are probably living in a simulation is as game-changing as Copernicus realizing that the Earth was not the center of the universe. “It was such a profound idea that it wasn’t even thought of as an assumption,” he said.
Before Copernicus, scientists had tried to explain the peculiar behaviour of the planets’ motion with complex mathematical models. “When they dropped the assumption, everything else became much simpler to understand.”
That we might be in a simulation is, Terrile argues, a simpler explanation for our existence than the idea that we are the first generation to rise up from primordial ooze and evolve into molecules, biology and eventually intelligence and self-awareness. The simulation hypothesis also accounts for peculiarities in quantum mechanics, particularly the measurement problem, whereby things only become defined when they are observed.
“For decades it’s been a problem. Scientists have bent over backwards to eliminate the idea that we need a conscious observer. Maybe the real solution is you do need a conscious entity like a conscious player of a video game,” he said.
For Tegmark, this doesn’t make sense. “We have a lot of problems in physics and we can’t blame our failure to solve them on simulation.”
How can the hypothesis be put to the test? On one hand, neuroscientists and artificial intelligence researchers can check whether it’s possible to simulate the human mind. So far, machines have proven to be good at playing chess and Go and putting captions on images. But can a machine achieve consciousness? We don’t know.
On the other hand, scientists can look for hallmarks of simulation. “Suppose someone is simulating our universe – it would be very tempting to cut corners in ways that makes the simulation cheaper to run. You could look for evidence of that in an experiment,” said Tegmark.
For Terrile, the simulation hypothesis has “beautiful and profound” implications.
First, it provides a scientific basis for some kind of afterlife or larger domain of reality above our world. “You don’t need a miracle, faith or anything special to believe it. It comes naturally out of the laws of physics,” he said.
Second, it means we will soon have the same ability to create our own simulations.
“We will have the power of mind and matter to be able to create whatever we want and occupy those worlds.”
These used to be questions that only philosophers worried about. Scientists just got on with figuring out how the world is, and why. But some of the current best guesses about how the world is seem to leave the question hanging over science too.
Several physicists, cosmologists and technologists are now happy to entertain the idea that we are all living inside a gigantic computer simulation, experiencing a Matrix-style virtual world that we mistakenly think is real.
Our instincts rebel, of course. It all feels too real to be a simulation. The weight of the cup in my hand, the rich aroma of the coffee it contains, the sounds all around me – how can such richness of experience be faked?
But then consider the extraordinary progress in computer and information technologies over the past few decades. Computers have given us games of uncanny realism – with autonomous characters responding to our choices – as well as virtual-reality simulators of tremendous persuasive power.
It is enough to make you paranoid.
The Matrix formulated the narrative with unprecedented clarity. In that story, humans are locked by a malignant power into a virtual world that they accept unquestioningly as “real”. But the science-fiction nightmare of being trapped in a universe manufactured within our minds can be traced back further, for instance to David Cronenberg’s Videodrome (1983) and Terry Gilliam’s Brazil (1985).
Over all these dystopian visions, there loom two questions. How would we know? And would it matter anyway?
The idea that we live in a simulation has some high-profile advocates.
In June 2016, technology entrepreneur Elon Musk asserted that the odds are “a billion to one” against us living in “base reality”.
Similarly, Google’s machine-intelligence guru Ray Kurzweil has suggested that “maybe our whole universe is a science experiment of some junior high-school student in another universe”.
What’s more, some physicists are willing to entertain the possibility. In April 2016, several of them debated the issue at the American Museum of Natural History in New York, US.
None of these people are proposing that we are physical beings held in some gloopy vat and wired up to believe in the world around us, as in The Matrix.
Instead, there are at least two other ways that the Universe around us might not be the real one.
Cosmologist Alan Guth of the Massachusetts Institute of Technology, US, has suggested that our entire Universe might be real yet still a kind of lab experiment. The idea is that our Universe was created by some super-intelligence, much as biologists breed colonies of micro-organisms.
Everything you have ever done or will do could simply be the product of a highly-advanced computer code.
Every relationship, every sentiment, every memory could have been generated by banks of supercomputers.
This was the terrifying theory first proposed by Swedish-born philosopher Nick Bostrom.
The shocking hypothesis was penned four years after the Wachowskis wrote and directed The Matrix, a film set in a dystopian future in which humans are subdued by a simulated reality.
In his paper, Dr Bostrom suggested a race of far-evolved descendants could be behind our digital imprisonment.
The futuristic beings – human or otherwise – could be using virtual reality to simulate a time in the past or recreate how their remote ancestors lived.
Sound crazy? Well, it turns out a NASA scientist thinks Dr Bostrom might be right.
Rich Terrile, director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory, has spoken publicly in support of the simulation hypothesis.
“Right now the fastest NASA supercomputers are cranking away at about double the speed of the human brain,” the NASA scientist told Vice.
“If you make a simple calculation using Moore’s Law [which roughly claims computers double in power every two years], you’ll find that these supercomputers, inside of a decade, will have the ability to compute an entire human lifetime of 80 years – including every thought ever conceived during that lifetime – in the span of a month.
“In quantum mechanics, particles do not have a definite state unless they’re being observed.
“Many theorists have spent a lot of time trying to figure out how you explain this.
“One explanation is that we’re living within a simulation, seeing what we need to see when we need to see it.
“What I find inspiring is that, even if we are in a simulation or many orders of magnitude down in levels of simulation, somewhere along the line something escaped the primordial ooze to become us and to result in simulations that made us – and that’s cool.”
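The doubling arithmetic in Terrile’s quote can be sketched in a few lines. The 2x starting speed and two-year doubling period are his figures; the function itself is ours, for illustration only:

```python
# A sketch of the Moore's-law arithmetic in Terrile's quote. The 2x starting
# speed and two-year doubling period are his stated figures.
def speedup_after(years, start=2.0, doubling_period=2.0):
    """Machine speed relative to a human brain after `years` of growth."""
    return start * 2 ** (years / doubling_period)

speed = speedup_after(10)        # a decade out
months = 80 * 12 / speed         # an 80-year lifetime is 960 brain-months
print(f"{speed:.0f}x brain speed; lifetime replayed in {months:.0f} months")
# prints "64x brain speed; lifetime replayed in 15 months"
```

On these figures the replay takes about 15 months rather than a month; roughly four more doublings (another eight years or so) would close that gap.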
The idea that our Universe is a fiction generated by computer code solves a number of inconsistencies and mysteries about the cosmos.
The first is the Fermi Paradox – posed by physicist Enrico Fermi in 1950 – which highlights the contradiction between the apparent high probability of extraterrestrial civilisations within our vast universe and humanity’s lack of contact with, or evidence for, these alien colonies.
“Where is everybody?” Mr Fermi asked.
It could simply be that Earth and mankind truly are the centre of the universe.
Another mystery that Dr Bostrom’s Matrix-like theory could explain is Dark Matter.
US theoretical cosmologist Michael Turner has called the hypothetical material “the most profound mystery in all of science”.
Dark Matter is one of several hypothetical materials invoked to explain astrophysical anomalies that cannot be accounted for by the Standard Model – the theory science has used to describe the particles and forces of nature for the last 50 years.
The Standard Model of particle physics tells us that there are 17 fundamental particles, which between them make up matter and carry its forces.
The Higgs boson, which was first theorised by scientists during the 1960s, is amongst these 17 fundamental particles.
In summer 2012, scientists at CERN observed what is now believed to be the elusive “God particle”.
But the Standard Model is as-yet unable to explain a number of baffling properties of the universe – including the fact that the universe is expanding at an ever-increasing speed.
Dark Matter is believed to be an invisible substance, woven through the cosmos in a web-like structure, whose gravity binds visible matter together.
If it exists, it would explain why galaxies spin at the speed they do – something which remains unexplained based only on what we can currently observe.
The Standard Model does not yet hold an explanation for the force of gravity.
The anomalies currently attributed to the as-yet unproven Dark Matter could instead be explained by a virtual universe.
But not everybody is convinced about The Matrix explanation.
Professor Peter Millican, who teaches philosophy and computer science at Oxford University, thinks the virtual reality explanation is flawed.
“The theory seems to be based on the assumption that ‘superminds’ would do things in much the same way as we would do them,” he said.
“If they think this world is a simulation, then why do they think the superminds – who are outside the simulation – would be constrained by the same sorts of thoughts and methods that we are?
“They assume that the ultimate structure of a real world can’t be grid like, and also that the superminds would have to implement a virtual world using grids.
“We can’t conclude that a grid structure is evidence of a pretend reality just because our ways of implementing a pretend reality involve a grid.”
Professor Millican does believe there is worth in investigating the idea.
“It is an interesting idea, and it’s healthy to have some crazy ideas,” he told The Telegraph.
“You don’t want to censor ideas according to whether they seem sensible or not because sometimes important new advances will seem crazy to start with.
“You never know when good ideas may come from thinking outside the box.
“This Matrix thought-experiment is actually a bit like some ideas of Descartes and Berkeley, hundreds of years ago.
“Even if there turns out to be nothing in it, the fact that you have got into the habit of thinking crazy things could mean that at some point you are going to think of something that initially may seem rather way out, but turns out not to be crazy at all.”
Biological brains are unlikely to be the final stage of intelligence. Machines already have superhuman strength, speed and stamina – and one day they will have superhuman intelligence. The only reason this may not occur is if we first develop some other dangerous technology that destroys us, or otherwise fall victim to some existential risk.
But assuming that scientific and technological progress continues, human-level machine intelligence is very likely to be developed. And shortly thereafter, superintelligence.
Predicting how long it will take to develop such intelligent machines is difficult. Contrary to what some reviewers of my book seem to believe, I don’t have any strong opinion about that matter. (It is as though the only two possible views somebody might hold about the future of artificial intelligence are “machines are stupid and will never live up to the hype!” and “machines are much further advanced than you imagined and true AI is just around the corner!”).
A survey of leading researchers in AI suggests that there is a 50% probability that human-level machine intelligence will have been attained by 2050 (defined here as “one that can carry out most human professions at least as well as a typical human”). This doesn’t seem entirely crazy. But one should place a lot of uncertainty on both sides of this: it could happen much sooner or very much later.
Exactly how we will get there is also still shrouded in mystery. There are several paths of development that should get there eventually, but we don’t know which of them will get there first.
We do have an actual example of a generally intelligent system – the human brain – and one obvious idea is to proceed by trying to work out how this system does the trick. A full understanding of the brain is a very long way off, but it might be possible to glean enough of the basic computational principles that the brain uses to enable programmers to adapt them for use in computers without undue worry about getting all the messy biological details right.
We already know a few things about the working of the human brain: it is a neural network, it learns through reinforcement learning, it has a hierarchical structure to deal with perceptions and so forth. Perhaps there are a few more basic principles that we still need to discover – and that would then enable somebody to cobble together some form of “neuromorphic AI”: one with elements cribbed from biology but implemented in a way that is not fully biologically realistic.
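As a toy illustration of one of those ingredients – reinforcement learning – here is an agent that improves its estimate of each action’s value from reward alone, with no model of the task. Everything below is an invented example, not taken from Bostrom’s book:

```python
import random

# Toy reinforcement learning: a three-armed bandit. The agent never sees the
# hidden payoff probabilities; it learns purely from the rewards it receives.
random.seed(1)

true_payoffs = [0.2, 0.5, 0.8]   # hidden reward probability of each action
values = [0.0, 0.0, 0.0]         # the agent's running estimates
counts = [0, 0, 0]
epsilon = 0.1                    # how often the agent explores at random

for step in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(3)                      # explore
    else:
        action = max(range(3), key=lambda a: values[a])   # exploit best guess
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running mean

print("learned values:", [round(v, 2) for v in values])
```

After enough trials the estimates settle near the hidden payoffs, and the agent ends up favouring the best action without ever having been told which one it was.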
Another path is the more mathematical “top-down” approach, which makes little or no use of insights from biology and instead tries to work things out from first principles. This would be a more desirable development path than neuromorphic AI, because it would be more likely to force the programmers to understand what they are doing at a deep level – just as doing an exam by working out the answers yourself is likely to require more understanding than doing an exam by copying one of your classmates’ work.
In general, we want the developers of the first human-level machine intelligence, or the first seed AI that will grow up to be superintelligence, to know what they are doing. We would like to be able to prove mathematical theorems about the system and how it will behave as it rises through the ranks of intelligence.
One could also imagine paths that rely more on brute computational force, such as by making extensive use of genetic algorithms. Such a development path is undesirable for the same reason that the path of neuromorphic AI is undesirable – because it could more easily succeed with a less than full understanding of what is being built. Having massive amounts of hardware could, to a certain extent, substitute for having deep mathematical insight.
We already know of code that would, given sufficiently ridiculous amounts of computing power, instantiate a superintelligent agent. The AIXI model is an example. As best we can tell, it would destroy the world. Thankfully, the required amounts of computer power are physically impossible.
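The genetic-algorithm route mentioned above can be caricatured in a toy form: blind mutation and selection against a fitness score, with no insight into the problem’s structure. This is an invented example, not any actual AI project’s code:

```python
import random

# A toy genetic algorithm: evolve 32-bit genomes toward the all-ones string
# purely by mutation and selection. The "intelligence" is in the search
# pressure, not in any understanding of the problem.
random.seed(0)

GENOME_LEN = 32

def fitness(genome):
    return sum(genome)  # count of 1-bits; 32 is the optimum

def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break  # a perfect genome evolved
    # Keep the fittest half; refill with mutated copies of survivors.
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]

print(f"best fitness {fitness(population[0])} after {generation} generations")
```

Nothing in the loop knows what a “good” genome looks like; raw trial volume does the work, which is exactly why this path can succeed without deep understanding.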
The path of whole brain emulation, finally, would proceed by literally making a digital copy of a particular human mind. The idea would be to freeze or vitrify a brain, chop it into thin slices and feed those slices through an array of microscopes. Automated image recognition software would then extract the map of the neural connections of the original brain. This 3D map would be combined with neurocomputational models of the functionality of the various neuron types constituting the neuropil, and the whole computational structure would be run on some sufficiently capacious supercomputer. This approach would require very sophisticated technologies, but no new deep theoretical breakthrough.
In principle, one could imagine a sufficiently high-fidelity emulation process that the resulting digital mind would retain all the beliefs, desires, and personality of the uploaded individual. But I think it is likely that before the technology reached that level of perfection, it would enable a cruder form of emulation that would yield a distorted human-ish mind. And before efforts to achieve whole brain emulation would achieve even that degree of success, they would probably spill over into neuromorphic AI.
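The final stage of the emulation pipeline described above – a scanned connectivity map combined with neuron models, stepped forward on a computer – can be caricatured in a few lines. The random “connectome” and leaky-integrator neuron model are crude stand-ins for the real, vastly larger thing:

```python
import numpy as np

# A cartoon of running a brain emulation: weights stand in for the scanned
# connectome, a leaky integrator with a squashing nonlinearity stands in for
# the neurocomputational models. A real emulation would involve ~86 billion
# neurons and biologically fitted dynamics.
rng = np.random.default_rng(42)
n_neurons = 100
connectome = rng.normal(0.0, 0.1, size=(n_neurons, n_neurons))  # "synapses"
state = rng.normal(size=n_neurons)                              # activations
leak = 0.9

for step in range(1000):
    # Each neuron integrates its decayed state plus weighted input from all
    # other neurons, then squashes the result into (-1, 1).
    state = np.tanh(leak * state + connectome @ state)

print("state vector shape:", state.shape)
```

The point of the cartoon is structural: once the map and the neuron models exist, “running the mind” is an ordinary, if enormous, numerical computation.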
Competent humans first, please
Perhaps the most attractive path to machine superintelligence would be an indirect one, in which we would first enhance humanity’s own biological cognition. This could be achieved through, say, genetic engineering along with institutional innovations to improve our collective intelligence and wisdom.
It is not that this would somehow enable us “to keep up with the machines” – the ultimate limits of information processing in machine substrate far exceed those of a biological cortex however far enhanced. The contrary is instead the case: human cognitive enhancement would hasten the day when machines overtake us, since smarter humans would make more rapid progress in computer science. However, it would seem on balance beneficial if the transition to the machine intelligence era were engineered and overseen by a more competent breed of human, even if that would result in the transition happening somewhat earlier than otherwise.
Meanwhile, we can make the most of the time available, be it long or short, by getting to work on the control problem, the problem of how to ensure that superintelligent agents would be safe and beneficial. This would be a suitable occupation for some of our generation’s best mathematical talent.
The Conversation organised a public question-and-answer session on Reddit in which Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, talked about developing artificial intelligence and related topics.