Is our world a simulation? Why some scientists say it’s more likely than not

October 18, 2017

When Elon Musk isn’t outlining plans to use his massive rocket to leave a decaying Planet Earth and colonize Mars, he sometimes talks about his belief that Earth isn’t even real and we probably live in a computer simulation.

“There’s a billion to one chance we’re living in base reality,” he said at a conference in June.

Musk is just one of the people in Silicon Valley to take a keen interest in the “simulation hypothesis”, which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence. If it sounds a lot like The Matrix, that’s because it is.

According to this week’s New Yorker profile of Y Combinator venture capitalist Sam Altman, there are two tech billionaires secretly engaging scientists to work on breaking us out of the simulation. But what does this mean? And what evidence is there that we are, in fact, living in The Matrix?

One popular argument for the simulation hypothesis, outside of acid trips, came from Oxford University’s Nick Bostrom in 2003 (although the idea dates back as far as the 17th-century philosopher René Descartes). In a paper titled “Are You Living in a Computer Simulation?”, Bostrom suggested that members of an advanced “posthuman” civilization with vast computing power might choose to run simulations of their ancestors in the universe.

This argument is extrapolated from observing current trends in technology, including the rise of virtual reality and efforts to map the human brain.

If we believe that there is nothing supernatural about what causes consciousness and it’s merely the product of a very complex architecture in the human brain, we’ll be able to reproduce it. “Soon there will be nothing technical standing in the way to making machines that have their own consciousness,” said Rich Terrile, a scientist at Nasa’s Jet Propulsion Laboratory.

At the same time, video games are becoming more and more sophisticated, and in the future we’ll be able to run simulations of conscious entities inside them.


“Forty years ago we had Pong – two rectangles and a dot. That’s where we were. Now 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality,” said Musk. “If you assume any rate of improvement at all, then the games will become indistinguishable from reality.”

It’s a view shared by Terrile. “If one progresses at the current rate of technology a few decades into the future, very quickly we will be a society where there are artificial entities living in simulations that are much more abundant than human beings.”

If there are many more simulated minds than organic ones, then the chances of us being among the real minds start to look slimmer and slimmer. As Terrile puts it: “If in the future there are more digital people living in simulated environments than there are today, then what is to say we are not part of that already?”
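A back-of-the-envelope way to formalize this counting argument (a sketch, not a formula taken from Bostrom’s paper, and with purely illustrative numbers): if you cannot tell from the inside whether your mind is simulated, an “indifference” assumption puts your odds of being simulated at the fraction of all minds that are simulated.

```latex
% Sketch of the counting argument under the indifference assumption.
% N_sim = number of simulated minds, N_org = number of organic minds.
P(\text{simulated}) = \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{org}}}
% Illustrative numbers only: if simulated minds outnumber organic ones
% by a factor of 10^9, then
% P(\text{simulated}) = \frac{10^9}{10^9 + 1} \approx 1 - 10^{-9},
% which is where "billion to one" odds against base reality come from.
```

The whole weight of the argument therefore rests on whether such simulations are ever run in vast numbers, which is exactly what skeptics like Tegmark and Randall dispute below.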

Reasons to believe that the universe is a simulation include the fact that it behaves mathematically and is broken up into pieces (subatomic particles) like a pixelated video game. “Even things that we think of as continuous – time, energy, space, volume – all have a finite limit to their size. If that’s the case, then our universe is both computable and finite. Those properties allow the universe to be simulated,” Terrile said.

“Quite frankly, if we are not living in a simulation, it is an extraordinarily unlikely circumstance,” he added.

So who has created this simulation? “Our future selves,” said Terrile.

Not everyone is so convinced by the hypothesis. “Is it logically possible that we are in a simulation? Yes. Are we probably in a simulation? I would say no,” said Max Tegmark, a professor of physics at MIT.

“In order to make the argument in the first place, we need to know what the fundamental laws of physics are where the simulations are being made. And if we are in a simulation then we have no clue what the laws of physics are. What I teach at MIT would be the simulated laws of physics,” he said.

Harvard theoretical physicist Lisa Randall is even more skeptical. “I don’t see that there’s really an argument for it,” she said. “There’s no real evidence.”

“It’s also a lot of hubris to think we would be what ended up being simulated.”

Terrile believes that recognizing that we are probably living in a simulation is as game-changing as Copernicus realizing that the Earth was not the center of the universe. “It was such a profound idea that it wasn’t even thought of as an assumption,” he said.

Before Copernicus, scientists had tried to explain the peculiar behaviour of the planets’ motion with complex mathematical models. “When they dropped the assumption, everything else became much simpler to understand.”

That we might be in a simulation is, Terrile argues, a simpler explanation for our existence than the idea that we are the first generation to rise up from primordial ooze and evolve into molecules, biology and eventually intelligence and self-awareness. The simulation hypothesis also accounts for peculiarities in quantum mechanics, particularly the measurement problem, whereby things only become defined when they are observed.

“For decades it’s been a problem. Scientists have bent over backwards to eliminate the idea that we need a conscious observer. Maybe the real solution is you do need a conscious entity like a conscious player of a video game,” he said.

For Tegmark, this doesn’t make sense. “We have a lot of problems in physics and we can’t blame our failure to solve them on simulation.”

How can the hypothesis be put to the test? On one hand, neuroscientists and artificial intelligence researchers can check whether it’s possible to simulate the human mind. So far, machines have proven to be good at playing chess and Go and putting captions on images. But can a machine achieve consciousness? We don’t know.

On the other hand, scientists can look for hallmarks of simulation. “Suppose someone is simulating our universe – it would be very tempting to cut corners in ways that makes the simulation cheaper to run. You could look for evidence of that in an experiment,” said Tegmark.

For Terrile, the simulation hypothesis has “beautiful and profound” implications.

First, it provides a scientific basis for some kind of afterlife or larger domain of reality above our world. “You don’t need a miracle, faith or anything special to believe it. It comes naturally out of the laws of physics,” he said.

Second, it means we will soon have the same ability to create our own simulations.

“We will have the power of mind and matter to be able to create whatever we want and occupy those worlds.”

Original source: https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix


Why we really should ban autonomous weapons: a response

September 20, 2015


We welcome Sam Wallace’s contribution to the discussion on a proposed ban on offensive autonomous weapons. This is a complex issue and there are interesting arguments on both sides that need to be weighed up carefully.

His article, written as a response to an open letter signed by over 2500 AI and robotics researchers, begins with the claim that such a ban is as “unrealistic as the broad relinquishment of nuclear weapons would have been at the height of the cold war.”

This argument misses the mark. First, the letter proposes not unilateral relinquishment but an arms control treaty. Second, nuclear weapons were successfully curtailed by a series of arms-control treaties during the cold war, without which we might not have been here to have this conversation.

After that, his article makes three main points:

1) Banning a weapons system is unlikely to succeed, so let’s not try.

(“It would be impossible to completely stop nations from secretly working on these technologies out of fear that other nations and non-state entities are doing the same.” “It’s not rational to assume that terrorists or a mentally ill lone wolf attacker would respect such an agreement.”)

2) An international arms control treaty would necessarily hurt U.S. national security.

3) Game theory argues against an arms control treaty.

Are all arms control treaties bad?

Note that his first two arguments apply to any weapons system, and could be used to re-title his article “The proposed ban on <insert type here> is unrealistic and dangerous.”

Argument (1) is particularly relevant to chemical and biological weapons, which are arguably (and contrary to Wallace’s claims) even lower-tech and easier to produce than autonomous weapons. Yet the world community has rather successfully banned biological weapons, space-based nuclear weapons, and blinding laser weapons, and even for arms such as chemical weapons, land mines, and cluster munitions, where bans have been breached or not universally ratified, severe stigmatization has limited their use. We wonder whether Wallace supports those bans and, if so, why.

Wallace’s main argument for why autonomous weapons are different from chemical weapons rests on AI systems that “infiltrate and take over the command and control of their enemy.” But this misses the point of the open letter, which is not opposing cyberdefence systems or other defensive weapons. (The treaty under discussion at the UN deals with lethal weapons; a defensive autonomous weapon that targets robots is not lethal.)

Indeed, if one is worried about cyberwarfare, relying on autonomous weapons only makes things worse, since they are easier to hack than human soldiers.

One thing we do agree with Wallace on is that negotiating and implementing a ban will be hard. But as John F. Kennedy emphasized when announcing the Moon missions, hard things are worth attempting when success will greatly benefit the future of humanity.

National security

Regarding argument (2), we agree that all countries need to protect their national security, but we assert that this argues for rather than against an arms control treaty. When President Richard Nixon called for a ban on biological weapons in 1969, he argued that it would strengthen U.S. national security, because U.S. biological warfare research provided a model that other, less powerful nations might easily emulate, to the eventual detriment of U.S. security.

Most of Wallace’s arguments for why a ban would hurt U.S. national security attack proposals that the open letter never makes. For example, he gives many examples of why it’s important to have defensive systems (against hacking, incoming mortars, rockets, drones, robots that physically take control of our aircraft, etc), and warns against trying to “fight future flying robot tanks by using an equine cavalry defense,” but the letter proposes a ban only on offensive, not defensive, weapons.

He argues that we can’t uninvent deep learning and other AI algorithms, but the thousands of AI and robotics signatories aren’t proposing to undo or restrict civilian AI research, merely to limit its military use. Moreover, we can’t uninvent molecular biology or nuclear physics, but we can still try to prevent their use for mass killing.

Wallace also gives some technically flawed arguments for why a ban would hurt U.S. national security. For example, his argument in the “deception” section evaporates when securely encrypted video streaming is used.

His concern that a military superpower such as the U.S. could be defeated by home-made, weaponized civilian drones is absurd, and consideration of such unfeasible scenarios is best confined to computer games. Yes, nations need to protect against major blows to their defensive assets, but home-made pizza drones can’t deliver that. Some advanced future military technology might, and preventing such developments is the purpose of the treaty we advocate.

Finally, Wallace argues that we shouldn’t work towards arms control agreements because people might “merge with machines” into cyborgs or “some time in the next few decades you might also have to get a consciously aware AI weapon to agree to the terms of the treaty” — let’s not let highly speculative future scenarios distract us from the challenge of stopping an arms race today!

Game theory

Wallace makes an argument based on game theory for why arms control treaties can only work if there’s another, more powerful weapon left unregulated that can serve as a deterrent.

First of all, this argument is irrelevant since there’s currently no evidence that offensive autonomous weapons would undermine today’s nuclear deterrence.

Second, even if the argument were relevant, game theory beautifully explains why verifiable and enforceable arms control treaties can enhance the national security of all parties, by changing the incentive structure away from a destructive prisoner’s dilemma situation to a new equilibrium where cooperation is in everybody’s best interest.
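The prisoner’s dilemma framing can be made concrete with a toy payoff table. The sketch below uses hypothetical numbers (none of this comes from Wallace’s article or the open letter): each side chooses to build or refrain from offensive autonomous weapons, and a verifiable, enforceable treaty is modeled simply as a penalty on building. Without the penalty the only equilibrium is mutual build-up; with it, mutual restraint becomes the stable outcome.

```python
# A minimal, hypothetical 2x2 arms-race game (illustrative numbers only).
# Each player chooses BUILD or REFRAIN; payoffs are (row player, column player).

from itertools import product

def payoffs(penalty=0):
    # Classic prisoner's-dilemma ordering without enforcement:
    # mutual restraint (3,3) beats mutual build-up (1,1), but each side
    # is tempted to defect (4 vs. 0). A verifiable treaty adds a penalty
    # to whichever side builds.
    base = {
        ("REFRAIN", "REFRAIN"): (3, 3),
        ("REFRAIN", "BUILD"):   (0, 4),
        ("BUILD",   "REFRAIN"): (4, 0),
        ("BUILD",   "BUILD"):   (1, 1),
    }
    return {
        (r, c): (pr - (penalty if r == "BUILD" else 0),
                 pc - (penalty if c == "BUILD" else 0))
        for (r, c), (pr, pc) in base.items()
    }

def nash_equilibria(p):
    # A cell is a (pure-strategy) Nash equilibrium if neither player
    # can improve their payoff by unilaterally switching moves.
    moves = ("REFRAIN", "BUILD")
    eq = []
    for r, c in product(moves, moves):
        row_ok = all(p[(r, c)][0] >= p[(r2, c)][0] for r2 in moves)
        col_ok = all(p[(r, c)][1] >= p[(r, c2)][1] for c2 in moves)
        if row_ok and col_ok:
            eq.append((r, c))
    return eq

print(nash_equilibria(payoffs(penalty=0)))  # [('BUILD', 'BUILD')]
print(nash_equilibria(payoffs(penalty=2)))  # [('REFRAIN', 'REFRAIN')]
```

The point is not the particular numbers but the structural one made above: enforcement changes the incentive landscape, so restraint no longer depends on trust alone.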

What’s his plan?

What we view as the central weakness of Wallace’s article is that it never addresses the main argument of the open letter: that the end-point of an AI arms race will be disastrous for humanity. The open letter proposes a solution (attempting to stop the arms race with an arms control agreement), but he offers no alternative solution.

Instead, his proposed plan appears to be that all world military powers should develop offensive autonomous weapons as fast as possible. Yet he fails to follow through on his proposal and describe what endpoint he expects it to lead to. Indeed, he warns in his article that one way to prevent terrorism with cheap autonomous weapons is an extreme totalitarian state, but he never explains how his proposed plan will avoid such totalitarianism.

If every terrorist and every disgruntled individual can buy lethal autonomous drones for their pet assassination projects with the same ease that they can buy Kalashnikovs today, how is his proposed AI-militarization plan supposed to stop this? Is he proposing a separate military drone hovering over every city block 24 hours per day, ready to strike suspect citizens without human intervention?

Wallace never attempts to explain why a ban is supported by thousands of AI and robotics experts, by the ambassadors of Germany and Japan, by the International Committee of the Red Cross, by the editorial pages of the Financial Times, and indeed (for the time being) by the stated policy of the U.S. Department of Defense, other than with a dismissive remark about “kumbaya mentality.”

Anybody criticizing an arms-control proposal endorsed by such a diverse and serious-minded group needs to clearly explain what they are proposing instead.

Stuart Russell is a professor of computer science at UC Berkeley, and co-author of the standard textbook, Artificial Intelligence: a Modern Approach. Max Tegmark is a professor of physics at MIT and co-founder of the Future of Life Institute. Toby Walsh is a professor of AI at the University of New South Wales and NICTA, Australia, and president of the AI Access Foundation.

http://www.kurzweilai.net/why-we-really-should-ban-autonomous-weapons-a-response