Can This Man and His Massive Robot Network Save America?

July 19, 2015


The future is forged by pouring a stiff drink, kicking back, and taking a second to question everything. We here at Esquire.com love a crazy-idea-that-just-might-work, so this week, we’re paying tribute to the forward-thinkers of past and present with a series called Esquire Predicts. Because no one gets ahead without imagining what “ahead” looks like.

Zoltan Istvan speaks in complete sentences, sometimes complete paragraphs, usually without stopping to breathe. He’s automatic. It takes him but a moment to process a question, then he’s off—spinning a web of complex information. He then starts building off that information. When he’s done, you have vastly more answers than you were originally searching for.

Istvan is the founder of the Transhumanist Party. Transhumanism is more a way of life than a traditional political faction. Transhumanists believe that technology can and will continue to make us better; that we should merge our existence ever closer with machines; and that life extension is a beautiful and very real part of the coming future. Istvan founded the party in October 2014 and became its presumptive presidential nominee. A former on-air journalist for National Geographic, he is also a novelist and a philosopher. According to his bio, at age 21 he embarked on a multi-year sailing journey around the world with a primary cargo of “500 handpicked books” (mostly classics). He also pioneered an extreme sport known as volcano boarding. On the telephone, he is disarmingly polite.

Can a robot be president? Can that happen?

I have advocated for the use of artificial intelligence to potentially, one day, replace the president of the United States, as well as other politicians. And the reason is that you might actually have an entity that would be truly unselfish, truly not influenced by any type of lobbyist. Now, of course, I’m not [talking about] trying to have a robot today, especially if I’m running for the U.S. presidency. But in the future–maybe 30 years into the future–it’s very possible you could have an artificial intelligence system that can run the country better than a human being.

Why is that?

Because human beings are naturally selfish. Human beings are naturally after their own interests. We are geared towards pursuing our own desires, but oftentimes those desires conflict with the benefit of society at large, or with the greater good. Whereas, if you have a machine, you will be able to program that machine to, hopefully, benefit the greatest good, and really go after that, regardless of any personal interest that the machine might have. I think it’s based on having a more altruistic living entity that would be able to make decisions, rather than a human.

But what happens if people democratically pick a bad robot?

So, this is the danger of even thinking this way. Because it’s possible that you could get a robot that might become selfish during its term as president. Or it could be hacked, you know? Hacking could be the number one worry that everyone would have with an artificial intelligence leading the country. But it could also do something crazy, like malfunction–and maybe we wouldn’t even know it was malfunctioning. This happens all the time in people. The problem is, that far into the future, it wouldn’t be just one entity closed off in some sort of computer walking around. At that stage, an artificial intelligence leading the nation would be totally interconnected with all other machines. That presents another situation, because, potentially, it could just take over everything.


That said, though, let’s say we had an on-and-off switch. This is what I have advocated for–a really, really powerful on-and-off switch for any kind of A.I., because I don’t think we should release A.I. without a guaranteed on-and-off switch. For me, the greater prospect of an artificial intelligence one day leading countries is that we’re also going to be interconnected with them. Within 15 or 20 years, we’ll have cranial implant technology or brainwave-reading headsets that are so advanced that we’ll probably be directly interconnected–our thoughts, our minds, our memories–with these types of artificial entities. And at that point, I think the decision-making would be a dual process where we would essentially have ourselves tied into artificial intelligence, but we would still remain biological thinking creatures. And the artificial intelligences would help us make good decisions. You would always have something overlooking your moral systems. And that thing overlooking you would say, Hey, don’t hurt other people. Don’t hurt things that you love, and don’t do things that are against the greater good of society.

Do you imagine a robot getting to a place of having morality?

Yes.

To begin with, I think we’re already getting to a stage where basic artificial intelligences are discovering moral systems. My senior thesis in college looked into the moral systems of A.I. and how that could be possible. I think, in many ways, moral systems are simply things that we have programmed into ourselves, either through childhood or through genetic, ingrained ideas. So the same thing applies when you talk about machines. Eventually we’re gonna get to a situation where we’re always able to tell [machines what’s good and what’s bad]. Sort of like Asimov’s three laws, which essentially say, ‘You can never hurt any humans, and you must always be good to humans.’ I think we’ll get to that kind of stage where morality always breaks down into good or bad for people. So yeah, I think we’ll absolutely be able to program that into machines. But the real great danger is not our own programming. The real great danger is, how successful will that machine be at reprogramming itself? And will it have incentive to reprogram itself out of its own morality? That’s dangerous, because I have no doubt that we could program the proper moral systems. It’s really whether a machine becomes smart enough and goes, Hey, human moral systems are not good enough for me.

Doesn’t an A.I. reach a point at which it no longer needs to please us? Does it hit a point of intelligence where its consciousness is moot, because it’s so above our own consciousness?

Yes, 100 percent. I advocate, as a futurist and also as a member of the Transhumanist Party, that we never let artificial intelligence completely go off on its own. I just don’t see why the human species needs an artificial intelligence entity that’s 10,000 times smarter than us. I just don’t see why that could ever be a good thing.

What I advocate for is that, as soon as we get to the point when artificial intelligence can take off and be as smart as us, or even 10 times more intelligent than us, we stop that research and perfect the research into cranial implant or brainwave technology. And we make that so good that, when we actually decide to flip the on-switch, human beings will also be a part of that intelligence. We will be merged, basically directly. I see it in terms of: the world will take 100 of its best scientists–maybe even some preachers, religious people, some politicians, people from all different walks of society–and everybody will plug in and mind-upload at one time into this machine. And then, when that occurs, we can let the artificial intelligence take off, because that way at least we’ll have some type of human intervention going with this incredible entity that some experts say could increase its intelligence by a thousand times within a few days.

We have to make sure that humans are at least a part of that journey, because otherwise it could go very wrong. An artificial intelligence may determine that human beings are completely unnecessary for its life, its existence. And these are not things that we want to have happen. I’m not sure if you’re familiar with my novel, The Transhumanist Wager, but I’ve often considered my book a kind of bridge to artificial intelligence. In fact, I usually tell people that my novel is the very first book written for an artificial intelligence, because it contains a kind of moral code. Most humans hate the moral code in my novel, but I think it’s much more machine-like. Artificial intelligences, I believe, would probably very much appreciate the somewhat authoritarian moral principles that are in that book. I didn’t write the book as part of my campaign or anything like that–it’s just a fictional novel–but it contains a moral system that humans hate, because there’s no human element in its morality. And this is the danger with artificial intelligence, and why I don’t think we should create artificial intelligence and just let it run wild–at least not without humans completely immersed in it. It’s a big challenge. We’re gonna find life extension with or without artificial intelligence. We’re gonna get closer to, hopefully, a more utopian society without it. Maybe we want to keep it to the level of a 16- or 17-year-old adolescent, rather than some fully maxed-out artificial intelligence that becomes 10,000 times smarter than us in just a matter of years. Who knows what could happen? It could be a very dangerous scenario.

But is there precedent for that? Is there an example of any technology that has reached a certain age or point and stopped evolving?

I don’t know if you’ve heard of the Fermi paradox, but it starts from the fact that there are some 2 billion potentially life-friendly planets in the universe, and the universe is about 14 billion years old. So the chances of human beings being the only intelligent form of life in the universe are so minuscule that no scientist could ever argue that we would be alone. It’s much more likely that there are hundreds of thousands of other intelligences and other life forms out there in the universe, just based on a strictly mathematical formula. And what that means is that artificial intelligence has probably already occurred in the universe. I’m a fan of the simulation theory. I tend to think that most of our existence, if not all of it, is part of a hologram created by some other type of life form, or some other type of artificial intelligence. Now, it may be impossible for us to ever know that, but a bunch of recent studies in string theory physics suggest it’s possible.
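To make that “strictly mathematical formula” concrete, here is a minimal back-of-envelope sketch. It assumes–purely for illustration, not from the interview–that each of the N = 2 billion life-friendly planets independently develops intelligent life with some small probability p:

\[
P(\text{Earth is alone}) = (1 - p)^{N-1}, \qquad N = 2 \times 10^{9}.
\]

Even a tiny per-planet probability collapses that chance. With p = 10^{-6}:

\[
(1 - 10^{-6})^{2 \times 10^{9} - 1} \approx e^{-2000} \approx 0.
\]

The conclusion, of course, is only as strong as the assumed p, which is exactly the unknown the Fermi paradox turns on.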

This means that if there’s something else already out there, it would almost certainly have put limits on our growth of intelligence. And the reason it would have put limits on us is that it doesn’t want us to grow so intelligent that we might one day take away its superpowered intelligence. So I have this concept called the “singularity disparity,” which says that whatever advanced intelligence evolves always puts a roadblock in the way of other intelligences evolving. And the reason this happens is so that nobody can take away one’s power, no matter how far up the ladder they’ve gone.

Going back to the mind-upload. Do you see that as a thing that every country would build for its own 100 smartest minds? Or do you imagine it as one individual machine?

Vice allowed me to write [several] articles, and they basically build off each other. The first one asks, Are we approaching an artificial intelligence global arms race? And the main argument is that whoever creates an artificial intelligence first has such a distinct military advantage over every other nation on the planet that they will forever–or at least indefinitely–rule the planet. For example, if we develop it, we can just rewrite all of Russia’s nuclear codes, rewrite all of the Chinese nuclear codes. It’s very important that a nice country, a democratic country, develops A.I. first, to prevent other A.I.s from developing that might be negative, or evil, or used for military purposes. The reason that’s important is that I think we’re probably only gonna end up with one A.I., ever. And for exactly the same idea that I told you about–the singularity disparity: once you’ve created an intelligence so smart, the real job of that intelligence is to protect itself from other intelligences becoming more intelligent than it. It’s kind of like human beings: the way you look at money, or the way you look at the success of your child, you always want to make sure that as far as it gets, it can protect itself and continue forward. So I think any type of intelligence, no matter what it is, is going to have this very basic principle of protecting the power that it has gained. Therefore, I think whatever nation or whoever develops one artificial intelligence will probably make it so that artificial intelligence always stays ahead of any other developing artificial intelligence at any other point in time. It might even do things like send viruses to a second artificial intelligence, just to wipe it out and protect its ground. It’s gonna be very similar to national politics.

Are there any other politicians who share your beliefs? Do you have a role model?

You know, I actually have no role models. And it’s funny, I get asked this question a lot. After I had been with National Geographic for almost five years–I had been covering some war zones and stuff like that for them–and after a close call with a land mine in Vietnam, I came back to America and said, I’m going to dedicate my life to transhumanism. So I dedicated myself to transhumanism, and I took a full four years to write my novel, which sort of launched me to a pretty popular place as a futurist and a transhumanist popularizer. About six months into that four-year endeavor, I stopped even listening to the news, to transhumanist news. I stopped listening to Nick Bostrom and the other philosophers out there. And the reason I did is that I really wanted to come up with new ideas. I felt like the movement itself was kind of stagnating. It wasn’t going very far. So I just stopped all the news, stopped reading anyone else, and started creating my own ideas. And again, I am not advocating for that worldview in my political campaign, but I do base a huge amount of my philosophy on some of the ideas in that book, which presents its own comprehensive philosophy, teleological egocentric functionalism. But the reason I mention that is that there have been no mentors. And if there is any person whose ideas I do follow somewhat closely, it’s Friedrich Nietzsche–but he’s been dead more than a hundred years. And at the same time, I wouldn’t say that, from a political standpoint, I actually like many of his ideas. His thinking just happened to be at the core of a lot of my own beliefs about trying to modify my body and live indefinitely. What really applies is an evolutionary instinct to become a better entity altogether. So, in short, I don’t have any mentors, or anyone that I actually follow, or would necessarily vote for.

What if you lose? Do you have any plans? Do you plan to participate in the next election? Do you have any other political aspirations?

To be honest, the main thing here in 2016 is that I am doing hundred-hour weeks. I am stressed to the max. We have interviews and videos and documentaries and bus tours, and our campaign is real. I mean, I wake up and check my email at two o’clock in the morning, four o’clock in the morning, six o’clock in the morning. It’s an incredibly involved campaign, and we’re just in the beginning of it, you know? We’ve got another 14 months to go before we have to concede or something like that. Of course, I stand almost no chance of winning in 2016. But I have been working–and I discussed this with my wife before I even started the campaign–toward the real goal, which is to build the Transhumanist Party so that it has a much better shot in 2020 and 2024. That doesn’t mean it’s going to win in 2020 or 2024, of course, but I think we can bring the Transhumanist Party on par with the Libertarian Party or the Green Party–with the sizes of other third parties that can actually make a difference.

And it’s very possible–this is the trick of it all–that if we can establish a Transhumanist Party by 2020, then we can get a billionaire on board. I have some very wealthy friends. Right now, they are still trying to determine if my campaign, if the Transhumanist Party, is going to work well, if it’s something that they want. But I think in four years, you put in the time, you establish yourself, and you then reach out to some of these very wealthy people. It’s possible you could change the election if you just got one or two very wealthy tech people on board to say, Hey, we have someone that’s on our side, we have someone who wants to take money away from wars and put it directly into science and technology. So the main goal of my campaign right now is to establish the Transhumanist Party as something that is not only credible, but really worth watching.

In the meantime, we have people running for local offices already. We have someone in New York who’s going to try for a congressional seat under the Transhumanist Party. We have someone in Washington running for mayor under the Transhumanist Party. We are trying to spread our roots, so that by the time the future really rolls in–we think by 2020–it is going to be a different game. You know, four more years of technology developing, and the world is going to be faced with some very strange ethical decisions. In four years, we won’t be talking about artificial intelligence as if it’s something on the horizon. We’ll be talking about it as if it’s something within the next presidential election. Candidates will then have to address the issue, because it bears, after all, on the history of civilization.

http://www.esquire.com/news-politics/interviews/a35078/transhumanist-presidential-candidate-zoltan/
