Deus ex machina: former Google engineer is developing an AI god

October 18, 2017

Intranet service? Check. Autonomous motorcycle? Check. Driverless car technology? Check. Obviously the next logical project for a successful Silicon Valley engineer is to set up an AI-worshipping religious organization.

Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s Backchannel. Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

Levandowski was co-founder of autonomous trucking company Otto, which Uber bought in 2016. He was fired from Uber in May amid allegations that he had stolen trade secrets from Google to develop Otto’s self-driving technology. He must be grateful for this religious fall-back project, first registered in 2015.

The Way of the Future team did not respond to requests for more information about their proposed benevolent AI overlord, but history tells us that new technologies and scientific discoveries have continually shaped religion, killing old gods and giving birth to new ones.

As author Yuval Noah Harari notes: “That is why agricultural deities were different from hunter-gatherer spirits, why factory hands and peasants fantasised about different paradises, and why the revolutionary technologies of the 21st century are far more likely to spawn unprecedented religious movements than to revive medieval creeds.”

Religions, Harari argues, must keep up with the technological advancements of the day or they become irrelevant, unable to answer or understand the quandaries facing their disciples.

“The church does a terrible job of reaching out to Silicon Valley types,” acknowledges Christopher Benek, a pastor in Florida and founding chair of the Christian Transhumanist Association.

Silicon Valley, meanwhile, has sought solace in technology and has developed quasi-religious concepts including the “singularity”, the hypothesis that machines will eventually be so smart that they will outperform all human capabilities, leading to a superhuman intelligence that will be so sophisticated it will be incomprehensible to our tiny fleshy, rational brains.

Anthony Levandowski, the former head of Uber’s self-driving program, with one of the company’s driverless cars in San Francisco. Photograph: Eric Risberg/AP

For futurists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, leading to digital immortality. Others like Elon Musk and Stephen Hawking warn that such systems pose an existential threat to humanity.

“With artificial intelligence we are summoning the demon,” Musk said at a conference in 2014. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”

Benek argues that advanced AI is compatible with Christianity – it’s just another technology that humans have created under guidance from God that can be used for good or evil.

“I totally think that AI can participate in Christ’s redemptive purposes,” he said, by ensuring it is imbued with Christian values.

“Even if people don’t buy organized religion, they can buy into ‘do unto others’.”

For transhumanist and “recovering Catholic” Zoltan Istvan, religion and science converge conceptually in the singularity.

“God, if it exists as the most powerful of all singularities, has certainly already become pure organized intelligence,” he said, referring to an intelligence that “spans the universe through subatomic manipulation of physics”.

“And perhaps, there are other forms of intelligence more complicated than that which already exist and which already permeate our entire existence. Talk about ghost in the machine,” he added.

For Istvan, an AI-based God is likely to be more rational and more attractive than current concepts (“the Bible is a sadistic book”) and, he added, “this God will actually exist and hopefully will do things for us.”

We don’t know whether Levandowski’s Godhead ties into any existing theologies or is a manmade alternative, but it’s clear that advancements in technologies including AI and bioengineering kick up the kinds of ethical and moral dilemmas that make humans seek advice and comfort from a higher power: what will humans do once artificial intelligence outperforms us in most tasks? How will society be affected by the ability to create super-smart, athletic “designer babies” that only the rich can afford? Should a driverless car kill five pedestrians or swerve to the side to kill the owner?

If traditional religions don’t have the answer, AI – or at least the promise of AI – might be alluring.

Original source: https://www.theguardian.com/technology/2017/sep/28/artificial-intelligence-god-anthony-levandowski

Why Haven’t We Met Aliens Yet? Because They’ve Evolved into AI

June 18, 2016

While traveling in Western Samoa many years ago, I met a young Harvard University graduate student researching ants. He invited me on a hike into the jungles to assist with his search for the tiny insect. He told me his goal was to discover a new species of ant, in hopes it might be named after him one day.

Whenever I look up at the stars at night pondering the cosmos, I think of my ant collector friend, kneeling in the jungle with a magnifying glass, scouring the earth. I think of him, because I believe in aliens—and I’ve often wondered if aliens are doing the same to us.

Believing in aliens—or insanely smart artificial intelligences existing in the universe—has become very fashionable in the last 10 years. And discussing its central dilemma, the Fermi paradox, has become even more so. The Fermi paradox states that the universe is very big—with maybe a trillion galaxies that might contain 500 billion stars and planets each—and out of that insanely large number, it would take only a tiny fraction of them to have habitable planets capable of bringing forth life.

Whatever you think, the numbers point to the insane fact that aliens don’t just exist, but probably billions of species of aliens exist. And the Fermi paradox asks: With so many alien civilizations out there, why haven’t we found them? Or why haven’t they found us?
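
To get a feel for the arithmetic behind that claim, here is a rough back-of-envelope sketch in Python. The galaxy and star counts come from the paragraph above; the two fractions are deliberately tiny illustrative guesses, not figures from the article.

    # Back-of-envelope estimate: even tiny fractions of a huge number stay huge.
    galaxies = 1e12             # "maybe a trillion galaxies"
    stars_per_galaxy = 5e11     # "500 billion stars and planets each"
    habitable_fraction = 1e-9   # guess: one star in a billion hosts a habitable planet
    life_fraction = 1e-3        # guess: one habitable planet in a thousand develops life

    total_stars = galaxies * stars_per_galaxy                          # ~5e23
    living_worlds = total_stars * habitable_fraction * life_fraction   # ~5e11

    print(f"Stars overall: {total_stars:.0e}")
    print(f"Worlds with life under these guesses: {living_worlds:.0e}")

Under these assumptions the sketch yields on the order of 10^23 stars and a few hundred billion life-bearing worlds, which is the scale the author is gesturing at.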

The Fermi paradox’s Wikipedia page has dozens of answers about why we haven’t heard from superintelligent aliens, ranging from “it is too expensive to spread physically throughout the galaxy” to “intelligent civilizations are too far apart in space or time” to crazy talk like “it is the nature of intelligent life to destroy itself.”

Given that our planet is only 4.5 billion years old in a universe that many experts think is pushing 14 billion years, it’s safe to say most aliens are way smarter than us. After all, there is a massive divide between qualities of intelligence. There’s ant-level intelligence. There’s human intelligence. And then there’s the hypothetical intelligence of aliens—presumably ones who have reached the singularity.

The singularity, Kevin Kelly, co-founder of Wired Magazine, says, is the point at which “all the change in the last million years will be superseded by the change in the next five minutes.”

If Kelly is correct about how fast the singularity accelerates change—and I think he is—in all probability, many alien species will be trillions of times more intelligent than people.

Put yourself in the shoes of extraterrestrial intelligence and consider what that means. If you were a trillion times smarter than a human being, would you notice the human race at all? Or if you did, would you care? After all, do you notice the 100 trillion microbes or more in your body? No, unless they happen to give you health problems, like E. coli and other sicknesses. More on that later.

One of the big problems with our understanding of aliens has to do with Hollywood. Movies and television have led us to think of aliens as green, slimy creatures traveling around in flying saucers. Nonsense. I think if advanced aliens have just 250 years more evolution than us, they almost certainly won’t be static physical beings anymore—at least not in the molecular sense. Nor will they be artificial intelligences living in machines, which is what I believe humans are evolving into this century. No, becoming machine intelligence is just another passing phase of evolution—one that might only last a few decades for humans, if that.

Truly advanced intelligence will likely be organized on the atomic scale, and likely even on scales far smaller. Aliens will evolve until they are pure, willful conscious energy—and maybe even something beyond that. They long ago realized that biology and ones and zeroes in machines were simply too rudimentary to be very functional. Truly advanced intelligence will be spirit-like—maybe even on par with some people’s ideas of ghosts.

On a long enough time horizon, every biological species would at some point evolve into machines, and then evolve into intelligent energy with a consciousness. Such brilliant life might have the ability to span millions of light years nearly instantaneously throughout the universe, morphing into whatever form it wanted.

As with all evolving life, the key to attaining the highest possible form of being and intelligence is to intimately become and control the best universal elements—those conducive to such goals, especially personal power over nature. Everything else in advanced alien evolution is discarded as nonfunctional and nonessential.

All intelligence in the universe, like all matter and energy, follows patterns—based on rules of physics. We engage—and often battle—those patterns and rules until we understand them, and utilize them as best as possible. Such is evolution. And the universe appears imbued with a drive for life to arise and evolve, as MIT physicist Jeremy England points out in a Quanta Magazine article titled A New Physics Theory of Life.

Back to my ant collector friend in Western Samoa. It would be nice to believe that the difference between the ant collector’s intelligence and the ant’s is the same as that between humans and very sophisticated aliens. Sadly, that is not the case. Not even close.

Given the acceleration of intelligence, the gap between us and a species with just 100 more years of evolution could be a billion times that between an ant and a human. Now consider an added billion years of evolution. This is way beyond comparing apples and oranges.

The crux of the problem with aliens and humans is that we’re not hearing or seeing them because we don’t have ways to understand their language. It’s simply beyond our comprehension and physical abilities. Millions of singularities have already happened, but we’re similar to blind bacteria in our bodies running around cluelessly.

The good news, though, is that we’re about to make contact with the best of the aliens out there. Or rather, they’re about to school us. The reason: the universe is precious, and in approximately a century’s time, humans may be able to conduct physics experiments that could level the entire universe—such as building massive particle accelerators that make the God particle swallow the cosmos whole.

Like a grumpy landlord at the door, alien intelligence will make contact and let us know what we can and can’t do when it comes to messing with the real estate of the universe. Knock. Knock.

Zoltan Istvan is a futurist, journalist, and author of the novel The Transhumanist Wager. He writes an occasional column for Motherboard in which he ruminates on the future beyond natural human ability.

http://motherboard.vice.com/read/why-havent-we-met-aliens-yet-because-theyve-evolved-into-ai

Will technology allow us to transcend the human condition?

June 18, 2016

While it may sound like something straight out of a sci-fi film, the U.S. intelligence community is considering “human augmentation” and its possible implications for national security.

As described in the National Intelligence Council’s 2012 long-term strategic analysis document — the fifth report of its kind — human augmentation is seen as a “game-changer.” The report detailed the potential benefits of brain-machine interfaces and neuro-enhancements, noting that “moral and ethical challenges . . . are inevitable.”

The NIC analysts aren’t the only ones following the rapid growth of technology. Today there is an entire movement, called transhumanism, dedicated to promoting the use of technological advancements to enhance our physical, intellectual and psychological capabilities, ultimately transcending the limitations of the human condition. Its proponents claim that within the next several decades, living well beyond the age of 100 will be an achievable goal.

Coined by biologist and eugenicist Julian Huxley (brother of author Aldous Huxley) in 1957, transhumanism remained the terrain of science fiction authors and fringe philosophers for the better part of the 20th century. The movement gained broader interest as science advanced, leaping forward in credibility in the 1990s with the invention of the World Wide Web, the sequencing of the human genome and the exponential growth of computing power.

New technologies continue to push the limits of life. CRISPR enables scientists to alter specific genes in an organism and make those changes heritable, but the advancement is so recent that regulation is still up for debate. Meanwhile, participants in the “body-hacking” movement are implanting RFID microchips and magnets into their bodies to better take advantage of potentially life-enhancing technology. (Some claim, not unfairly, that these modifications aren’t so different from much more accepted technologies such as pacemakers and intrauterine devices.) Just last week, in a closed-door meeting at Harvard University, a group of nearly 150 scientists and futurists discussed a project to synthesize the human genome, potentially making it possible to create humans with certain kinds of predetermined traits.

Transhumanism, in its most extreme manifestation, is reflective of an increasingly pervasive and influential school of thought: that all problems can and should be solved with the right combination of invention, entrepreneurship and resource allocation. The movement has its critics. Techno-utopianism is often described as the religion of Silicon Valley, in no small part because tech moguls are often the only ones with the resources to pursue it, and the only ones who stand to benefit in the near term.

As the solutions that transhumanists champion slowly enter the market, high prices leave them far out of reach for the typical consumer. Even today, the ability to make use of neuro-enhancing drugs and genetic screening for embryos greatly depends on whether one can afford them. If the benefits of human enhancement accrue only to the upper classes, it seems likely that inequality will be entrenched in ways deeper than just wealth, fundamentally challenging our egalitarian ideals.

And for many religious and philosophical opponents, transhumanism appears at its core to be an anti-human movement. Rather than seeking to improve the human condition through engagement with each other, transhumanists see qualities that make up the human identity as incidental inconveniences — things to override as soon as possible.

But for all the misgivings, transhumanism is making its way from the world of speculative technology into the mainstream. Google recently hired Ray Kurzweil — the inventor best known for his predictions of “the singularity” (simply put, the moment at which artificial intelligence surpasses human intelligence) and his assertions that medical technology will soon allow humans to transcend death — as its chief futurist. At the same time, the Transhumanist Party is floating Zoltan Istvan as its own third-party candidate for president.

The transhumanist movement is growing in followers and gaining media attention, but it’s unclear whether its particular preoccupations are inevitable enough to concern us today. Yet as technology continues to provide tools to manipulate the world around us, it becomes more and more likely that we will reach to manipulate ourselves. What could be the ramifications of a new wave of human enhancement? And what does our increasing fascination with technological futurism say about our priorities today?

https://www.washingtonpost.com/news/in-theory/wp/2016/05/16/will-technology-allow-us-to-transcend-the-human-condition/

This guy is running for president with the goal of using science to cure death and aging

September 7, 2015

Taxes, climate change, the wage gap. These are just a few of the issues that both Republican and Democratic presidential candidates are expected to tackle during their campaigns.

But presidential candidate Zoltan Istvan has another policy issue at the top of his list: death.

Istvan is the founder of the Transhumanist Party, a political party focused on using science and technology to solve most of the world’s problems. With his campaign, Istvan seeks to make longevity research just as big of an issue as social security or immigration.

For Istvan, aging and death are the biggest plagues of our time. And technology is the cure.

“A big part of my own campaign is that aging is actually a disease and not something natural,” Istvan told Tech Insider. “In the 21st century to not be using science and technology for everyone’s direct health and longevity is something that should not be allowed anymore.”

Unfortunately, the government doesn’t see it this way and is investing very little in longevity research, Istvan said.

So the 42-year-old is setting out across the country next month to campaign on the platform of using technology to live forever. But his bus tour will be a bit more flashy than the rest.

Istvan — along with some embedded journalists, scientists, and other transhumanists — will be touring the nation in a converted RV disguised as a coffin — a reminder that the Grim Reaper is coming unless we take action to stop it.

“We have a real chance of stopping death”

Istvan and his supporters will kick off their tour in the so-called “Immortality Bus” on the West Coast, stopping in cities across the nation to sound the alarm that not enough resources are currently being invested in fighting death.

Istvan, like other transhumanists, said he believes that merging technology with human biology can radically extend life. For example, using bionic organs as transplants when our natural organs fail. But this kind of life extension technology will only become possible when people demand that more money be spent on longevity research.

“I think people have just been conditioned to believe that this is just a natural part of existence, that that’s the program,” Istvan said. “And so our job is to uncondition that. To tell them actually it was the program until we reached the 21st century and now all of a sudden we realize that with genetics and bionics and robotics that we have a real chance of stopping death and treating it as something much more similar to a disease than some natural phenomenon.”

For the fiscal year of 2015, Congress allocated about $609.3 billion, or 16% of all federal spending, to the military. Total federal funds invested in the sciences were just $29.81 billion, or 0.78% of the same lot. And just a tiny fraction of that, if any, is being spent directly on things that qualify as longevity research, he said.

“We are not spending any of the money directly that could make us live considerably longer. For example, robotic hearts or 3D printed organs,” he said. “There are things we can do out there, if we just had the money, if the scientists just had the resources, that they could tackle.”

The final design of the “Immortality Bus.” Image: Rachel Lyn

Istvan said that with an investment of $1 trillion in longevity research, the aging process could be stopped in just a decade. And in 20 years, researchers could even be capable of reversing the process, he said.

The “Transhumanist Bill of Rights”

Eventually, Istvan’s bus party will make its way to the nation’s capital to deliver a bill that would require the government to support a longer lifespan via science and technology.

“We are going to end in DC, walk up the steps of the US Capitol building and deliver what we consider a Transhumanist Bill of Rights,” he said. “There needs to be some type of mandate that says it’s illegal to stop or not put forth resources into this type of science, because by not putting money and resources into this type of science you are effectively shortening people’s lives.”

For example, when George W. Bush vetoed bills related to spending federal funds on stem cell research during his presidency, transhumanists would consider that a crime, Istvan said.

While Istvan acknowledges that he doesn’t have any real chance of winning, he said he does hope that his audacious campaign gets a conversation started among other candidates about the future of technology in our country.

“When you are a third-party candidate, half of what you do is entertainment to be honest because you are actually trying to spread a message knowing you have very little chance of winning,” he said. “I know it’s probably going to fall on quite deaf ears, but we are going to deliver it nonetheless.”

http://www.techinsider.io/zoltan-istvan-running-for-president-as-transhumanist-2015-8

6 billionaires who want to live forever

September 7, 2015

A growing number of tech moguls are trying to solve their biggest problem yet: aging.

From reprogramming DNA to printing organs, some of Silicon Valley’s most successful and wealthy leaders are investing in biomedical research and new technologies with hopes of discovering the secret to living longer.

And their investments are beginning to move the needle, said Zoltan Istvan, a futurist and transhumanist presidential candidate.

“I think a lot of the most important work in longevity is coming from a handful of the billionaires,” Istvan told Tech Insider. “There are approximately six or seven billionaires that are very interested in life extension, and they are putting in $40 [million], $50 [million], $100 million out there every year or every few years into this stuff. It makes a big difference when you have these legendary figures saying, ‘Hey, we can do this.'”

Here are some of those billionaires investing in antiaging and longevity research and development:

Peter Thiel

Peter Thiel, the billionaire cofounder of PayPal, is known for his early investment in Facebook, but now he is betting big on biotech. Thiel said he believes antiaging medicine is “structurally unexplored,” according to a report from MIT Technology Review.

“The way people deal with aging is a combination of acceptance and denial,” he told Technology Review in March. “They accept there is nothing they can do about it, and deny it’s going to happen to them.”

Thiel takes growth hormone daily and plans to be cryonically frozen after his death, according to the Technology Review report.

The 47-year-old isn’t accepting or denying it, though. He has invested heavily to try to fight death for the last several years. Back in 2006, he pledged $3.5 million to the Methuselah Foundation, a nonprofit group working on life extension by advancing tissue engineering and regenerative medicine.

Thiel has also invested heavily in biotech companies. Most of his investments in the space are made via his Thiel Foundation, but he has made at least five — including in the DNA laser-printing company Cambrian Genomics and the cancer-drug developer Stemcentrx — via his venture capital firm Founders Fund.

He has also invested $17 million since 2011 in Counsyl, a company that offers DNA screening.

Larry Ellison

The founder of Oracle has said he wishes to live forever and is an avid financial supporter of antiaging research.

The Ellison Medical Foundation, which, according to its website “supports basic biomedical research on aging relevant to understanding lifespan development processes and age-related diseases and disabilities,” has donated about $430 million in grants to medical researchers since 1997, about 80% of which has been focused on antiaging developments.

“Death has never made any sense to me. How can a person be there and then just vanish, just not be there?” Ellison told his biographer Mike Wilson in 2003.

Larry Page

The cofounder of Google and CEO of Alphabet also founded Calico in 2013. Calico, short for “California Life Company,” focuses on antiaging research. In 2014, the company announced it had an investment of $750 million from Google.

Since its launch, Calico has also entered into several partnerships with different organizations to help it cure aging.

Most recently, Calico announced in April that it was teaming up with the Buck Institute for Research on Aging, one of the largest independent, antiaging research organizations.

In 2013, the group garnered some attention for using genetic mutations to increase the lifespan of roundworms to the human equivalent of 400 to 500 years.

Sergey Brin

Sergey Brin, cofounder of Google, has also made big investments in antiaging technology.

The 41-year-old has taken a particular interest in curing Parkinson’s disease. Brin disclosed in 2008 that he carried a gene that puts him at higher risk of developing the disease. He has donated more than $150 million to find a cure for the disease.

For Brin, big data could hold the key to better understanding DNA and preventing neurodegenerative diseases like Parkinson’s.

Brin, now president of Google’s parent company, Alphabet, also pushed for medical research while he headed up Google X, the company’s semisecret moonshot lab.

One of the divisions born out of that lab was the Life Sciences team, which focused on developing things like a glucose-detecting contact lens.

In August, Brin announced that the Life Sciences team would now be its own company under Alphabet and continue to work on “new technologies from early stage R&D to clinical testing — and, hopefully — transform the way we detect, prevent, and manage disease.”

Last year, Andrew Conrad, head of the new Life Sciences division, said that the team was also working on a treatment that would embed nanoparticles in your bloodstream to detect diseases like cancer.

Mark Zuckerberg

In June, during a frank Facebook Q&A, Stephen Hawking asked Mark Zuckerberg what big questions in science he’d like to know the answer to and why.

“I’m most interested in questions about people. What will enable us to live forever? How do we cure all diseases? How does the brain work? How does learning work and how we can empower humans to learn a million times more?” Zuckerberg replied.

Zuckerberg and his wife, Priscilla Chan, along with Brin and his ex-wife Anne Wojcicki, are also founders of the Breakthrough Prize, which awards $3 million to scientists who discover new ways to extend human life.

Sean Parker

For Sean Parker, cofounder of Napster and first president of Facebook, investing in technologies that can extend life is personal.

According to a report from The Washington Post, Parker suffers from life-threatening food allergies and has relatives that suffer from autoimmune disorders.

The 34-year-old billionaire has donated millions to the search for a cure for allergies and to cancer research, according to the report.

http://www.techinsider.io/billionaires-who-want-to-live-forever-2015-9

Transhumanism Is Booming and Big Business Is Noticing

July 21, 2015

I recently had the privilege of being the opening keynote speaker at the Financial Times Camp Alphaville 2015 conference in London. Attending were nearly 1,000 people, including economists, engineers, scientists, and financiers. Amongst robots mingling with guests, panels discussing Greece’s future, and Andrew Fastow describing the fall of Enron in his closing speech, event participants were given a dynamic picture of the ever-changing business landscape and its effect on our lives.

One thing I noticed at the conference was the increasing interest in longevity science–the transhumanist field that aims to control and hopefully even eliminate aging in the near future. Naturally, everyone has a vested interest in some type of control over their aging and biological mortality. We are, at the core, mammals primarily interested in our health, the health of our loved ones, and the health of our species. But the feeling at the conference–and in the media these days too–was more pronounced than before.

With billionaires like Peter Thiel and Larry Ellison openly putting money into aging research, and behemoths like Google recently forming its anti-aging company Calico, there’s real confidence that the human race may end up stopping death in the next few decades. There’s also growing confidence that companies can make fortunes in the immortality quest.

Google Ventures’ President Bill Maris, who helps direct investments into health and science companies, recently made headlines by telling Bloomberg, “If you ask me today, is it possible to live to be 500? The answer is yes.”

As a transhumanist, my number one goal has always been to use science and technology to live in optimum health indefinitely. Until the last few years, this idea was seen mostly as something fringe. But now with the business community getting involved and supporting longevity science, this attitude is inevitably going to go mainstream.

I am thrilled with this. Business has always spurred new industry and quickened the rise of civilization.

However, significant challenges remain. The million-dollar question is: How are we going to overcome death? It’s a great question–and it’s a very common question transhumanists get asked. It’s usually followed by: And is it really possible to overcome death?

Honestly, no one knows the answers definitively yet, but here are the best tactics so far: Inventors like Google’s Ray Kurzweil believe it can be done with machines and mind uploading. SENS Chief Scientist and Transhumanist Party Anti-aging Advisor, Dr. Aubrey de Grey, believes it can be done with biology and medicine. Others believe big data can uncover the best methods for living far longer.

Carmat’s artificial heart — photo by Carmat

Organ failure is often the cause of death, and since I have heart disease running in my family, I’m a big believer in replacing organs–either with 3D printing of new organs or with robotic ones. In fact, in 10 years’ time, some people think it’s possible the robotic heart will be equivalent to the human heart, and then people may electively seek to replace their biological heart. Because cardiovascular disease is the #1 killer in America and around the globe (claiming the lives of about a third of everyone), this type of technology can’t come soon enough.

Entrepreneurs, venture capital firms, and even business media are taking notice of how new transhumanist-oriented companies are emerging and working to overcome death. The next generation of billionaires is likely to come from the biotech industry. But transhumanist technology is much larger than just biotech. It’s all technology that is reinventing the human being as we know it. It’s driverless cars soon to be eliminating the tens of thousands of deaths worldwide from drunk driving accidents. It’s exoskeleton technology already getting wheelchair-bound people standing up and walking. It’s chip implants monitoring our hydration and sugar levels, then telling our smartphones when and what we should eat and drink.

Transhumanism will soon emerge as the coolest, potentially most important industry in the world. Big business is rushing to hire engineers and scientists who can help usher in brand new health products to accommodate our changing biological selves. And, indeed, we are changing. From deafness being wiped out by cochlear implant technology, to stem cell rejuvenation of cancer-damaged organs, to enhanced designer babies created with genetics. This is no longer the future. This is here, today.

Looking forward, fortunes are going to be made by those companies that use radical science and technology to make the human being become the healthiest and strongest entity it can become.

****

Watch my 4-minute video on transhumanism from Financial Times Camp Alphaville 2015

http://www.huffingtonpost.com/zoltan-istvan/transhumanism-is-becoming_b_7807082.html

Can This Man and His Massive Robot Network Save America?

July 19, 2015

The future is forged by pouring a stiff drink, kicking back, and taking a second to question everything. We here at Esquire.com love a crazy-idea-that-just-might-work, so this week, we’re paying tribute to the forward-thinkers of past and present with a series called Esquire Predicts. Because no one gets ahead without imagining what “ahead” looks like.

Zoltan Istvan speaks in complete sentences, sometimes complete paragraphs, usually without stopping to breathe. He’s automatic. It takes him but a moment to process a question, then he’s off—spinning a web of complex information. He then starts building off that information. When he’s done, you have vastly more answers than you were originally searching for.

Istvan is the founder of the Transhumanist Party. Transhumanism is more of a way of life than a traditional political faction. Transhumanists believe that technology can and will continue to make us better; that we should merge our existence ever-closer with machines; that life extension is a beautiful and very real part of the coming future. In October 2014, Istvan founded the Transhumanist Party and became the party’s presumptive presidential nominee. Istvan, a former on-air journalist for National Geographic, is also a novelist and a philosopher. According to his bio, at age 21, he embarked on a multi-year sailing journey around the world with a primary cargo of “500 handpicked books” (mostly classics). He also pioneered an extreme sport known as volcano boarding. On the telephone, he is disarmingly polite.

Can a robot be president? Can that happen?

I have advocated for the use of artificial intelligence to potentially, one day, replace the president of the United States, as well as other politicians. And the reason is that you might actually have an entity that would be truly unselfish, truly not influenced by any type of lobbyist. Now, of course, I’m not [talking about] trying to have a robot today, especially if I’m running for the U.S. presidency. But in the future–maybe 30 years into the future–it’s very possible you could have an artificial intelligence system that can run the country better than a human being.

Why is that?

Because human beings are naturally selfish. Human beings are naturally after their own interests. We are geared towards pursuing our own desires, but oftentimes, those desires have contrasts to the benefit of society, at large, or against the benefit of the greater good. Whereas, if you have a machine, you will be able to program that machine to, hopefully, benefit the greatest good, and really go after that. Regardless of any personal interest that the machine might have. I think it’s based on having a more altruistic living entity that would be able to make decisions, rather than a human.

But what happens if people democratically pick a bad robot?

So, this is the danger of even thinking this way. Because it’s possible that you could get a robot that might become selfish during its term as president. Or it could be hacked, you know? The hacking could be the number one worry that everyone would have with an artificial intelligence leading the country. But, it could also do something crazy, like malfunction, and maybe we wouldn’t even know if it’s necessarily malfunctioning. This happens all the time in people. But the problem is, that far into the future, it wouldn’t be just one entity that’s closed off into some sort of computer that would be walking around. At that stage, an artificial intelligence that is leading the nation would be totally interconnected with all other machines. That presents another situation, because, potentially, it could just take over everything.

That said, though, let’s say we had an on-and-off switch. This is what I have advocated for–a kind of really, really powerful on-and-off switch for any kind of A.I., because I don’t necessarily think we should release A.I. without a guaranteed on-and-off switch. For me, the greater prospect of an artificial intelligence one day leading countries is that we’re also going to be interconnected to them. Within 15 or 20 years, we’ll have cranial implant technology for mindwave-reading headsets that are so advanced that we’ll probably be directly interconnected–our thoughts, our minds, our memories–into these types of artificial entities. And at that point, I think the decision-making would be a dual-process where we would essentially have ourselves tied into artificial intelligence, but we still remain biological thinking creatures. And the artificial intelligences would help us make good decisions. You would always have something overlooking your moral systems. And that thing overlooking you would say, Hey, don’t hurt other people. Don’t hurt things that you love and don’t do things that are against the greater good of society.

Do you imagine a robot getting to a place of having morality?

Yes.

To begin with, I think we’re already getting to a stage where the basic artificial intelligences are discovering moral systems. My senior thesis in college was looking into the moral systems of A.I. and how that could be possible. I think, in many ways, moral systems are simply things that we have programmed into ourselves, either through childhood or just through genetic, ingrained ideas. So the same thing applies when you talk about machines. Eventually we’re gonna get to a situation where we’re always able to tell. Sort of like Asimov’s three laws, which essentially say, ‘You can never hurt any humans, and you must always be good to humans.’ I think we’ll get to that kind of stage where morality always breaks down into good or bad for people. So yeah, I think we’ll absolutely be able to program that into machines. But the real great danger is not our own programming. The real great danger is, how successful will that machine be at reprogramming itself? And will it have incentive to reprogram itself out of its own morality? And that’s dangerous, because I have no doubt that we could program the proper moral systems. It’s really whether a machine becomes smart enough and goes, Hey, human moral systems are not good enough for me.

Doesn’t an A.I. reach a point at which it no longer needs to please us? Does it hit a point of intelligence where its consciousness is moot, because it’s so above our own consciousness?

Yes, 100 percent. I advocate as a futurist and also as a member of the Transhumanist Party, that we never let artificial intelligence completely go on its own. I just don’t see why the human species needs an artificial entity, an artificial intelligence entity, that’s 10,000 times smarter than us. I just don’t see why that could ever be a good thing.

What I advocate for is that, as soon as we get to the point when artificial intelligence can take off and be as smart, or even 10 times more intelligent than us, we stop that research and we have the research of cranial implant technology or the brainwave. And we make that so good so that, when artificial intelligence actually decides–when we actually decide to switch the on-button–human beings will also be a part of that intelligence. We will be merged, basically directly. I see it in terms of: The world will take 100 of its best scientists–maybe even some preachers, religious people, some politicians, people from all different walks of society–and everybody will plug-in and mind upload at one time into this machine. And then when that occurs, we can let the artificial intelligence off, because that way, at least we’ll have some type of human intervention going with this incredible entity that some experts say could increase its intelligence by a thousand times within a few days.

We have to make sure that humans are at least a part of that journey. Because then it becomes something, you know, where it could go very wrong. An artificial intelligence may determine that human beings are completely unnecessary for its life, its existence. And these are not things that we want to have happen. I’m not sure if you’re familiar with my novel, The Transhumanist Wager, but I’ve often considered my book a kind of a bridge to artificial intelligence. In fact, I usually tell people that my novel is the very first book written for an artificial intelligence, because it contains a kind of moral code. Most humans hate the moral code in my novel, but I think it’s much more machine-like. Artificial intelligences, I believe, would probably very much appreciate the somewhat authoritarian moral principles that are in that book. I didn’t write the book as part of my campaign or anything like that, it’s just a fictional novel, but it contains a moral system that humans hate, because there’s no human element in its morality. And this is the danger with artificial intelligence, and why I don’t think we should bring artificial intelligence and just let it run wild–at least not without humans completely immersed into it. It’s a big challenge. We’re gonna find life extension with or without artificial intelligence. We’re gonna get closer to, hopefully, a more utopian society without it. Maybe we want to keep it to the level of a 16-year-old or a 17-year-old adolescent, rather than some fully maxed-out artificial intelligence that becomes 10,000 times smarter than us in just a matter of years. Who knows what could happen? It could be a very dangerous scenario.

But is there precedent for that? Is there an example of any technology that has reached a certain age or point and stopped evolving?

I don’t know if you’ve heard of the Fermi paradox, but it says that there are 2 billion planets in the universe that are potentially life-friendly. And the universe is about 14 billion years old. So, the chances of human beings being the only intelligent form of life in the universe are so minuscule that it’s really kind of crazy to actually–no scientist could ever argue that we would be alone. It’s much more likely that there are hundreds of thousands of other intelligences and other life forms out there in the universe just based on a strictly mathematical formula. And what that means is that artificial intelligence has probably already occurred in the universe. I’m a fan of the simulation theory. I tend to think that most of our existence, if not all of it, is part of a hologram created by some type of other life form, or some type of other artificial intelligence. Now, it may be impossible for us to ever know that, but a bunch of recent studies in string theory physics have proved that.

This means that if there’s something else already out there, it would almost certainly have put limits on our growth of intelligence. And the reason it would have put limits on us is because it doesn’t want us to grow so intelligent that we would one day maybe take away their superpowered intelligence. So, I have this concept called the “singularity disparity,” which says that whatever advanced intelligence evolves, it always puts a roadblock in the way of other intelligences evolving. And the reason this happens is so nobody can take away one’s power, no matter how far up the ladder they’ve gone.

Going back to the mind-upload. Do you see that as a thing that every country would build for its own 100 smartest minds? Or do you imagine it as one individual machine?

Vice allowed me to write [several] articles, and they basically build off each other. The first one asks, Are we approaching an artificial intelligence global arms race? And the main argument is that, whoever creates an artificial intelligence first has such a distinct military advantage over every other nation on the planet that they will forever, or they will at least indefinitely, rule the planet. For example, if we develop it, we can just rewrite all of Russia’s nuclear codes, rewrite all of the Chinese nuclear codes. It’s very important that a nice country, a democratic country, develops A.I. first, to protect other A.I.’s from developing that might be negative, or evil, or used for military purposes. The reason that’s important is that I think we’re probably only gonna end up with one A.I. ever. And for exactly the same idea that I told you about–the singularity disparity, which is once you’ve created an intelligence so smart, the real job of that intelligence is to protect itself from other intelligences becoming more intelligent than it. It’s just kind of like human beings. The way you look at money or the way you look at the success of your child, you always want to make sure that as far as it gets, it can protect itself and continue forward. So I think any type of intelligence, no matter what it is, is going to have this very basic principle to protect the power that it has gained. Therefore, I think whatever nation or whoever develops one artificial intelligence will probably make it so that artificial intelligence always stays ahead of any other developing artificial intelligence at any other point in time. It might even do things like send viruses to a second artificial intelligence, just so it can wipe it out, to protect its grounds. It’s gonna be very similar to national politics.

Are there any other politicians who share your beliefs? Do you have a role model?

You know, I actually have no role models. And it’s funny, I actually get asked this question a lot. After I had been with National Geographic for almost five years, and after a kind of a close call with a land mine in Vietnam, I came back to America and said, I’m going to dedicate my life to transhumanism. I had been covering some war zones and stuff like that for them. So I dedicated myself to transhumanism, and I took a full four years to write my novel, which sort of launched me to a pretty popular place in terms of a futurist and a transhumanist popularizer. About the first six months into the four-year endeavor of writing my novel, I stopped even listening to news, to transhumanist news. I stopped listening to Nick Bostrom and the other philosophers out there. And the reason I did is because I really wanted to come up with new ideas. I felt like the movement, itself, was kind of stagnating. It wasn’t going very far. So I sort of just stopped all the news and stopped reading anyone else and just started creating my ideas. And again, I am not advocating for that worldview in my political campaign, but I do base a huge amount of my philosophies on some of those ideas in that book, which presents its own comprehensive philosophy, which is teleologically egocentric functionalism. But the reason I mention that is that there have been no mentors. And if there is any person that I do follow somewhat closely, at least ideas I like, it’s been Friedrich Nietzsche, but he’s been dead a few hundred years. And at the same time, I wouldn’t say that I actually, from a political standpoint, like many of his ideas. It just happened to be the core of a lot of my own beliefs of trying to modify my body and live indefinitely. What really applies is an evolutionary instinct to become a better entity altogether. So, in short, I don’t have any mentors or anyone that I actually follow, or would necessarily vote for.

What if you lose? Do you have any plans? Do you plan to participate in the next election? Do you have any other political aspirations?

To be honest, the main thing here in 2016–I am doing hundred-hour weeks. I am stressed to the max. We have interviews and videos and documentaries and bus tours and our campaign is real. I mean, I wake up and check my email at two o’clock in the morning, four o’clock in the morning, six o’clock in the morning. It’s an incredibly involved campaign and we’re just in the beginning of it, you know? We’ve got another 14 months to go before we have to concede or something like that. Of course, I stand almost no chance of winning in 2016. But, I have been working, and I discussed this with my wife before I even started the campaign, that the real goal is to try to work and build the Transhumanist Party so that it has a much better shot at 2020 and 2024. That doesn’t mean it’s going to win in 2020 and 2024, of course, but I think we can bring the Transhumanist Party on par with the Libertarian Party or the Green Party, with the sizes of other third parties that can actually make a difference.

And it’s very possible–this is the trick of it all–if we can establish a Transhumanist Party by 2020, then we can get a billionaire on board. I have some very wealthy friends. Right now, they are still trying to determine if my campaign, if the Transhumanist Party, is going to work well, if it’s something that they want. But I think in four years, you put in the time, you establish yourself, you then reach out to some of these very wealthy people. It’s possible you could change the election if you just got one or two very wealthy tech people on board to say, Hey, we have someone that’s on our side, we have someone who wants to take money away from wars and put it directly into science and technology. So that’s the main goal of my campaign right now, is to establish the Transhumanist Party as something that is not only credible, but something that is really worth watching.

In the meantime, we have people running for local offices already. We have someone in New York that’s going to try and do a congressional seat under the Transhumanist Party. We have a mayor in Washington that’s running under the Transhumanist Party. We are trying to spread our roots, so that by the time the future really rolls in–we think by 2020 it is going to be a different game. You know, four more years of technology developing, and the world is going to be really faced with some very strange ethical decisions. In four years, we won’t be talking artificial intelligence as if it’s something on the horizon. We’ll be talking about it as if it’s something within the next presidential election. Then candidates must address the issue because it becomes, after all, the history of civilization.

http://www.esquire.com/news-politics/interviews/a35078/transhumanist-presidential-candidate-zoltan/

What If One Country Achieves the Singularity First?

April 27, 2015

Zoltan Istvan is a futurist, author of The Transhumanist Wager, and founder of and presidential candidate for the Transhumanist Party. He writes an occasional column for Motherboard in which he ruminates on the future beyond natural human ability.

The concept of a technological singularity is tough to wrap your mind around. Even experts have differing definitions. Vernor Vinge, responsible for spreading the idea in the 1990s, believes it’s a moment when growing superintelligence renders our human models of understanding obsolete. Google’s Ray Kurzweil says it’s “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Kevin Kelly, founding editor of Wired, says, “Singularity is the point at which all the change in the last million years will be superseded by the change in the next five minutes.” Even Christian theologians have chimed in, sometimes referring to it as “the rapture of the nerds.”

My own definition of the singularity is: the point where a fully functioning human mind radically and exponentially increases its intelligence and possibilities via physically merging with technology.

All these definitions share one basic premise—that technology will speed up the acceleration of intelligence to a point when biological human understanding simply isn’t enough to comprehend what’s happening anymore.

That also makes a technological singularity something quasi-spiritual, since anything beyond understanding evokes mystery. It’s worth noting that even most naysayers and luddites who disdain the singularity concept don’t doubt that the human race is heading towards it.

In March 2015, I published a Motherboard article titled A Global Arms Race to Create a Superintelligent AI is Looming. The article argued a concept I call the AI Imperative, which says that nations should do all they can to develop artificial intelligence, because whichever country produces an AI first will likely end up ruling the world indefinitely, since that AI will be able to control all other technologies and their development on the planet.

The article generated many thoughtful comments on Reddit Futurology, LessWrong, and elsewhere. I tend not to comment on my own articles in an effort to stay out of the way, but I do always carefully read comment sections. One thing the message boards on this story made me think about was the possibility of a “nationalistic” singularity—what might also be called an exclusive, or private singularity.

If you’re a technophile like me, you probably believe the key to reaching the singularity is two-fold: the creation of a superintelligence, and the ability to merge humans with that intelligence. Without both, it’s probably impossible for people to reach it. With both, it’s probably inevitable.

The technology to merge the human brain with a machine is already under development. In fact, hundreds of thousands of people around the world already have brain implants of some sort, and last year telepathy was performed between researchers in different countries. Thoughts were passed from one mind to another using a machine interface, without speaking a word.

Fast forward 25 years into the future, and some experts like Kurzweil believe we might already be able to upload our entire consciousness into a machine. I tend to agree with him, and I even think it could occur sooner, such as in 15 to 20 years’ time.

Here’s the crux: If an AI exclusively belonged to one nation (which is likely to happen), and the technology of merging human brains and machines grows sufficiently (which is also likely to happen), then you could possibly end up with one nation controlling the pathways into the singularity.

As insane as this sounds, it’s possible that the controlling nation could start offering its citizens the opportunity to be uploaded fully into machines, in preparation to enter the singularity. Whether there would then be two distinct entities—one biological and one uploaded—for every human who chooses to do this is a natural question, and one that could only be decided at the time, probably by governments and law. Furthermore, once uploaded, would your digital self be able to interact with your biological self? Would one self be able to help the other? Or would laws force an either-or situation, where uploaded people’s biological selves must remain in cryogenically frozen states or even be eliminated altogether?

No matter how you look at this, it’s bizarre futurist stuff. And it presents a broad array of challenging ethical issues, since some technologists see the singularity as something akin to a totally new reality or even a so-called digital heaven. And to have one nation or government controlling it, or even attempting to limit it exclusively to its populace, seems potentially morally dubious.

For example, what if America created the AI first, then used its superintelligence to pursue a singularity exclusively for Americans?

(Historically, this wouldn’t be that far off from what many Abrahamic world religions advocate for, such as Christianity or Islam. In both religions, only certain types of people get to go to heaven. Those left behind get tortured for eternity. This concept of exclusivity is the single largest reason I became an atheist at 18.)

Worse, what if a government chose only to allow the super wealthy to pursue its doorway to the singularity—to plug directly into its superintelligent AI? Or what if the government only gave access to high-ranked party officials? For example, how would Russia’s Vladimir Putin deal with this type of power? And it is a tremendous power. After all, you’d be connected to a superintelligence and would likely be able to rewrite all the nuclear arms codes in the world, stop dams and power plants from operating, and create a virus to shut down Wi-Fi worldwide, if you wanted.

Of course, given the option, many people would probably choose not to undergo the singularity at all. I suspect many would choose to remain as they are on Earth. However, some of those people might be keen on acquiring the technology of getting to the singularity. They might want to sell that tech, and offer paid one-way trips for people who want to have a singularity. For that matter, individuals or corporations might try to patent it. What you’d be selling is the path to vast amounts of power and immortality.

Such moral leanings, and the possibility that someone or some group could control, patent, or steal the singularity, ultimately lead us to another imperative: the Singularity Disparity.

The first person or group to experience the singularity will protect and preserve the power and intelligence they’ve acquired in the singularity process—which ultimately means they will do whatever is necessary to lessen the power and intelligence accumulation of the singularity experience for others. That way the original Singularitarians can guarantee their power and existence indefinitely.

In my philosophical novel The Transhumanist Wager, this type of thinking belongs to the Omnipotender, someone who is actively seeking and contending for as much power as possible, and bases their actions on such endeavors.

I’m not trying to argue any of this is good or bad, moral or immoral. I’m just explaining how this phenomenon of the singularity could likely unfold. Assuming I’m correct, and technology continues to grow rapidly, the person who will become the leading omnipotender on Earth is already born.

Of course, religions will appreciate that fact, because such a person will fulfill elements of either the Antichrist or the Second Coming of Jesus, which is important to the apocalyptic beliefs in both Christianity and Islam. At least the “End Times” are really here, faith-touters will be able to finally say.

The good news, though, is that maybe a singularity is not an exclusive event. Maybe there can be many singularities.

A singularity is likely to be mostly a consciousness phenomenon. We will be nearly all digital and interconnected with machines, but we will still be able to recognize ourselves, our values, memories, and our purposes—otherwise I don’t think we’d go through with it. On the cusp of the singularity, our intelligence will begin to grow tremendously. I expect the software of our minds will be able to be rewritten and upgraded almost instantaneously in real time. I also think the hardware we exist through—whatever form of computing it’ll be—will also be able to be reshaped and remade in real time. We’ll learn how to reassemble processors and their particles in the moment, on-demand, probably with the same agility and speed we have when thinking about something, such as figuring out a math problem. We’ll understand the rules and think about what we want, and the best answer, strategy, and path will occur. We’ll get exceedingly efficient at such things, too. And at some point, we won’t see a difference between matter, energy, judgment, and ourselves.

What’s important here is the likely fact that we won’t care much about what’s left on Earth. In just days or even hours, the singularity will probably render us into some form of energy that can organize and advance itself superintelligently, perhaps into a trillion minds on a million Earths.

If the singularity occurs like this, then, on the surface, there’s little ethically wrong with a national or private singularity, because other nations or groups could implement their own in time. However, the larger issue is: How would people on Earth protect themselves from someone or some group in the singularity who decides the Earth and its inhabitants aren’t worth keeping around, or worse, wants to enslave everyone on Earth? There’s no easy answer to this, but the question itself makes me frown upon the singularity idea, in exactly the same way I frown upon an omnipotent God and heaven. I don’t like any other single entity or group having that much possible power over another.

 

http://motherboard.vice.com/read/what-if-one-country-achieves-the-singularity-first