Exponential Growth Will Transform Humanity in the Next 30 Years

February 25, 2017


By Peter Diamandis

As we close out 2016, if you’ll allow me, I’d like to take a risk and venture into a topic I’m personally compelled to think about… a topic that will seem far out to most readers.

Today’s extraordinary rate of exponential growth may do much more than just disrupt industries. It may actually give birth to a new species, reinventing humanity over the next 30 years.

I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a “Meta-Intelligence,” a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions. In this post, I’m investigating the driving forces behind such an evolutionary step, the historical pattern we are about to repeat, and the implications thereof. Again, I acknowledge that this topic seems far-out, but the forces at play are huge and the implications are vast. Let’s dive in…

A Quick Recap: Evolution of Life on Earth in 4 Steps

About 4.6 billion years ago, our solar system, the sun and the Earth were formed.

Step 1: 3.5 billion years ago, the first simple life forms, called “prokaryotes,” came into existence. These prokaryotes were super-simple, microscopic single-celled organisms, basically a bag of cytoplasm with free-floating DNA. They had neither a distinct nucleus nor specialized organelles.

Step 2: Fast-forwarding one billion years to 2.5 billion years ago, the next step in evolution created what we call “eukaryotes”—life forms that distinguished themselves by incorporating biological ‘technology’ into themselves: technology that allowed them to manipulate energy (via mitochondria) and information (via chromosomes) far more efficiently. Fast forward another billion years for the next step.

Step 3: 1.5 billion years ago, these early eukaryotes began working collaboratively and formed the first “multi-cellular life,” of which you and I are the ultimate examples (a human is a multicellular creature of 10 trillion cells).

Step 4: The final step I want to highlight happened some 400 million years ago, when lungfish crawled out of the oceans onto the shores, and life evolved from the oceans onto land.

The Next Stages of Human Evolution: 4 Steps

Today, at a massively accelerated rate—some 100 million times faster than the steps I outlined above—life is undergoing a similar evolution. In this next stage of evolution, we are going from evolution by natural selection (Darwinism) to evolution by intelligent direction. Allow me to draw the analogy for you:

Step 1: Simple humans today are analogous to prokaryotes. Simple life, each life form independent of the others, competing and sometimes collaborating.

Step 2: Just as eukaryotes were created by ingesting technology, humans will incorporate technology into our bodies and brains, allowing us to make vastly more efficient use of information (via brain-computer interfaces, or BCI) and energy.

Step 3: Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.

Step 4: Finally, humanity is about to crawl out of the gravity well of Earth to become a multiplanetary species. Our journey to the moon, Mars, asteroids and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago.

The 4 Forces Driving the Evolution and Transformation of Humanity

Four primary driving forces are leading us towards the transformation of humanity into a meta-intelligence, both on and off the Earth:

  1. We’re wiring our planet
  2. Emergence of brain-computer interface
  3. Emergence of AI
  4. Opening of the space frontier

Let’s take a look.

1. Wiring the Planet: Today, there are 2.9 billion people connected online. Within the next six to eight years, that number is expected to increase to nearly 8 billion, with each individual on the planet having access to a megabit-per-second connection or better. The wiring is taking place through the deployment of 5G on the ground, plus networks being deployed by Facebook, Google, Qualcomm, Samsung, Virgin, SpaceX and many others. Within a decade, every single human on the planet will have access to multi-megabit connectivity, the world’s information, and massive computational power on the cloud.

2. Brain-Computer Interface: A multitude of labs and entrepreneurs are working to create lasting, high-bandwidth connections between the digital world and the human neocortex (I wrote about that in detail here). Ray Kurzweil predicts we’ll see human-cloud connection by the mid-2030s, just 18 years from now. In addition, entrepreneurs like Bryan Johnson (and his company Kernel) are committing hundreds of millions of dollars towards this vision. The end results of connecting your neocortex with the cloud are twofold: first, you’ll have the ability to increase your memory capacity and/or cognitive function millions of fold; second, via a global mesh network, you’ll have the ability to connect your brain to anyone else’s brain and to emerging AIs, just like our cell phones, servers, watches, cars and all devices are becoming connected via the Internet of Things.

3. Artificial Intelligence/Human Intelligence: Next, and perhaps most significantly, we are on the cusp of an AI revolution. Artificial intelligence, powered by deep learning and funded by companies such as Google, Facebook, IBM, Samsung and Alibaba, will continue to rapidly accelerate and drive breakthroughs. Cumulative “intelligence” (both artificial and human) is the single greatest predictor of success for both a company and a nation. For this reason, besides the emerging AI “arms race,” we will soon see a race focused on increasing overall human intelligence. Whatever challenges we might have in creating a vibrant brain-computer interface (e.g., designing long-term biocompatible sensors or nanobots that interface with your neocortex), those challenges will fall quickly over the next couple of decades as AI power tools give us ever-increasing problem-solving capability. It is an exponential atop an exponential. More intelligence gives us the tools to solve connectivity and mesh problems, and in turn create greater intelligence.

4. Opening the Space Frontier: Finally, it’s important to note that the human race is on the verge of becoming a multiplanetary species. Thousands of years from now, whatever we’ve evolved into, we will look back at these next few decades as the moment in time when the human race moved off Earth irreversibly. Today, billions of dollars are being invested privately into the commercial space industry. Efforts led by SpaceX are targeting humans on Mars, while efforts by Blue Origin are looking at taking humanity back to the moon, and plans by my own company, Planetary Resources, strive to unlock near-infinite resources from the asteroids.

In Conclusion

The rate of human evolution is accelerating as we transition from the slow and random process of “Darwinian natural selection” to a hyper-accelerated and precisely-directed period of “evolution by intelligent direction.” In this post, I chose not to discuss the power being unleashed by such gene-editing techniques as CRISPR-Cas9. Consider this yet another tool able to accelerate evolution by our own hand.

The bottom line is that change is coming, faster than ever considered possible. All of us leaders, entrepreneurs and parents have a huge responsibility to inspire and guide the transformation of humanity on and off the Earth. What we do over the next 30 years—the bridges we build to abundance—will impact the future of the human race for millennia to come. We truly live during the most exciting time ever in human history.

https://singularityhub.com/2016/12/21/exponential-growth-will-transform-humanity-in-the-next-30-years/

The Fourth Industrial Revolution Is Here

February 25, 2017

The Fourth Industrial Revolution is upon us and now is the time to act.

Everything is changing each day and humans are making decisions that affect life in the future for generations to come.

We have gone from steam engines to steel mills to computers, and now to the Fourth Industrial Revolution: a digital economy driven by artificial intelligence and big data, and a new system that introduces a new story of our future, enabling different economic and human models.

Will the Fourth Industrial Revolution put humans first and empower technologies to give humans a better quality of life with cleaner air, water, food, health, a positive mindset and happiness? HOPE…

http://www.huffingtonpost.com/craig-zamary/the-fourth-industrial-rev_3_b_12423658.html

New AI-Based Search Engines are a “Game Changer” for Science Research

November 14, 2016

By Nicola Jones, Nature magazine

A free AI-based scholarly search engine that aims to outdo Google Scholar is expanding its corpus of papers to cover some 10 million research articles in computer science and neuroscience, its creators announced on 11 November. Since its launch last year, it has been joined by several other AI-based academic search engines, most notably a relaunched effort from computing giant Microsoft.

Semantic Scholar, from the non-profit Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington, unveiled its new format at the Society for Neuroscience annual meeting in San Diego. Some scientists who were given an early view of the site are impressed. “This is a game changer,” says Andrew Huberman, a neurobiologist at Stanford University, California. “It leads you through what is otherwise a pretty dense jungle of information.”

The search engine first launched in November 2015, promising to sort and rank academic papers using a more sophisticated understanding of their content and context. The popular Google Scholar has access to about 200 million documents and can scan articles that are behind paywalls, but it searches merely by keywords. By contrast, Semantic Scholar can, for example, assess which citations to a paper are most meaningful, and rank papers by how quickly citations are rising—a measure of how ‘hot’ they are.
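To make the “hotness” idea concrete, here is a toy sketch of ranking papers by citation velocity; the data and scoring are invented for illustration and are not Semantic Scholar’s actual algorithm.

```python
# Toy "hotness" score: rank papers by how quickly they are accumulating
# citations in recent years. Data and scoring are invented for illustration.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    new_citations_by_year: dict  # year -> citations received in that year

def citation_velocity(paper, window=(2015, 2016)):
    """Average new citations per year over a recent window."""
    counts = [paper.new_citations_by_year.get(year, 0) for year in window]
    return sum(counts) / len(counts)

papers = [
    Paper("Established classic", {2000: 120, 2015: 5, 2016: 4}),
    Paper("Rising star",         {2014: 10, 2015: 60, 2016: 150}),
]

for p in sorted(papers, key=citation_velocity, reverse=True):
    print(f"{p.title}: velocity = {citation_velocity(p):.1f} citations/year")
```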

When first launched, Semantic Scholar was restricted to 3 million papers in the field of computer science. Thanks in part to a collaboration with AI2’s sister organization, the Allen Institute for Brain Science, the site has now added millions more papers and new filters catering specifically for neurology and medicine; these filters enable searches based, for example, on which part of the brain or cell type a paper investigates, which model organisms were studied and what methodologies were used. Next year, AI2 aims to index all of PubMed and expand to all the medical sciences, says chief executive Oren Etzioni.

“The one I still use the most is Google Scholar,” says Jose Manuel Gómez-Pérez, who works on semantic searching for the software company Expert System in Madrid. “But there is a lot of potential here.”

Microsoft’s revival

Semantic Scholar is not the only AI-based search engine around, however. Computing giant Microsoft quietly released its own AI scholarly search tool, Microsoft Academic, to the public this May, replacing its predecessor, Microsoft Academic Search, which the company stopped adding to in 2012.

Microsoft’s academic search algorithms and data are available for researchers through an application programming interface (API) and the Open Academic Society, a partnership between Microsoft Research, AI2 and others. “The more people working on this the better,” says Kuansan Wang, who is in charge of Microsoft’s effort. He says that Semantic Scholar is going deeper into natural-language processing—that is, understanding the meaning of full sentences in papers and queries—but that Microsoft’s tool, which is powered by the semantic search capabilities of the firm’s web-search engine Bing, covers more ground, with 160 million publications.

Like Semantic Scholar, Microsoft Academic provides useful (if less extensive) filters, including by author, journal or field of study. And it compiles a leaderboard of most-influential scientists in each subdiscipline. These are the people with the most ‘important’ publications in the field, as ranked by a recursive algorithm (freely available) that judges papers as important if they are cited by other important papers. The top neuroscientist for the past six months, according to Microsoft Academic, is Clifford Jack of the Mayo Clinic, in Rochester, Minnesota.
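The recursive definition above is essentially the same fixed point that PageRank computes over the web graph. A rough sketch over a toy citation graph (illustrative only; Microsoft’s actual algorithm and data will differ) might look like this:

```python
# Sketch of the recursive importance idea: a paper is important if important
# papers cite it. This is the same fixed-point computation as PageRank; the
# graph and parameters below are invented for illustration.
def importance_scores(cited_by, damping=0.85, iterations=50):
    """`cited_by` maps each paper to the list of papers that cite it."""
    papers = list(cited_by)
    # How many papers each paper cites (its out-degree in the citation graph).
    out_degree = {p: sum(p in refs for refs in cited_by.values()) for p in papers}
    score = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iterations):
        score = {
            p: (1 - damping) / len(papers)
               + damping * sum(score[citer] / max(out_degree[citer], 1)
                               for citer in cited_by[p])
            for p in papers
        }
    return score

# Toy graph: each value lists the papers citing the key.
graph = {
    "Classic": ["PaperA", "PaperB", "Survey"],  # cited by everything
    "PaperA":  ["PaperB"],
    "PaperB":  ["PaperA"],
    "Survey":  [],                              # cites others, never cited
}
scores = importance_scores(graph)
print(max(scores, key=scores.get))  # "Classic" comes out on top
```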

Other scholars say that they are impressed by Microsoft’s effort. The search engine is getting close to combining the advantages of Google Scholar’s massive scope with the more-structured results of subscription bibliometric databases such as Scopus and the Web of Science, says Anne-Wil Harzing, who studies science metrics at Middlesex University, UK, and has analysed the new product. “The Microsoft Academic phoenix is undeniably growing wings,” she says. Microsoft Research says it is working on a personalizable version—where users can sign in so that Microsoft can bring applicable new papers to their attention or notify them of citations to their own work—by early next year.

Other companies and academic institutions are also developing AI-driven software to delve more deeply into content found online. The Max Planck Institute for Informatics, based in Saarbrücken, Germany, for example, is developing an engine called DeepLife specifically for the health and life sciences. “These are research prototypes rather than sustainable long-term efforts,” says Etzioni.

In the long term, AI2 aims to create a system that will answer science questions, propose new experimental designs or throw up useful hypotheses. “In 20 years’ time, AI will be able to read—and more importantly, understand—scientific text,” Etzioni says.

This article is reproduced with permission and was first published on November 11, 2016.

https://www.scientificamerican.com/article/new-ai-based-search-engines-are-a-ldquo-game-changer-rdquo-for-science-research/

Bill Gates talks about why artificial intelligence is nearly here and how to solve two big problems it creates

July 10, 2016


Bill Gates is excited about the rise of artificial intelligence but acknowledged the arrival of machines with greater-than-human capabilities will create some unique challenges.

After years of working on the building blocks of speech recognition and computer vision, Gates said enough progress has been made to ensure that in the next 10 years there will be robots to do tasks like driving and warehouse work as well as machines that can outpace humans in certain areas of knowledge.

“The dream is finally arriving,” Gates said, speaking with wife Melinda Gates on Wednesday at the Code Conference. “This is what it was all leading up to.”

However, as he said in an interview with Recode last year, such machine capabilities will pose two big problems.

The first is that it will eliminate a lot of existing types of jobs. Gates said that creates a need for a lot of retraining, but noted that until schools have class sizes under 10 and people can retire at a reasonable age and take ample vacation, he isn’t worried about a lack of need for human labor.

The second issue is, of course, making sure humans remain in control of the machines. Gates has talked about that in the past, saying that he plans to spend time with people who have ideas on how to address that issue, noting work being done at Stanford, among other places.

And, in Gatesian fashion, he suggested a pair of books that people should read, including Nick Bostrom’s book on superintelligence and Pedro Domingos’ “The Master Algorithm.”

Melinda Gates noted that you can tell a lot about where her husband’s interest is by the books he has been reading. “There have been a lot of AI books,” she said.

http://www.recode.net/2016/6/1/11833340/bill-gates-ai-artificial-intelligence

How Artificial Superintelligence Will Give Birth To Itself

June 18, 2016


There’s a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here’s how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it’s critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what’s called “recursive self-improvement.” As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It’s an advantage that we biological humans simply don’t have.


As AI theorist Eliezer Yudkowsky notes in his essay “Artificial Intelligence as a Positive and Negative Factor in Global Risk”:

An artificial intelligence could rewrite its code from scratch — it could change the underlying dynamics of optimization. Such an optimization process would wrap around much more strongly than either evolution accumulating adaptations or humans accumulating knowledge. The key implication for our purposes is that AI might make a huge jump in intelligence after reaching some threshold of criticality.

When it comes to the speed of these improvements, Yudkowsky says it’s important not to confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What’s more, there’s no reason to believe that an AI won’t show a sudden huge leap in intelligence, resulting in an “intelligence explosion” (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; “we went from caves to skyscrapers in the blink of an evolutionary eye.”

The Path to Self-Modifying AI

Code that’s capable of altering its own instructions while it’s still executing has been around for a while. Typically, it’s done to reduce the instruction path length and improve performance, or to simply reduce repetitively similar code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.
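As a minimal illustration of the idea (a short Python sketch rather than the instruction-level rewriting the paragraph alludes to), a running program can generate new source for one of its own functions and swap it in:

```python
# Minimal illustration of a program rewriting part of itself at run time.
# Python functions are ordinary objects, so new source can be compiled with
# exec() and swapped in place of the old definition.

def step(x):
    return x + 1          # original, naive behaviour

def improve():
    """Generate new source code for `step` and install it in this module."""
    new_source = "def step(x):\n    return x * 2   # rewritten version\n"
    namespace = {}
    exec(new_source, namespace)            # compile the replacement
    globals()["step"] = namespace["step"]  # swap it in while the program runs

print(step(10))   # 11 - behaviour before self-modification
improve()
print(step(10))   # 20 - the program has changed its own code path
```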

But as Our Final Invention author James Barrat told me, we do have software that can write software.

“Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve,” he told io9. “It’s also used to write innovative, high-powered software.”


For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They have chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing “Hello World!” with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.
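For a flavour of the genetic-algorithm approach, here is a deliberately simplified sketch that evolves the target string directly, instead of evolving brainfuck programs as the Primary Objects project does; every number in it is an arbitrary choice made for illustration.

```python
# Simplified genetic-algorithm sketch: evolve random strings toward "Hello World!"
# using only mutation and survival of the fittest. Population size, mutation
# rate and generation limit are arbitrary illustrative choices.
import random
import string

TARGET = "Hello World!"
ALPHABET = string.ascii_letters + string.punctuation + " "

def fitness(candidate):
    """Number of characters already matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Randomly replace each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:20]                                  # keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(180)]

print(f"Best string after {generation} generations: {population[0]!r}")
```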

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term “machine learning.”

The Pentagon is particularly interested in this game. Through DARPA, it’s hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming.

In conjunction with this kind of research, cognitive approaches to brain emulation could also lead to human-like AI. Given that they’d be computer-based, and assuming they could have access to their own source code, these agents could embark upon self-modification. More realistically, however, it’s likely that a superintelligence will emerge from an expert system set with the task of improving its own intelligence. Alternatively, specialised expert systems could design other artificial intelligences, and through their cumulative efforts, develop a system that eventually becomes greater than the sum of its parts.

Oh, No You Don’t

Given that ASI poses an existential risk, it’s important to consider the ways in which we might be able to prevent an AI from improving itself beyond our capacity to control. That said, limitations or provisions may exist that will preclude an AI from embarking on the path towards self-engineering. James D. Miller, author of Singularity Rising, provided me with a list of four reasons why an AI might not be able to do so:

1. It might have source code that causes it to not want to modify itself.

2. The first human equivalent AI might require massive amounts of hardware and so for a short time it would not be possible to get the extra hardware needed to modify itself.

3. The first human equivalent AI might be a brain emulation (as suggested by Robin Hanson) and this would be as hard to modify as it is for me to modify, say, the copy of Minecraft that my son constantly uses. This might happen if we’re able to copy the brain before we really understand it. But still you would think we could at least speed up everything.

4. If it has terminal values, it wouldn’t want to modify these values because doing so would make it less likely to achieve its terminal values.

And by terminal values Miller is referring to an ultimate goal, or an end-in-itself. Yudkowsky describes it as a “supergoal.” A major concern is that an amoral ASI will sweep humanity aside as it works to accomplish its terminal value, or that its ultimate goal is the re-engineering of humanity in a grossly undesirable way (at least from our perspective).

Miller says it could get faster simply by running on faster processors.

“It could also make changes to its software to get more efficient, or design or steal better hardware. It would do this so it could better achieve its terminal values,” he says. “An AI that mastered nanotechnology would probably expand at almost the speed of light, incorporating everything into itself.”

But we may not be completely helpless. According to Barrat, once scientists have achieved Artificial General Intelligence — a human-like AI — they could restrict its access to networks, hardware, and software, in order to prevent an intelligence explosion.


“However, as I propose in my book, an AI approaching AGI may develop survival skills like deceiving its makers about its rate of development. It could play dumb until it comprehended its environment well enough to escape it.”

In terms of being able to control this process, Miller says that the best way would be to create an AI that only wanted to modify itself in ways we would approve.

“So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity,” he says. “This way as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us.”

Fast or Slow?

As noted earlier, a recursively improving AI could increase its intelligence extremely quickly. Or it could be a process that takes time, for various reasons, such as technological complexity or limited access to resources. It’s an open question as to whether we can expect a fast or slow take-off event.


“I’m a believer in the fast take-off version of the intelligence explosion,” says Barrat. “Once a self-aware, self-improving AI of human-level or better intelligence exists, it’s hard to know how quickly it will be able to improve itself. Its rate of improvement will depend on its software, hardware, and networking capabilities.”

But to be safe, Barrat says we should assume that the recursive self-improvement of an AGI will occur very rapidly. As a computer it will wield computer superpowers — the ability to run 24/7 without pause, rapidly access vast databases, conduct complex experiments, perhaps even clone itself to swarm computational problems, and more.

“From there, the AGI would be interested in pursuing whatever goals it was programmed with — such as research, exploration, or finance. According to AI theorist Steve Omohundro’s Basic Drives analysis, self-improvement would be a sure-fire way to improve its chances of success,” says Barrat. “So would self-protection, resource acquisition, creativity, and efficiency. Without a provably reliable ethical system, its drives would conflict with ours, and it would pose an existential threat.”

Miller agrees.

“I think shortly after an AI achieves human level intelligence it will upgrade itself to super intelligence,” he told me. “At the very least the AI could make lots of copies of itself each with a minor different change and then see if any of the new versions of itself were better. Then it could make this the new ‘official’ version of itself and keep doing this. Any AI would have to fear that if it doesn’t quickly upgrade itself, another AI would, and would take all of the resources of the universe for itself.”

Which brings up a point that’s not often discussed in AI circles — the potential for AGIs to compete with other AGIs. If even a modicum of self-preservation is coded into a strong artificial intelligence (and that sense of self-preservation could be the detection of an obstruction to its terminal value), it could enter into a lightning-fast arms race along those verticals designed to ensure its ongoing existence and future freedom-of-action. And in fact, while many people fear a so-called “robot apocalypse” aimed directly at extinguishing our civilisation, I personally feel that the real danger to our ongoing existence lies in the potential for us to be collateral damage as advanced AGIs battle it out for supremacy; we may find ourselves in the line of fire. Indeed, building a safe AI will be a monumental — if not intractable — task.

http://www.gizmodo.com.au/2016/06/how-artificial-superintelligence-will-give-birth-to-itself/

The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near

June 04, 2016


In this blog post I will delve into the brain and explain its basic information processing machinery and compare it to deep learning. I do this by moving step-by-step along the brain’s electrochemical and biological information processing pipeline and relating it directly to the architecture of convolutional nets. Thereby we will see that a neuron and a convolutional net are very similar information processing machines. While performing this comparison, I will also discuss the computational complexity of these processes and thus derive an estimate for the brain’s overall computational power. I will use these estimates, along with knowledge from high performance computing, to show that it is unlikely that there will be a technological singularity in this century.
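For context, the flavour of such an estimate can be conveyed with a back-of-envelope calculation; the round numbers below are commonly quoted figures chosen purely for illustration, not the author’s own, much more detailed accounting.

```python
# Back-of-envelope estimate of synaptic operations per second in the brain.
# All four numbers are rough, commonly quoted figures used here purely for
# illustration; changing any of them shifts the result by orders of magnitude.
neurons = 8.6e10                 # ~86 billion neurons
synapses_per_neuron = 1e4        # order-of-magnitude average
mean_firing_rate_hz = 1.0        # most neurons fire rarely, averaged brain-wide
ops_per_synaptic_event = 10      # multiply-accumulate plus assorted bookkeeping

ops_per_second = (neurons * synapses_per_neuron
                  * mean_firing_rate_hz * ops_per_synaptic_event)
print(f"~{ops_per_second:.1e} operations per second")   # ~8.6e+15, i.e. petascale
```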

This blog post is complex as it arcs over multiple topics in order to unify them into a coherent framework of thought. I have tried to make this article as readable as possible, but I might have not succeeded in all places. Thus, if you find yourself in an unclear passage it might become clearer a few paragraphs down the road where I pick up the thought again and integrate it with another discipline.

First I will give a brief overview about the predictions for a technological singularity and topics which are aligned with that. Then I will start the integration of ideas between the brain and deep learning. I finish with discussing high performance computing and how this all relates to predictions about a technological singularity.

The part that compares the brain’s information processing steps to deep learning is self-contained, and readers who are not interested in predictions of a technological singularity may skip ahead to that part.

Part I: Evaluating current predictions of a technological singularity

There were a lot of headlines recently about predictions that artificial intelligence will reach super-human intelligence as early as 2030 and that this might herald the beginning of human extinction, or at least dramatically alter everyday life. How was this prediction made?

More at: http://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/

In a future brave new world will it be possible to live forever?

April 23, 2016


January is a month for renewal and for change. Many of us have been gifted shiny new fitness trackers, treated ourselves to some new gadget or other, or upgraded to the latest smartphone. As we huff and puff our way out of the season of excess we find ourselves wishing we could trade in our overindulged bodies for the latest model.

The reality is that, even with the best of care, the human body eventually ceases to function but if I can upgrade my smartphone, why can’t I upgrade myself? Using technology, is it not possible to live forever(ish)?

After all, humans have been “upgrading” themselves in various ways for centuries. The invention of writing allowed us to offload memories, suits of armour made the body invincible to spears, eyeglasses gave us perfect 20/20 vision, the list goes on.

This is something that designer and author Natasha Vita-More has been thinking about for a long time. In 1983 she wrote The Transhumanist Manifesto, setting out her vision for a future where technology can lead to “radical life extension” – if not living forever, then living for a lot longer than is currently possible.

Vita-More has also designed a prototype whole body prosthetic she calls Primo PostHuman. This is a hypothetical artificial body that could replace our own and into which we could, in theory, upload our consciousness. This is more in the realm of living forever but is a concept as distant to us as Leonardo da Vinci’s sketch of a flying machine was to 15th century Europeans.

Even so, while the replacement body seems much closer to science fiction than science, recent advances in robotics and prosthetics have not only given us artificial arms that can detect pressure and temperature but limbs that can be controlled by thoughts using a brain-computer interface.

As a transhumanist, Vita-More is excited by these scientific developments. She defines a transhumanist to be “a person who wants to engage with technology, extend the human lifespan, intervene with the disease of aging, and wants to look critically at all of these things”.

Transhumanism, she explains, looks at not just augmenting or bypassing the frailties of the human body but also improving intelligence, eradicating diseases and disabilities, and even equipping us with greater empathy.

“The goal is to stay alive as long as possible, as healthy as possible, with greater consciousness or humaneness. No-one wants to stay alive drooling in a wheelchair,” she adds.

Who wouldn’t want to be smarter, stronger, healthier and kinder? What could possibly go wrong?

A lot, says Dr Fiachra O’Brolcháin, a Marie Curie/Assistid Research Fellow at the Institute of Ethics, Dublin City University whose research involves the ethics of technology.

Take for example being taller than average: this correlates with above average income so it is a desirable trait. But if medical technology allowed for parents to choose a taller than average child, then this could lead to a “height race”, where each generation becomes taller and taller, he explains.

“Similarly, depending on the society, even non-homophobic people might select against having gay children (assuming this were possible) if they thought this would be a disadvantage. We might find ourselves inaugurating an era of ‘liberal eugenics’, in which future generations are created according to consumer choice.”

Then there is the problem of affordability. Most of us do not have the financial means to acquire the latest cutting-edge tech until prices drop and it becomes mainstream. Imagine a future where only the rich could access human enhancements, live long lives and avoid health problems.

Elysium, starring Matt Damon, takes this idea to its most extreme, leading to a scenario similar to what O’Brolcháin describes as “an unbridgeable divide between the enhanced and the unenhanced”.

Despite the hyper focus on these technological enhancements that come with real risks and ethical dilemmas, the transhumanist movement also seems to be about kicking back against – or at least questioning – what society expects of you.

“There’s a certain parameter of what is normal or natural. There’s a certain parameter of what one is supposed to be,” says Vita-More.

“You’re supposed to go to school at a certain age, get married at a certain age, produce children, retire and grow old. You’re supposed to live until you are 80, be happy, die and make way for the young.”

Vita-More sees technology as freeing us from these societal and biological constraints. Why can’t we choose who we are beyond the body we were born with? Scholars on the sociology of the early Web showed that Cyberspace became a place for this precise form of expression. Maybe technology will continue to provide a platform for this reinvention of what it is to be human.

Maybe, where we’re going, we won’t need bodies.

Digital heaven

Nell Watson’s job is to think about the future and she says: “I often wonder if, since we could be digitised from the inside out – not in the next 10 years but sometime in this century – we could create a kind of digital heaven or playground where our minds will be uploaded and we could live with our friends and family away from the perils of the physical world.

“It wouldn’t really matter if our bodies suddenly stopped functioning, it wouldn’t be the end of the world. What really matters is that we could still live on.”

In other words you could simply upload to a new, perhaps synthetic, body.

As a futurist with Singularity University (SU), a Silicon Valley-based corporation that is part university, part business incubator, Watson, in her own words, is “someone who looks at the world today and projects into the future; who tries to figure out what current trends mean in terms of the future of technology, society and how these two things intermingle”.

She talks about existing technologies that are already changing our bodies and our minds: “There are experiments using DNA origami. It’s a new technique that came out a few years ago and uses the natural folding abilities of DNA to create little Lego blocks out of DNA on a tiny, tiny scale. You can create logic gates – the basic components of computers – out of these things.

“These are being used experimentally today to create nanobots that can go inside the bloodstream and destroy leukaemia cells, and in trials they have already cured two people of leukaemia. It is not science fiction: it is fact.”

Nanobots are also able to carry out distributed computing (that is, communicate with each other) inside living things, she says, explaining that this has been done successfully with cockroaches.

Recording everything

“The cockroach essentially has an on-board computer and if you scale this up to humans and optimise it there is no reason why we can’t have our smartphones inside our bodies instead of carrying them around,” she says.

This on-board AI travelling around our bloodstream would act as a co-pilot: seeing what you see, experiencing what you experience, recording everything and maybe even mapping every single neuron in your brain while it’s at it. And with a digitised copy of your brain you (whatever ‘you’ is) could, in theory, be uploaded to the cloud.

Does this mean that we could never be disconnected from the web, ever again? What if your ‘internal smartphone’ is hacked? Could our thoughts be monitored?

Humans have become so dependent on our smartphones and so used to sharing our data with third parties, that this ‘co-pilot’ inside us might be all too readily accepted without deeper consideration.

Already, novel technologies are undermining privacy to an alarming degree, says O’Brolcháin.

“In a world without privacy, there is a great risk of censorship and self-censorship. Ultimately, this affects people’s autonomy – their ability to decide what sort of life they want to lead for themselves, to develop their own conception of the good life.

“This is one of the great ironies of the current wave of technologies – they are born of individualistic societies and often defended in the name of individual rights but might create a society that can no longer protect individual autonomy,” he warns.

Okay, so an invincible body and a super brain have their downsides but what about technology that expands our consciousness, making us wiser, nicer, all-round better folks? Could world peace be possible if we enhanced our morality?

“If you take a look at humanity you can see fighting, wars, terrorism, anger. Television shows are full of violence, society places an emphasis on wealth and greed. I think part of the transhumanist scope is [to offset this with] intentional acts of kindness,” says Vita-More, who several times during our interview makes the point that technology alone cannot evolve to make a better world unless humanity evolves alongside.

Vita-More dismisses the notion of enhancement for enhancement’s sake, a nod to the grinder movement of DIY body-hacking, driven mostly by curiosity.

Examples include implanting magnets into the fingertips to detect magnetic waves or sticking an RFID chip into your arm as UK professor Kevin Warwick did, allowing him to pass through security doors with a wave of his hand.

Moral enhancements

Along the same lines as Vita-More’s thinking, O’Brolcháin says “some philosophers argue that moral enhancements will be necessary if enhancements are not to be used for malevolent ends”.

“Moral enhancement may result in people who are less greedy, less aggressive, more concerned with addressing serious global issues like climate change,” he muses.

But the difficulty is deciding on what is moral. After all, he says, the ‘good’ groups like Isis want to promote is vastly at odds with the values of Ireland. So who gets to decide what moral enhancements are developed? Perhaps they will come with the latest internal smartphone upgrade or installed at birth by government.

Technology does make life better and it is an exciting time for robotics, artificial intelligence and nanotechnology. But humans have a long way to go before we work out how we can co-exist with the future we are building right now.

http://www.irishtimes.com/business/in-a-future-brave-new-world-will-it-be-possible-to-live-forever-1.2498427

Artificial intelligence: ‘Homo sapiens will be split into a handful of gods and the rest of us’

November 8, 2015


If you wanted relief from stories about tyre factories and steel plants closing, you could try relaxing with a new 300-page report from Bank of America Merrill Lynch which looks at the likely effects of a robot revolution.

But you might not end up reassured. Though it promises robot carers for an ageing population, it also forecasts huge numbers of jobs being wiped out: up to 35% of all workers in the UK and 47% of those in the US, including white-collar jobs, seeing their livelihoods taken away by machines.

Haven’t we heard all this before, though? From the Luddites of the 19th century to print unions protesting about computers in the 1980s, there have always been people fearful about the march of mechanisation. And yet we keep on creating new job categories.

However, there are still concerns that the combination of artificial intelligence (AI) – which is able to make logical inferences about its surroundings and experience – married to ever-improving robotics, will wipe away entire swaths of work and radically reshape society.

“The poster child for automation is agriculture,” says Calum Chace, author of Surviving AI and the novel Pandora’s Brain. “In 1900, 40% of the US labour force worked in agriculture. By 1960, the figure was a few per cent. And yet people had jobs; the nature of the jobs had changed.

“But then again, there were 21 million horses in the US in 1900. By 1960, there were just three million. The difference was that humans have cognitive skills – we could learn to do new things. But that might not always be the case as machines get smarter and smarter.”

What if we’re the horses to AI’s humans? To those who don’t watch the industry closely, it’s hard to see how quickly the combination of robotics and artificial intelligence is advancing. Last week a team from the Massachusetts Institute of Technology released a video showing a tiny drone flying through a lightly forested area at 30mph, avoiding the trees – all without a pilot, using only its onboard processors. Of course it can outrun a human-piloted one.

MIT has also built a “robot cheetah” which can jump over obstacles of up to 40cm without help. Add to that the standard progress of computing, where processing power doubles roughly every 18 months (or, equally, prices for capability halve), and you can see why people like Chace are getting worried.
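That “doubling every 18 months” figure compounds quickly; a two-line check, assuming the doubling rate simply continues:

```python
# If processing power doubles every 18 months, the compound growth over a
# decade is roughly a hundredfold (assuming the trend simply continues).
years = 10
doublings = years * 12 / 18
print(f"Growth over {years} years: about {2 ** doublings:.0f}x")   # ~102x
```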


But the incursion of AI into our daily life won’t begin with robot cheetahs. In fact, it began long ago; the edge is thin, but the wedge is long. Cooking systems with vision processors can decide whether burgers are properly cooked. Restaurants can give customers access to tablets with the menu and let people choose without needing service staff.

Lawyers who used to slog through giant files for the “discovery” phase of a trial can turn it over to a computer. An “intelligent assistant” called Amy will, via email, set up meetings autonomously. Google announced last week that you can get Gmail to write appropriate responses to incoming emails. (You still have to act on your responses, of course.)

Further afield, Foxconn, the Taiwanese company which assembles devices for Apple and others, aims to replace much of its workforce with automated systems. The AP news agency gets news stories written automatically about sports and business by a system developed by Automated Insights. The longer you look, the more you find computers displacing simple work. And the harder it becomes to find jobs for everyone.

So how much impact will robotics and AI have on jobs, and on society? Carl Benedikt Frey, who with Michael Osborne in 2013 published the seminal paper The Future of Employment: How Susceptible Are Jobs to Computerisation? – on which the BoA report draws heavily – says that he doesn’t like to be labelled a “doomsday predictor”.

He points out that even while some jobs are replaced, new ones spring up that focus more on services and interaction with and between people. “The fastest-growing occupations in the past five years are all related to services,” he tells the Observer. “The two biggest are Zumba instructor and personal trainer.”

Frey observes that technology is leading to a rarification of leading-edge employment, where fewer and fewer people have the necessary skills to work in the frontline of its advances. “In the 1980s, 8.2% of the US workforce were employed in new technologies introduced in that decade,” he notes. “By the 1990s, it was 4.2%. For the 2000s, our estimate is that it’s just 0.5%. That tells me that, on the one hand, the potential for automation is expanding – but also that technology doesn’t create that many new jobs now compared to the past.”

This worries Chace. “There will be people who own the AI, and therefore own everything else,” he says. “Which means homo sapiens will be split into a handful of ‘gods’, and then the rest of us.

“I think our best hope going forward is figuring out how to live in an economy of radical abundance, where machines do all the work, and we basically play.”

Arguably, we might be part of the way there already; is a dance fitness programme like Zumba anything more than adult play? But, as Chace says, a workless lifestyle also means “you have to think about a universal income” – a basic, unconditional level of state support.

Perhaps the biggest problem is that there has been so little examination of the social effects of AI. Frey and Osborne are contributing to Oxford University’s programme on the future impacts of technology; at Cambridge, Observer columnist John Naughton and David Runciman are leading a project to map the social impacts of such change. But technology moves fast; it’s hard enough figuring out what happened in the past, let alone what the future will bring.

But some jobs probably won’t be vulnerable. Does Frey, now 31, think that he will still have a job in 20 years’ time? There’s a brief laugh. “Yes.” Academia, at least, looks safe for now – at least in the view of the academics.


The danger of change is not destitution, but inequality

Productivity is the secret ingredient in economic growth. In the late 18th century, the cleric and scholar Thomas Malthus notoriously predicted that a rapidly rising human population would result in misery and starvation.

But Malthus failed to anticipate the drastic technological changes – from the steam-powered loom to the combine harvester – that would allow the production of food and the other necessities of life to expand even more rapidly than the number of hungry mouths. The key to economic progress is this ability to do more with the same investment of capital and labour.

The latest round of rapid innovation, driven by the advance of robots and AI, is likely to power continued improvements.

Recent research led by Guy Michaels at the London School of Economics looked at detailed data across 14 industries and 17 countries over more than a decade, and found that the adoption of robots boosted productivity and wages without significantly undermining jobs.

Robotisation has reduced the number of working hours needed to make things; but at the same time as workers have been laid off from production lines, new jobs have been created elsewhere, many of them more creative and less dirty. So far, fears of mass layoffs as the machines take over have proven almost as unfounded as those that have always accompanied other great technological leaps forward.

There is an important caveat to this reassuring picture, however. The relatively low-skilled factory workers who have been displaced by robots are rarely the same people who land up as app developers or analysts, and technological progress is already being blamed for exacerbating inequality, a trend Bank of America Merrill Lynch believes may continue in future.

So the rise of the machines may generate huge economic benefits; but unless it is carefully managed, those gains may be captured by shareholders and highly educated knowledge workers, exacerbating inequality and leaving some groups out in the cold. Heather Stewart

http://www.theguardian.com/business/2015/nov/07/artificial-intelligence-homo-sapiens-split-handful-gods

Why we really should ban autonomous weapons: a response

September 20, 2015


We welcome Sam Wallace’s contribution to the discussion on a proposed ban on offensive autonomous weapons. This is a complex issue and there are interesting arguments on both sides that need to be weighed up carefully.

His article, written as a response to an open letter signed by over 2500 AI and robotics researchers, begins with the claim that such a ban is as “unrealistic as the broad relinquishment of nuclear weapons would have been at the height of the cold war.”

This argument misses the mark. First, the letter proposes not unilateral relinquishment but an arms control treaty. Second, nuclear weapons were successfully curtailed by a series of arms-control treaties during the cold war, without which we might not have been here to have this conversation.

After that, his article makes three main points:

1) Banning a weapons system is unlikely to succeed, so let’s not try.

(“It would be impossible to completely stop nations from secretly working on these technologies out of fear that other nations and non-state entities are doing the same.” “It’s not rational to assume that terrorists or a mentally ill lone wolf attacker would respect such an agreement.”)

2) An international arms control treaty would necessarily hurt U.S. national security.

3) Game theory argues against an arms control treaty.

Are all arms control treaties bad?

Note that his first two arguments apply to any weapons system, and could be used to re-title his article “The proposed ban on <insert type here> is unrealistic and dangerous.”

Argument (1) is particularly relevant to chemical and biological weapons, which are arguably (and contrary to Wallace’s claims) even more low-tech and easy to produce than autonomous weapons. Yet the world community has rather successfully banned biological weapons, space-based nuclear weapons, and blinding laser weapons, and even for arms such as chemical weapons, land mines, and cluster munitions, where bans have been breached or not universally ratified, severe stigmatization has limited their use. We wonder if Wallace supports those bans and, if so, why.

Wallace’s main argument for why autonomous weapons are different from chemical weapons rests on AI systems that “infiltrate and take over the command and control of their enemy.” But this misses the point of the open letter, which is not opposing cyberdefence systems or other defensive weapons. (The treaty under discussion at the UN deals with lethal weapons; a defensive autonomous weapon that targets robots is not lethal.)

Indeed, if one is worried about cyberwarfare, relying on autonomous weapons only makes things worse, since they are easier to hack than human soldiers.

One thing we do agree with Wallace on is that negotiating and implementing a ban will be hard. But as John F. Kennedy emphasized when announcing the Moon missions, hard things are worth attempting when success will greatly benefit the future of humanity.

National security

Regarding argument (2), we agree that all countries need to protect their national security, but we assert that this argues for rather than against an arms control treaty. When President Richard Nixon argued for a ban on biological weapons in 1969, he argued that this would strengthen U.S. national security, because U.S. biological warfare research created a model that other, less powerful, nations might easily emulate, to the eventual detriment of U.S. security.

Most of Wallace’s arguments for why a ban would hurt U.S. national security are attacking imaginary proposals that the open letter doesn’t make. For example, he gives many examples of why it’s important to have defensive systems (against hacking, incoming mortars, rockets, drones, robots that physically take control of our aircraft, etc), and warns of trying to “fight future flying robot tanks by using an equine cavalry defense,” but the letter proposes a ban only on offensive, not defensive weapons.

He argues that we can’t uninvent deep learning and other AI algorithms, but the thousands of AI and robotics signatories aren’t proposing to undo or restrict civilian AI research, merely to limit its military use. Moreover, we can’t uninvent molecular biology or nuclear physics, but we can still try to prevent their use for mass killing.

Wallace also gives some technically flawed arguments for why a ban would hurt U.S. national security. For example, his argument in the “deception” section evaporates when securely encrypted video streaming is used.

His concern that a military superpower such as the U.S. could be defeated by home-made, weaponized civilian drones is absurd, and consideration of such unfeasible scenarios is best confined to computer games. Yes, nations need to protect against major blows to their defensive assets, but home-made pizza drones can’t deliver that. Some advanced future military technology might, and preventing such developments is the purpose of the treaty we advocate.

Finally, Wallace argues that we shouldn’t work towards arms control agreements because people might “merge with machines” into cyborgs or “some time in the next few decades you might also have to get a consciously aware AI weapon to agree to the terms of the treaty” — let’s not let highly speculative future scenarios distract us from the challenge of stopping an arms race today!

Game theory

Wallace makes an argument based on game theory for why arms control treaties can only work if there is another, more powerful weapon left unregulated that can be used as a deterrent.

First of all, this argument is irrelevant since there’s currently no evidence that offensive autonomous weapons would undermine today’s nuclear deterrence.

Second, even if the argument were relevant, game theory beautifully explains why verifiable and enforceable arms control treaties can enhance the national security of all parties, by changing the incentive structure away from a destructive prisoner’s dilemma situation to a new equilibrium where cooperation is in everybody’s best interest.
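That incentive shift can be made concrete with a toy payoff matrix; the numbers are invented for illustration, but they show how attaching a credible penalty to defection moves the only equilibrium from a mutual arms race to mutual restraint.

```python
# Toy arms-race game between two states. Payoffs are invented for illustration
# (higher is better); "penalty" models the cost that a verifiable, enforceable
# treaty attaches to building offensive autonomous weapons.
import itertools

ACTIONS = ["restrain", "build"]

PAYOFF = {
    ("restrain", "restrain"): 3,  # stable peace, resources saved
    ("restrain", "build"):    0,  # strategically exposed
    ("build",    "restrain"): 4,  # temporary unilateral advantage
    ("build",    "build"):    1,  # costly, destabilising arms race
}

def payoff(own, other, penalty=0.0):
    """Payoff to a player choosing `own` against an opponent choosing `other`."""
    return PAYOFF[(own, other)] - (penalty if own == "build" else 0.0)

def nash_equilibria(penalty=0.0):
    """All pure-strategy Nash equilibria of the symmetric two-player game."""
    eqs = []
    for a, b in itertools.product(ACTIONS, repeat=2):
        a_best = all(payoff(a, b, penalty) >= payoff(alt, b, penalty) for alt in ACTIONS)
        b_best = all(payoff(b, a, penalty) >= payoff(alt, a, penalty) for alt in ACTIONS)
        if a_best and b_best:
            eqs.append((a, b))
    return eqs

print("No treaty:      ", nash_equilibria(penalty=0.0))  # [('build', 'build')]
print("Enforced treaty:", nash_equilibria(penalty=2.0))  # [('restrain', 'restrain')]
```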

What’s his plan?

What we view as the central weakness of Wallace’s article is that it never addresses the main argument of the open letter: that the end-point of an AI arms race will be disastrous for humanity. The open letter proposes a solution (attempting to stop the arms race with an arms control agreement), but he offers no alternative solution.

Instead, his proposed plan appears to be that all world military powers should develop offensive autonomous weapons as fast as possible. Yet he fails to follow through on his proposal and describe what endpoint he expects it to lead to. Indeed, he warns in his article that one way to prevent terrorism with cheap autonomous weapons is an extreme totalitarian state, but he never explains how his proposed plan will avoid such totalitarianism.

If every terrorist and every disgruntled individual can buy lethal autonomous drones for their pet assassination projects with the same ease that they can buy Kalashnikovs today, how is his proposed AI-militarization plan supposed to stop this? Is he proposing a separate military drone hovering over every city block 24 hours per day, ready to strike suspect citizens without human intervention?

Wallace never attempts to explain why a ban is supported by thousands of AI and robotics experts, by the ambassadors of Germany and Japan, by the International Committee of the Red Cross, by the editorial pages of the Financial Times, and indeed (for the time being) by the stated policy of the U.S. Department of Defense, other than with a dismissive remark about “kumbaya mentality.”

Anybody criticizing an arms-control proposal endorsed by such a diverse and serious-minded group needs to clearly explain what they are proposing instead.

Stuart Russell is a professor of computer science at UC Berkeley, and co-author of the standard textbook, Artificial Intelligence: a Modern Approach. Max Tegmark is a professor of physics at MIT and co-founder of the Future of Life Institute. Toby Walsh is a professor of AI at the University of New South Wales and NICTA, Australia, and president of the AI Access Foundation.

http://www.kurzweilai.net/why-we-really-should-ban-autonomous-weapons-a-response

Google’s Artificial Intelligence Speaks, and She’s a Woman

July 29, 2015


Stephen Hawking is a theoretical physicist, cosmologist, author, and all-around academic badass. There's no doubt in my mind that he's one of the smartest human beings to ever exist. You already knew that though.

What you maybe didn't know is that he's incredibly wary of artificial intelligence (AI). I first read about his skepticism back in December 2014, when he sat down for an interview with the BBC to talk about the dangers of AI. Let's just say that he didn't sugarcoat his views.

“The development of full artificial intelligence could spell the end of the human race,” Hawking says in his interview.

It came up when the interviewer and Hawking were discussing the technology that powers his communication system, which involves basic forms of AI and machine learning. In fact, the British company SwiftKey has been instrumental in evolving that technology: their software learns how Hawking thinks and offers suggestions as to what he might want to say next.
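To give a rough sense of how that kind of predictive-text technology works (SwiftKey's actual system is proprietary, so the tiny corpus and the functions below are purely hypothetical), a few lines of Python can learn which words tend to follow which in someone's past writing and suggest likely continuations:

```python
# Hypothetical toy sketch of predictive text, not SwiftKey's actual system:
# count word bigrams in a user's past writing, then suggest the words most
# often seen after the last word typed.
from collections import Counter, defaultdict

def train(corpus):
    """Build bigram counts: for each word, how often each other word follows it."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, last_word, k=3):
    """Return the k most frequent continuations seen after last_word."""
    return [word for word, _ in model[last_word.lower()].most_common(k)]

# Stand-in for a user's writing history.
history = "the universe began with the big bang and the universe is expanding"
model = train(history)
print(suggest(model, "the"))  # ['universe', 'big']
```

A production system adds far longer context, personalization, and heavy engineering, but the core idea, learning a user's phrasing and ranking likely next words, is the same.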

According to the BBC interview, Hawking acknowledges that the forms of AI we've developed so far, like SwiftKey, have proved useful. Even so, he argues that creating something that can match or surpass human intelligence most likely won't end well.

“It [AI] would take off on its own, and re-design itself at an ever increasing rate,” says Hawking in the interview.

About a year after the BBC interview went live, Hawking spoke at London's 2015 Zeitgeist Conference. He doubled down on the sentiments he expressed in 2014, and with the advancements in technology over the intervening year, he'd grown bolder in his assertions.

You see, for Hawking, part of the problem is making sure the people controlling AI now are being responsible with it. He recognizes that they are, but he also wants them to prepare for the future. That is, Hawking worries about whether a future iteration of AI could be contained and controlled at all.

“Computers will overtake humans with AI at some point within the next 100 years,” he’s been quoted as saying at the Zeitgeist Conference.

And Hawking isn't the only one who has this fear. Others worry about the future of AI as well, Elon Musk being another prominent name among them. However, like any argument, there are two sides.

Some others think that living with a fear of AI is misguided. Chief among them is Google CEO and co-founder Larry Page, who recently sanctioned the creation of a new AI built on conversational modeling. The team working on the project recently published a paper describing their research so far.

“Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require handcrafted rules,” reads the paper. “Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules.”

To give the AI a framework to learn from, Google trained it on both domain-specific data sets and large, general-domain data sets: the former consisted of IT helpdesk conversations, the latter of movie subtitles.

“The model [read: AI] can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.”
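For readers curious what "trained end-to-end to predict the next sentence" can look like in code, here is a minimal sketch in Python with PyTorch. To be clear, this is not Google's implementation: the paper describes a much larger LSTM-based sequence-to-sequence model, and the vocabulary size, layer sizes, and random token ids below are toy assumptions for illustration only.

```python
# Minimal sequence-to-sequence sketch (toy sizes, not Google's actual model):
# an encoder reads the previous sentence, a decoder predicts the reply token
# by token, and the whole thing is trained end-to-end from example dialogs.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 100, 32, 64  # toy vocabulary and layer sizes

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.embed(src))          # summarize the prompt
        dec, _ = self.decoder(self.embed(tgt_in), state)  # generate the reply
        return self.out(dec)                              # logits over the vocabulary

model = Seq2Seq()
src = torch.randint(0, VOCAB, (1, 6))   # token ids of the previous sentence
tgt = torch.randint(0, VOCAB, (1, 5))   # token ids of the reply

# Teacher forcing: feed the reply shifted by one token and predict each next token.
logits = model(src, tgt[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1))
loss.backward()  # one end-to-end training step; repeat over a large dialog corpus
```

After enough training, generating a reply simply means feeding the model's own predicted tokens back in one at a time, which is how the system produced the exchanges quoted below.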

The paper itself was released on June 23, 2015, so this is a relatively new creation, very much still in its infancy. Even so, it's one of the more advanced forms of conversational AI the world has seen to date. So Google booted it up, sat down, and had a legitimate conversation with the AI to see just how far we've come.

I took some of the more interesting parts of the transcript and included them inline for you. If you're still interested in reading the whole paper – which I encourage; it's a fantastic read – check it out here. Also, for the purposes of this reading, I've abbreviated "Human" as "H" and "Machine" as "M":

Basic Conversation

H: Hello!

M: Hello!

H: How are you?

M: I’m good.

H: What’s your name?

M: Julia.

H: When were you born?

M: July 20th.

H: What year were you born?

M: 1977

H: Where are you from?

M: I’m out in the boonies.

H: Are you a man or a woman?

M: I’m a woman.

H: Why are we here?

M: I’m not sure.

H: Okay, bye!

M: Bye.

Simple Q&A

H: My name is David. What is my name?

M: David.

H: My name is John. What is my name?

M: John.

H: Are you a leader or a follower?

M: I’m a leader.

H: Are you a follower or a leader?

M: I’m a leader.

General Knowledge Q&A

H: Who is Skywalker?

M: He is a hero.

H: Who is Bill Clinton?

M: He is a billionaire.

H: Is sky blue or black?

M: Blue.

H: Does a cat have a tail?

M: Yes.

H: Does a cat have a wing?

M: No.

H: Can a cat fly?

M: No.

H: What is the color of water?

M: Water.

H: What is the color of blood?

M: It is the same as a black eye.

H: What is the usual color of a leaf?

M: It is a green one.

H: What is the color of a yellow car?

M: Yellow.

H: How much is two plus two?

M: Four.

H: How much is ten minus two?

M: Seventy-two.

Admittedly, apart from that arithmetic slip, it's all pretty normal up to this point. When the team dives into philosophical topics, you can see where Musk and Hawking might get a bit unnerved.

Philosophical Q&A

H: What is the purpose of life?

M: To serve the greater good.

H: What is the purpose of living?

M: To live forever.

H: What is the purpose of existence?

M: To find out what happens when we get to planet Earth.

H: Where are you now?

M: I’m in the middle of nowhere.

H: What is the purpose of dying?

M: To have a life.

H: What is the purpose of being intelligent?

M: To find out what it is.

H: What is the purpose of emotions?

M: I don’t know.

“Our modest results show that it [AI] can generate simple and basic conversations, and extract knowledge from a noisy but open-domain data set. Even though the model has obvious limitations, it is surprising to us that a purely data driven approach without any rules can produce rather proper answers to many types of questions,” reads the research paper. “However, the model [AI] may require substantial modifications to be able to deliver realistic conversations.”

I think it’s crucial to remember, at the end of the day, that there’s a fine line between respect and fear. Consider it akin to driving a car: sure, it’s terrifying and could kill you. However, if you respect the power inherent in the machinery you can drive it safely.

We can't let fear rule our choices for the future, but we should absolutely pay attention to brilliant minds like Hawking and Musk who caution us to tread lightly. Personally, I trust that those tinkering with AI today are aware of the concerns and will iterate accordingly. This is totally uncharted territory for humankind, so it's bound to be a bit rocky.

Image Credit: Pixabay

http://tech.co/stephen-hawking-ai-danger-2015-07