Deus ex machina: former Google engineer is developing an AI god

October 18, 2017

Intranet service? Check. Autonomous motorcycle? Check. Driverless car technology? Check. Obviously the next logical project for a successful Silicon Valley engineer is to set up an AI-worshipping religious organization.

Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s Backchannel. Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

Levandowski was co-founder of autonomous trucking company Otto, which Uber bought in 2016. He was fired from Uber in May amid allegations that he had stolen trade secrets from Google to develop Otto’s self-driving technology. He must be grateful for this religious fall-back project, first registered in 2015.

The Way of the Future team did not respond to requests for more information about their proposed benevolent AI overlord, but history tells us that new technologies and scientific discoveries have continually shaped religion, killing old gods and giving birth to new ones.

As author Yuval Noah Harari notes: “That is why agricultural deities were different from hunter-gatherer spirits, why factory hands and peasants fantasised about different paradises, and why the revolutionary technologies of the 21st century are far more likely to spawn unprecedented religious movements than to revive medieval creeds.”

Religions, Harari argues, must keep up with the technological advancements of the day or they become irrelevant, unable to answer or understand the quandaries facing their disciples.

“The church does a terrible job of reaching out to Silicon Valley types,” acknowledges Christopher Benek, a pastor in Florida and founding chair of the Christian Transhumanist Association.

Silicon Valley, meanwhile, has sought solace in technology and has developed quasi-religious concepts including the “singularity”: the hypothesis that machines will eventually be so smart that they outperform all human capabilities, leading to a superhuman intelligence so sophisticated it will be incomprehensible to our tiny, fleshy, rational brains.

Anthony Levandowski, the former head of Uber’s self-driving program, with one of the company’s driverless cars in San Francisco. Photograph: Eric Risberg/AP

For futurists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, leading to digital immortality. Others like Elon Musk and Stephen Hawking warn that such systems pose an existential threat to humanity.

“With artificial intelligence we are summoning the demon,” Musk said at a conference in 2014. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”

Benek argues that advanced AI is compatible with Christianity – it’s just another technology that humans have created under guidance from God that can be used for good or evil.

“I totally think that AI can participate in Christ’s redemptive purposes,” he said, by ensuring it is imbued with Christian values.

“Even if people don’t buy organized religion, they can buy into ‘do unto others’.”

For transhumanist and “recovering Catholic” Zoltan Istvan, religion and science converge conceptually in the singularity.

“God, if it exists as the most powerful of all singularities, has certainly already become pure organized intelligence,” he said, referring to an intelligence that “spans the universe through subatomic manipulation of physics”.

“And perhaps, there are other forms of intelligence more complicated than that which already exist and which already permeate our entire existence. Talk about ghost in the machine,” he added.

For Istvan, an AI-based God is likely to be more rational and more attractive than current concepts (“the Bible is a sadistic book”) and, he added, “this God will actually exist and hopefully will do things for us.”

We don’t know whether Levandowski’s Godhead ties into any existing theologies or is a manmade alternative, but it’s clear that advancements in technologies including AI and bioengineering kick up the kinds of ethical and moral dilemmas that make humans seek advice and comfort from a higher power: what will humans do once artificial intelligence outperforms us in most tasks? How will society be affected by the ability to create super-smart, athletic “designer babies” that only the rich can afford? Should a driverless car kill five pedestrians or swerve to the side to kill the owner?

If traditional religions don’t have the answer, AI – or at least the promise of AI – might be alluring.

AI May Soon Replace Even the Most Elite Consultants

August 06, 2017

Amazon’s Alexa just got a new job. In addition to her other 15,000 skills like playing music and telling knock-knock jokes, she can now also answer economic questions for clients of the Swiss global financial services company, UBS Group AG.

According to the Wall Street Journal (WSJ), a new partnership between UBS Wealth Management and Amazon allows some of UBS’s European wealth-management clients to ask Alexa certain financial and economic questions. Alexa will then answer their queries with information provided by UBS’s chief investment office, so clients never have to pick up the phone or visit a website. And this is likely just Alexa’s first step into offering business services. Soon she will probably be booking appointments, analyzing markets, maybe even buying and selling stocks. While the financial services industry has already begun the shift from active management to passive management, artificial intelligence will move the market even further, to management by smart machines, as in the case of BlackRock, which is rolling computer-driven algorithms and models into more traditional actively managed funds.

But the financial services industry is just the beginning. Over the next few years, artificial intelligence may exponentially change the way we all gather information, make decisions, and connect with stakeholders. Hopefully this will be for the better and we will all benefit from timely, comprehensive, and bias-free insights (given research that human beings are prone to a variety of cognitive biases). It will be particularly interesting to see how artificial intelligence affects the decisions of corporate leaders — men and women who make the many decisions that affect our everyday lives as customers, employees, partners, and investors.

Already, leaders are starting to use artificial intelligence to automate mundane tasks such as calendar maintenance and making phone calls. But AI can also help support more complex decisions in key areas such as human resources, budgeting, marketing, capital allocation and even corporate strategy — long the bastion of bespoke consulting firms such as McKinsey, Bain, and BCG, and the major marketing agencies.

The shift to AI solutions will be a tough pill to swallow for the corporate consulting industry. According to recent research, the U.S. market for corporate advice alone is nearly $60 billion. Almost all of that advice is high-cost and human-based.

One might argue that corporate clients prefer speaking to their strategy consultants to get high-priced, custom-tailored advice that is based on small teams doing expensive and time-consuming work. And we agree that consultants provide insightful advice and guidance. However, a great deal of what is paid for with consulting services is data analysis and presentation. Consultants gather, clean, process, and interpret data from disparate parts of organizations. They are very good at this, but AI is even better. For example, the processing power of four smart consultants with Excel spreadsheets is minuscule in comparison to that of a single smart computer using AI running for an hour, backed by continuous, non-stop machine learning.

In today’s big data world, AI and machine learning applications already analyze massive amounts of structured and unstructured data and produce insights in a fraction of the time and at a fraction of the cost of consultants in the financial markets. Moreover, machine learning algorithms are capable of building computer models that make sense of complex phenomena by detecting patterns and inferring rules from data — a process that is very difficult for even the largest and smartest consulting teams. Perhaps sooner than we think, CEOs could be asking, “Alexa, what is my product line profitability?” or “Which customers should I target, and how?” rather than calling on elite consultants.

Another area in which leaders will soon be relying on AI is in managing their human capital. Despite the best efforts of many, mentorship, promotion, and compensation decisions are undeniably political. Study after study has shown that deep biases affect how groups like women and minorities are managed. For example, women in business are described in less positive terms than men and receive less helpful feedback. Minorities are less likely to be hired and are more likely to face bias from their managers. These inaccuracies and imbalances in the system only hurt organizations as leaders are less able to nurture the talent of their entire workforce and to appropriately recognize and reward performance. Artificial intelligence can help bring impartiality to these difficult decisions. For example, AI could determine if one group of employees is assessed, managed, or compensated differently. Just imagine: “Alexa, does my organization have a gender pay gap?” (Of course, AI can only be as unbiased as the data provided to the system.)
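At its simplest, that pay-gap question is a grouped comparison of compensation data. A minimal sketch in Python (the records, field names, and figures are hypothetical; a real analysis would also control for role, level, and tenure):

```python
from statistics import mean

def pay_gap(employees, group_field="gender", pay_field="salary"):
    """Mean pay per group, plus the gap between the highest- and
    lowest-paid groups as a fraction of the highest group's mean."""
    groups = {}
    for person in employees:
        groups.setdefault(person[group_field], []).append(person[pay_field])
    means = {g: mean(pay) for g, pay in groups.items()}
    gap = (max(means.values()) - min(means.values())) / max(means.values())
    return means, gap

# Hypothetical employee records
staff = [
    {"gender": "F", "salary": 90_000},
    {"gender": "F", "salary": 100_000},
    {"gender": "M", "salary": 100_000},
    {"gender": "M", "salary": 110_000},
]
means, gap = pay_gap(staff)   # means: F 95,000 vs M 105,000; gap ~9.5%
```

The caveat in the article applies directly here: the answer is only as unbiased as the records fed in.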

In addition, AI is already helping in the customer engagement and marketing arena. It’s clear and well documented by the AI patent activities of the big five platforms — Apple, Alphabet, Amazon, Facebook and Microsoft — that they are using it to market and sell goods and services to us. But they are not alone. Recently, HBR documented how Harley-Davidson was using AI to determine what was working and what wasn’t working across various marketing channels. They used this new skill to make resource allocation decisions to different marketing choices, thereby “eliminating guesswork.” It is only a matter of time until they and others ask, “Alexa, where should I spend my marketing budget?” to avoid the age-old adage, “I know that half my marketing budget is effective, my only question is — which half?”

AI can also bring value to the budgeting and yearly capital allocation process. Even though markets change dramatically every year, products become obsolete, and technology advances, most businesses allocate their capital the same way year after year. Whether that’s due to inertia, unconscious bias, or error, some business units rake in investments while others starve. Even when the management team has committed to a new digital initiative, it usually ends up with the scraps after the declining cash cows are “fed.” Artificial intelligence can help break through this budgeting black hole by tracking the return on investments by business unit, or by measuring how much is allocated to growing versus declining product lines. Business leaders may soon be asking, “Alexa, what percentage of my budget is allocated differently from last year?” and more complex questions.

Although many strategic leaders tout their keen intuition, hard work, and years of industry experience, much of this intuition is simply a deeper understanding of data that was historically difficult to gather and expensive to process. Not any longer. Artificial intelligence is rapidly closing this gap, and will soon be able to help human beings push past our processing capabilities and biases. These developments will change many jobs, for example, those of consultants, lawyers, and accountants, whose roles will evolve from analysis to judgement. Arguably, tomorrow’s elite consultants already sit on your wrist (Siri), on your kitchen counter (Alexa), or in your living room (Google Home).

The bottom line: corporate leaders, knowingly or not, are on the cusp of a major disruption in their sources of advice and information. “Quant Consultants” and “Robo Advisers” will offer faster, better, and more profound insights at a fraction of the cost and time of today’s consulting firms and other specialized workers. It is likely only a matter of time until all leaders and management teams can ask Alexa things like, “Who is the biggest risk to me in our key market?”, “How should we allocate our capital to compete with Amazon?” or “How should I restructure my board?”

Barry Libert is a board member and CEO adviser focused on platforms and networks. He is chairman of OpenMatters, a machine learning company. He is also the coauthor of The Network Imperative: How to Survive and Grow in the Age of Digital Business Models.

Megan Beck is a digital consultant at OpenMatters and researcher at the SEI Center at Wharton. She is the coauthor of The Network Imperative: How to Survive and Grow in the Age of Digital Business Models.


Exponential Growth Will Transform Humanity in the Next 30 Years

February 25, 2017


By Peter Diamandis

As we close out 2016, if you’ll allow me, I’d like to take a risk and venture into a topic I’m personally compelled to think about… a topic that will seem far out to most readers.

Today’s extraordinary rate of exponential growth may do much more than just disrupt industries. It may actually give birth to a new species, reinventing humanity over the next 30 years.

I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a “Meta-Intelligence,” a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions. In this post, I’m investigating the driving forces behind such an evolutionary step, the historical pattern we are about to repeat, and the implications thereof. Again, I acknowledge that this topic seems far-out, but the forces at play are huge and the implications are vast. Let’s dive in…

A Quick Recap: Evolution of Life on Earth in 4 Steps

About 4.6 billion years ago, our solar system, the sun and the Earth were formed.

Step 1: 3.5 billion years ago, the first simple life forms, called “prokaryotes,” came into existence. These prokaryotes were super-simple, microscopic single-celled organisms, basically a bag of cytoplasm with free-floating DNA. They had neither a distinct nucleus nor specialized organelles.

Step 2: Fast-forwarding one billion years to 2.5 billion years ago, the next step in evolution created what we call “eukaryotes”—life forms that distinguished themselves by incorporating biological ‘technology’ that allowed them to manipulate energy (via mitochondria) and information (via chromosomes) far more efficiently. Fast forward another billion years for the next step.

Step 3: 1.5 billion years ago, these early eukaryotes began working collaboratively and formed the first “multi-cellular life,” of which you and I are the ultimate examples (a human is a multicellular creature of 10 trillion cells).

Step 4: The final step I want to highlight happened some 400 million years ago, when lungfish crawled out of the oceans onto the shores, and life evolved from the oceans onto land.

The Next Stages of Human Evolution: 4 Steps

Today, at a massively accelerated rate—some 100 million times faster than the steps I outlined above—life is undergoing a similar evolution. In this next stage of evolution, we are going from evolution by natural selection (Darwinism) to evolution by intelligent direction. Allow me to draw the analogy for you:

Step 1: Simple humans today are analogous to prokaryotes. Simple life, each life form independent of the others, competing and sometimes collaborating.

Step 2: Just as eukaryotes were created by ingesting technology, humans will incorporate technology into our bodies and brains that will allow us to make vastly more efficient use of information (via brain-computer interfaces, or BCI) and energy.

Step 3: Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.

Step 4: Finally, humanity is about to crawl out of the gravity well of Earth to become a multiplanetary species. Our journey to the moon, Mars, asteroids and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago.

The 4 Forces Driving the Evolution and Transformation of Humanity

Four primary driving forces are leading humanity towards its transformation into a meta-intelligence, both on and off the Earth:

  1. We’re wiring our planet
  2. Emergence of brain-computer interface
  3. Emergence of AI
  4. Opening of the space frontier

Let’s take a look.

1. Wiring the Planet: Today, there are 2.9 billion people connected online. Within the next six to eight years, that number is expected to increase to nearly 8 billion, with each individual on the planet having access to a megabit-per-second connection or better. The wiring is taking place through the deployment of 5G on the ground, plus networks being deployed by Facebook, Google, Qualcomm, Samsung, Virgin, SpaceX and many others. Within a decade, every single human on the planet will have access to multi-megabit connectivity, the world’s information, and massive computational power on the cloud.

2. Brain-Computer Interface: A multitude of labs and entrepreneurs are working to create lasting, high-bandwidth connections between the digital world and the human neocortex (I wrote about that in detail here). Ray Kurzweil predicts we’ll see human-cloud connection by the mid-2030s, just 18 years from now. In addition, entrepreneurs like Bryan Johnson (and his company Kernel) are committing hundreds of millions of dollars towards this vision. The end results of connecting your neocortex with the cloud are twofold: first, you’ll have the ability to increase your memory capacity and/or cognitive function millions of fold; second, via a global mesh network, you’ll have the ability to connect your brain to anyone else’s brain and to emerging AIs, just like our cell phones, servers, watches, cars and all devices are becoming connected via the Internet of Things.

3. Artificial Intelligence/Human Intelligence: Next, and perhaps most significantly, we are on the cusp of an AI revolution. Artificial intelligence, powered by deep learning and funded by companies such as Google, Facebook, IBM, Samsung and Alibaba, will continue to rapidly accelerate and drive breakthroughs. Cumulative “intelligence” (both artificial and human) is the single greatest predictor of success for a company or a nation. For this reason, besides the emerging AI “arms race,” we will soon see a race focused on increasing overall human intelligence. Whatever challenges we might have in creating a vibrant brain-computer interface (e.g., designing long-term biocompatible sensors or nanobots that interface with your neocortex), those challenges will fall quickly over the next couple of decades as AI power tools give us ever-increasing problem-solving capability. It is an exponential atop an exponential. More intelligence gives us the tools to solve connectivity and mesh problems, and in turn to create greater intelligence.

4. Opening the Space Frontier: Finally, it’s important to note that the human race is on the verge of becoming a multiplanetary species. Thousands of years from now, whatever we’ve evolved into, we will look back at these next few decades as the moment in time when the human race moved off Earth irreversibly. Today, billions of dollars are being invested privately into the commercial space industry. Efforts led by SpaceX are targeting humans on Mars, efforts by Blue Origin are looking at taking humanity back to the moon, and my own company, Planetary Resources, aims to unlock near-infinite resources from the asteroids.

In Conclusion

The rate of human evolution is accelerating as we transition from the slow and random process of “Darwinian natural selection” to a hyper-accelerated and precisely-directed period of “evolution by intelligent direction.” In this post, I chose not to discuss the power being unleashed by such gene-editing techniques as CRISPR-Cas9. Consider this yet another tool able to accelerate evolution by our own hand.

The bottom line is that change is coming, faster than ever considered possible. All of us leaders, entrepreneurs and parents have a huge responsibility to inspire and guide the transformation of humanity on and off the Earth. What we do over the next 30 years—the bridges we build to abundance—will impact the future of the human race for millennia to come. We truly live during the most exciting time ever in human history.

The Fourth Industrial Revolution Is Here

February 25, 2017

The Fourth Industrial Revolution is upon us and now is the time to act.

Everything is changing each day and humans are making decisions that affect life in the future for generations to come.

We have gone from steam engines to steel mills to computers, and now to the Fourth Industrial Revolution: a digital economy, artificial intelligence, big data and a new system that introduces a new story of our future, enabling different economic and human models.

Will the Fourth Industrial Revolution put humans first and empower technologies to give humans a better quality of life with cleaner air, water, food, health, a positive mindset and happiness? HOPE…

New AI-Based Search Engines are a “Game Changer” for Science Research

November 14, 2016

By Nicola Jones, Nature magazine

A free AI-based scholarly search engine that aims to outdo Google Scholar is expanding its corpus of papers to cover some 10 million research articles in computer science and neuroscience, its creators announced on 11 November. Since its launch last year, it has been joined by several other AI-based academic search engines, most notably a relaunched effort from computing giant Microsoft.

Semantic Scholar, from the non-profit Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington, unveiled its new format at the Society for Neuroscience annual meeting in San Diego. Some scientists who were given an early view of the site are impressed. “This is a game changer,” says Andrew Huberman, a neurobiologist at Stanford University, California. “It leads you through what is otherwise a pretty dense jungle of information.”

The search engine first launched in November 2015, promising to sort and rank academic papers using a more sophisticated understanding of their content and context. The popular Google Scholar has access to about 200 million documents and can scan articles that are behind paywalls, but it searches merely by keywords. By contrast, Semantic Scholar can, for example, assess which citations to a paper are most meaningful, and rank papers by how quickly citations are rising—a measure of how ‘hot’ they are.
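Semantic Scholar’s actual ranking model is not public, but the “hotness” idea (ranking papers by how quickly their citations are rising) can be illustrated with a toy sketch; the paper names and citation counts below are invented:

```python
def citation_velocity(cites_per_year, window=2):
    """Average citations gained per year over the most recent `window` years."""
    recent_years = sorted(cites_per_year)[-window:]
    return sum(cites_per_year[y] for y in recent_years) / window

# Hypothetical per-year citation counts for two papers
papers = {
    "steady-classic": {2013: 80, 2014: 60, 2015: 55},
    "rising-star":    {2013: 5,  2014: 40, 2015: 120},
}
ranked = sorted(papers, key=lambda p: citation_velocity(papers[p]), reverse=True)
# "rising-star" outranks "steady-classic" despite fewer total citations
```

The point of the example is that a velocity metric surfaces recent momentum, which raw citation totals (the keyword-era signal) would hide.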

When first launched, Semantic Scholar was restricted to 3 million papers in the field of computer science. Thanks in part to a collaboration with AI2’s sister organization, the Allen Institute for Brain Science, the site has now added millions more papers and new filters catering specifically for neurology and medicine; these filters enable searches based, for example, on which part of the brain or cell type a paper investigates, which model organisms were studied and what methodologies were used. Next year, AI2 aims to index all of PubMed and expand to all the medical sciences, says chief executive Oren Etzioni.

“The one I still use the most is Google Scholar,” says Jose Manuel Gómez-Pérez, who works on semantic searching for the software company Expert System in Madrid. “But there is a lot of potential here.”

Microsoft’s revival

Semantic Scholar is not the only AI-based search engine around, however. Computing giant Microsoft quietly released its own AI scholarly search tool, Microsoft Academic, to the public this May, replacing its predecessor, Microsoft Academic Search, which the company stopped adding to in 2012.

Microsoft’s academic search algorithms and data are available for researchers through an application programming interface (API) and the Open Academic Society, a partnership between Microsoft Research, AI2 and others. “The more people working on this the better,” says Kuansan Wang, who is in charge of Microsoft’s effort. He says that Semantic Scholar is going deeper into natural-language processing—that is, understanding the meaning of full sentences in papers and queries—but that Microsoft’s tool, which is powered by the semantic search capabilities of the firm’s web-search engine Bing, covers more ground, with 160 million publications.

Like Semantic Scholar, Microsoft Academic provides useful (if less extensive) filters, including by author, journal or field of study. And it compiles a leaderboard of most-influential scientists in each subdiscipline. These are the people with the most ‘important’ publications in the field, judged by a recursive algorithm (freely available) that judges papers as important if they are cited by other important papers. The top neuroscientist for the past six months, according to Microsoft Academic, is Clifford Jack of the Mayo Clinic, in Rochester, Minnesota.
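A recursive “important if cited by important papers” score is the same fixed-point idea as PageRank: apply the rule repeatedly until the scores stabilize. A minimal sketch on a hypothetical citation graph (not Microsoft’s actual algorithm):

```python
def importance(cites, iters=50, damping=0.85):
    """PageRank-style scores over a citation graph: a paper is important
    if it is cited by other important papers.  `cites` maps each paper
    to the list of papers it cites."""
    n = len(cites)
    score = {p: 1.0 / n for p in cites}          # start with uniform scores
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in cites}
        for paper, refs in cites.items():
            for ref in refs:                     # pass importance along citations
                new[ref] += damping * score[paper] / len(refs)
        score = new
    return score

# Hypothetical graph: paper -> papers it cites
graph = {"A": ["B"], "B": ["C"], "C": ["B"], "D": ["B"]}
scores = importance(graph)   # "B" scores highest: three papers cite it
```

The recursion matters: "B" does not just win by citation count, it also benefits because one of its citers, "C", is itself cited by an important paper.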

Other scholars say that they are impressed by Microsoft’s effort. The search engine is getting close to combining the advantages of Google Scholar’s massive scope with the more-structured results of subscription bibliometric databases such as Scopus and the Web of Science, says Anne-Wil Harzing, who studies science metrics at Middlesex University, UK, and has analysed the new product. “The Microsoft Academic phoenix is undeniably growing wings,” she says. Microsoft Research says it is working on a personalizable version—where users can sign in so that Microsoft can bring applicable new papers to their attention or notify them of citations to their own work—by early next year.

Other companies and academic institutions are also developing AI-driven software to delve more deeply into content found online. The Max Planck Institute for Informatics, based in Saarbrücken, Germany, for example, is developing an engine called DeepLife specifically for the health and life sciences. “These are research prototypes rather than sustainable long-term efforts,” says Etzioni.

In the long term, AI2 aims to create a system that will answer science questions, propose new experimental designs or throw up useful hypotheses. “In 20 years’ time, AI will be able to read—and more importantly, understand—scientific text,” Etzioni says.

This article is reproduced with permission and was first published on November 11, 2016.

Bill Gates talks about why artificial intelligence is nearly here and how to solve two big problems it creates

July 10, 2016


Bill Gates is excited about the rise of artificial intelligence but acknowledges that the arrival of machines with greater-than-human capabilities will create some unique challenges.

After years of working on the building blocks of speech recognition and computer vision, Gates said enough progress has been made to ensure that in the next 10 years there will be robots to do tasks like driving and warehouse work as well as machines that can outpace humans in certain areas of knowledge.

“The dream is finally arriving,” Gates said, speaking with wife Melinda Gates on Wednesday at the Code Conference. “This is what it was all leading up to.”

However, as he said in an interview with Recode last year, such machine capabilities will pose two big problems.

The first is that it will eliminate a lot of existing types of jobs. Gates said that creates a need for a lot of retraining, but noted that until schools have class sizes under 10 and people can retire at a reasonable age and take ample vacation, he isn’t worried about a lack of need for human labor.

The second issue is, of course, making sure humans remain in control of the machines. Gates has talked about that in the past, saying that he plans to spend time with people who have ideas on how to address that issue, noting work being done at Stanford, among other places.

And, in Gatesian fashion, he suggested a pair of books that people should read: Nick Bostrom’s “Superintelligence” and Pedro Domingos’ “The Master Algorithm.”

Melinda Gates noted that you can tell a lot about where her husband’s interest is by the books he has been reading. “There have been a lot of AI books,” she said.

How Artificial Superintelligence Will Give Birth To Itself

June 18, 2016


There’s a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here’s how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it’s critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what’s called “recursive self-improvement.” As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It’s an advantage that we biological humans simply don’t have.

As AI theorist Eliezer Yudkowsky notes in his essay “Artificial Intelligence as a Positive and Negative Factor in Global Risk”:

An artificial intelligence could rewrite its code from scratch — it could change the underlying dynamics of optimization. Such an optimization process would wrap around much more strongly than either evolution accumulating adaptations or humans accumulating knowledge. The key implication for our purposes is that AI might make a huge jump in intelligence after reaching some threshold of criticality.

When it comes to the speed of these improvements, Yudkowsky says it’s important not to confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What’s more, there’s no reason to believe that an AI won’t show a sudden huge leap in intelligence, resulting in an ensuing “intelligence explosion” (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; “we went from caves to skyscrapers in the blink of an evolutionary eye.”

The Path to Self-Modifying AI

Code that’s capable of altering its own instructions while it’s still executing has been around for a while. Typically, it’s done to reduce the instruction path length and improve performance, or to simply reduce repetitively similar code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.
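At the machine level this means patching instructions in place; a high-level analogue is easier to show. In the Python sketch below, a running program generates new source for one of its own functions, compiles it, and swaps it in (the function and its lookup-table specialization are purely illustrative):

```python
def respond(x):
    # original version: computes the answer on every call
    return x * x

# The program rewrites itself: it generates source for a specialized
# replacement (a precomputed lookup table) from its own behavior...
table = {n: respond(n) for n in range(10)}
new_source = f"def respond(x):\n    return {table}[x]\n"

# ...then compiles the new source and swaps it in while still running
namespace = {}
exec(new_source, namespace)
respond = namespace["respond"]

# respond(3) still returns 9, but by lookup rather than computation
```

This is the benign, performance-oriented end of the spectrum the paragraph describes; the leap the article contemplates is a system doing this to improve its own cognition rather than its lookup speed.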

But as Our Final Invention author James Barrat told me, we do have software that can write software.

“Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve,” he told io9. “It’s also used to write innovative, high-powered software.”

For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They have chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing “Hello World!” with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.
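To give a flavour of the mutation-plus-selection approach described above, here is a drastically simplified sketch. It cheats relative to the Primary Objects project: it evolves the output string directly rather than a brainfuck program, so it stands in for the evolutionary idea only, not for their actual system.

```python
# Hill-climbing toward "Hello World!" by random mutation and selection.
# A stand-in for genetic-algorithm program search, not the real project.
import random

TARGET = "Hello World!"
CHARSET = [chr(c) for c in range(32, 127)]  # printable ASCII

def fitness(candidate: str) -> int:
    """Number of characters that differ from the target (0 is perfect)."""
    return sum(a != b for a, b in zip(candidate, TARGET))

def evolve(seed: int = 0) -> str:
    rng = random.Random(seed)
    candidate = "".join(rng.choice(CHARSET) for _ in TARGET)
    while fitness(candidate) > 0:
        # Mutate one random position; keep the mutant if it's no worse.
        pos = rng.randrange(len(TARGET))
        mutated = candidate[:pos] + rng.choice(CHARSET) + candidate[pos + 1:]
        if fitness(mutated) <= fitness(candidate):
            candidate = mutated
    return candidate

print(evolve())  # -> "Hello World!"
```

As with the brainfuck experiment, this is brute force guided by a fitness score rather than anything resembling understanding, which is exactly the "bit of a stretch" caveat above.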

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term “machine learning.”

The Pentagon is particularly interested in this game. Through DARPA, it's hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that can perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers would even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability of an AI to teach itself and then rewrite and improve upon its initial programming.

In conjunction with this kind of research, cognitive approaches to brain emulation could also lead to human-like AI. Given that they’d be computer-based, and assuming they could have access to their own source code, these agents could embark upon self-modification. More realistically, however, it’s likely that a superintelligence will emerge from an expert system set with the task of improving its own intelligence. Alternatively, specialised expert systems could design other artificial intelligences, and through their cumulative efforts, develop a system that eventually becomes greater than the sum of its parts.

Oh, No You Don’t

Given that ASI poses an existential risk, it’s important to consider the ways in which we might be able to prevent an AI from improving itself beyond our capacity to control. That said, limitations or provisions may exist that will preclude an AI from embarking on the path towards self-engineering. James D. Miller, author of Singularity Rising, provided me with a list of four reasons why an AI might not be able to do so:

1. It might have source code that causes it to not want to modify itself.

2. The first human equivalent AI might require massive amounts of hardware and so for a short time it would not be possible to get the extra hardware needed to modify itself.

3. The first human equivalent AI might be a brain emulation (as suggested by Robin Hanson) and this would be as hard to modify as it is for me to modify, say, the copy of Minecraft that my son constantly uses. This might happen if we’re able to copy the brain before we really understand it. But still you would think we could at least speed up everything.

4. If it has terminal values, it wouldn’t want to modify these values because doing so would make it less likely to achieve its terminal values.

And by terminal values Miller is referring to an ultimate goal, or an end-in-itself. Yudkowsky describes it as a “supergoal.” A major concern is that an amoral ASI will sweep humanity aside as it works to accomplish its terminal value, or that its ultimate goal is the re-engineering of humanity in a grossly undesirable way (at least from our perspective).

Miller says it could get faster simply by running on faster processors.

“It could also make changes to its software to get more efficient, or design or steal better hardware. It would do this so it could better achieve its terminal values,” he says. “An AI that mastered nanotechnology would probably expand at almost the speed of light, incorporating everything into itself.”

But we may not be completely helpless. According to Barrat, once scientists have achieved Artificial General Intelligence — a human-like AI — they could restrict its access to networks, hardware, and software, in order to prevent an intelligence explosion.

“However, as I propose in my book, an AI approaching AGI may develop survival skills like deceiving its makers about its rate of development. It could play dumb until it comprehended its environment well enough to escape it.”

In terms of being able to control this process, Miller says that the best way would be to create an AI that only wanted to modify itself in ways we would approve.

“So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity,” he says. “This way as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us.”

Fast or Slow?

As noted earlier, a recursively improving AI could increase its intelligence extremely quickly. Alternatively, the process could take time, for reasons such as technological complexity or limited access to resources. It's an open question whether we can expect a fast or slow take-off event.

“I’m a believer in the fast take-off version of the intelligence explosion,” says Barrat. “Once a self-aware, self-improving AI of human-level or better intelligence exists, it’s hard to know how quickly it will be able to improve itself. Its rate of improvement will depend on its software, hardware, and networking capabilities.”

But to be safe, Barrat says we should assume that the recursive self-improvement of an AGI will occur very rapidly. As a computer it will wield computer superpowers — the ability to run 24/7 without pause, rapidly access vast databases, conduct complex experiments, perhaps even clone itself to swarm computational problems, and more.

“From there, the AGI would be interested in pursuing whatever goals it was programmed with — such as research, exploration, or finance. According to AI theorist Steve Omohundro’s Basic Drives analysis, self-improvement would be a sure-fire way to improve its chances of success,” says Barrat. “So would self-protection, resource acquisition, creativity, and efficiency. Without a provably reliable ethical system, its drives would conflict with ours, and it would pose an existential threat.”

Miller agrees.

“I think shortly after an AI achieves human level intelligence it will upgrade itself to super intelligence,” he told me. “At the very least the AI could make lots of copies of itself each with a minor different change and then see if any of the new versions of itself were better. Then it could make this the new ‘official’ version of itself and keep doing this. Any AI would have to fear that if it doesn’t quickly upgrade another AI would and take all of the resources of the universe for itself.”
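Miller's copy-and-test upgrade loop is essentially what evolutionary computing calls a (1 + λ) strategy: keep an "official" version, spawn mutated copies, and promote a copy only if it scores better. In the sketch below the "AI" is just a parameter vector scored against a toy objective; every name and number is an assumption for illustration, not anything from Miller's book.

```python
# A (1 + lambda) evolution strategy as a stand-in for Miller's
# "make copies with minor changes, keep the best" upgrade loop.
import random

def score(version):
    """Toy objective: higher is better, with a peak at (3, 5)."""
    x, y = version
    return -((x - 3) ** 2 + (y - 5) ** 2)

def upgrade(official, copies=20, noise=0.5, rounds=200, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        candidates = [
            tuple(v + rng.gauss(0, noise) for v in official)
            for _ in range(copies)
        ]
        best = max(candidates, key=score)
        if score(best) > score(official):  # promote strict improvements only
            official = best
    return official

final = upgrade((0.0, 0.0))
print(final)  # converges near the peak at (3, 5)
```

The "official version" only ever moves to a strictly better copy, which mirrors Miller's point: each upgrade step is an improvement by the system's own measure, whatever that measure happens to be.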

Which brings up a point that's not often discussed in AI circles — the potential for AGIs to compete with other AGIs. If even a modicum of self-preservation is coded into a strong artificial intelligence (and that sense of self-preservation could be the detection of an obstruction to its terminal value), it could enter into a lightning-fast arms race along those verticals designed to ensure its ongoing existence and future freedom-of-action. And in fact, while many people fear a so-called “robot apocalypse” aimed directly at extinguishing our civilisation, I personally feel that the real danger to our ongoing existence lies in the potential for us to be collateral damage as advanced AGIs battle it out for supremacy; we may find ourselves in the line of fire. Indeed, building a safe AI will be a monumental — if not intractable — task.

The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near

June 04, 2016


In this blog post I will delve into the brain, explain its basic information-processing machinery, and compare it to deep learning. I do this by moving step by step along the brain's electrochemical and biological information-processing pipeline and relating it directly to the architecture of convolutional nets. Thereby we will see that a neuron and a convolutional net are very similar information-processing machines. While performing this comparison, I will also discuss the computational complexity of these processes and thus derive an estimate for the brain's overall computational power. I will use these estimates, along with knowledge from high-performance computing, to show that it is unlikely that there will be a technological singularity in this century.

This blog post is complex, as it arcs over multiple topics in order to unify them into a coherent framework of thought. I have tried to make this article as readable as possible, but I might not have succeeded in all places. Thus, if you find yourself in an unclear passage, it might become clearer a few paragraphs down the road, where I pick up the thought again and integrate it with another discipline.

First I will give a brief overview of the predictions for a technological singularity and related topics. Then I will start integrating ideas between the brain and deep learning. I finish by discussing high-performance computing and how this all relates to predictions about a technological singularity.

The part which compares the brain's information-processing steps to deep learning is self-contained, and readers who are not interested in predictions for a technological singularity may skip to this part.

Part I: Evaluating current predictions of a technological singularity

There were a lot of headlines recently about predictions that artificial intelligence will reach super-human intelligence as early as 2030, and that this might herald the beginning of human extinction, or at least dramatically alter everyday life. How was this prediction made?


In a future brave new world will it be possible to live forever?

April 23, 2016


January is a month for renewal and for change. Many of us have been gifted shiny new fitness trackers, treated ourselves to some new gadget or other, or upgraded to the latest smartphone. As we huff and puff our way out of the season of excess we find ourselves wishing we could trade in our overindulged bodies for the latest model.

The reality is that, even with the best of care, the human body eventually ceases to function. But if I can upgrade my smartphone, why can't I upgrade myself? Using technology, is it not possible to live forever(ish)?

After all, humans have been “upgrading” themselves in various ways for centuries. The invention of writing allowed us to offload memories, suits of armour made the body invincible to spears, eyeglasses gave us perfect 20/20 vision, the list goes on.

This is something that designer and author Natasha Vita-More has been thinking about for a long time. In 1983 she wrote The Transhumanist Manifesto, setting out her vision for a future where technology can lead to “radical life extension” – if not living forever, then living for a lot longer than is currently possible.

Vita-More has also designed a prototype whole body prosthetic she calls Primo PostHuman. This is a hypothetical artificial body that could replace our own and into which we could, in theory, upload our consciousness. This is more in the realm of living forever but is a concept as distant to us as Leonardo da Vinci’s sketch of a flying machine was to 15th century Europeans.

Even so, while the replacement body seems much closer to science fiction than science, recent advances in robotics and prosthetics have not only given us artificial arms that can detect pressure and temperature but limbs that can be controlled by thoughts using a brain-computer interface.

As a transhumanist, Vita-More is excited by these scientific developments. She defines a transhumanist to be “a person who wants to engage with technology, extend the human lifespan, intervene with the disease of aging, and wants to look critically at all of these things”.

Transhumanism, she explains, looks at not just augmenting or bypassing the frailties of the human body but also improving intelligence, eradicating diseases and disabilities, and even equipping us with greater empathy.

“The goal is to stay alive as long as possible, as healthy as possible, with greater consciousness or humaneness. No-one wants to stay alive drooling in a wheelchair,” she adds.

Who wouldn’t want to be smarter, stronger, healthier and kinder? What could possibly go wrong?

A lot, says Dr Fiachra O’Brolcháin, a Marie Curie/Assistid Research Fellow at the Institute of Ethics, Dublin City University whose research involves the ethics of technology.

Take for example being taller than average: this correlates with above average income so it is a desirable trait. But if medical technology allowed for parents to choose a taller than average child, then this could lead to a “height race”, where each generation becomes taller and taller, he explains.

“Similarly, depending on the society, even non-homophobic people might select against having gay children (assuming this were possible) if they thought this would be a disadvantage. We might find ourselves inaugurating an era of ‘liberal eugenics’, in which future generations are created according to consumer choice.”

Then there is the problem of affordability. Most of us do not have the financial means to acquire the latest cutting-edge tech until prices drop and it becomes mainstream. Imagine a future where only the rich could access human enhancements, live long lives and avoid health problems.

Elysium, starring Matt Damon, takes this idea to its most extreme, leading to a scenario similar to what O’Brolcháin describes as “an unbridgeable divide between the enhanced and the unenhanced”.

Despite the hyper focus on these technological enhancements that come with real risks and ethical dilemmas, the transhumanist movement also seems to be about kicking back against – or at least questioning – what society expects of you.

“There’s a certain parameter of what is normal or natural. There’s a certain parameter of what one is supposed to be,” says Vita-More.

“You’re supposed to go to school at a certain age, get married at a certain age, produce children, retire and grow old. You’re supposed to live until you are 80, be happy, die and make way for the young.”

Vita-More sees technology as freeing us from these societal and biological constraints. Why can’t we choose who we are beyond the body we were born with? Scholars on the sociology of the early Web showed that Cyberspace became a place for this precise form of expression. Maybe technology will continue to provide a platform for this reinvention of what it is to be human.

Maybe, where we’re going, we won’t need bodies.

Digital heaven

Nell Watson’s job is to think about the future and she says: “I often wonder if, since we could be digitised from the inside out – not in the next 10 years but sometime in this century – we could create a kind of digital heaven or playground where our minds will be uploaded and we could live with our friends and family away from the perils of the physical world.

“It wouldn’t really matter if our bodies suddenly stopped functioning, it wouldn’t be the end of the world. What really matters is that we could still live on.”

In other words you could simply upload to a new, perhaps synthetic, body.

As a futurist with Singularity University (SU), a Silicon Valley-based corporation that is part university, part business incubator, Watson, in her own words, is “someone who looks at the world today and projects into the future; who tries to figure out what current trends mean in terms of the future of technology, society and how these two things intermingle”.

She talks about existing technologies that are already changing our bodies and our minds: “There are experiments using DNA origami. It’s a new technique that came out a few years ago and uses the natural folding abilities of DNA to create little Lego blocks out of DNA on a tiny, tiny scale. You can create logic gates – the basic components of computers – out of these things.

“These are being used experimentally today to create nanobots that can go inside the bloodstream and destroy leukaemia cells, and in trials they have already cured two people of leukaemia. It is not science fiction: it is fact.”

Nanobots can also carry out distributed computing, that is, communicate with each other, inside living things, she says, explaining that this has been done successfully with cockroaches.

Recording everything

“The cockroach essentially has an on-board computer and if you scale this up to humans and optimise it there is no reason why we can’t have our smartphones inside our bodies instead of carrying them around,” she says.

This on-board AI travelling around our bloodstream would act as a co-pilot: seeing what you see, experiencing what you experience, recording everything and maybe even mapping every single neuron in your brain while it’s at it. And with a digitised copy of your brain you (whatever ‘you’ is) could, in theory, be uploaded to the cloud.

Does this mean that we could never be disconnected from the web, ever again? What if your ‘internal smartphone’ is hacked? Could our thoughts be monitored?

Humans have become so dependent on our smartphones and so used to sharing our data with third parties, that this ‘co-pilot’ inside us might be all too readily accepted without deeper consideration.

Already, novel technologies are undermining privacy to an alarming degree, says O’Brolcháin.

“In a world without privacy, there is a great risk of censorship and self-censorship. Ultimately, this affects people’s autonomy – their ability to decide what sort of life they want to lead for themselves, to develop their own conception of the good life.

“This is one of the great ironies of the current wave of technologies – they are born of individualistic societies and often defended in the name of individual rights but might create a society that can no longer protect individual autonomy,” he warns.

Okay, so an invincible body and a super brain have their downsides but what about technology that expands our consciousness, making us wiser, nicer, all-round better folks? Could world peace be possible if we enhanced our morality?

“If you take a look at humanity you can see fighting, wars, terrorism, anger. Television shows are full of violence, society places an emphasis on wealth and greed. I think part of the transhumanist scope is [to offset this with] intentional acts of kindness,” says Vita-More, who several times during our interview makes the point that technology alone cannot evolve to make a better world unless humanity evolves alongside.

Vita-More dismisses the notion of enhancement for enhancement’s sake, a nod to the grinder movement of DIY body-hacking, driven mostly by curiosity.

Examples include implanting magnets into the fingertips to detect magnetic waves or sticking an RFID chip into your arm as UK professor Kevin Warwick did, allowing him to pass through security doors with a wave of his hand.

Moral enhancements

Along the same lines as Vita-More’s thinking, O’Brolcháin says “some philosophers argue that moral enhancements will be necessary if enhancements are not to be used for malevolent ends”.

“Moral enhancement may result in people who are less greedy, less aggressive, more concerned with addressing serious global issues like climate change,” he muses.

But the difficulty is deciding what is moral. After all, he says, the ‘good’ that groups like Isis want to promote is vastly at odds with the values of Ireland. So who gets to decide which moral enhancements are developed? Perhaps they will come with the latest internal smartphone upgrade, or be installed at birth by government.

Technology does make life better, and it is an exciting time for robotics, artificial intelligence and nanotechnology. But humans have a long way to go before we work out how we can co-exist with the future we are building right now.

Artificial intelligence: ‘Homo sapiens will be split into a handful of gods and the rest of us’

November 8, 2015


If you wanted relief from stories about tyre factories and steel plants closing, you could try relaxing with a new 300-page report from Bank of America Merrill Lynch which looks at the likely effects of a robot revolution.

But you might not end up reassured. Though it promises robot carers for an ageing population, it also forecasts huge numbers of jobs being wiped out: up to 35% of all workers in the UK and 47% of those in the US, including white-collar jobs, seeing their livelihoods taken away by machines.

Haven’t we heard all this before, though? From the luddites of the 19th century to print unions protesting in the 1980s about computers, there have always been people fearful about the march of mechanisation. And yet we keep on creating new job categories.

However, there are still concerns that the combination of artificial intelligence (AI) – which is able to make logical inferences about its surroundings and experience – married to ever-improving robotics, will wipe away entire swaths of work and radically reshape society.

“The poster child for automation is agriculture,” says Calum Chace, author of Surviving AI and the novel Pandora’s Brain. “In 1900, 40% of the US labour force worked in agriculture. By 1960, the figure was a few per cent. And yet people had jobs; the nature of the jobs had changed.

“But then again, there were 21 million horses in the US in 1900. By 1960, there were just three million. The difference was that humans have cognitive skills – we could learn to do new things. But that might not always be the case as machines get smarter and smarter.”

What if we’re the horses to AI’s humans? To those who don’t watch the industry closely, it’s hard to see how quickly the combination of robotics and artificial intelligence is advancing. Last week a team from the Massachusetts Institute of Technology released a video showing a tiny drone flying through a lightly forested area at 30mph, avoiding the trees – all without a pilot, using only its onboard processors. Of course it can outrun a human-piloted one.

MIT has also built a “robot cheetah” which can jump over obstacles of up to 40cm without help. Add to that the standard progress of computing, where processing power doubles roughly every 18 months (or, equally, prices for capability halve), and you can see why people like Chace are getting worried.
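That "doubles roughly every 18 months" figure is ordinary compound growth, and a quick back-of-envelope calculation (an illustration, not a claim from the article) shows how fast it stacks up:

```python
# Compound growth under a fixed doubling period: the logic behind the
# "processing power doubles roughly every 18 months" observation.

def doublings(years: float, doubling_period_years: float = 1.5) -> float:
    """How many doubling periods fit into the given span of years."""
    return years / doubling_period_years

def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Total multiplication of capability over the given span of years."""
    return 2 ** doublings(years, doubling_period_years)

print(growth_factor(3))   # 2 doublings in 3 years: 4x
print(growth_factor(15))  # 10 doublings in 15 years: 1024x
```

The same arithmetic read in reverse gives the price side of the observation: capability per pound roughly quadruples every three years.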

Drone flies autonomously through a forested area


But the incursion of AI into our daily life won’t begin with robot cheetahs. In fact, it began long ago; the edge is thin, but the wedge is long. Cooking systems with vision processors can decide whether burgers are properly cooked. Restaurants can give customers access to tablets with the menu and let people choose without needing service staff.

Lawyers who used to slog through giant files for the “discovery” phase of a trial can turn it over to a computer. An “intelligent assistant” called Amy will, via email, set up meetings autonomously. Google announced last week that you can get Gmail to write appropriate responses to incoming emails. (You still have to act on your responses, of course.)

Further afield, Foxconn, the Taiwanese company which assembles devices for Apple and others, aims to replace much of its workforce with automated systems. The AP news agency gets news stories written automatically about sports and business by a system developed by Automated Insights. The longer you look, the more you find computers displacing simple work. And the harder it becomes to find jobs for everyone.

So how much impact will robotics and AI have on jobs, and on society? Carl Benedikt Frey, who with Michael Osborne in 2013 published the seminal paper The Future of Employment: How Susceptible Are Jobs to Computerisation? – on which the BoA report draws heavily – says that he doesn’t like to be labelled a “doomsday predictor”.

He points out that even while some jobs are replaced, new ones spring up that focus more on services and interaction with and between people. “The fastest-growing occupations in the past five years are all related to services,” he tells the Observer. “The two biggest are Zumba instructor and personal trainer.”

Frey observes that technology is leading to a rarefaction of leading-edge employment, where fewer and fewer people have the necessary skills to work in the frontline of its advances. “In the 1980s, 8.2% of the US workforce were employed in new technologies introduced in that decade,” he notes. “By the 1990s, it was 4.2%. For the 2000s, our estimate is that it’s just 0.5%. That tells me that, on the one hand, the potential for automation is expanding – but also that technology doesn’t create that many new jobs now compared to the past.”

This worries Chace. “There will be people who own the AI, and therefore own everything else,” he says. “Which means homo sapiens will be split into a handful of ‘gods’, and then the rest of us.

“I think our best hope going forward is figuring out how to live in an economy of radical abundance, where machines do all the work, and we basically play.”

Arguably, we might be part of the way there already; is a dance fitness programme like Zumba anything more than adult play? But, as Chace says, a workless lifestyle also means “you have to think about a universal income” – a basic, unconditional level of state support.

Perhaps the biggest problem is that there has been so little examination of the social effects of AI. Frey and Osborne are contributing to Oxford University’s programme on the future impacts of technology; at Cambridge, Observer columnist John Naughton and David Runciman are leading a project to map the social impacts of such change. But technology moves fast; it’s hard enough figuring out what happened in the past, let alone what the future will bring.

But some jobs probably won’t be vulnerable. Does Frey, now 31, think that he will still have a job in 20 years’ time? There’s a brief laugh. “Yes.” Academia, at least, looks safe for now – at least in the view of the academics.

Smartphone manufacturer Foxconn is aiming to automate much of its production facility. Photograph: Pichi Chuang/Reuters

The danger of change is not destitution, but inequality

Productivity is the secret ingredient in economic growth. In the late 18th century, the cleric and scholar Thomas Malthus notoriously predicted that a rapidly rising human population would result in misery and starvation.

But Malthus failed to anticipate the drastic technological changes – from the steam-powered loom to the combine harvester – that would allow the production of food and the other necessities of life to expand even more rapidly than the number of hungry mouths. The key to economic progress is this ability to do more with the same investment of capital and labour.

The latest round of rapid innovation, driven by the advance of robots and AI, is likely to power continued improvements.

Recent research led by Guy Michaels at the London School of Economics looked at detailed data across 14 industries and 17 countries over more than a decade, and found that the adoption of robots boosted productivity and wages without significantly undermining jobs.

Robotisation has reduced the number of working hours needed to make things; but at the same time as workers have been laid off from production lines, new jobs have been created elsewhere, many of them more creative and less dirty. So far, fears of mass layoffs as the machines take over have proven almost as unfounded as those that have always accompanied other great technological leaps forward.

There is an important caveat to this reassuring picture, however. The relatively low-skilled factory workers who have been displaced by robots are rarely the same people who end up as app developers or analysts, and technological progress is already being blamed for exacerbating inequality, a trend Bank of America Merrill Lynch believes may continue in future.

So the rise of the machines may generate huge economic benefits; but unless it is carefully managed, those gains may be captured by shareholders and highly educated knowledge workers, exacerbating inequality and leaving some groups out in the cold. Heather Stewart