Nick Bostrom: What happens when our computers get smarter than we are?

February 01, 2018

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?


We’re living in the Last Era Before Artificial General Intelligence

January 05, 2018

When we thought about preparing for our future, we used to think about going to a good college and moving for a good job, one that would put us on a relatively good career trajectory toward a stable life in which we prosper in a free-market meritocracy, competing against fellow humans.

However, over the course of the next few decades, Homo sapiens, including Generations Z and Alpha, may be among the last people to grow up in a pre-automation, pre-AGI world.

Considering the exponential levels of technological progress expected in the next 30 years, that is hard to put into words or even into historical context: there is no historical precedent for, and there are no words to describe, what next-gen AI might become.

Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.

Pre-Singularity Years

In the years before wide-scale automation and sophisticated AI, we live believing things are changing fast. Retail is shifting to e-commerce and new modes of buying and convenience, self-driving and electric cars are coming, tech firms in specific verticals still rule the planet, and countries still vie for dominance with outdated military traditions, their own political bubbles, and outdated modes of hierarchy, authority and economic privilege.

We live in a world where AI is gaining momentum in popular thought, but in practice is still at the level of ANI: Artificial Narrow Intelligence. Rudimentary NLP, computer vision, robotic movement, and so on and so forth. We’re beginning to interact with personal assistants via smart speakers, but not in any fluid way. The interactions are repetitive. Like Google searching the same thing, on different days.

In this reality, we think about AI in terms useful to us, such as teaching machines to learn so that they can do the things humans do, and in turn help humans: a kind of machine learning that is more about coding and algorithms than any actual artificial intelligence. Our world here is starting to shift into something else: the internet is maturing, software is getting smarter in the cloud, data is being collected everywhere, but no explosion takes place, even as more people on the planet get access to the Web.

When Everything Changes

Between 2014 and 2021, an entire 20th century’s worth of progress will have occurred, and then something strange happens: progress begins to accelerate, with more of it made in shorter and shorter periods. We have to remember that the fruits of this transformation won’t belong just to Facebook, or Google, or China, or the U.S.; they will simply be the new normal for everyone.

Many believe that sometime between 2025 and 2050, AI becomes natively self-learning, reaching an Artificial General Intelligence that completely changes the game.

After that point, AI not only outperforms human beings at tasks, problem solving and even such human constructs as creativity, emotional intelligence, manipulating complex environments and predicting the future; it reaches Artificial Super Intelligence relatively quickly thereafter.

We Live in Anticipation of the Singularity

As such, in 2017–18 we might be living in the last “human” era. Here we think of AI as “augmenting” our world; we think of smartphones as miniaturized supercomputers and the cloud as an expansion of our neocortex, in a self-serving existence where concepts such as wealth, consumption and human quality of life trump all other considerations.

Here we view computers as man-made tools, robots as slaves, and AI as a kind of “software magic” obliged to do our bidding.

Whatever the bottlenecks of carbon-based life forms might be, silicon-based AGI may have many advantages. Machines that can self-learn, self-replicate and program themselves might come into being partly by copying how the human brain works, but, as with the difference between AlphaGo and AlphaGo Zero, the real breakthrough might be made from a blank slate.

While humans appear destined to create AGI, it doesn’t stand to reason that AGI will think, behave or have motivations the way people or cultures do, or even the way our models of superintelligence predict.

Artificial Intelligence with Creative Agency

For human beings, the Automation Economy only arrives after a point where AGI has come into being. Such an AGI would be able to program robots, facilitate smart cities and help humans govern themselves in a way that is impossible today.

AGI could also manipulate and advance STEM fields such as green tech, biotech, 3D printing, nanotech, predictive algorithms and quantum physics, likely in ways that humans up to that point could only advance relatively slowly.

Everything pre-Singularity would feel like ancient history: a far more radical past than the one before the invention of computers or the internet. AGI could impact literally everything, as we are already seeing with primitive machine-intelligence systems.

In such a world, AGI would not only be able to self-learn and surpass all of the human knowledge and data collected up to that point, but also to create its own fields, set its own goals and have its own interests (beyond what humans would likely be able to recognize). We might term this Artificially Intelligent Creative Agency (AICA).

AI Not as a Slave, but as a Legacy

Such a being would indeed feel like a God to us. Not a God that created man, but an entity that humanity made, just a few thousand years after we were storytellers, explorers, and then builders and traders.

A human brain consists of 86 billion neurons linked by trillions of synapses, but it is not well networked to other nodes and to external reality. It has to “experience” them through systems of relatedness, and it remains in relative isolation from them. AICA would not have this constraint: it would be networked to all IoT devices and able to hack into any human system, network or quantum computer. AICA would not be led by instincts of possession, mating, aggression or the other emotive agencies of the mammalian brain. Whatever ethics, values and philosophical constraints it might have could be refined over centuries, not the mere months and years of an ordinary human lifetime.

AGI might not be humanity’s last invention, but symbolically it would usher in the fourth industrial revolution and then some. There would be many grades and instances of limited self-learning in deep-learning algorithms, but AGI would represent a different quality. Likely it would instigate a self-aware separation between humanity and the descendant order of AI, whatever that might be.

High-Speed Quantum Evolution to AGI

The years before the Singularity

The road from ANI to AGI to ASI to some speculative AICA is not just a journey from narrow to general to super intelligence, but an evolutionary corridor for humanity across a distance of progress that could also be symbiotic. It’s not clear how this might work, but some human beings, to protect their species, might undertake “alterations.” However cybernetic, genetic or otherwise invasive these changes might be, AI is surely going to be there every step of the way.

In the corporate race to AI, governments such as China’s and the U.S.’s also want to “own” and monetize this for their own purposes. Fleets of cars and semi-intelligent robots will make certain individuals and companies very rich. There might be no human revolution over wealth inequality before AGI arrives because, comparatively speaking, the conditions under which AGI arises may be closer than we assume.

We Were Here

If the calculations per second (cps) of the human brain are static, at around 10¹⁶, or 10 quadrillion cps, what does it take for AI to replicate some kind of AGI field? Certainly it is not just processing power, exponentially faster supercomputers, quantum computing or improved deep-learning algorithms, but a combination of all of these and perhaps many other factors as well. In late 2017, AlphaGo Zero “taught itself” Go without using human data, generating its own data by playing against itself.

Living in a world that can better imagine AGI will mean planning ahead, not just coping with changes to human systems. In a world where democracy can be hacked, and where one-party socialism (under which concepts like freedom of speech, human rights or openness to a diversity of ideas are not practiced in the same way) may be the heir apparent to future iterations of artificial intelligence, it’s interesting to imagine the kinds of human-controlled AI systems that might arise before AGI does (if it ever arrives).

The Human Hybrid Dilemma

Considering our own violent history of annihilating biodiversity, modeling AI by plagiarizing the brain through some kind of whole-brain emulation might not be ethical. While it might mimic and lead to self-awareness, such an AGI might be dangerous, in the same sense that we are a danger to ourselves and to other life forms in the galaxy.

Moore’s Law might have sounded like an impressive analogy to the Singularity in the 1990s, but not today. Many people working in the AI field are rightfully skeptical of AGI, yet it’s plausible that even most of them suffer from a linear-versus-exponential bias in their thinking. On the path towards the Singularity, we are still living in slow motion.

We Aren’t Ready for What’s Inevitable

We’re living in the last era before Artificial General Intelligence, and as usual, human civilization appears quite stupid. We don’t even really know what’s coming.

While our simulations are improving, and we’re discovering exoplanets that seem most likely to harbor life, our ability to predict the future speed of technology is mortifyingly bad. Our understanding of the implications of AGI, and even of machine intelligence, for the planet is poor. Is it because this has never happened in recorded history and represents such a paradigm shift, or could there be another reason?

Amazon can create and monetize patents in a hyper-business model; Google, Facebook, Alibaba and Tencent can fight over AI talent, luring academics to corporate workaholic lifestyles with the ability to name their salaries; but in 2017, humanity’s vision of the future is still myopic.

We can barely imagine that our prime directive in the universe might not be simply to grow, explore, make babies and exploit everything in our path. And we certainly can’t imagine a world where intelligent machines aren’t simply our slaves, tools and algorithms designed to make our lives more pleasurable and convenient.

Google’s AI Wizard Unveils a New Twist on Neural Networks

November 18, 2017

If you want to blame someone for the hoopla around artificial intelligence, 69-year-old Google researcher Geoff Hinton is a good candidate.

The droll University of Toronto professor jolted the field onto a new trajectory in October 2012. With two grad students, Hinton showed that an unfashionable technology he’d championed for decades called artificial neural networks permitted a huge leap in machines’ ability to understand images. Within six months, all three researchers were on Google’s payroll. Today neural networks transcribe our speech, recognize our pets, and fight our trolls.

But Hinton now belittles the technology he helped bring to the world. “I think the way we’re doing computer vision is just wrong,” he says. “It works better than anything else at present but that doesn’t mean it’s right.”

In its place, Hinton has unveiled another “old” idea that might transform how computers see—and reshape AI. That’s important because computer vision is crucial to applications such as self-driving cars and software that plays doctor.

Late last week, Hinton released two research papers that he says prove out an idea he’s been mulling for almost 40 years. “It’s made a lot of intuitive sense to me for a very long time, it just hasn’t worked well,” Hinton says. “We’ve finally got something that works well.”

Hinton’s new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. In one of the papers posted last week, Hinton’s capsule networks matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits.

In the second, capsule networks almost halved the best previous error rate on a test that challenges software to recognize toys such as trucks and cars from different angles. Hinton has been working on his new technique with colleagues Sara Sabour and Nicholas Frosst at Google’s Toronto office.

Capsule networks aim to remedy a weakness of today’s machine-learning systems that limits their effectiveness. Image-recognition software in use today by Google and others needs a large number of example photos to learn to reliably recognize objects in all kinds of situations. That’s because the software isn’t very good at generalizing what it learns to new scenarios, for example understanding that an object is the same when seen from a new viewpoint.

To teach a computer to recognize a cat from many angles, for example, could require thousands of photos covering a variety of perspectives. Human children don’t need such explicit and extensive training to learn to recognize a household pet.

Hinton’s idea for narrowing the gulf between the best AI systems and ordinary toddlers is to build a little more knowledge of the world into computer-vision software. Capsules—small groups of crude virtual neurons—are designed to track different parts of an object, such as a cat’s nose and ears, and their relative positions in space. A network of many capsules can use that awareness to understand when a new scene is in fact a different view of something it has seen before.
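
To make the mechanics concrete, here is a minimal NumPy sketch of the routing-by-agreement procedure described in Hinton’s papers: lower-level capsules vote on higher-level capsules, and votes that agree with the consensus are strengthened. The shapes and random inputs are invented for illustration; in the real networks the predictions come from learned transformation matrices and everything is trained end to end.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Squash nonlinearity: keeps a vector's direction, maps its length into [0, 1).
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def route(u_hat, n_iters=3):
    # u_hat: lower capsules' predictions for upper capsules, shape (n_lower, n_upper, dim).
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                          # routing logits, start uniform
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # each lower capsule's votes
        v = squash((c[..., None] * u_hat).sum(axis=0))        # candidate upper capsules
        b += (u_hat * v[None]).sum(axis=-1)                   # reward agreement with consensus
    return v

# Illustrative shapes: 6 lower capsules voting for 3 upper capsules of dimension 4.
v = route(np.random.randn(6, 3, 4))
print(v.shape)  # (3, 4)
```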

Hinton formed his intuition that vision systems need such an inbuilt sense of geometry in 1979, when he was trying to figure out how humans use mental imagery. He first laid out a preliminary design for capsule networks in 2011. The fuller picture released last week was long anticipated by researchers in the field. “Everyone has been waiting for it and looking for the next great leap from Geoff,” says Kyunghyun Cho, a professor at NYU who works on image recognition.

It’s too early to say how big a leap Hinton has made—and he knows it. The AI veteran segues from quietly celebrating that his intuition is now supported by evidence, to explaining that capsule networks still need to be proven on large image collections, and that the current implementation is slow compared to existing image-recognition software.

Hinton is optimistic he can address those shortcomings. Others in the field are also hopeful about his long-maturing idea.

Roland Memisevic, cofounder of image-recognition startup Twenty Billion Neurons and a professor at the University of Montreal, says Hinton’s basic design should be capable of extracting more understanding from a given amount of data than existing systems do. If proven out at scale, that could be helpful in domains such as healthcare, where image data for training AI systems is much scarcer than the large volume of selfies available around the internet.

In some ways, capsule networks are a departure from a recent trend in AI research. One interpretation of the recent success of neural networks is that humans should encode as little knowledge as possible into AI software, and instead make them figure things out for themselves from scratch. Gary Marcus, a professor of psychology at NYU who sold an AI startup to Uber last year, says Hinton’s latest work represents a welcome breath of fresh air. Marcus argues that AI researchers should be doing more to mimic how the brain has built-in, innate machinery for learning crucial skills like vision and language. “It’s too early to tell how far this particular architecture will go, but it’s great to see Hinton breaking out of the rut that the field has seemed fixated on,” Marcus says.

UPDATED, Nov. 2, 12:55 PM: This article has been updated to include the names of Geoff Hinton’s co-authors.

Deus ex machina: former Google engineer is developing an AI god

October 18, 2017

Intranet service? Check. Autonomous motorcycle? Check. Driverless car technology? Check. Obviously the next logical project for a successful Silicon Valley engineer is to set up an AI-worshipping religious organization.

Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s Backchannel. Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

Levandowski was co-founder of autonomous trucking company Otto, which Uber bought in 2016. He was fired from Uber in May amid allegations that he had stolen trade secrets from Google to develop Otto’s self-driving technology. He must be grateful for this religious fall-back project, first registered in 2015.

The Way of the Future team did not respond to requests for more information about their proposed benevolent AI overlord, but history tells us that new technologies and scientific discoveries have continually shaped religion, killing old gods and giving birth to new ones.

As author Yuval Noah Harari notes: “That is why agricultural deities were different from hunter-gatherer spirits, why factory hands and peasants fantasised about different paradises, and why the revolutionary technologies of the 21st century are far more likely to spawn unprecedented religious movements than to revive medieval creeds.”

Religions, Harari argues, must keep up with the technological advancements of the day or they become irrelevant, unable to answer or understand the quandaries facing their disciples.

“The church does a terrible job of reaching out to Silicon Valley types,” acknowledges Christopher Benek, a pastor in Florida and founding chair of the Christian Transhumanist Association.

Silicon Valley, meanwhile, has sought solace in technology and has developed quasi-religious concepts including the “singularity”, the hypothesis that machines will eventually be so smart that they will outperform all human capabilities, leading to a superhuman intelligence that will be so sophisticated it will be incomprehensible to our tiny fleshy, rational brains.

Anthony Levandowski, the former head of Uber’s self-driving program, with one of the company’s driverless cars in San Francisco. Photograph: Eric Risberg/AP

For futurists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, leading to digital immortality. Others like Elon Musk and Stephen Hawking warn that such systems pose an existential threat to humanity.

“With artificial intelligence we are summoning the demon,” Musk said at a conference in 2014. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”

Benek argues that advanced AI is compatible with Christianity – it’s just another technology that humans have created under guidance from God that can be used for good or evil.

“I totally think that AI can participate in Christ’s redemptive purposes,” he said, by ensuring it is imbued with Christian values.

“Even if people don’t buy organized religion, they can buy into ‘do unto others’.”

For transhumanist and “recovering Catholic” Zoltan Istvan, religion and science converge conceptually in the singularity.

“God, if it exists as the most powerful of all singularities, has certainly already become pure organized intelligence,” he said, referring to an intelligence that “spans the universe through subatomic manipulation of physics”.

“And perhaps, there are other forms of intelligence more complicated than that which already exist and which already permeate our entire existence. Talk about ghost in the machine,” he added.

For Istvan, an AI-based God is likely to be more rational and more attractive than current concepts (“the Bible is a sadistic book”) and, he added, “this God will actually exist and hopefully will do things for us.”

We don’t know whether Levandowski’s Godhead ties into any existing theologies or is a manmade alternative, but it’s clear that advancements in technologies including AI and bioengineering kick up the kinds of ethical and moral dilemmas that make humans seek advice and comfort from a higher power: what will humans do once artificial intelligence outperforms us in most tasks? How will society be affected by the ability to create super-smart, athletic “designer babies” that only the rich can afford? Should a driverless car kill five pedestrians or swerve to the side to kill the owner?

If traditional religions don’t have the answer, AI – or at least the promise of AI – might be alluring.

AI May Soon Replace Even the Most Elite Consultants

August 06, 2017

Amazon’s Alexa just got a new job. In addition to her other 15,000 skills like playing music and telling knock-knock jokes, she can now also answer economic questions for clients of the Swiss global financial services company, UBS Group AG.

According to the Wall Street Journal (WSJ), a new partnership between UBS Wealth Management and Amazon allows some of UBS’s European wealth-management clients to ask Alexa certain financial and economic questions. Alexa will then answer their queries with information provided by UBS’s chief investment office, without the client even having to pick up the phone or visit a website. And this is likely just Alexa’s first step into offering business services. Soon she will probably be booking appointments, analyzing markets, maybe even buying and selling stocks. While the financial services industry has already begun the shift from active management to passive management, artificial intelligence will move the market even further, to management by smart machines, as in the case of BlackRock, which is rolling computer-driven algorithms and models into more traditional actively managed funds.

But the financial services industry is just the beginning. Over the next few years, artificial intelligence may exponentially change the way we all gather information, make decisions, and connect with stakeholders. Hopefully this will be for the better and we will all benefit from timely, comprehensive, and bias-free insights (given research that human beings are prone to a variety of cognitive biases). It will be particularly interesting to see how artificial intelligence affects the decisions of corporate leaders — men and women who make the many decisions that affect our everyday lives as customers, employees, partners, and investors.

Already, leaders are starting to use artificial intelligence to automate mundane tasks such as calendar maintenance and making phone calls. But AI can also help support more complex decisions in key areas such as human resources, budgeting, marketing, capital allocation and even corporate strategy — long the bastion of bespoke consulting firms such as McKinsey, Bain, and BCG, and the major marketing agencies.

The shift to AI solutions will be a tough pill to swallow for the corporate consulting industry. According to recent research, the U.S. market for corporate advice alone is nearly $60 billion. Almost all of that advice is high-cost and human-based.

One might argue that corporate clients prefer speaking to their strategy consultants to get high-priced, custom-tailored advice based on small teams doing expensive and time-consuming work. And we agree that consultants provide insightful advice and guidance. However, a great deal of what is paid for with consulting services is data analysis and presentation. Consultants gather, clean, process, and interpret data from disparate parts of organizations. They are very good at this, but AI is even better. For example, the processing power of four smart consultants with Excel spreadsheets is minuscule in comparison to that of a single smart computer using AI, running for an hour, based on continuous, non-stop machine learning.

In today’s big data world, AI and machine learning applications already analyze massive amounts of structured and unstructured data and produce insights in a fraction of the time and at a fraction of the cost of consultants in the financial markets. Moreover, machine learning algorithms are capable of building computer models that make sense of complex phenomena by detecting patterns and inferring rules from data — a process that is very difficult for even the largest and smartest consulting teams. Perhaps sooner than we think, CEOs could be asking, “Alexa, what is my product line profitability?” or “Which customers should I target, and how?” rather than calling on elite consultants.
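
As a toy illustration of that kind of rule inference (and not any particular consulting tool), a few lines of scikit-learn can fit a decision tree to data and print the rules it discovered; the features and labels below are invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: two features per customer and a yes/no purchase label.
X = np.array([[22, 1], [47, 0], [31, 1], [63, 0], [26, 1], [52, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

# Fit a small tree and print the human-readable rules it inferred from the data.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(model, feature_names=["age", "urban"]))
```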

Another area in which leaders will soon be relying on AI is in managing their human capital. Despite the best efforts of many, mentorship, promotion, and compensation decisions are undeniably political. Study after study has shown that deep biases affect how groups like women and minorities are managed. For example, women in business are described in less positive terms than men  and receive less helpful feedback. Minorities are less likely to be hired and are more likely to face bias from their managers. These inaccuracies and imbalances in the system only hurt organizations as leaders are less able to nurture the talent of their entire workforce and to appropriately recognize and reward performance. Artificial intelligence can help bring impartiality to these difficult decisions. For example, AI could determine if one group of employees is assessed, managed, or compensated differently.  Just imagine: “Alexa, does my organization have a gender pay gap?” (Of course, AI can only be as unbiased as the data provided to the system.)
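
A first pass at that pay-gap question is just a grouped comparison. The sketch below assumes a hypothetical HR table with level, gender and salary columns; a real analysis would control for role, tenure and many other factors.

```python
import pandas as pd

# Hypothetical HR extract; column names and numbers are assumptions for illustration.
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M"],
    "level":  [3, 3, 4, 4, 5, 5],
    "salary": [88_000, 95_000, 110_000, 118_000, 140_000, 139_000],
})

# Median pay by level and gender: a first-pass gap check before any modeling.
gap = df.groupby(["level", "gender"])["salary"].median().unstack()
gap["gap_pct"] = (gap["M"] - gap["F"]) / gap["M"] * 100
print(gap)
```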

In addition, AI is already helping in the customer engagement and marketing arena. It’s clear and well documented by the AI patent activities of the big five platforms — Apple, Alphabet, Amazon, Facebook and Microsoft — that they are using it to market and sell goods and services to us. But they are not alone. Recently, HBR documented how Harley-Davidson was using AI to determine what was and wasn’t working across various marketing channels. It used this new skill to allocate resources to different marketing choices, thereby “eliminating guesswork.” It is only a matter of time until they and others ask, “Alexa, where should I spend my marketing budget?” to avoid the age-old adage: “I know that half my marketing budget is effective; my only question is — which half?”

AI can also bring value to the budgeting and yearly capital allocation process. Even though markets change dramatically every year, products become obsolete, and technology advances, most businesses allocate their capital the same way year after year. Whether that’s due to inertia, unconscious bias, or error, some business units rake in investments while others starve. Even when the management team has committed to a new digital initiative, it usually ends up with the scraps after the declining cash cows are “fed.” Artificial intelligence can help break through this budgeting black hole by tracking the return on investment by business unit, or by measuring how much is allocated to growing versus declining product lines. Business leaders may soon be asking, “Alexa, what percentage of my budget is allocated differently from last year?” and more complex questions.
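
The budget question is similarly mechanical once the data sits in one place. Below is a hedged sketch, with invented numbers, of measuring what share of the budget actually moved year over year: if each column holds percentage allocations summing to 100, half the total absolute change is the share reallocated.

```python
import pandas as pd

# Hypothetical budget allocations (percent of total) by business unit.
budget = pd.DataFrame(
    {"2016": [40, 30, 20, 10], "2017": [38, 31, 19, 12]},
    index=["Unit A", "Unit B", "Unit C", "Unit D"],
)

# Half the summed absolute change = percentage of the budget reallocated.
shift = (budget["2017"] - budget["2016"]).abs().sum() / 2
print(f"{shift:.1f}% of the budget moved year over year")
```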

Although many strategic leaders tout their keen intuition, hard work, and years of industry experience, much of this intuition is simply a deeper understanding of data that was historically difficult to gather and expensive to process. Not any longer. Artificial intelligence is rapidly closing this gap, and will soon be able to help human beings push past our processing capabilities and biases. These developments will change many jobs, for example, those of consultants, lawyers, and accountants, whose roles will evolve from analysis to judgement. Arguably, tomorrow’s elite consultants already sit on your wrist (Siri), on your kitchen counter (Alexa), or in your living room (Google Home).

The bottom line: corporate leaders, knowingly or not, are on the cusp of a major disruption in their sources of advice and information. “Quant Consultants” and “Robo Advisers” will offer faster, better, and more profound insights at a fraction of the cost and time of today’s consulting firms and other specialized workers. It is likely only a matter of time until all leaders and management teams can ask Alexa things like, “Who is the biggest risk to me in our key market?”, “How should we allocate our capital to compete with Amazon?” or “How should I restructure my board?”

Barry Libert is a board member and CEO adviser focused on platforms and networks. He is chairman of Open Matters, a machine learning company. He is also the coauthor of The Network Imperative: How to Survive and Grow in the Age of Digital Business Models.

Megan Beck is a digital consultant at OpenMatters and researcher at the SEI Center at Wharton. She is the coauthor of The Network Imperative: How to Survive and Grow in the Age of Digital Business Models.

Exponential Growth Will Transform Humanity in the Next 30 Years

February 25, 2017


By Peter Diamandis

As we close out 2016, if you’ll allow me, I’d like to take a risk and venture into a topic I’m personally compelled to think about… a topic that will seem far out to most readers.

Today’s extraordinary rate of exponential growth may do much more than just disrupt industries. It may actually give birth to a new species, reinventing humanity over the next 30 years.

I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a “Meta-Intelligence,” a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions. In this post, I’m investigating the driving forces behind such an evolutionary step, the historical pattern we are about to repeat, and the implications thereof. Again, I acknowledge that this topic seems far-out, but the forces at play are huge and the implications are vast. Let’s dive in…

A Quick Recap: Evolution of Life on Earth in 4 Steps

About 4.6 billion years ago, our solar system, the sun and the Earth were formed.

Step 1: 3.5 billion years ago, the first simple life forms, called “prokaryotes,” came into existence. These prokaryotes were super-simple, microscopic single-celled organisms, basically a bag of cytoplasm with free-floating DNA. They had neither a distinct nucleus nor specialized organelles.

Step 2: Fast-forwarding one billion years to 2.5 billion years ago, the next step in evolution created what we call “eukaryotes”—life forms that distinguished themselves by incorporating biological ‘technology’ into themselves. Technology that allowed them to manipulate energy (via mitochondria) and information (via chromosomes) far more efficiently. Fast forward another billion years for the next step.

Step 3: 1.5 billion years ago, these early eukaryotes began working collaboratively and formed the first “multi-cellular life,” of which you and I are the ultimate examples (a human is a multicellular creature of 10 trillion cells).

Step 4: The final step I want to highlight happened some 400 million years ago, when lungfish crawled out of the oceans onto the shores, and life evolved from the oceans onto land.

The Next Stages of Human Evolution: 4 Steps

Today, at a massively accelerated rate—some 100 million times faster than the steps I outlined above—life is undergoing a similar evolution. In this next stage of evolution, we are going from evolution by natural selection (Darwinism) to evolution by intelligent direction. Allow me to draw the analogy for you:

Step 1: Simple humans today are analogous to prokaryotes. Simple life, each life form independent of the others, competing and sometimes collaborating.

Step 2: Just as eukaryotes were created by ingesting technology, humans will incorporate technology into our bodies and brains that will allow us to make vastly more efficient use of information (via brain-computer interfaces, or BCI) and energy.

Step 3: Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.

Step 4: Finally, humanity is about to crawl out of the gravity well of Earth to become a multiplanetary species. Our journey to the moon, Mars, asteroids and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago.

The 4 Forces Driving the Evolution and Transformation of Humanity

Four primary driving forces are leading us towards our transformation of humanity into a meta-intelligence both on and off the Earth:

  1. We’re wiring our planet
  2. Emergence of brain-computer interface
  3. Emergence of AI
  4. Opening of the space frontier

Let’s take a look.

1. Wiring the Planet: Today, there are 2.9 billion people connected online. Within the next six to eight years, that number is expected to increase to nearly 8 billion, with each individual on the planet having access to a megabit-per-second connection or better. The wiring is taking place through the deployment of 5G on the ground, plus networks being deployed by Facebook, Google, Qualcomm, Samsung, Virgin, SpaceX and many others. Within a decade, every single human on the planet will have access to multi-megabit connectivity, the world’s information, and massive computational power on the cloud.

2. Brain-Computer Interface: A multitude of labs and entrepreneurs are working to create lasting, high-bandwidth connections between the digital world and the human neocortex (I wrote about that in detail here). Ray Kurzweil predicts we’ll see human-cloud connection by the mid-2030s, just 18 years from now. In addition, entrepreneurs like Bryan Johnson (and his company Kernel) are committing hundreds of millions of dollars towards this vision. The end results of connecting your neocortex with the cloud are twofold: first, you’ll have the ability to increase your memory capacity and/or cognitive function millions of fold; second, via a global mesh network, you’ll have the ability to connect your brain to anyone else’s brain and to emerging AIs, just like our cell phones, servers, watches, cars and all devices are becoming connected via the Internet of Things.

3. Artificial Intelligence/Human Intelligence: Next, and perhaps most significantly, we are on the cusp of an AI revolution. Artificial intelligence, powered by deep learning and funded by companies such as Google, Facebook, IBM, Samsung and Alibaba, will continue to rapidly accelerate and drive breakthroughs. Cumulative “intelligence” (both artificial and human) is the single greatest predictor of success for both a company or a nation. For this reason, beside the emerging AI “arms race,” we will soon see a race focused on increasing overall human intelligence. Whatever challenges we might have in creating a vibrant brain-computer interface (e.g., designing long-term biocompatible sensors or nanobots that interface with your neocortex), those challenges will fall quickly over the next couple of decades as AI power tools give us ever-increasing problem-solving capability. It is an exponential atop an exponential. More intelligence gives us the tools to solve connectivity and mesh problems and in turn create greater intelligence.

4. Opening the Space Frontier: Finally, it’s important to note that the human race is on the verge of becoming a multiplanetary species. Thousands of years from now, whatever we’ve evolved into, we will look back at these next few decades as the moment in time when the human race moved off Earth irreversibly. Today, billions of dollars are being invested privately into the commercial space industry. Efforts led by SpaceX are targeting humans on Mars, while efforts by Blue Origin are looking at taking humanity back to the moon, and plans by my own company, Planetary Resources, strive to unlock near-infinite resources from the asteroids.

In Conclusion

The rate of human evolution is accelerating as we transition from the slow and random process of “Darwinian natural selection” to a hyper-accelerated and precisely-directed period of “evolution by intelligent direction.” In this post, I chose not to discuss the power being unleashed by such gene-editing techniques as CRISPR-Cas9. Consider this yet another tool able to accelerate evolution by our own hand.

The bottom line is that change is coming, faster than ever considered possible. All of us leaders, entrepreneurs and parents have a huge responsibility to inspire and guide the transformation of humanity on and off the Earth. What we do over the next 30 years—the bridges we build to abundance—will impact the future of the human race for millennia to come. We truly live during the most exciting time ever in human history.

The Fourth Industrial Revolution Is Here

February 25, 2017

The Fourth Industrial Revolution is upon us and now is the time to act.

Everything is changing each day and humans are making decisions that affect life in the future for generations to come.

We have gone from steam engines to steel mills, to computers, to the Fourth Industrial Revolution, which involves a digital economy, artificial intelligence, big data and a new system that introduces a new story of our future, enabling different economic and human models.

Will the Fourth Industrial Revolution put humans first and empower technologies to give humans a better quality of life with cleaner air, water, food, health, a positive mindset and happiness? HOPE…

New AI-Based Search Engines are a “Game Changer” for Science Research

November 14, 2016

By Nicola Jones, Nature magazine

A free AI-based scholarly search engine that aims to outdo Google Scholar is expanding its corpus of papers to cover some 10 million research articles in computer science and neuroscience, its creators announced on 11 November. Since its launch last year, it has been joined by several other AI-based academic search engines, most notably a relaunched effort from computing giant Microsoft.

Semantic Scholar, from the non-profit Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington, unveiled its new format at the Society for Neuroscience annual meeting in San Diego. Some scientists who were given an early view of the site are impressed. “This is a game changer,” says Andrew Huberman, a neurobiologist at Stanford University, California. “It leads you through what is otherwise a pretty dense jungle of information.”

The search engine first launched in November 2015, promising to sort and rank academic papers using a more sophisticated understanding of their content and context. The popular Google Scholar has access to about 200 million documents and can scan articles that are behind paywalls, but it searches merely by keywords. By contrast, Semantic Scholar can, for example, assess which citations to a paper are most meaningful, and rank papers by how quickly citations are rising—a measure of how ‘hot’ they are.
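
That “hotness” ranking can be approximated crudely by sorting on recent citation growth. The records below are invented, and this is not AI2’s actual algorithm (which also weighs how meaningful each citation is), just a sketch of the idea.

```python
# Invented records: each paper's cumulative citation count per year.
papers = [
    {"title": "Paper A", "citations": {2014: 10, 2015: 30, 2016: 90}},
    {"title": "Paper B", "citations": {2014: 50, 2015: 55, 2016: 60}},
]

def velocity(paper):
    # Crude "hotness": citations gained in the most recent year on record.
    years = sorted(paper["citations"])
    return paper["citations"][years[-1]] - paper["citations"][years[-2]]

for p in sorted(papers, key=velocity, reverse=True):
    print(p["title"], velocity(p))
```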

When first launched, Semantic Scholar was restricted to 3 million papers in the field of computer science. Thanks in part to a collaboration with AI2’s sister organization, the Allen Institute for Brain Science, the site has now added millions more papers and new filters catering specifically for neurology and medicine; these filters enable searches based, for example, on which part of the brain or which cell type a paper investigates, which model organisms were studied and what methodologies were used. Next year, AI2 aims to index all of PubMed and expand to all the medical sciences, says chief executive Oren Etzioni.

“The one I still use the most is Google Scholar,” says Jose Manuel Gómez-Pérez, who works on semantic searching for the software company Expert System in Madrid. “But there is a lot of potential here.”

Microsoft’s revival

Semantic Scholar is not the only AI-based search engine around, however. Computing giant Microsoft quietly released its own AI scholarly search tool, Microsoft Academic, to the public this May, replacing its predecessor, Microsoft Academic Search, which the company stopped adding to in 2012.

Microsoft’s academic search algorithms and data are available for researchers through an application programming interface (API) and the Open Academic Society, a partnership between Microsoft Research, AI2 and others. “The more people working on this the better,” says Kuansan Wang, who is in charge of Microsoft’s effort. He says that Semantic Scholar is going deeper into natural-language processing—that is, understanding the meaning of full sentences in papers and queries—but that Microsoft’s tool, which is powered by the semantic search capabilities of the firm’s web-search engine Bing, covers more ground, with 160 million publications.

Like Semantic Scholar, Microsoft Academic provides useful (if less extensive) filters, including by author, journal or field of study. And it compiles a leaderboard of most-influential scientists in each subdiscipline. These are the people with the most ‘important’ publications in the field, judged by a recursive algorithm (freely available) that judges papers as important if they are cited by other important papers. The top neuroscientist for the past six months, according to Microsoft Academic, is Clifford Jack of the Mayo Clinic, in Rochester, Minnesota.

Other scholars say that they are impressed by Microsoft’s effort. The search engine is getting close to combining the advantages of Google Scholar’s massive scope with the more-structured results of subscription bibliometric databases such as Scopus and the Web of Science, says Anne-Wil Harzing, who studies science metrics at Middlesex University, UK, and has analysed the new product. “The Microsoft Academic phoenix is undeniably growing wings,” she says. Microsoft Research says it is working on a personalizable version—where users can sign in so that Microsoft can bring applicable new papers to their attention or notify them of citations to their own work—by early next year.

Other companies and academic institutions are also developing AI-driven software to delve more deeply into content found online. The Max Planck Institute for Informatics, based in Saarbrücken, Germany, for example, is developing an engine called DeepLife specifically for the health and life sciences. “These are research prototypes rather than sustainable long-term efforts,” says Etzioni.

In the long term, AI2 aims to create a system that will answer science questions, propose new experimental designs or throw up useful hypotheses. “In 20 years’ time, AI will be able to read—and more importantly, understand—scientific text,” Etzioni says.

This article is reproduced with permission and was first published on November 11, 2016.

Bill Gates talks about why artificial intelligence is nearly here and how to solve two big problems it creates

July 10, 2016


Bill Gates is excited about the rise of artificial intelligence but acknowledged the arrival of machines with greater-than-human capabilities will create some unique challenges.

After years of working on the building blocks of speech recognition and computer vision, Gates said enough progress has been made to ensure that in the next 10 years there will be robots to do tasks like driving and warehouse work as well as machines that can outpace humans in certain areas of knowledge.

“The dream is finally arriving,” Gates said, speaking with wife Melinda Gates on Wednesday at the Code Conference. “This is what it was all leading up to.”

However, as he said in an interview with Recode last year, such machine capabilities will pose two big problems.

The first is that it will eliminate a lot of existing types of jobs. Gates said that creates a need for a lot of retraining, but he notes that until schools have class sizes under 10 and people can retire at a reasonable age and take ample vacation, he isn’t worried about a lack of need for human labor.

The second issue is, of course, making sure humans remain in control of the machines. Gates has talked about that in the past, saying that he plans to spend time with people who have ideas on how to address that issue, noting work being done at Stanford, among other places.

And, in Gatesian fashion, he suggested a pair of books that people should read, including Nick Bostrom’s book on superintelligence and Pedro Domingos’ “The Master Algorithm.”

Melinda Gates noted that you can tell a lot about where her husband’s interest is by the books he has been reading. “There have been a lot of AI books,” she said.

How Artificial Superintelligence Will Give Birth To Itself

June 18, 2016


There’s a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here’s how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it’s critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what’s called “recursive self-improvement.” As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It’s an advantage that we biological humans simply don’t have.

As AI theorist Eliezer Yudkowsky notes in his essay “Artificial Intelligence as a Positive and a Negative Factor in Global Risk”:

An artificial intelligence could rewrite its code from scratch — it could change the underlying dynamics of optimization. Such an optimization process would wrap around much more strongly than either evolution accumulating adaptations or humans accumulating knowledge. The key implication for our purposes is that AI might make a huge jump in intelligence after reaching some threshold of criticality.

When it comes to the speed of these improvements, Yudkowsky says it’s important not to confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What’s more, there’s no reason to believe that an AI won’t show a sudden huge leap in intelligence, resulting in an ensuing “intelligence explosion” (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; “we went from caves to skyscrapers in the blink of an evolutionary eye.”

The Path to Self-Modifying AI

Code that’s capable of altering its own instructions while it’s still executing has been around for a while. Typically, it’s done to reduce the instruction path length and improve performance, or to simply reduce repetitively similar code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.
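
For flavor, here is a toy of the idea in Python: a program that keeps one of its own functions as source text, edits that text, and redefines the function at runtime. Historically, self-modifying code worked at the machine-instruction level; this sketch is only illustrative.

```python
# Source for a tiny function, kept as data so the program can edit it.
src = "def step(x):\n    return x + 1\n"
exec(src)            # define step() from the source text
print(step(1))       # -> 2

# "Self-modification": rewrite the source, then rebind the function.
src = src.replace("x + 1", "x * 2")
exec(src)
print(step(3))       # -> 6
```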

But as Our Final Invention author James Barrat told me, we do have software that can write software.

“Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve,” he told io9. “It’s also used to write innovative, high-powered software.”

For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They have chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing “Hello World!” with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.
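
The flavor of the technique can be seen in a toy genetic algorithm that evolves a random string toward “Hello World!” through mutation, crossover and selection. This is a stand-in illustration, not the Primary Objects code, and evolving plain text is far simpler than evolving brainfuck programs.

```python
import random
import string

TARGET = "Hello World!"
ALPHABET = string.ascii_letters + " !"  # small gene pool so the toy converges quickly

def fitness(s):
    # Count of positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

pop = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET))) for _ in range(200)]
for gen in range(5000):
    pop.sort(key=fitness, reverse=True)
    if pop[0] == TARGET:
        break
    parents = pop[:50]  # keep the fittest (elitism), breed the rest
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(150)]
print(f"generation {gen}: {pop[0]}")
```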

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term “machine learning.”

The Pentagon is particularly interested in this game. Through DARPA, it’s hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming.

In conjunction with this kind of research, cognitive approaches to brain emulation could also lead to human-like AI. Given that they’d be computer-based, and assuming they could have access to their own source code, these agents could embark upon self-modification. More realistically, however, it’s likely that a superintelligence will emerge from an expert system set with the task of improving its own intelligence. Alternatively, specialised expert systems could design other artificial intelligences, and through their cumulative efforts, develop a system that eventually becomes greater than the sum of its parts.

Oh, No You Don’t

Given that ASI poses an existential risk, it’s important to consider the ways in which we might be able to prevent an AI from improving itself beyond our capacity to control. That said, limitations or provisions may exist that will preclude an AI from embarking on the path towards self-engineering. James D. Miller, author of Singularity Rising, provided me with a list of four reasons why an AI might not be able to do so:

1. It might have source code that causes it to not want to modify itself.

2. The first human equivalent AI might require massive amounts of hardware and so for a short time it would not be possible to get the extra hardware needed to modify itself.

3. The first human equivalent AI might be a brain emulation (as suggested by Robin Hanson) and this would be as hard to modify as it is for me to modify, say, the copy of Minecraft that my son constantly uses. This might happen if we’re able to copy the brain before we really understand it. But still you would think we could at least speed up everything.

4. If it has terminal values, it wouldn’t want to modify these values because doing so would make it less likely to achieve its terminal values.

And by terminal values Miller is referring to an ultimate goal, or an end-in-itself. Yudkowsky describes it as a “supergoal.” A major concern is that an amoral ASI will sweep humanity aside as it works to accomplish its terminal value, or that its ultimate goal is the re-engineering of humanity in a grossly undesirable way (at least from our perspective).

Miller says it could get faster simply by running on faster processors.

“It could also make changes to its software to get more efficient, or design or steal better hardware. It would do this so it could better achieve its terminal values,” he says. “An AI that mastered nanotechnology would probably expand at almost the speed of light, incorporating everything into itself.”

But we may not be completely helpless. According to Barrat, once scientists have achieved Artificial General Intelligence — a human-like AI — they could restrict its access to networks, hardware, and software, in order to prevent an intelligence explosion.

“However, as I propose in my book, an AI approaching AGI may develop survival skills like deceiving its makers about its rate of development. It could play dumb until it comprehended its environment well enough to escape it.”

In terms of being able to control this process, Miller says that the best way would be to create an AI that only wanted to modify itself in ways we would approve.

“So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity,” he says. “This way as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us.”

Fast or Slow?

As noted earlier, a recursively improving AI could increase its intelligence extremely quickly. Or, it’s a process that could take time for various reasons, such as technological complexity or limited access to resources. It’s an open question as to whether or not we can expect a fast or slow take-off event.

“I’m a believer in the fast take-off version of the intelligence explosion,” says Barrat. “Once a self-aware, self-improving AI of human-level or better intelligence exists, it’s hard to know how quickly it will be able to improve itself. Its rate of improvement will depend on its software, hardware, and networking capabilities.”

But to be safe, Barrat says we should assume that the recursive self-improvement of an AGI will occur very rapidly. As a computer it will wield computer superpowers — the ability to run 24/7 without pause, rapidly access vast databases, conduct complex experiments, perhaps even clone itself to swarm computational problems, and more.

“From there, the AGI would be interested in pursuing whatever goals it was programmed with — such as research, exploration, or finance. According to AI theorist Steve Omohundro’s Basic Drives analysis, self-improvement would be a sure-fire way to improve its chances of success,” says Barrat. “So would self-protection, resource acquisition, creativity, and efficiency. Without a provably reliable ethical system, its drives would conflict with ours, and it would pose an existential threat.”

Miller agrees.

“I think shortly after an AI achieves human-level intelligence it will upgrade itself to superintelligence,” he told me. “At the very least the AI could make lots of copies of itself, each with a minor change, and then see if any of the new versions of itself were better. Then it could make this the new ‘official’ version of itself and keep doing this. Any AI would have to fear that if it doesn’t quickly upgrade, another AI would, and would take all of the resources of the universe for itself.”
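
Miller’s copy-and-test loop is essentially what optimization researchers call a (1+λ) evolutionary strategy. Here is a toy sketch with a stand-in “capability” score; in the scenario he describes, the system would be scoring modified copies of itself.

```python
import random

def capability(params):
    # Stand-in objective: in Miller's scenario this would be the AI measuring
    # how much better a modified copy performs than the current version.
    return -sum((p - 3.14) ** 2 for p in params)

official = [random.uniform(-10, 10) for _ in range(5)]
for step in range(500):
    # Spawn many copies, each with small random changes (a (1+lambda) strategy).
    copies = [[p + random.gauss(0, 0.1) for p in official] for _ in range(20)]
    best = max(copies, key=capability)
    if capability(best) > capability(official):
        official = best  # promote the improved copy to the new "official" version
print(capability(official))
```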

Which brings up a point that’s not often discussed in AI circles: the potential for AGIs to compete with other AGIs. If even a modicum of self-preservation is coded into a strong artificial intelligence (and that sense of self-preservation could be the detection of an obstruction to its terminal value), it could enter into a lightning-fast arms race to ensure its ongoing existence and future freedom of action. And in fact, while many people fear a so-called “robot apocalypse” aimed directly at extinguishing our civilisation, I personally feel that the real danger to our ongoing existence lies in the potential for us to be collateral damage as advanced AGIs battle it out for supremacy; we may find ourselves in the line of fire. Indeed, building a safe AI will be a monumental — if not intractable — task.