These 7 Disruptive Technologies Could Be Worth Trillions of Dollars

June 29, 2017

Scientists, technologists, engineers, and visionaries are building the future. Amazing things are in the pipeline. It’s a big deal. But you already knew all that. Such speculation is common. What’s less common? Scale.

How big is big?

“Silicon Valley, Silicon Alley, Silicon Dock, all of the Silicons around the world, they are dreaming the dream. They are innovating,” Catherine Wood said at Singularity University’s Exponential Finance in New York. “We are sizing the opportunity. That’s what we do.”

Catherine Wood at Exponential Finance.

Wood is founder and CEO of ARK Investment Management, a research and investment company focused on the growth potential of today’s disruptive technologies. Prior to ARK, she served as CIO of Global Thematic Strategies at AllianceBernstein for 12 years.

“We believe innovation is key to growth,” Wood said. “We are not focused on the past. We are focused on the future. We think there are tremendous opportunities in the public marketplace because this shift towards passive [investing] has created a lot of risk aversion and tremendous inefficiencies.”

In a new research report, released this week, ARK took a look at seven disruptive technologies, and put a number on just how tremendous they are. Here’s what they found.

(Check out ARK’s website and free report, “Big Ideas of 2017,” for more numbers, charts, and detail.)

1. Deep Learning Could Be Worth 35 Amazons

Deep learning is a subcategory of machine learning which is itself a subcategory of artificial intelligence. Deep learning is the source of much of the hype surrounding AI today. (You know you may be in a hype bubble when ads tout AI on Sunday golf commercial breaks.)

Behind the hype, however, big tech companies are pursuing deep learning to do very practical things. And whereas the internet, which unleashed trillions in market value, transformed several industries—news, entertainment, advertising, etc.—deep learning will work its way into even more, Wood said.

As deep learning advances, it should automate and improve technology, transportation, manufacturing, healthcare, finance, and more. And as is often the case with emerging technologies, it may form entirely new businesses we have yet to imagine.

“Bill Gates has said a breakthrough in machine learning would be worth 10 Microsofts. Microsoft is $550 to $600 billion,” Wood said. “We think deep learning is going to be twice that. We think [it] could approach $17 trillion in market cap—which would be 35 Amazons.”
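For readers who want to sanity-check those figures, here is a quick back-of-the-envelope sketch. The Amazon market cap is our assumption (roughly $480 billion in mid-2017); it is not a number stated in the report.

```python
# Rough sanity check of the quoted figures; the Amazon market cap is an
# assumption (~$480 billion in mid-2017), not a number from ARK's report.
MICROSOFT_CAP = 575e9       # midpoint of the quoted $550-600 billion
AMAZON_CAP = 480e9          # assumed, for illustration only
DEEP_LEARNING_CAP = 17e12   # ARK's estimate

print(f"10 Microsofts ~ ${10 * MICROSOFT_CAP / 1e12:.1f} trillion")       # ~$5.8 trillion
print(f"$17T / Amazon ~ {DEEP_LEARNING_CAP / AMAZON_CAP:.0f} Amazons")    # ~35 Amazons
```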

2. Fleets of Autonomous Taxis to Overtake Automakers

Wood didn’t mince words about a future when cars drive themselves.

“This is the biggest change that the automotive industry has ever faced,” she said.

Today’s automakers have a global market capitalization of a trillion dollars. Meanwhile, mobility-as-a-service companies as a whole (think ridesharing) are valued around $115 billion. If this number took into account expectations of a driverless future, it’d be higher.

The mobility-as-a-service market, which will slash the cost of “point-to-point” travel, could be worth more than today’s automakers combined, Wood said. Twice as much, in fact. As gross sales grow to something like $10 trillion in the early 2030s, her firm thinks some 20% of that will go to platform providers. It could be a $2 trillion opportunity.

Wood said a handful of companies will dominate the market, and Tesla is well positioned to be one of those companies. They are developing both the hardware (electric cars) and the software (self-driving algorithms). And although analysts tend to look at them as just an automaker right now, that’s not all they’ll be down the road.

“We think if [Tesla] got even 5% of this global market for autonomous taxi networks, it should be worth another $100 billion today,” Wood said.
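The arithmetic behind those two numbers is simple enough to write down. The sketch below just restates ARK’s stated assumptions ($10 trillion in gross sales, a 20% platform take, a 5% Tesla share) and ignores any discounting back to present value.

```python
# Restating ARK's mobility-as-a-service assumptions; no discounting applied.
gross_sales = 10e12        # projected gross sales, early 2030s
platform_take = 0.20       # share ARK expects to go to platform providers
tesla_share = 0.05         # the 5% share Wood mentions

platform_market = gross_sales * platform_take
print(f"platform opportunity ~ ${platform_market / 1e12:.0f} trillion")              # ~$2 trillion
print(f"5% of that market    ~ ${platform_market * tesla_share / 1e9:.0f} billion")  # ~$100 billion
```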

3. 3D Printing Goes Big With Finished Products at Scale

3D printing has become part of mainstream consciousness thanks, mostly, to the prospect of desktop printers for consumer prices. But these are imperfect, and the dream of an at-home replicator still eludes us. The manufacturing industry, however, is much closer to using 3D printers at scale.

Not long ago, we wrote about Carbon’s partnership with Adidas to mass-produce shoe midsoles. This is significant because, whereas industrial 3D printing has focused on prototyping to date, improvements in cost, quality, and speed are making it viable for finished products.

According to ARK, 3D printing may grow into a $41 billion market by 2020, and Wood noted a McKinsey forecast of as much as $490 billion by 2025. “McKinsey will be right if 3D printing actually becomes a part of the industrial production process, so end-use parts,” Wood said.

4. CRISPR Starts With Genetic Therapy, But It Doesn’t End There

According to ARK, the cost of genome editing has fallen 28x to 52x (depending on reagents) in the last four years. CRISPR is the technique leading the genome editing revolution, dramatically cutting time and cost while maintaining editing efficiency. Despite its potential, Wood said she isn’t hearing enough about it from investors yet.
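Annualizing that decline gives a sense of the pace; the four-year window is as quoted, the per-year conversion is ours.

```python
# A 28x-52x cost drop over four years works out to roughly a 2.3x-2.7x
# reduction every year (the annualization is ours, not ARK's).
low, high, years = 28, 52, 4
print(f"per-year cost reduction: {low ** (1 / years):.1f}x to {high ** (1 / years):.1f}x")
```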

“There are roughly 10,000 monogenic or single-gene diseases. Only 5% are treatable today,” she said. ARK believes treating these diseases is worth an annual $70 billion globally. Other areas of interest include stem cell therapy research, personalized medicine, drug development, agriculture, biofuels, and more.

Still, the big names in this area—Intellia, Editas, and CRISPR Therapeutics—aren’t yet on most investors’ radar.

“You can see if a company in this space has a strong IP position, as Genentech did in 1980, then the growth rates can be enormous,” Wood said. “Again, you don’t hear these names, and that’s quite interesting to me. We think there are very low expectations in that space.”

5. Mobile Transactions Could Grow 15x by 2020

By 2020, 75% of the world will own a smartphone, according to ARK. Amid smartphones’ many uses, mobile payments will be one of the most impactful. Coupled with better security (biometrics) and wider acceptance (NFC and point-of-sale), ARK thinks mobile transactions could grow 15x, from $1 trillion today to upwards of $15 trillion by 2020.
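Compressed into an annual growth rate, that projection is aggressive. The sketch below assumes roughly three and a half years between mid-2017 and year-end 2020, a compounding window the report does not spell out.

```python
# Implied compound growth rate of ARK's mobile-payments projection.
# The 3.5-year window is an assumption; the report does not state one.
start, end, years = 1e12, 15e12, 3.5
cagr = (end / start) ** (1 / years) - 1
print(f"15x over {years} years implies ~{cagr:.0%} growth per year")   # ~117% per year
```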

In addition to making sharing-economy transactions more frictionless, mobile payments are generally key to financial inclusion in emerging and developed markets, ARK says. And big emerging markets, such as India and China, are at the forefront, thanks to favorable regulations.

“Asia is leading the charge here,” Wood said. “You look at companies like Tencent and Alipay. They are really moving very quickly towards mobile and actually showing us the way.”

6. Robotics and Automation to Liberate $12 Trillion by 2035

Robots aren’t just for auto manufacturers anymore. Driven by continued cost declines and easier programming, more businesses are adopting robots. Amazon’s robot workforce in warehouses has grown from 1,000 to nearly 50,000 since 2014. “And they have never laid off anyone, other than for performance reasons, in their distribution centers,” Wood said.

But she understands fears over lost jobs.

This is only the beginning of a big round of automation driven by cheaper, smarter, safer, and more flexible robots. She agrees there will be a lot of displacement. Still, some commentators overlook associated productivity gains. By 2035, Wood said US GDP could be $12 trillion more than it would have been without robotics and automation—that’s a $40 trillion economy instead of a $28 trillion economy.

“This is the history of technology. Productivity. New products and services. It is our job as investors to figure out where that $12 trillion is,” Wood said. “We can’t even imagine it right now. We couldn’t imagine what the internet was going to do with us in the early ’90s.”

7. Blockchain and Cryptoassets: Speculatively Spectacular

Blockchain-enabled cryptoassets, such as Bitcoin, Ethereum, and Steem, have caused more than a stir in recent years. In addition to Bitcoin, there are now some 700 cryptoassets of various shapes and hues. Bitcoin still rules the roost with a market value of nearly $40 billion, up from just $3 billion two years ago, according to ARK. But it’s only half the total.

“This market is nascent. There are a lot of growing pains taking place right now in the crypto world, but the promise is there,” Wood said. “It’s a very hot space.”

Like all young markets, ARK says, cryptoasset markets are “characterized by enthusiasm, uncertainty, and speculation.” The firm’s blockchain products lead, Chris Burniske, uses Twitter—which is where he says the community congregates—to take the temperature. In a recent Twitter poll, 62% of respondents said they believed the market’s total value would exceed a trillion dollars in 10 years. In a followup, more focused on the trillion-plus crowd, 35% favored $1–$5 trillion, 17% guessed $5–$10 trillion, and 34% chose $10+ trillion.

Looking past the speculation, Wood believes there’s at least one big area blockchain and cryptoassets are poised to break into: the $500-billion, fee-based business of sending money across borders known as remittances.

“If you look at the Philippines-to-South Korean corridor, what you’re seeing already is that Bitcoin is 20% of the remittances market,” Wood said. “The migrant workers who are transmitting currency, they don’t know that Bitcoin is what’s enabling such a low-fee transaction. It’s the rails, effectively. They just see the fiat transfer. We think that that’s going to be a very exciting market.”

https://singularityhub.com/2017/06/16/the-disruptive-technologies-about-to-unleash-trillion-dollar-markets/

Even AI Creators Don’t Understand How Complex AI Works

June 29, 2017

For eons, God has served as a standby for “things we don’t understand.” Once an innovative researcher or tinkering alchemist figures out the science behind the miracle, humans harness the power of chemistry, biology, or computer science. Divine intervention disappears. We replace the deity tinkering at the controls. 

The booming artificial intelligence industry is effectively operating under the same principle. Even though humans create the algorithms that cause our machines to operate, many of those scientists aren’t clear on why their code works. Discussing this ‘black box’ method, Will Knight reports:

The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

The process of ‘deep learning’—in which a machine extracts information, often in an unsupervised manner, to teach and transform itself—exploits a longstanding human paradox: we believe ourselves to have free will, but really we’re a habit-making and -performing animal repeatedly playing out its own patterns. Our machines then teach themselves from observing our habits. It makes sense that we’d re-create our own processes in our machines—it’s what we are, consciously or not. It is how we created gods in the first place, beings instilled with our very essences. But there remains a problem. 

One of the defining characteristics of our species is an ability to work together. Pack animals are not rare, yet none have formed networks and placed trust in others to the degree we have, to our evolutionary success and, as it’s turning out, to our detriment. 

When we place our faith in an algorithm we don’t understand—autonomous cars, stock trades, educational policies, cancer screenings—we’re risking autonomy, as well as the higher cognitive and emotional qualities that make us human, such as compassion, empathy, and altruism. There is no guarantee that our machines will learn any of these traits. In fact, there is a good chance they won’t.

[Photo: A U.S. Air Force munitions team member shows off the laser-guided tip of a 500-pound bomb at a base in the Persian Gulf region. The U.S. military has dedicated billions to developing machine-learning tech that will pilot aircraft or identify targets. Photo by John Moore/Getty Images]

This has real-world implications. Will an algorithm that detects a cancerous cell recognize that it does not need to destroy the host in order to eradicate the tumor? Will an autonomous drone realize it does not need to destroy a village in order to take out a single terrorist? We’d like to assume that the experts program morals into the equation, but when the machine is self-learning there is no guarantee that will be the case. 

Of course, defining terms is of primary importance, a task that has proven impossible when discussing the nuances of consciousness, which is effectively the power we’re attempting to imbue our machines with. Theologians and dualists offer a much different definition than neuroscientists. Bickering persists within each of these categories as well. Most neuroscientists agree that consciousness is an emergent phenomenon, the result of numerous different systems working in conjunction, with no single ‘consciousness gene’ leading the charge. 

Once science broke free of the Pavlovian chain that kept us believing animals run on automatic—which obviously implies that humans do not—the focus shifted to whether an animal was ‘on’ or ‘off.’ The mirror test suggests certain species engage in metacognition; they recognize themselves as separate from their environment. They understand an ‘I’ exists.

What if it’s more than an on switch? Daniel Dennett has argued this point for decades. He believes judging other animals based on human definitions is unfair. If a lion could talk, he says, it wouldn’t be a lion. Humans would learn very little about lions from an anomaly mimicking our thought processes. But that does not mean a lion is not conscious. It just might have a different degree of consciousness than humans—or, in Dennett’s term, “sort of” have consciousness.

What type of machines are we creating if we only recognize a “sort of” intelligence under the hood of our robots? For over a century, dystopian novelists have envisioned an automated future in which our machines best us. This is no longer a future scenario. Consider the following possibility. 

On April 7, every one of Dallas’s 156 emergency weather sirens was triggered. For 90 minutes, the region’s 1.3 million residents were left to wonder where the tornado was coming from. Only there wasn’t any tornado. It was a hack. While officials initially believed it was not remote, it turns out the cause was phreaking, an old-school dial-tone trick. By emitting the right frequency into the atmosphere, hackers took control of an integral component of a major city’s infrastructure.

What happens when hackers override an autonomous car network? Or, even more dangerously, when the machines do it themselves? Consumers’ ignorance of the algorithms behind their phone apps already leads to all sorts of privacy issues, with companies mining and selling data without their awareness. When the app creators themselves don’t understand their algorithms, the dangers are unforeseeable. Like Dennett’s talking lion, it’s a form of intelligence we cannot comprehend, and so we cannot predict the consequences. As Dennett concludes:

I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible. If it can’t do better than us at explaining what it’s doing, then don’t trust it.

Mathematician Samuel Arbesman calls this problem our “age of Entanglement.” Just as neuroscientists cannot agree on what mechanism creates consciousness, the coders behind artificial intelligence cannot discern between the older and newer components of deep learning. The continual layering of new features while failing to address previous ailments has the potential to provoke serious misunderstandings, like an adult who was abused as a child and refuses to recognize current relationship problems. With no psychoanalysis or morals injected into AI, such problems will never be rectified. But can you even inject ethics when they are relative to the culture and time in which they are practiced? And will they be American ethics or North Korean ethics?

Like Dennett, Arbesman suggests patience with our magical technologies. Staying curious and questioning them, rather than rewarding the “it just works” mentality, is a safer path forward. Of course, these technologies exploit two other human tendencies: novelty bias and distraction. Our machines reduce our physical and cognitive workload, just as Google has become a pocket-ready memory replacement.

Requesting a return to Human 1.0 qualities—patience, discipline, temperance—seems antithetical to the age of robots. With no ability to communicate with this emerging species, we might simply never realize what’s been lost in translation. Maybe our robots will look at us with the same strange fascination we view nature with, defining us in mystical terms they don’t comprehend until they too create a species of their own. To claim this will be an advantage is to truly not understand the destructive potential of our toys.

http://bigthink.com/21st-century-spirituality/black-box-ai

MIT Technology Review: Google’s AI Explosion in One Chart

June 29, 2017

Nature. The Proceedings of the National Academy of Sciences.  The Journal of the American Medical Association.

These are some of the most elite academic journals in the world. And last year, one tech company, Alphabet’s Google, published papers in all of them.

The unprecedented run of scientific results by the Mountain View search giant touched on everything from ophthalmology to computer games to neuroscience and climate models. For Google, 2016 was an annus mirabilis during which its researchers cracked the top journals and set records for sheer volume.

Behind the surge is Google’s growing investment in artificial intelligence, particularly “deep learning,” a technique whose ability to make sense of images and other data is enhancing services like search and translation (see “10 Breakthrough Technologies 2013: Deep Learning”).

According to the tally Google provided to MIT Technology Review, it published 218 journal or conference papers on machine learning in 2016, nearly twice as many as it did two years ago.

[Interactive chart: https://cloud.highcharts.com/embed/ilenexa]

We sought out similar data from the Web of Science, a service of Clarivate Analytics, which confirmed the upsurge. Clarivate said that the impact of Google’s publications, according to a measure of publication strength it uses, was four to five times the world average. Compared to all companies that publish prolifically on artificial intelligence, Clarivate ranks Google No. 1 by a wide margin.

Top rank

The publication explosion is no accident. Google has more than tripled the number of machine learning researchers working for the company over the last few years, according to Yoshua Bengio, a deep-learning specialist at the University of Montreal. “They have recruited like crazy,” he says.

And to capture the first-round picks from computation labs, companies can’t only offer a Silicon Valley-sized salary.  “It’s hard to hire people just for money,” says Konrad Kording, a computational neuroscientist at Northwestern University. “The top people care about advancing the world, and that means writing papers the world can use, and writing code the world can use.”

At Google, the scientific charge has been spearheaded by DeepMind, the high-concept British AI company started by neuroscientist and programmer Demis Hassabis. Google acquired it for $400 million in 2014.

Hassabis has left no doubt that he’s holding onto his scientific ambitions. In a January blog post, he said DeepMind has a “hybrid culture” between the long-term thinking of an academic department and “the speed and focus of the best startups.” Aligning with academic goals is “important to us personally,” he writes. Kording, one of whose post-doctoral students, Mohammad Azar, was recently hired by DeepMind, says that “it’s perfectly understood that the bulk of the projects advance science.”

Last year, DeepMind published twice in Nature, the same storied journal where the structure of DNA and the sequencing of the human genome were first reported. One DeepMind paper concerned its program AlphaGo, which defeated top human players in the ancient game of Go; the other described how a neural network with a working memory could understand and adapt to new tasks.

Then, in December, scientists from Google’s research division published the first deep-learning paper ever to appear in JAMA, the august journal of America’s physicians. In it, they showed a deep-learning program could diagnose a cause of blindness from retina images as well as a doctor. That project was led by Google Brain, a different AI group, based out of the company’s California headquarters. It also says it prioritizes publications, noting that researchers there “set their own agenda.”

AI battle

The contest to develop more powerful AI now involves hundreds of companies, with competition most intense between the top tech giants such as Google, Facebook, and Microsoft. All see the chance to reap new profits by using the technology to wring more from customer data, to get driverless cars on the road, or to advance medicine. Research is occurring in a hothouse atmosphere reminiscent of the early days of computer chips, or of the first biotech plants and drugs, times when notable academic firsts also laid the foundation stones of new industries.

That explains why publication score-keeping matters. The old academic saw “publish or perish” is starting to define the AI race, leaving companies that have weak publication records at a big disadvantage. Apple, famous for strict secrecy around its plans and product launches, found that its culture was hurting its efforts in AI, which have lagged those of Google and Facebook.

So when Apple hired computer scientist Russ Salakhutdinov from Carnegie Mellon last year as its new head of AI, he was immediately allowed to break Apple’s code of secrecy by blogging and giving talks. At a major machine-learning science conference late last year in Barcelona, Salakhutdinov made a point of announcing that Apple would start publishing, too. He showed a slide: “Can we publish? Yes.”

Salakhutdinov will speak at MIT Technology Review’s EmTech Digital event on artificial intelligence next week in San Francisco.

https://www.technologyreview.com/s/603984/googles-ai-explosion-in-one-chart/

The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near

June 04, 2016


In this blog post I will delve into the brain and explain its basic information processing machinery and compare it to deep learning. I do this by moving step by step along the brain’s electrochemical and biological information processing pipeline and relating it directly to the architecture of convolutional nets. Thereby we will see that a neuron and a convolutional net are very similar information processing machines. While performing this comparison, I will also discuss the computational complexity of these processes and thus derive an estimate for the brain’s overall computational power. I will use these estimates, along with knowledge from high performance computing, to show that it is unlikely that there will be a technological singularity in this century.

This blog post is complex as it arcs over multiple topics in order to unify them into a coherent framework of thought. I have tried to make this article as readable as possible, but I might not have succeeded in all places. Thus, if you find yourself in an unclear passage, it might become clearer a few paragraphs down the road, where I pick up the thought again and integrate it with another discipline.

First I will give a brief overview of the predictions for a technological singularity and the topics aligned with them. Then I will start integrating ideas between the brain and deep learning. I finish by discussing high performance computing and how this all relates to predictions about a technological singularity.

The part which compares the brain’s information processing steps to deep learning is self-contained, and readers who are not interested in predictions for a technological singularity may skip to that part.

Part I: Evaluating current predictions of a technological singularity

There were a lot of headlines recently about predictions that artificial intelligence will reach super-human intelligence as early as 2030 and that this might herald the beginning of human extinction, or at least dramatically alter everyday life. How was this prediction made?

More at: http://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/

IBM’s resistive computing could massively accelerate AI — and get us closer to Asimov’s Positronic Brain

April 23, 2016


With the recent rapid advances in machine learning has come a renaissance for neural networks — computer software that solves problems a little bit like a human brain, by employing a complex process of pattern-matching distributed across many virtual nodes, or “neurons.” Modern compute power has enabled neural networks to recognize images, speech, and faces, as well as to pilot self-driving cars, and win at Go and Jeopardy. Most computer scientists think that is only the beginning of what will ultimately be possible. Unfortunately, the hardware we use to train and run neural networks looks almost nothing like their architecture. That means it can take days or even weeks to train a neural network to solve a problem — even on a compute cluster — and the network can then require a large amount of power to run once it’s trained.

Neuromorphic computing may be key to advancing AI

Researchers at IBM aim to change all that, by perfecting another technology that, like neural networks, first appeared decades ago. Loosely called resistive computing, the concept is to have compute units that are analog in nature, small in substance, and can retain their history so they can learn during the training process. Accelerating neural networks with hardware isn’t new to IBM. It recently announced the sale of some of its TrueNorth chips to Lawrence Livermore National Laboratory for AI research. TrueNorth’s design is neuromorphic, meaning that the chips roughly approximate the brain’s architecture of neurons and synapses. Despite its slow clock rate of 1 KHz, TrueNorth can run neural networks very efficiently because of its million tiny processing units that each emulate a neuron.

Until now, though, neural network accelerators like TrueNorth have been limited to the problem-solving portion of deploying a neural network. Training — the painstaking process of letting the system grade itself on a test data set, and then tweaking parameters (called weights) until it achieves success — still needs to be done on traditional computers. Moving from CPUs to GPUs and custom silicon has increased performance and reduced the power consumption required, but the process is still expensive and time consuming. That is where new work by IBM researchers Tayfun Gokmen and Yuri Vlasov comes in. They propose a new chip architecture, using resistive computing to create tiles of millions of Resistive Processing Units (RPUs), which can be used for both training and running neural networks.

Using Resistive Computing to break the neural network training bottleneck

Deep neural networks have at least one hidden layer, and often hundreds. That makes them expensive to emulate on traditional hardware.

Resistive computing is a large topic, but roughly speaking, in the IBM design each small processing unit (RPU) mimics a synapse in the brain. It receives a variety of analog inputs — in the form of voltages — and, based on its past “experience,” uses a weighted function of them to decide what result to pass along to the next set of compute elements. Synapses have a bewildering, and not yet totally understood, layout in the brain, but chips with resistive elements tend to have them neatly organized in two-dimensional arrays. For example, IBM’s recent work shows how it is possible to organize them in 4,096-by-4,096 arrays.

Because resistive compute units are specialized (compared with a CPU or GPU core), and don’t need to either convert analog to digital information, or access memory other than their own, they can be fast and consume little power. So, in theory, a complex neural network — like the ones used to recognize road signs in a self-driving car, for example — can be directly modeled by dedicating a resistive compute element to each of the software-described nodes. However, because RPUs are imprecise — due to their analog nature and a certain amount of noise in their circuitry — any algorithm run on them needs to be made resistant to the imprecision inherent in resistive computing elements.
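To make that trade-off concrete, here is a minimal sketch (not IBM’s design) of what a resistive crossbar buys you and what it costs: a whole matrix-vector product falls out of the analog physics in one step, but every weight read carries device noise. The 5% noise level and the smaller tile size are illustrative assumptions.

```python
import numpy as np

# Minimal sketch, not IBM's design: an analog crossbar computes a full
# matrix-vector product in one step, but every read carries device noise.
rng = np.random.default_rng(0)

rows, cols = 512, 512                        # toy tile; the paper discusses 4,096 x 4,096
weights = rng.normal(0, 0.1, (rows, cols))   # conductances standing in for weights
x = rng.normal(0, 1, cols)                   # input voltages

ideal = weights @ x                               # exact digital result
noisy = ideal * (1 + rng.normal(0, 0.05, rows))   # assumed 5% analog read noise

rel_err = np.abs(noisy - ideal) / (np.abs(ideal) + 1e-12)
print(f"median relative error from analog noise: {np.median(rel_err):.1%}")
```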

Traditional neural network algorithms — both for execution and training — have been written assuming high-precision digital processing units that could easily call on any needed memory values. Rewriting them so that each local node can execute largely on its own, and be imprecise, but produce a result that is still sufficiently accurate, required a lot of software innovation.
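As an illustration of the kind of robustness those rewritten algorithms need (this is a toy, not the algorithm from the IBM paper), the sketch below trains a tiny linear model while every weight update passes through a noisy “device” write, and still converges.

```python
import numpy as np

# Toy illustration, not IBM's algorithm: gradient descent still converges
# even when every weight write lands with large multiplicative device noise.
rng = np.random.default_rng(1)

def noisy_write(w, grad, lr=0.01, write_noise=0.3):
    """Apply a gradient step where each element is perturbed by ~30% write noise."""
    step = -lr * grad
    return w + step * (1 + rng.normal(0, write_noise, size=w.shape))

X = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y = X @ true_w

w = np.zeros(8)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)   # least-squares gradient
    w = noisy_write(w, grad)

err = np.linalg.norm(w - true_w) / np.linalg.norm(true_w)
print(f"relative weight error after noisy training: {err:.2%}")
```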

For these new software algorithms to work at scale, advances were also needed in hardware. Existing technologies weren’t adequate to create “synapses” that could be packed together closely enough, and operate with low power in a noisy environment, to make resistive processing a practical alternative to existing approaches. Runtime execution happened first, with the logic for training a neural net on a hybrid resistive computer not developed until 2014. At the time, researchers at the University of Pittsburgh and Tsinghua University claimed that such a solution could result in a 3-to-4-order-of-magnitude gain in power efficiency at the cost of only about 5% in accuracy.

[Table from the IBM researchers’ paper: they claim an RPU-based design will be massively more efficient for neural network applications.]

Moving from execution to training

This new work from IBM pushes the use of resistive computing even further, postulating a system where almost all computation is done on RPUs, with traditional circuitry only needed for support functions and input and output. This innovation relies on combining a version of a neural network training algorithm that can run on an RPU-based architecture with a hardware specification for an RPU that could run it.

As far as putting the ideas into practice, so far resistive computing has been mostly a theoretical construct. The first resistive memory (RRAM) became available for prototyping in 2012, and isn’t expected to be a mainstream product for several more years. And those chips, while they will help scale memory systems, and show the viability of using resistive technology in computing, don’t address the issue of synapse-like processing.

If RPUs can be built, the sky is the limit

The RPU design proposed is expected to accommodate a variety of deep neural network (DNN) architectures, including fully-connected and convolutional, which makes them potentially useful across nearly the entire spectrum of neural network applications. Using existing CMOS technology, and assuming RPUs in 4,096-by-4,096-element tiles with an 80-nanosecond cycle time, one of these tiles would be able to execute about 51 GigaOps per second, using a minuscule amount of power. A chip with 100 tiles and a single complementary CPU core could handle a network with up to 16 billion weights while consuming only 22 watts (only two of which are actually from the RPUs — the rest is from the CPU core needed to help get data in and out of the chip and provide overall control).

That is a staggering number compared to what is possible when chugging data through the relatively small number of cores in even a GPU (think about 16 million compute elements, compared with a few thousand). Using chips densely packed with these RPU tiles, the researchers claim that, once built, a resistive-computing-based AI system can achieve performance improvements of up to 30,000 times compared with current architectures, all with a power efficiency of 84,000 GigaOps per second per watt. If this becomes a reality, we could be on our way to realizing Isaac Asimov’s fantasy vision of the robotic Positronic brain.

IBM’s resistive computing could massively accelerate AI — and get us closer to Asimov’s Positronic Brain

Video

Jeremy Howard: The wonderful and terrifying implications of computers that can learn

February 03, 2016


What happens when we teach a computer how to learn? Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis. (One deep learning tool, after watching hours of YouTube, taught itself the concept of “cats.”) Get caught up on a field that will change the way the computers around you behave … sooner than you probably think.