Even AI Creators Don’t Understand How Complex AI Works

June 29, 2017

For eons, God has served as a stand-in for “things we don’t understand.” Once an innovative researcher or tinkering alchemist figures out the science behind the miracle, humans harness the power of chemistry, biology, or computer science. Divine intervention disappears; we replace the deity tinkering at the controls.

The booming artificial intelligence industry is effectively operating under the same principle. Even though humans create the algorithms that cause our machines to operate, many of those scientists aren’t clear on why their code works. Discussing this ‘black box’ problem, Will Knight reports:

The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
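To make that concrete, here is a minimal sketch of self-teaching code—a toy example of ours, not Knight’s—in which a tiny network compresses unlabeled data by repeatedly adjusting its own parameters. The finished product is a grid of numbers that offers no human-readable account of why it works:

```python
# Minimal sketch (illustrative only): a tiny autoencoder that "teaches itself"
# to compress data without labels. After training, the learned weights are
# just arrays of numbers -- nothing in them explains *why* the network works.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # unlabeled data: 200 samples, 8 features
W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder weights: 8 -> 3
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder weights: 3 -> 8
lr = 0.01

for step in range(2000):
    H = np.tanh(X @ W_enc)                   # compressed representation
    X_hat = H @ W_dec                        # reconstruction of the input
    err = X_hat - X                          # how wrong the reconstruction is
    # Backpropagation: the machine adjusts its own parameters.
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ ((err @ W_dec.T) * (1 - H**2)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("final reconstruction error:", np.mean(err**2))
print("learned encoder weights:\n", W_enc)   # opaque numbers, not reasons
```

Scale that grid of weights up to millions of parameters and you have the black box Knight describes.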

The process of ‘deep learning’—in which a machine extracts information, often in an unsupervised manner, to teach and transform itself—exploits a longstanding human paradox: we believe ourselves to have free will, but really we’re a habit-making and -performing animal repeatedly playing out its own patterns. Our machines then teach themselves from observing our habits. It makes sense that we’d re-create our own processes in our machines—it’s what we are, consciously or not. It is how we created gods in the first place, beings instilled with our very essences. But there remains a problem. 

One of the defining characteristics of our species is an ability to work together. Pack animals are not rare, yet none have formed networks and placed trust in others to the degree we have, to our evolutionary success and, as it’s turning out, to our detriment. 

When we place our faith in an algorithm we don’t understand—autonomous cars, stock trades, educational policies, cancer screenings—we’re risking autonomy, as well as the higher cognitive and emotional qualities that make us human, such as compassion, empathy, and altruism. There is no guarantee that our machines will learn any of these traits. In fact, there is a good chance they won’t.

The U.S. military has dedicated billions to developing machine-learning tech that will pilot aircraft or identify targets. [Image: A U.S. Air Force munitions team member shows off the laser-guided tip of a 500-pound bomb at a base in the Persian Gulf region. Photo by John Moore/Getty Images]

This has real-world implications. Will an algorithm that detects a cancerous cell recognize that it does not need to destroy the host in order to eradicate the tumor? Will an autonomous drone realize it does not need to destroy a village in order to take out a single terrorist? We’d like to assume that the experts program morals into the equation, but when the machine is self-learning there is no guarantee that will be the case. 

Of course, defining terms is of primary importance, a task that has proven impossible when discussing the nuances of consciousness, which is effectively the power we’re attempting to imbue our machines with. Theologians and dualists offer a much different definition than neuroscientists. Bickering persists within each of these categories as well. Most neuroscientists agree that consciousness is an emergent phenomenon, the result of numerous different systems working in conjunction, with no single ‘consciousness gene’ leading the charge. 

Once science broke free of the Pavlovian chain that kept us believing animals run on automatic—which obviously implies that humans do not—the focus shifted to whether an animal was ‘on’ or ‘off.’ The mirror test suggests certain species engage in metacognition; they recognize themselves as separate from their environment. They understand an ‘I’ exists.

What if it’s more than an on switch? Daniel Dennett has argued this point for decades. He believes judging other animals based on human definitions is unfair. If a lion could talk, he says, it wouldn’t be a lion. Humans would learn very little about lions from an anomaly mimicking our thought processes. But that does not mean a lion is not conscious. Lions might just have a different degree of consciousness than humans—or, in Dennett’s term, they might “sort of” have consciousness.

What type of machines are we creating if we only recognize a “sort of” intelligence under the hood of our robots? For over a century, dystopian novelists have envisioned an automated future in which our machines best us. This is no longer a future scenario. Consider the following possibility. 

On April 7, every one of Dallas’s 156 emergency weather sirens was triggered. For 90 minutes the region’s 1.3 million residents were left to wonder where the tornado was coming from. Only there wasn’t any tornado. It was a hack. Officials initially believed the breach was not remote, but the cause turned out to be phreaking, an old-school dial-tone trick. By broadcasting the right tones over the air, hackers took control of an integral component of a major city’s infrastructure.

What happens when hackers override an autonomous car network? Or, even more dangerously, when the machines do it themselves? Consumers’ ignorance of the algorithms behind their phone apps already leads to all sorts of privacy issues, with companies mining and selling data without users’ awareness. When the apps’ creators also don’t understand their own algorithms, the dangers become unforeseeable. Like Dennett’s talking lion, this is a form of intelligence we cannot comprehend, and so we cannot predict its consequences. As Dennett concludes:

I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible. If it can’t do better than us at explaining what it’s doing, then don’t trust it.

Mathematician Samuel Arbesman calls this problem our “age of Entanglement.” Just as neuroscientists cannot agree on what mechanism creates consciousness, the coders behind artificial intelligence cannot untangle the older and newer components of their deep-learning systems. The continual layering of new features without addressing previous ailments has the potential to provoke serious misunderstandings, like an adult who was abused as a child and refuses to recognize current relationship problems. With no psychoanalysis or morals injected into AI, such problems will never be rectified. But can you even inject ethics when they are relative to the culture and time in which they are practiced? And will they be American ethics or North Korean ethics?

Like Dennett, Arbesman counsels patience with our magical technologies. Staying curious—asking how and why—is a safer path forward than rewarding the “it just works” mentality. Of course, these technologies exploit two other human tendencies: novelty bias and distraction. Our machines reduce our physical and cognitive workload, just as Google has become a pocket-ready memory replacement.

Requesting a return to Human 1.0 qualities—patience, discipline, temperance—seems antithetical to the age of robots. With no ability to communicate with this emerging species, we might simply never realize what’s been lost in translation. Maybe our robots will look at us with the same strange fascination we view nature with, defining us in mystical terms they don’t comprehend until they too create a species of their own. To claim this will be an advantage is to truly not understand the destructive potential of our toys.

http://bigthink.com/21st-century-spirituality/black-box-ai


MIT’s Robotic Cheetah Can Now Run And Jump While Untethered

September 15, 2014

Well, we knew it had to happen someday. A DARPA-funded robotic cheetah has been released into the wild, so to speak. A new algorithm developed by MIT researchers now allows their quadruped to run and jump — while untethered — across a field of grass.

The Pentagon, in an effort to investigate technologies that allow machines to traverse terrain in unique ways (well, at least that’s what they tell us), has been funding (via DARPA) the development of a robotic cheetah. Back in 2012, Boston Dynamics’ version smashed the land-speed record for the fastest mechanical mammal on Earth, reaching a top speed of 28.3 miles per hour (45.5 km/h).

Researchers at MIT have their own version of robo-cheetah, and they’ve taken the concept in a new direction by imbuing it with the ability to run and bound while completely untethered.

MIT News reports:

The key to the bounding algorithm is in programming each of the robot’s legs to exert a certain amount of force in the split second during which it hits the ground, in order to maintain a given speed: In general, the faster the desired speed, the more force must be applied to propel the robot forward. Sangbae Kim, an associate professor of mechanical engineering at MIT, hypothesizes that this force-control approach to robotic running is similar, in principle, to the way world-class sprinters race.

“Many sprinters, like Usain Bolt, don’t cycle their legs really fast,” Kim says. “They actually increase their stride length by pushing downward harder and increasing their ground force, so they can fly more while keeping the same frequency.”

Kim says that by adapting a force-based approach, the cheetah-bot is able to handle rougher terrain, such as bounding across a grassy field. In treadmill experiments, the team found that the robot handled slight bumps in its path, maintaining its speed even as it ran over a foam obstacle.

“Most robots are sluggish and heavy, and thus they cannot control force in high-speed situations,” Kim says. “That’s what makes the MIT cheetah so special: You can actually control the force profile for a very short period of time, followed by a hefty impact with the ground, which makes it more stable, agile, and dynamic.”
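To see the force-control idea in miniature, consider a toy one-dimensional hopper—all numbers assumed for illustration, with no relation to MIT’s actual controller—whose leg pushes with a commanded ground force during each brief stance phase. Commanding more force per step yields a higher settled running speed:

```python
# Toy 1-D model (all values assumed for illustration; not MIT's controller):
# each stride, the leg applies a commanded ground force during a brief stance
# phase; scaling that force with desired speed sets the running speed.
def stance_force(desired_speed, k=12.0):
    """Commanded ground force (N): more desired speed -> more force."""
    return k * desired_speed

def simulate(desired_speed, mass=30.0, drag=3.0, stance_time=0.1,
             stride_period=0.5, strides=150):
    v = 0.0
    for _ in range(strides):
        # Stance: a split-second push against the ground adds momentum.
        push = stance_force(desired_speed) * stance_time / mass
        # Flight: drag and other losses bleed speed until the next touchdown.
        loss = drag * v * (stride_period - stance_time) / mass
        v += push - loss
    return v

for target in (1.0, 2.0, 4.0):  # desired speeds in m/s
    print(f"commanded for {target} m/s -> settles at {simulate(target):.2f} m/s")
```

The settled speed tracks the command because each stride’s push balances the speed the robot bleeds between touchdowns—the same intuition Kim draws from sprinters.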

This particular model, which weighs about as much as a real cheetah, can reach speeds of up to 10 mph (16 km/h) in the lab, even after clearing a 13-inch (33 cm) high hurdle. The MIT researchers estimate that their current version may eventually reach speeds of up to 30 mph (48 km/h).

It’s an impressive achievement, but Boston Dynamics’ WildCat is still the scariest free-running bot on the planet.

http://io9.com/mits-robotic-cheetah-can-now-run-and-jump-while-untethe-1634799433

Bioinspired drones of the future

May 26, 2014


Using mechanisms adapted from birds, bats, insects, and snakes, 14 research teams have developed ideas for improving drone flight performance in complex urban environments.

The research teams presented their work May 23 in a special open-access issue of IOP Publishing’s journal Bioinspiration and Biomimetics devoted to bio-inspired flight control. Here are a few examples.

An algorithm developed by Hungarian researchers allows multiple drones to fly together like a flock of birds to improve search and rescue operations. In a test, it was able to direct the movements of a flock of nine quadcopters following a moving car.
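The team’s actual control laws aren’t spelled out here, but a classic Boids-style sketch conveys the flavor: each drone steers by a handful of local rules—stay near the flock’s center, avoid crowding neighbors, chase the target—and coherent group motion emerges. (Everything below, including the gains, is a hypothetical illustration, not the Hungarian team’s algorithm.)

```python
# Hypothetical Boids-style flocking sketch: nine drones follow a moving
# "car" using three local steering rules. All gains are assumed values.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(-5, 5, size=(9, 2))    # nine quadcopters in a 2-D plane
vel = np.zeros((9, 2))

def step(pos, vel, target, dt=0.1):
    for i in range(len(pos)):
        others = np.delete(pos, i, axis=0)
        cohesion = others.mean(axis=0) - pos[i]        # move toward flock center
        diffs = pos[i] - others
        dists = np.linalg.norm(diffs, axis=1, keepdims=True) + 1e-6
        separation = (diffs / dists**2).sum(axis=0)    # push away from close neighbors
        seek = target - pos[i]                         # follow the moving car
        vel[i] = 0.9 * vel[i] + dt * (0.5 * cohesion + 1.0 * separation + 0.8 * seek)
    return pos + dt * vel, vel

car = np.array([0.0, 0.0])
for t in range(200):
    car = car + np.array([0.2, 0.0])     # the car drives along the x-axis
    pos, vel = step(pos, vel, car)

print("car at", car, "| flock center at", pos.mean(axis=0).round(2))
```

Run it and the flock’s center trails a short, steady distance behind the car—group pursuit without any central coordinator.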

A millimeter-sized microrobot drone developed by researchers at Harvard University can take off, land, and hover in the air for sustained periods, and now boasts a new capability: simple, fly-like maneuvers. Such drones could one day be used for assisted agricultural pollination and reconnaissance.

A study of how hawk moths handle strong winds and whirlwinds was carried out by a research team from the University of North Carolina at Chapel Hill, the University of California, and The Johns Hopkins University.

[Image: Stage performance of the Stanford jump glider versus a theoretical ballistic jump. Photo credit: Alexis Desbiens]

A “jumpglider design” that could reduce the power required to operate drones has been developed by researchers at Université de Sherbrooke and Stanford University. Inspired by vertebrates like the flying squirrel, the flying fish and the flying snake (which use their aerodynamic bodies to extend their jumping range to avoid predators), it combines an aeroplane-shaped body with a spring-based mechanical foot that propels the robot into the air. It could be used to improve search and rescue efforts by being able to navigate around obstacles and over rough terrain.
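A back-of-the-envelope comparison shows why jumping and then gliding beats jumping alone. With an assumed launch speed, launch angle, and lift-to-drag ratio (none taken from the paper), gliding down from the jump’s apex stretches the distance covered well beyond the ballistic range:

```python
# Back-of-the-envelope comparison (all values assumed): how gliding after a
# spring-powered jump can stretch range versus a purely ballistic hop.
import math

g = 9.81                    # gravity, m/s^2
v0 = 6.0                    # assumed launch speed, m/s
angle = math.radians(45)    # assumed launch angle

# Purely ballistic jump: standard projectile range on flat ground.
ballistic_range = v0**2 * math.sin(2 * angle) / g

# Jump + glide: ballistic flight to the arc's apex, then a glide down that
# covers (lift-to-drag ratio) x (height lost) of extra ground.
apex_height = (v0 * math.sin(angle))**2 / (2 * g)
lift_to_drag = 4.0          # assumed modest L/D for a small glider
glide_range = v0**2 * math.sin(2 * angle) / (2 * g) + lift_to_drag * apex_height

print(f"ballistic jump: {ballistic_range:.1f} m")
print(f"jump + glide:   {glide_range:.1f} m")
```

Under these assumed numbers the glide roughly half-again extends the hop, and the advantage grows with any lift-to-drag ratio above about two—hence the spring-plus-wing design.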

Flight-control challenges

Flying animals can be found everywhere in our cities, notes special-issue Guest Editor David Lentink, PhD, from Stanford University. “From scavenging pigeons to alcohol-sniffing fruit flies that make precision landings on our wine glasses, these animals have quickly learned how to control their flight through urban environments to exploit our resources. To enable our drones to fly equally well in wind and clutter, we need to solve several flight control challenges during all flight phases: takeoff, cruising, and landing.

“This special issue provides a unique integration between biological studies of animals and bio-inspired engineering solutions. Each of the 14 papers presented in this special issue offers a unique perspective on bio-mimetic flight, providing insights and solutions to the take-off, obstacle avoidance, in-flight grasping, swarming, and landing capabilities that urban drones need to succeed.”