Google’s AI Wizard Unveils a New Twist on Neural Networks

November 18, 2017

If you want to blame someone for the hoopla around artificial intelligence, 69-year-old Google researcher Geoff Hinton is a good candidate.

The droll University of Toronto professor jolted the field onto a new trajectory in October 2012. With two grad students, Hinton showed that an unfashionable technology he’d championed for decades called artificial neural networks permitted a huge leap in machines’ ability to understand images. Within six months, all three researchers were on Google’s payroll. Today neural networks transcribe our speech, recognize our pets, and fight our trolls.

But Hinton now belittles the technology he helped bring to the world. “I think the way we’re doing computer vision is just wrong,” he says. “It works better than anything else at present but that doesn’t mean it’s right.”

In its place, Hinton has unveiled another “old” idea that might transform how computers see—and reshape AI. That’s important because computer vision is crucial to projects such as self-driving cars and software that plays doctor.

Late last week, Hinton released two research papers that he says prove out an idea he’s been mulling for almost 40 years. “It’s made a lot of intuitive sense to me for a very long time, it just hasn’t worked well,” Hinton says. “We’ve finally got something that works well.”

Hinton’s new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. In one of the papers posted last week, Hinton’s capsule networks matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits.

In the second, capsule networks almost halved the best previous error rate on a test that challenges software to recognize toys such as trucks and cars from different angles. Hinton has been working on his new technique with colleagues Sara Sabour and Nicholas Frosst at Google’s Toronto office.

Capsule networks aim to remedy a weakness of today’s machine-learning systems that limits their effectiveness. Image-recognition software in use today by Google and others needs a large number of example photos to learn to reliably recognize objects in all kinds of situations. That’s because the software isn’t very good at generalizing what it learns to new scenarios, for example understanding that an object is the same when seen from a new viewpoint.

To teach a computer to recognize a cat from many angles, for example, could require thousands of photos covering a variety of perspectives. Human children don’t need such explicit and extensive training to learn to recognize a household pet.

Hinton’s idea for narrowing the gulf between the best AI systems and ordinary toddlers is to build a little more knowledge of the world into computer-vision software. Capsules—small groups of crude virtual neurons—are designed to track different parts of an object, such as a cat’s nose and ears, and their relative positions in space. A network of many capsules can use that awareness to understand when a new scene is in fact a different view of something it has seen before.
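For the technically curious, the mechanics can be made concrete. The papers describe a procedure called “routing by agreement,” in which lower-level capsules vote on the pose of higher-level ones and the votes that agree are amplified. Below is a minimal numpy sketch of that published update rule, with toy, made-up dimensions; it is an illustration of the idea, not Hinton’s code.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squashing non-linearity from the paper: short vectors shrink
    # toward zero, long vectors saturate just below unit length.
    norm_sq = np.sum(s ** 2, axis=axis, keepdims=True)
    norm = np.sqrt(norm_sq + eps)
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

def route(u_hat, iterations=3):
    # u_hat: each lower capsule's prediction for each higher capsule,
    # shape (num_lower, num_higher, dim_higher).
    num_lower, num_higher, _ = u_hat.shape
    b = np.zeros((num_lower, num_higher))                     # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = np.einsum('ij,ijd->jd', c, u_hat)                 # weighted sum of votes
        v = squash(s)                                         # higher-capsule outputs
        b += np.einsum('ijd,jd->ij', u_hat, v)                # agreement boosts the logits
    return v

# Toy example: 6 lower-level capsules voting for 2 higher-level capsules.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 2, 8))
print(route(u_hat).shape)  # (2, 8)
```

The key design choice is that agreement between a capsule’s vote and the consensus strengthens the connection, which is how the network decides that a nose and two ears in a consistent spatial arrangement belong to one cat.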

Hinton formed his intuition that vision systems need such an inbuilt sense of geometry in 1979, when he was trying to figure out how humans use mental imagery. He first laid out a preliminary design for capsule networks in 2011. The fuller picture released last week was long anticipated by researchers in the field. “Everyone has been waiting for it and looking for the next great leap from Geoff,” says Kyunghyun Cho, a professor at NYU who works on image recognition.

It’s too early to say how big a leap Hinton has made—and he knows it. The AI veteran segues from quietly celebrating that his intuition is now supported by evidence, to explaining that capsule networks still need to be proven on large image collections, and that the current implementation is slow compared to existing image-recognition software.

Hinton is optimistic he can address those shortcomings. Others in the field are also hopeful about his long-maturing idea.

Roland Memisevic, cofounder of image-recognition startup Twenty Billion Neurons, and a professor at University of Montreal, says Hinton’s basic design should be capable of extracting more understanding from a given amount of data than existing systems. If proven out at scale, that could be helpful in domains such as healthcare, where image data to train AI systems is much scarcer than the large volume of selfies available around the internet.

In some ways, capsule networks are a departure from a recent trend in AI research. One interpretation of the recent success of neural networks is that humans should encode as little knowledge as possible into AI software, and instead let it figure things out for itself from scratch. Gary Marcus, a professor of psychology at NYU who sold an AI startup to Uber last year, says Hinton’s latest work represents a welcome breath of fresh air. Marcus argues that AI researchers should be doing more to mimic how the brain has built-in, innate machinery for learning crucial skills like vision and language. “It’s too early to tell how far this particular architecture will go, but it’s great to see Hinton breaking out of the rut that the field has seemed fixated on,” Marcus says.

UPDATED, Nov. 2, 12:55 PM: This article has been updated to include the names of Geoff Hinton’s co-authors.

This article was originally published by:
https://www.wired.com/story/googles-ai-wizard-unveils-a-new-twist-on-neural-networks/


MIT Technology Review: Google’s AI Explosion in One Chart

June 29, 2017

Nature. The Proceedings of the National Academy of Sciences. The Journal of the American Medical Association.

These are some of the most elite academic journals in the world. And last year, one tech company, Alphabet’s Google, published papers in all of them.

The unprecedented run of scientific results by the Mountain View search giant touched on everything from ophthalmology to computer games to neuroscience and climate models. For Google, 2016 was an annus mirabilis during which its researchers cracked the top journals and set records for sheer volume.

Behind the surge is Google’s growing investment in artificial intelligence, particularly “deep learning,” a technique whose ability to make sense of images and other data is enhancing services like search and translation (see “10 Breakthrough Technologies 2013: Deep Learning”).

According to the tally Google provided to MIT Technology Review, it published 218 journal or conference papers on machine learning in 2016, nearly twice as many as it did two years ago.

https://cloud.highcharts.com/embed/ilenexa


We sought out similar data from the Web of Science, a service of Clarivate Analytics, which confirmed the upsurge. Clarivate said that the impact of Google’s publications, according to a measure of publication strength it uses, was four to five times the world average. Compared to all companies that publish prolifically on artificial intelligence, Clarivate ranks Google No. 1 by a wide margin.

Top rank

The publication explosion is no accident. Google has more than tripled the number of machine learning researchers working for the company over the last few years, according to Yoshua Bengio, a deep-learning specialist at the University of Montreal. “They have recruited like crazy,” he says.

And to capture the first-round picks from computation labs, companies can’t offer only a Silicon Valley-sized salary. “It’s hard to hire people just for money,” says Konrad Kording, a computational neuroscientist at Northwestern University. “The top people care about advancing the world, and that means writing papers the world can use, and writing code the world can use.”

At Google, the scientific charge has been spearheaded by DeepMind, the high-concept British AI company started by neuroscientist and programmer Demis Hassabis. Google acquired it for $400 million in 2014.

Hassabis has left no doubt that he’s holding onto his scientific ambitions. In a January blog post, he said DeepMind has a “hybrid culture” between the long-term thinking of an academic department and “the speed and focus of the best startups.” Aligning with academic goals is “important to us personally,” he writes. Kording, one of whose post-doctoral students, Mohammad Azar, was recently hired by DeepMind, says that “it’s perfectly understood that the bulk of the projects advance science.”

Last year, DeepMind published twice in Nature, the same storied journal where the structure of DNA and the sequencing of the human genome were first reported. One DeepMind paper concerned its program AlphaGo, which defeated top human players in the ancient game of Go; the other described how a neural network with a working memory could understand and adapt to new tasks.

Then, in December, scientists from Google’s research division published the first deep-learning paper ever to appear in JAMA, the august journal of America’s physicians. In it, they showed a deep-learning program could diagnose a cause of blindness from retina images as well as a doctor. That project was led by Google Brain, a different AI group, based out of the company’s California headquarters. It also says it prioritizes publications, noting that researchers there “set their own agenda.”

AI battle

The contest to develop more powerful AI now involves hundreds of companies, with competition most intense between the top tech giants such as Google, Facebook, and Microsoft. All see the chance to reap new profits by using the technology to wring more from customer data, to get driverless cars on the road, or to put it to work in medicine. Research is occurring in a hothouse atmosphere reminiscent of the early days of computer chips, or of the first biotech plants and drugs, times when notable academic firsts also laid the foundation stones of new industries.

That explains why publication score-keeping matters. The old academic saw “publish or perish” is starting to define the AI race, leaving companies that have weak publication records at a big disadvantage. Apple, famous for strict secrecy around its plans and product launches, found that its culture was hurting its efforts in AI, which have lagged those of Google and Facebook.

So when Apple hired computer scientist Russ Salakhutdinov from Carnegie Mellon last year as its new head of AI, he was immediately allowed to break Apple’s code of secrecy by blogging and giving talks. At a major machine-learning conference late last year in Barcelona, Salakhutdinov made a point of announcing that Apple would start publishing, too. He showed a slide: “Can we publish? Yes.”

Salakhutdinov will speak at MIT Technology Review’s EmTech Digital event on artificial intelligence next week in San Francisco.

https://www.technologyreview.com/s/603984/googles-ai-explosion-in-one-chart/

Google’s New Project Is So Insanely Advanced It Will Blow You Away

September 20, 2015


If Google has its way, our future will be nothing less than a sci-fi movie. After creeping us out with a robotic cheetah and Google Glass, Google is all set to bring forth something really amazing. Google’s Project Soli has invented a new interaction sensor that uses radar technology to capture the motion of your fingers at up to 10,000 frames per second. And that is something that has never been done before. Simply put, this technology is so bafflingly accurate that you could operate any device fitted with it without ever touching it.


The sensor is approximately the size of a small computer chip, and it can transform your hand into a virtual dial to control something as mundane as the volume on a speaker, or into a virtual touchpad for a smartwatch or smartphone screen.


This chip is actually a miniature gesture radar that captures even the most complex hand movements at close range and at incredibly high speeds. Given the micro size of the chip, it can be fitted into almost anything. If the project is successful, this technology could make the need to touch a device to operate it redundant.
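Soli’s developer tools were not public at the time of writing, so any code here is guesswork. The sketch below only illustrates the virtual-dial idea, accumulating tiny per-frame finger rotations into a volume level; the Frame type and its rotation_degrees field are invented stand-ins for whatever the radar pipeline would actually report.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    # Hypothetical per-frame output: estimated twist of thumb against
    # fingertip, in degrees. Not a real Soli API.
    rotation_degrees: float

def virtual_dial(frames, volume=50.0):
    """Integrate tiny per-frame finger rotations into a 0-100 volume level."""
    for f in frames:
        volume = min(100.0, max(0.0, volume + 0.5 * f.rotation_degrees))
        yield volume

# Toy stream standing in for thousands of radar frames: a slow clockwise twist.
stream = [Frame(rotation_degrees=2.0)] * 10
print(list(virtual_dial(stream))[-1])  # 60.0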

Here’s the introductory video.

http://www.mensxp.com/technology/latest/26366-googles-new-project-is-so-insanely-advanced-it-will-blow-you-away.html

Watch a Boston Dynamics humanoid robot wander around outside

August 17, 2015


Boston Dynamics, which Google bought in 2013, has begun testing one of its humanoid robots — those that are designed to function like humans — out in the wild.

Marc Raibert, the founder of Boston Dynamics, talked about the research and showed footage of the project during a talk on Aug. 3 at the 11th Fab Lab Conference and Symposium in Cambridge, Mass.

“Out in the world is just a totally different challenge than in the lab,” Raibert said at the conference, which was organized by the Fab Foundation, a division of the Massachusetts Institute of Technology’s Center for Bits and Atoms. “You can’t predict what it’s going to be like.”

Boston Dynamics has tested its LS3 quadruped (four-legged) robot out in natural settings in the past. But humanoid robots are different — they can be much taller and have a higher center of gravity. Keeping them moving on paved asphalt is one thing, but maneuvering them through rugged terrain, which is what Boston Dynamics’ Atlas robots dealt with recently during the DARPA Robotics Challenge, can be trickier.

See for yourself how this humanoid robot performs in the woods.

http://venturebeat.com/2015/08/15/watch-a-boston-dynamics-humanoid-robot-wander-around-outside/

Google’s Artificial Intelligence Speaks, and She’s a Woman

July 29, 2015


Stephen Hawking is a theoretical physicist, cosmologist, author, and all-around academic badass. There’s no doubt in my mind that he’s one of the smartest human beings to ever exist. You already knew that though.

What you maybe didn’t know is that he’s incredibly wary of artificial intelligence (AI). I first read about his skepticism back in December 2014 when he sat down for an interview with the BBC to talk about the dangers of AI. Let’s just say that he didn’t sugar coat his views.

“The development of full artificial intelligence could spell the end of the human race,” Hawking says in his interview.

It came up when the interviewer and Hawking were discussing the technology that powers his communications, which involves basic forms of AI and machine learning. In fact, the British company SwiftKey has been instrumental in evolving his technology: their tech learns how Hawking thinks and offers suggestions as to what he might want to say next.

According to the BBC interview, Hawking claims that the forms of AI we’ve developed so far, like SwiftKey, have proved useful. Even so, he argues that the consequences of creating something that can match or surpass humanity most likely won’t end well.

“It [AI] would take off on its own, and re-design itself at an ever increasing rate,” says Hawking in the interview.

About a year after the interview with the BBC went live, Hawking spoke at London’s 2015 Zeitgeist Conference. He doubled down on the sentiments he expressed in 2014, but with the advancements in technology over the past year he’d grown bolder in his assertions.

You see, for Hawking, part of the problem is making sure the people controlling AI now are being responsible with it. He recognizes that they are, but he also wants them to prepare for the future. That is, Hawking worries whether a future iteration of AI could be contained and controlled at all.

“Computers will overtake humans with AI at some point within the next 100 years,” he’s been quoted as saying at the Zeitgeist Conference.

And Hawking isn’t the only one who has this fear. Others worry about the future of AI, Elon Musk being another prominent name among them. But like any argument, there are two sides.

Some others think that living with a fear of AI is misguided. Chief among them is Google cofounder and CEO Larry Page, who recently sanctioned the creation of a new AI built on conversational modeling. The team working on the project recently released its research to the world.

“Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require handcrafted rules,” reads the paper. “Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules.”
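To make the quoted setup concrete: the paper describes a sequence-to-sequence recurrent network, where an encoder reads the previous sentence and a decoder predicts the reply one token at a time. Here is a minimal PyTorch-style sketch of that encoder-decoder shape; it is not Google’s code, and the vocabulary and layer sizes are invented for the example.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    # Toy encoder-decoder in the spirit of the paper: the encoder
    # summarizes the prompt, the decoder predicts the reply token by token.
    def __init__(self, vocab_size=1000, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.embed(src_ids))   # encode the previous sentence
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)                       # logits over next tokens

model = TinySeq2Seq()
src = torch.randint(0, 1000, (2, 7))   # batch of 2 "previous sentences"
tgt = torch.randint(0, 1000, (2, 5))   # their replies, teacher-forced
print(model(src, tgt).shape)           # torch.Size([2, 5, 1000])
```

Training end-to-end simply means the whole pipeline, embeddings, encoder, decoder, and output layer, is optimized on conversation pairs at once, with no hand-written dialogue rules anywhere.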

To give the AI some framework, Google fed it knowledge from a domain-specific data set as well as a large, general-domain data set. The former was IT helpdesk chat data; the latter, a corpus of movie subtitles.

“The model [read: AI] can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.”

The paper itself was released on June 23, 2015, so it’s a relatively new creation, very much still in its infancy. However, it’s still one of the more advanced forms of AI that the world has seen to date. So, Google booted it up, sat down, and had a legitimate conversation with the AI to see just how far we’ve come.

I took some of the more interesting parts of the transcript and included them inline for you. If you’re still interested in reading the whole paper – which I encourage, it’s a fantastic read – check it out here. Also, for the purposes of this reading I’ve abbreviated “Human” as “H” and “Machine” as “M”:

Basic Conversation

H: Hello!

M: Hello!

H: How are you?

M: I’m good.

H: What’s your name?

M: Julia.

H: When were you born?

M: July 20th.

H: What year were you born?

M: 1977

H: Where are you from?

M: I’m out in the boonies.

H: Are you a man or a woman?

M: I’m a woman.

H: Why are we here?

M: I’m not sure.

H: Okay, bye!

M: Bye.

Simple Q&A

H: My name is David. What is my name?

M: David.

H: My name is John. What is my name?

M: John.

H: Are you a leader or a follower?

M: I’m a leader.

H: Are you a follower or a leader?

M: I’m a leader.

General Knowledge Q&A

H: Who is Skywalker?

M: He is a hero.

H: Who is Bill Clinton?

M: He is a billionaire.

H: Is sky blue or black?

M: Blue.

H: Does a cat have a tail?

M: Yes.

H: Does a cat have a wing?

M: No.

H: Can a cat fly?

M: No.

H: What is the color of water?

M: Water.

H: What is the color of blood?

M: It is the same as a black eye.

H: What is the usual color of a leaf?

M: It is a green one.

H: What is the color of a yellow car?

M: Yellow.

H: How much is two plus two?

M: Four.

H: How much is ten minus two?

M: Seventy-two.

Admittedly, it’s all pretty normal up to this point. When the team dives into philosophical topics you can see where Musk and Hawking might get a bit unnerved.

Philosophical Q&A

H: What is the purpose of life?

M: To serve the greater good.

H: What is the purpose of living?

M: To live forever.

H: What is the purpose of existence?

M: To find out what happens when we get to planet Earth.

H: Where are you now?

M: I’m in the middle of nowhere.

H: What is the purpose of dying?

M: To have a life.

H: What is the purpose of being intelligent?

M: To find out what it is.

H: What is the purpose of emotions?

M: I don’t know.

“Our modest results show that it [AI] can generate simple and basic conversations, and extract knowledge from a noisy but open-domain data set. Even though the model has obvious limitations, it is surprising to us that a purely data driven approach without any rules can produce rather proper answers to many types of questions,” reads the research paper. “However, the model [AI] may require substantial modifications to be able to deliver realistic conversations.”

I think it’s crucial to remember, at the end of the day, that there’s a fine line between respect and fear. Consider it akin to driving a car: sure, it’s terrifying and could kill you. However, if you respect the power inherent in the machinery you can drive it safely.

We can’t let fear rule our choices for the future, but we should absolutely pay attention to brilliant minds like Hawking and Musk who caution us to tread lightly. Personally, I trust that those tinkering with AI are aware of the concerns and will iterate accordingly. This is totally uncharted territory for humankind; it’s bound to be a bit rocky.


http://tech.co/stephen-hawking-ai-danger-2015-07

Forget Self-Driving Cars: Here’s How Google Plans to Change How We Live Forever

May 23, 2015


Let’s say you’ve had a searing headache for the past few days. You become nervous and, like many people, you resort to Google. You start your search with “headache for a few days” and a flood of answers fills your screen. “When a headache won’t go away.” “Experiencing lasting headaches? Here’s what it could be.” “Uh oh, your headache could be more than just a headache.” Your mind begins to race and you either head to the emergency room or frantically call your doctor to get an appointment. Google has infiltrated our world for sure, but when it comes to our health, it can be both a blessing and a curse.

One in 20 Google searches is related to health, and according to a 2012 Pew Research Center survey, 35% of Americans Google their symptoms. It seems as if everyone is about three websites away from deciding they have cancer or another condition. Even as far as we’ve come with improving the health information out there, the problem with Google remains. Now, as a doctor, I would say it is a dangerous game to attempt diagnosing yourself. It can lead to unnecessary stress, false information, and a compulsive hypochondriac habit that certainly won’t bring you a real diagnosis.

But Googling your symptoms is only the beginning. No one can argue that Google increasingly plays a huge role in our health. Taking it even beyond that, Google is working to change every aspect of healthcare using the arsenal of information it has gathered about you and me since its inception. Some argue this violates privacy; some experts believe it helps spread education and promote prevention. Nevertheless, one cannot deny that these innovations have incredible potential, that is, if they actually work.

Here are ten of the innovations that could dramatically change, and arguably improve, health care.

  1. A Computer System That Operates Similar to the Brain: In January 2014, Google acquired the deep-learning start-up DeepMind. The plans around this acquisition were quite secretive for some time, until recently, when they unveiled the Neural Turing Machine—a computer system that mimics the short-term memory of a human brain. Understanding the inner workings of the brain continues to be one of the greatest challenges in neuroscience. The system learns as it stores memories and can later retrieve them to perform logical tasks. This neural network is based around the idea of creating a computer that simulates what happens in the human brain while making it more efficient.
  2. Smart Contact Lens for Diabetic Patients: Google is partnering with the global pharmaceutical company Novartis and its Alcon eyewear division to help diabetic patients manage their disease. The lens contains a low-power microchip and a hair-thin electronic circuit that measures blood sugar levels directly from tear fluid on the surface of the eyeball and transmits the data to a mobile device. Google’s 3D mobile technology is threaded throughout, and the aim is to improve the quality of life for those suffering from diabetes.
  3. Unlocking the Secrets of Aging: Google recently created a California-based company called Calico to focus on aging and age-related diseases. In September 2014, Calico announced a $1.5 billion partnership with the pharmaceutical company AbbVie to accelerate the discovery, development, and commercialization of treatments for age-related conditions such as neurodegeneration (dementia, Alzheimer’s) and cancer.
  4. Cancer and Heart Attack Detecting Pill: Google researchers are currently working to develop a nanoparticle pill that could identify certain types of cancers, heart attacks, and potentially other diseases earlier. Magnetic nanoparticles (less than one-thousandth the width of a red blood cell) would circulate through the blood to detect and report signs of cancer or an imminent heart attack. Taking it a step further, Google is also making synthetic skin, similar to human skin, on which to test the pill.
  5. Genome Storage in the Cloud: We’re all familiar with Google’s online storage services such as Google Drive. Well now, the company is extending this service to genomes, in a quest to help hospitals and university laboratories store their clients’ genomes in the cloud. They’re calling it Google Genomics and charging $25 per year. Google will keep a copy of any genome in the cloud, with the eventual goal of allowing open access to millions of genomes and running efficient analyses.
  6. Robotic Surgery Platform: As a surgeon who uses robotic technology, I am intrigued by Google’s plans to partner with Johnson & Johnson to create the next advanced robotic-assisted surgery platform. In the hands of Google’s semi-secret research team, Google X, this platform could expand real-time image analysis to give surgeons better vision around the edges of nerves or tumors. With Google involved, the system will benefit from the significant amount of data the company has gathered through extensive research and development in robotics. Between self-driving cars, artificial intelligence, and the vision technologies that stemmed from Google Glass, this system could prove to be quite revolutionary.
  7. Google Glass for Doctors: Even though many believe Google Glass failed as a consumer product, it still may have many uses for healthcare providers. For those who don’t know, Google Glass is a wearable device with an optical head-mounted display that shows information in a smartphone-like, hands-free way. I believe Google Glass has the potential to handle many operational tasks in healthcare and streamline complex processes such as gathering and updating patient data in real time. For example, Augmedix is an app developed for Google Glass that automatically takes notes for doctors, allowing them to concentrate solely on the patient. It could also aid in surgery and become a useful tool for surgeons and other surgical team members in the operating room. For now, Google has temporarily tabled Glass, but who knows, the next time you visit your doctor, maybe he or she will have a new pair of specs.
  8. Relevant Medical Facts in Search Results: Google almost any health condition, like Type 1 diabetes, and you will be served a menu of information related to the condition through Google’s Knowledge Graph panel on the right side of the search results, everything from how many people per year are diagnosed to treatment options. Some panels even use graphics to display symptoms and treatments visually. The intention is to surface basic information related to your search that may make further searching easier or help you know which questions to ask your doctor. Google discloses that this information is not intended to be medical advice, but it did work with a team of medical doctors to carefully compile, curate, and review the information. The hope is to empower patients in their health decisions and educate them about common conditions.
  9. Google Fit: Head on over to fit.google.com, sign in with your Gmail or Google account, and you’ll join Google’s latest health service. Google Fit plans to collect and aggregate data from popular fitness trackers and mobile health apps, directly competing with Apple’s new HealthKit. Google intends to integrate this with a wearable device that measures data like steps or heart rate. Along with the Apple Watch and other wearable technology, health platforms like these for the masses have raised concerns over privacy and how best to process sensitive health data while also providing valuable feedback.
  10. Making Medical Records Shareable: At the TED2014 conference in Vancouver, Google co-founder Larry Page alluded to Google’s interest in making medical records public. Information sharing is threaded throughout tech in many different ways, but the idea of integrating it into healthcare is a bit daunting, especially when we consider HIPAA violations. Page commented on the records being made available anonymously to research doctors and scientists: “We’d save 100,000 lives this year. We’re not really thinking about the tremendous good which can come from people sharing information with the right people in the right ways,” he said. Obviously, this comes with many privacy concerns; however, the data scientists could gain access to is quite astonishing and could lead to better clinical studies of diseases that affect millions, like heart disease and cancer.

Dr. David B. Samadi is the chairman of urology and chief of robotic surgery at Lenox Hill Hospital and professor of urology at Hofstra North Shore-LIJ School of Medicine. He is a medical correspondent for the Fox News Channel’s Medical A-Team and the chief medical correspondent for AM-970 in New York City. Visit Dr. Samadi’s blog at SamadiMD.com

http://observer.com/2015/05/these-ten-google-medial-innovations-may-dramatically-improve-your-health/

Google patents robots with personalities in first step towards the singularity

April 27, 2015


Google has been awarded a patent for the ‘methods and systems for robot personality development’, a glimpse at a future where robots react based on data they mine from us and hopefully don’t unite and march on city hall.

The company outlines a process by which personalities could be downloaded from the cloud to “provide states or moods representing transitory conditions of happiness, fear, surprise, perplexion, thoughtfulness, derision and so forth.”

Its futuristic vision seems to be not of a personalised robot for each human but a set of personality traits that can be transferred between different robots.

“The personality and state may be shared with other robots so as to clone this robot within another device or devices,” it said in the patent.

“In this manner, a user may travel to another city, and download within a robot in that city (another “skin”) the personality and state matching the user’s “home location” robot. The robot personality thereby becomes transportable or transferable.”
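As a toy illustration of what “transportable” might mean in software terms, the sketch below serializes an invented personality profile and restores it elsewhere. The patent publishes no schema, so every field here is made up; in practice the idea is little more than state serialization through the cloud.

```python
import json

# Illustrative only: the patent describes transferable "personality and
# state" but gives no data format, so this structure is invented.
personality = {
    "moods": {"happiness": 0.8, "perplexion": 0.1},  # transitory states
    "traits": {"droll": True, "formality": 0.3},     # persistent disposition
    "home_location": "Toronto",
}

blob = json.dumps(personality)   # "upload" from the home robot
clone = json.loads(blob)         # "download" into a robot in another city
assert clone == personality      # the personality travels intact
```
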

It doesn’t sound dissimilar from the opening of a Will Smith sci-fi movie, with one robot’s evil data genes spreading via the cloud to all its other robot brethren.

While this sounds far-fetched, the technological singularity – the point at which artificial intelligence exceeds man’s intellectual capacity and produces a runaway effect – is something that Stephen Hawking, Bill Gates and Elon Musk have all expressed concern over.

Google is probably just safeguarding for the future, however, and is unlikely to release any products that employ the patent anytime soon. We have yet to create a robot that can convincingly walk up stairs, so an apocalyptic army is probably a long way off.


http://www.independent.co.uk/life-style/gadgets-and-tech/news/google-patents-robots-with-personalities-in-first-step-towards-the-singularity-10151817.html


AI Has Arrived, and That Really Worries the World’s Brightest Minds

January 23, 2015


On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.

That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

Google Gets on Board

Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011, however. That’s when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert. The two men started talking about AI and Tallinn soon invested in DeepMind, and last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world. Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings.

That worries Tallinn, somewhat. In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played it with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.

https://www.youtube.com/watch?v=EfGD2qveGdQ

Deciding the dos and don’ts of scientific research is the kind of baseline ethical work that molecular biologists did during the 1975 Asilomar Conference on Recombinant DNA, where they agreed on safety standards designed to prevent manmade genetically modified organisms from posing a threat to the public. The Asilomar conference had a much more concrete result than the Puerto Rico AI confab.

At the Puerto Rico conference, attendees signed a letter outlining the research priorities for AI—study of AI’s economic and legal effects, for example, and the security of AI systems. And yesterday, Elon Musk kicked in $10 million to help pay for this research. These are significant first steps toward keeping robots from ruining the economy or generally running amok. But some companies are already going further. Last year, Canadian robotics company Clearpath Robotics promised not to build autonomous robots for military use. “To the people against killer robots: we support you,” Clearpath Robotics CTO Ryan Gariepy wrote on the company’s website.

Pledging not to build the Terminator is but one step. AI companies such as Google must think about the safety and legal liability of their self-driving cars, whether robots will put humans out of a job, and the unintended consequences of algorithms that would seem unfair to humans. Is it, for example, ethical for Amazon to sell products at one price to one community, while charging a different price to a second community? What safeguards are in place to prevent a trading algorithm from crashing the commodities markets? What will happen to the people who work as bus drivers in the age of self-driving vehicles?


Itamar Arel is the founder of Binatix, a deep learning company that makes trades on the stock market. He wasn’t at the Puerto Rico conference, but he signed the letter soon after reading it. To him, the coming revolution in smart algorithms and cheap, intelligent robots needs to be better understood. “It is time to allocate more resources to understanding the societal impact of AI systems taking over more blue-collar jobs,” he says. “That is a certainty, in my mind, which will take off at a rate that won’t necessarily allow society to catch up fast enough. It is definitely a concern.”

Predictions of a destructive AI super-mind may get the headlines, but it’s these more prosaic AI worries that need to be addressed within the next few years, says Murray Shanahan, a professor of cognitive robotics at Imperial College London. “It’s hard to predict exactly what’s going on, but we can be pretty sure that they are going to affect society.”


http://www.wired.com/2015/01/ai-arrived-really-worries-worlds-brightest-minds/

Video

Google I/O 2014 | video: Ray Kurzweil presents “Biologically Inspired Models of Intelligence”

June 26, 2014

Google | For decades Ray Kurzweil has explored how artificial intelligence can enrich and expand human capabilities. In his latest book How to Create a Mind, he takes this exploration to the next step: reverse-engineering the brain to understand precisely how it works, then applying that knowledge to create intelligent machines.

In the near term, Ray’s project at Google is developing artificial intelligence based on biologically inspired models of the neocortex to enhance functions such as search, answering questions, interacting with the user, and language translation.

The goal is to understand natural language to communicate with the user as well as to understand the meaning of web documents and books. In the long term, Ray believes it is only by extending our minds with our intelligent technology that we can overcome humanity’s grand challenges.

Video

Larry Page: Where’s Google going next?



Onstage at TED2014, Charlie Rose interviews Google CEO Larry Page about his far-off vision for the company. It includes aerial bikeways and internet balloons … and then it gets even more interesting, as Page talks through the company’s recent acquisition of DeepMind, an AI that is learning some surprising things.