Artificial intelligence could build new drugs faster than any human team

May 7, 2017

Artificial intelligence algorithms are being taught to generate art, human voices, and even fiction stories all on their own—why not give them a shot at building new ways to treat disease?

Atomwise, a San Francisco-based startup and Y Combinator alum, has built a system it calls AtomNet (pdf), which attempts to generate potential drugs for diseases like Ebola and multiple sclerosis. The company has invited academic and non-profit researchers from around the country to detail which diseases they’re trying to generate treatments for, so AtomNet can take a shot. The academic labs will receive 72 different drugs that the neural network has found to have the highest probability of interacting with the disease, based on the molecular data it’s seen.

Atomwise’s system only generates potential drugs—the compounds created by the neural network aren’t guaranteed to be safe, and need to go through the same drug trials and safety checks as anything else on the market. The company believes that the speed at which it can generate trial-ready drugs based on previous safe molecular interactions is what sets it apart.

Atomwise touts two projects that show the potential of AtomNet, drugs for multiple sclerosis and Ebola. The MS drug has been licensed to an undisclosed UK pharmacology firm, according to Atomwise, and the Ebola drug is being prepared for submission to a peer-reviewed publication.

Alexander Levy, the company’s COO and cofounder, said that AtomNet learns the interactions between molecules much as artificial intelligence learns to recognize images. Image recognition reduces the patterns in an image’s pixels to simpler representations, teaching itself the bounds of an idea like a horse or a desk lamp by seeing hundreds or thousands of examples.

“It turns out that the same thing that works in images, also works in chemistry,” Levy says. “You can take an interaction between a drug and a huge biological system and you can decompose that into smaller and smaller interactive groups. If you study enough historical examples of molecules … and we’ve studied tens of millions of those, you can then make predictions that are extremely accurate yet also extremely fast.”

Atomwise isn’t the only company working on this technique. Startup BenevolentAI, working with Johnson & Johnson subsidiary Janssen, is also developing new ways to find drugs. TwoXAR is working on an AI-driven glaucoma medication, and Berg is working on algorithmically-built cancer treatments.

One of Atomwise’s advantages, Levy says, is that the network works with 3D models. To generate the drugs, the model starts with a 3D model of a molecule—for example a protein that gives a cancer cell a growth advantage. The neural network then generates a series of synthetic compounds (simulated drugs), and predicts how likely it would be for the two molecules to interact. If a drug is likely to interact with the specific molecule, it can be synthesized and tested.
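The screening loop described above can be sketched in a few lines. This is a rough illustration only, not Atomwise's system: the scoring function here is a toy stand-in for a trained neural network, and every name and threshold is invented.

```python
# Illustrative sketch of virtual screening: score candidate compounds
# against a target, keep those above a threshold, return the best hits.
# `interaction_score` is a toy stand-in for a trained neural network.

def interaction_score(target_features, compound_features):
    """Stand-in for a trained model: normalized feature overlap."""
    shared = len(set(target_features) & set(compound_features))
    return shared / max(len(target_features), 1)

def screen(target_features, candidates, threshold=0.5, top_k=72):
    """Rank candidates by predicted interaction score and keep the
    top_k above the threshold (AtomNet returns 72 hits per target)."""
    scored = [(name, interaction_score(target_features, feats))
              for name, feats in candidates.items()]
    hits = [(n, s) for n, s in scored if s >= threshold]
    hits.sort(key=lambda x: x[1], reverse=True)
    return hits[:top_k]

target = {"pocketA", "pocketB", "pocketC"}
candidates = {
    "cmpd1": {"pocketA", "pocketB"},             # partial overlap
    "cmpd2": {"pocketC"},                        # weak overlap
    "cmpd3": {"pocketA", "pocketB", "pocketC"},  # full overlap
}
print(screen(target, candidates))
```

In the real system the scoring model is a deep neural network trained on millions of known molecular interactions; only the surviving hits go on to synthesis and lab testing.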

Levy likens the idea to the automated systems used to model airplane aerodynamics or computer chip design, where millions of scenarios are mapped out within software that accurately represents how the physical world works.

“Imagine if you knew what a biological mechanism looked like, atom by atom. Could you reason your way to a compound that did the thing that you wanted?” Levy says.


Exponential Growth Will Transform Humanity in the Next 30 Years

February 25, 2017


By Peter Diamandis

As we close out 2016, if you’ll allow me, I’d like to take a risk and venture into a topic I’m personally compelled to think about… a topic that will seem far out to most readers.

Today’s extraordinary rate of exponential growth may do much more than just disrupt industries. It may actually give birth to a new species, reinventing humanity over the next 30 years.

I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a “Meta-Intelligence,” a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions. In this post, I’m investigating the driving forces behind such an evolutionary step, the historical pattern we are about to repeat, and the implications thereof. Again, I acknowledge that this topic seems far-out, but the forces at play are huge and the implications are vast. Let’s dive in…

A Quick Recap: Evolution of Life on Earth in 4 Steps

About 4.6 billion years ago, our solar system, the sun and the Earth were formed.

Step 1: 3.5 billion years ago, the first simple life forms, called “prokaryotes,” came into existence. These prokaryotes were super-simple, microscopic single-celled organisms, basically a bag of cytoplasm with free-floating DNA. They had neither a distinct nucleus nor specialized organelles.

Step 2: Fast-forwarding one billion years to 2.5 billion years ago, the next step in evolution created what we call “eukaryotes”—life forms that distinguished themselves by incorporating biological ‘technology’ into themselves: technology that allowed them to manipulate energy (via mitochondria) and information (via chromosomes) far more efficiently. Fast forward another billion years for the next step.

Step 3: 1.5 billion years ago, these early eukaryotes began working collaboratively and formed the first “multi-cellular life,” of which you and I are the ultimate examples (a human is a multicellular creature of 10 trillion cells).

Step 4: The final step I want to highlight happened some 400 million years ago, when lungfish crawled out of the oceans onto the shores, and life evolved from the oceans onto land.

The Next Stages of Human Evolution: 4 Steps

Today, at a massively accelerated rate—some 100 million times faster than the steps I outlined above—life is undergoing a similar evolution. In this next stage of evolution, we are going from evolution by natural selection (Darwinism) to evolution by intelligent direction. Allow me to draw the analogy for you:

Step 1: Simple humans today are analogous to prokaryotes. Simple life, each life form independent of the others, competing and sometimes collaborating.

Step 2: Just as eukaryotes were created by ingesting technology, humans will incorporate technology into our bodies and brains that will allow us to make vastly more efficient use of information (BCI) and energy.

Step 3: Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.

Step 4: Finally, humanity is about to crawl out of the gravity well of Earth to become a multiplanetary species. Our journey to the moon, Mars, asteroids and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago.

The 4 Forces Driving the Evolution and Transformation of Humanity

Four primary forces are driving the transformation of humanity into a meta-intelligence, both on and off the Earth:

  1. We’re wiring our planet
  2. Emergence of brain-computer interface
  3. Emergence of AI
  4. Opening of the space frontier

Let’s take a look.

1. Wiring the Planet: Today, there are 2.9 billion people connected online. Within the next six to eight years, that number is expected to increase to nearly 8 billion, with each individual on the planet having access to a megabit-per-second connection or better. The wiring is taking place through the deployment of 5G on the ground, plus networks being deployed by Facebook, Google, Qualcomm, Samsung, Virgin, SpaceX and many others. Within a decade, every single human on the planet will have access to multi-megabit connectivity, the world’s information, and massive computational power on the cloud.

2. Brain-Computer Interface: A multitude of labs and entrepreneurs are working to create lasting, high-bandwidth connections between the digital world and the human neocortex (I wrote about that in detail here). Ray Kurzweil predicts we’ll see human-cloud connection by the mid-2030s, just 18 years from now. In addition, entrepreneurs like Bryan Johnson (and his company Kernel) are committing hundreds of millions of dollars towards this vision. The end results of connecting your neocortex with the cloud are twofold: first, you’ll have the ability to increase your memory capacity and/or cognitive function millions of fold; second, via a global mesh network, you’ll have the ability to connect your brain to anyone else’s brain and to emerging AIs, just like our cell phones, servers, watches, cars and all devices are becoming connected via the Internet of Things.

3. Artificial Intelligence/Human Intelligence: Next, and perhaps most significantly, we are on the cusp of an AI revolution. Artificial intelligence, powered by deep learning and funded by companies such as Google, Facebook, IBM, Samsung and Alibaba, will continue to rapidly accelerate and drive breakthroughs. Cumulative “intelligence” (both artificial and human) is the single greatest predictor of success for both a company and a nation. For this reason, besides the emerging AI “arms race,” we will soon see a race focused on increasing overall human intelligence. Whatever challenges we might have in creating a vibrant brain-computer interface (e.g., designing long-term biocompatible sensors or nanobots that interface with your neocortex), those challenges will fall quickly over the next couple of decades as AI power tools give us ever-increasing problem-solving capability. It is an exponential atop an exponential. More intelligence gives us the tools to solve connectivity and mesh problems, and in turn to create greater intelligence.

4. Opening the Space Frontier: Finally, it’s important to note that the human race is on the verge of becoming a multiplanetary species. Thousands of years from now, whatever we’ve evolved into, we will look back at these next few decades as the moment in time when the human race moved off Earth irreversibly. Today, billions of dollars are being invested privately into the commercial space industry. Efforts led by SpaceX are targeting humans on Mars, while efforts by Blue Origin are looking at taking humanity back to the moon, and plans by my own company, Planetary Resources, strive to unlock near-infinite resources from the asteroids.

In Conclusion

The rate of human evolution is accelerating as we transition from the slow and random process of “Darwinian natural selection” to a hyper-accelerated and precisely-directed period of “evolution by intelligent direction.” In this post, I chose not to discuss the power being unleashed by such gene-editing techniques as CRISPR-Cas9. Consider this yet another tool able to accelerate evolution by our own hand.

The bottom line is that change is coming, faster than ever considered possible. All of us leaders, entrepreneurs and parents have a huge responsibility to inspire and guide the transformation of humanity on and off the Earth. What we do over the next 30 years—the bridges we build to abundance—will impact the future of the human race for millennia to come. We truly live during the most exciting time ever in human history.

The Fourth Industrial Revolution Is Here

February 25, 2017

The Fourth Industrial Revolution is upon us and now is the time to act.

Everything is changing each day and humans are making decisions that affect life in the future for generations to come.

We have gone from steam engines to steel mills to computers, and now to the Fourth Industrial Revolution, which involves a digital economy, artificial intelligence, big data and a new system that introduces a new story of our future, enabling different economic and human models.

Will the Fourth Industrial Revolution put humans first and empower technologies to give humans a better quality of life with cleaner air, water, food, health, a positive mindset and happiness? HOPE…

Artificial intelligence to generate new cancer drugs on demand

December 18, 2016



  • Clinical trial failure rates for small molecules in oncology exceed 94% for molecules previously tested in animals and the costs to bring a new drug to market exceed $2.5 billion
  • There are around 2,000 drugs approved for therapeutic use by the regulators with very few providing complete cures
  • Advances in deep learning have demonstrated superhuman accuracy in many areas and are expected to transform industries where large amounts of training data are available
  • Generative Adversarial Networks (GANs), a new technology introduced in 2014, represent the “cutting edge” in artificial intelligence, where new images, videos and voices can be produced by deep neural networks on demand
  • Here, for the first time, we demonstrate the application of Generative Adversarial Autoencoders (AAEs), a new type of GAN, to the generation of molecular fingerprints for molecules that kill cancer cells at specific concentrations
  • This work is a proof of concept that opens the door to a cornucopia of meaningful molecular leads created according to given criteria
  • The study was published in Oncotarget and the open-access manuscript is available in the Advance Open Publications section
  • The authors speculate that in 2017 the conservative pharmaceutical industry will experience a transformation similar to the automotive industry’s, with deep-learned drug discovery pipelines integrated into many business processes
  • The extension of this work will be presented at the “4th Annual R&D Data Intelligence Leaders Forum” in Basel, Switzerland, Jan 24-26th, 2017

Thursday, 22nd of December, Baltimore, MD – Scientists at the Pharmaceutical Artificial Intelligence (pharma.AI) group of Insilico Medicine, Inc, today announced the publication of a seminal paper demonstrating the application of generative adversarial autoencoders (AAEs) to generating new molecular fingerprints on demand. The study was published in Oncotarget on 22nd of December, 2016. The study represents the proof of concept for applying Generative Adversarial Networks (GANs) to drug discovery. The authors significantly extended this model to generate new leads according to multiple requested characteristics and plan to launch a comprehensive GAN-based drug discovery engine producing promising therapeutic treatments to significantly accelerate pharmaceutical R&D and improve the success rates in clinical trials.

Since 2010, deep learning systems have demonstrated unprecedented results in image, voice and text recognition, in many cases surpassing human accuracy and enabling autonomous driving, automated creation of pleasant art and even composition of pleasant music.

GAN is a fresh direction in deep learning, invented by Ian Goodfellow in 2014. In recent years, GANs have produced extraordinary results in generating meaningful images according to desired descriptions. Similar principles can be applied to drug discovery and biomarker development. This paper represents a proof of concept of an artificially intelligent drug discovery engine, in which AAEs are used to generate new molecular fingerprints with desired molecular properties.

“At Insilico Medicine we want to be the supplier of meaningful, high-value drug leads in many disease areas with high probability of passing the Phase I/II clinical trials. While this publication is a proof of concept and only generates the molecular fingerprints with the very basic molecular properties, internally we can now generate entire molecular structures according to a large number of parameters. These structures can be fed into our multi-modal drug discovery pipeline, which predicts therapeutic class, efficacy, side effects and many other parameters. Imagine an intelligent system, which one can instruct to produce a set of molecules with specified properties that kill certain cancer cells at a specified dose in a specific subset of the patient population, then predict the age-adjusted and specific biomarker-adjusted efficacy, predict the adverse effects and evaluate the probability of passing the human clinical trials. This is our big vision”, said Alex Zhavoronkov, PhD, CEO of Insilico Medicine, Inc.

Previously, Insilico Medicine demonstrated the predictive power of its discovery systems in the nutraceutical industry. In 2017 Life Extension will launch a range of natural products developed using Insilico Medicine’s discovery pipelines. Earlier this year the pharmaceutical artificial intelligence division of Insilico Medicine published several seminal proof of concept papers demonstrating the applications of deep learning to drug discovery, biomarker development and aging research. Recently the authors published a tool in Nature Communications, which is used for dimensionality reduction in transcriptomic data for training deep neural networks (DNNs). The paper published in Molecular Pharmaceutics demonstrating the applications of deep neural networks for predicting the therapeutic class of the molecule using the transcriptional response data received the American Chemical Society Editors’ Choice Award. Another paper demonstrating the ability to predict the chronological age of the patient using a simple blood test, published in Aging, became the second most popular paper in the journal’s history.

“Generative AAE is a radically new way to discover drugs according to the required parameters. At Pharma.AI we have a comprehensive drug discovery pipeline with reasonably accurate predictors of efficacy and adverse effects that work on the structural data and transcriptional response data and utilize the advanced signaling pathway activation analysis and deep learning. We use this pipeline to uncover the prospective uses of molecules, where these types of data are available. But the generative models allow us to generate completely new molecular structures that can be run through our pipelines and then tested in vitro and in vivo. And while it is too early to make ostentatious claims before our predictions are validated in vivo, it is clear that generative adversarial networks coupled with the more traditional deep learning tools and biomarkers are likely to transform the way drugs are discovered”, said Alex Aliper, president, European R&D at the Pharma.AI group of Insilico Medicine.

Recent advances in deep learning, and specifically in generative adversarial networks, have demonstrated surprising results in generating new images and videos on request, even when using natural language as input. In this study the group developed a seven-layer AAE architecture with the latent middle layer serving as a discriminator. As input and output, the AAE uses a vector of binary molecular fingerprints together with the concentration of the molecule. In the latent layer the group introduced a neuron responsible for the tumor growth inhibition index, which, when negative, indicates a reduction in the number of tumor cells after treatment. To train the AAE, the authors used NCI-60 cell line assay data for 6,252 compounds profiled on the MCF-7 cell line. The output of the AAE was then used to screen 72 million compounds in PubChem and select candidate molecules with potential anti-cancer properties.
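The final screening step—matching a generated fingerprint against millions of real compounds—is commonly done with Tanimoto similarity on binary fingerprints. The paper does not specify its matching method, so the following is an illustrative sketch with invented data, not the authors' pipeline:

```python
# Illustrative sketch: rank real library compounds by Tanimoto
# similarity to a generated binary fingerprint. Tanimoto similarity
# is the standard cheminformatics measure |A ∩ B| / |A ∪ B| over
# the sets of "on" bits in two fingerprints.

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two binary fingerprints (bit lists)."""
    on_a = {i for i, bit in enumerate(fp_a) if bit}
    on_b = {i for i, bit in enumerate(fp_b) if bit}
    if not on_a and not on_b:
        return 1.0
    return len(on_a & on_b) / len(on_a | on_b)

def nearest_compounds(generated_fp, library, k=3):
    """Return the k library compounds most similar to the generated one."""
    ranked = sorted(library.items(),
                    key=lambda item: tanimoto(generated_fp, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

generated = [1, 1, 0, 1, 0, 0]
library = {
    "pubchem_A": [1, 1, 0, 1, 0, 0],  # identical bits -> similarity 1.0
    "pubchem_B": [1, 0, 0, 1, 0, 0],  # similarity 2/3
    "pubchem_C": [0, 0, 1, 0, 1, 1],  # disjoint bits -> similarity 0.0
}
print(nearest_compounds(generated, library, k=2))
```

At PubChem scale the same idea is run over 72 million fingerprints with indexed similarity search rather than a plain sort.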

“I am very happy to work alongside the Pharma.AI scientists at Insilico Medicine on getting the GANs to generate meaningful leads in cancer and, most importantly, age-related diseases and aging itself. This is humanity’s most pressing cause and everyone in machine learning and data science should be contributing. The pipelines these guys are developing will play a transformative role in the pharmaceutical industry and in extending human longevity and we will continue our collaboration and invite other scientists to follow this path”, said Artur Kadurin, the head of the segmentation group at Mail.Ru, one of the largest IT companies in Eastern Europe and the first author on the paper.


About Insilico Medicine, Inc

Insilico Medicine, Inc. is a bioinformatics company located at the Emerging Technology Centers at the Johns Hopkins University Eastern campus in Baltimore, with research and development (“R&D”) resources in Belgium, the UK and Russia, hiring talent through hackathons and competitions. The company utilizes advances in genomics, big data analysis, and deep learning for in silico drug discovery and drug repurposing for aging and age-related diseases. The company pursues internal drug discovery programs in cancer, Parkinson’s disease, Alzheimer’s disease, sarcopenia, and geroprotector discovery. Through its Pharma.AI division, the company provides advanced machine learning services to biotechnology, pharmaceutical, and skin care companies.


New AI Mental Health Tools Beat Human Doctors at Assessing Patients

December 18, 2016


About 20 percent of youth in the United States live with a mental health condition, according to the National Institute of Mental Health.

That’s the bad news.

The good news is that mental health professionals have smarter tools than ever before, with artificial intelligence-related technology coming to the forefront to help diagnose patients, often with much greater accuracy than humans.

A new study published in the journal Suicide and Life-Threatening Behavior, for example, showed that machine learning is up to 93 percent accurate in identifying a suicidal person. The research, led by John Pestian, a professor at Cincinnati Children’s Hospital Medical Center, involved 379 teenage patients from three area hospitals.

Each patient completed standardized behavioral rating scales and participated in a semi-structured interview, answering five open-ended questions such as “Are you angry?” to stimulate conversation, according to a press release from the university.

The researchers analyzed both verbal and non-verbal language from the data, then sent the information through a machine-learning algorithm that was able to determine with remarkable accuracy whether the person was suicidal, mentally ill but not suicidal, or neither.
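As a rough illustration of this kind of three-way classification (not the study's actual model, which is not described in detail), a minimal classifier over numeric speech features might look like this; all features and labels below are invented:

```python
# Illustrative sketch: a nearest-centroid classifier that assigns a
# feature vector to one of three classes -- suicidal, mentally ill
# but not suicidal, or neither -- by distance to each class's mean.

def centroids(training):
    """training: {label: [feature_vectors]} -> {label: mean vector}."""
    out = {}
    for label, vectors in training.items():
        dims = len(vectors[0])
        out[label] = [sum(v[d] for v in vectors) / len(vectors)
                      for d in range(dims)]
    return out

def classify(features, cents):
    """Assign to the class whose centroid is nearest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cents, key=lambda label: dist(features, cents[label]))

# Invented two-dimensional feature vectors for three classes.
training = {
    "suicidal":     [[0.9, 0.8], [0.8, 0.9]],
    "mentally_ill": [[0.5, 0.4], [0.4, 0.5]],
    "neither":      [[0.1, 0.1], [0.2, 0.0]],
}
cents = centroids(training)
print(classify([0.85, 0.85], cents))
```

The real system extracts hundreds of verbal and non-verbal features from interview recordings before any such classification step.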

“These computational approaches provide novel opportunities to apply technological innovations in suicide care and prevention, and it surely is needed,” Pestian says in the press release.

In 2014, suicide was ranked as the tenth leading cause of death in the United States, but the No. 2 cause of death for people age 15 to 24, according to the American Association of Suicidology.

A study just published in the journal Psychological Bulletin further punctuated the need for better tools to help with suicide prevention. A meta-analysis of 365 studies conducted over the last 50 years found that the ability of mental health experts to predict if someone will attempt suicide is “no better than chance.”

“One of the major reasons for this is that researchers have almost always tried to use a single factor (e.g., a depression diagnosis) to predict these things,” says lead author Joseph Franklin of Harvard University in an email exchange with Singularity Hub.

Franklin says that the complex nature behind such thoughts and behaviors requires consideration of tens if not hundreds of factors to make accurate predictions. He and others argue in a correspondence piece published earlier this year in Psychological Medicine that machine learning and related techniques are an ideal option. A search engine using only one factor would be ineffective at returning results; the same is true of today’s attempts to predict suicidal behavior.

He notes that researchers in Boston, including colleague Matthew K. Nock at Harvard, have already used machine learning to predict suicidal behaviors with 70 to 85 percent accuracy. Calling the work “amazing,” Franklin notes that the research is still in the preliminary stages, with small sample sizes.

“The work by the Pestian group is also interesting, with their use of vocal patterns/natural language processing being unique from most other work in this area so far,” Franklin says, adding that there are also limits as to what can be drawn from their findings at this point. “Nevertheless, this is a very interesting line of work that also represents a sharp and promising departure from what the field has been doing for the past 50 years.”

Machine learning has yet to be used in therapy, according to Franklin, while most conventional treatments for suicide fall short.

“So even though several groups are on the verge of being able to accurately predict suicidality on the scale of entire healthcare systems [with AI], it’s unclear what we should do with these at-risk people to reduce their risk,” Franklin says.

To that end, Franklin and colleagues have developed a free app called Tec-Tec that appears effective at “reducing self-cutting, suicide plans, and suicidal behaviors.”

The app is based on a psychological technique called evaluative conditioning. By continually pairing certain words and images, it changes a user’s associations with certain objects and concepts, according to the website. Within a game-like design, Tec-Tec seeks to weaken associations with factors that may increase the risk of self-injurious behaviors.

“We’re working on [additional] trials and soon hope to use machine learning to tailor the app to each individual over time,” Franklin says, “and to connect the people most in need with the app.”

Catching schizophrenic speech

Last year, researchers in a study published in the journal Schizophrenia also had promising results in using machine-learning algorithms to predict later psychosis onset in high-risk youths.

Thirty-four participants were interviewed and assessed quarterly for two and a half years. Using automated analysis, transcripts of the interviews were evaluated for coherence and two syntactic markers of speech complexity—the length of a sentence and the number of clauses it contained.

The speech features analyzed by the computer predicted later psychosis development with 100 percent accuracy, outperforming classification from clinical interviews, according to the researchers.
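The two syntactic markers named above, plus a coherence measure, are simple enough to sketch. The following is a toy illustration, not the study's pipeline: the clause-joiner list is invented, and the coherence proxy here is plain word overlap rather than the semantic coherence analysis the researchers used.

```python
# Illustrative sketch: extract sentence length, a rough clause count,
# and a crude coherence proxy (word overlap between consecutive
# sentences) from an interview transcript.

def clause_count(sentence):
    """Rough clause estimate: 1 plus occurrences of common joiners."""
    joiners = {"and", "but", "because", "that", "which", "who", "when"}
    return 1 + sum(1 for w in sentence.lower().split() if w in joiners)

def speech_features(sentences):
    lengths = [len(s.split()) for s in sentences]
    clauses = [clause_count(s) for s in sentences]
    # Coherence proxy: Jaccard word overlap of consecutive sentences.
    overlaps = []
    for a, b in zip(sentences, sentences[1:]):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        overlaps.append(len(wa & wb) / len(wa | wb))
    return {
        "mean_sentence_length": sum(lengths) / len(lengths),
        "mean_clause_count": sum(clauses) / len(clauses),
        "mean_coherence": sum(overlaps) / len(overlaps) if overlaps else 0.0,
    }

transcript = [
    "I went to the store because I needed food",
    "The store was closed when I arrived",
]
print(speech_features(transcript))
```

Feature vectors like these, computed per interview, are what a downstream classifier would consume.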

“Recent developments in computer science, including natural language processing, could provide the foundation for future development of objective clinical tests for psychiatry,” they wrote.

Diagnosing ADHD early

In a project now under way, scientists at the University of Texas at Arlington and Yale University will combine computing power and psychiatric expertise to design an AI system that can assess a common disorder among youth: attention-deficit/hyperactivity disorder (ADHD), which the Centers for Disease Control and Prevention (CDC) says affects 8.5 percent of children ages 8 to 15.

The research uses “the latest methods in computer vision, machine learning and data mining” to assess children while they are performing certain physical and computer exercises, according to a press release from UTA. The exercises test a child’s attention, decision-making and ability to manage emotions. The data are then analyzed to determine the best type of intervention.

“We believe that the proposed computational methods will help provide quantifiable early diagnosis and allow us to monitor progress over time. In particular, it will help children overcome learning difficulties and lead them to healthy and productive lives,” says Fillia Makedon, a professor in UTA’s Department of Computer Science and Engineering.

Keeping an eye out for autism

Meanwhile, a group at the University of Buffalo has developed a mobile app that can detect autism spectrum disorder (ASD) in children as young as two years old with nearly 94 percent accuracy. The results were recently presented at the IEEE Wireless Health conference at the National Institutes of Health.

The app tracks eye movements of a child looking at pictures of social scenes, such as those showing multiple people, according to a press release from the university. The eye movements of someone with ASD are often different from those of a person without autism.

About one in 68 children in the United States has been diagnosed with ASD, according to the CDC. The UB study included 32 children ranging in age from two to 10. A larger study is planned for the future.

It takes less than a minute to administer the test, which can be done by a parent at home to determine if a child requires professional evaluation.

“This technology fills the gap between someone suffering from autism to diagnosis and treatment,” says Wenyao Xu, an assistant professor in UB’s School of Engineering and Applied Sciences.

Technology that helps treat our most vulnerable populations? Turns out, there is an app for that.

New AI-Based Search Engines are a “Game Changer” for Science Research

November 14, 2016

By Nicola Jones, Nature magazine

A free AI-based scholarly search engine that aims to outdo Google Scholar is expanding its corpus of papers to cover some 10 million research articles in computer science and neuroscience, its creators announced on 11 November. Since its launch last year, it has been joined by several other AI-based academic search engines, most notably a relaunched effort from computing giant Microsoft.

Semantic Scholar, from the non-profit Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington, unveiled its new format at the Society for Neuroscience annual meeting in San Diego. Some scientists who were given an early view of the site are impressed. “This is a game changer,” says Andrew Huberman, a neurobiologist at Stanford University, California. “It leads you through what is otherwise a pretty dense jungle of information.”

The search engine first launched in November 2015, promising to sort and rank academic papers using a more sophisticated understanding of their content and context. The popular Google Scholar has access to about 200 million documents and can scan articles that are behind paywalls, but it searches merely by keywords. By contrast, Semantic Scholar can, for example, assess which citations to a paper are most meaningful, and rank papers by how quickly citations are rising—a measure of how ‘hot’ they are.

When first launched, Semantic Scholar was restricted to 3 million papers in the field of computer science. Thanks in part to a collaboration with AI2’s sister organization, the Allen Institute for Brain Science, the site has now added millions more papers and new filters catering specifically to neurology and medicine; these filters enable searches based, for example, on which part of the brain or cell type a paper investigates, which model organisms were studied and what methodologies were used. Next year, AI2 aims to index all of PubMed and expand to all the medical sciences, says chief executive Oren Etzioni.

“The one I still use the most is Google Scholar,” says Jose Manuel Gómez-Pérez, who works on semantic searching for the software company Expert System in Madrid. “But there is a lot of potential here.”

Microsoft’s revival

Semantic Scholar is not the only AI-based search engine around, however. Computing giant Microsoft quietly released its own AI scholarly search tool, Microsoft Academic, to the public this May, replacing its predecessor, Microsoft Academic Search, which the company stopped adding to in 2012.

Microsoft’s academic search algorithms and data are available for researchers through an application programming interface (API) and the Open Academic Society, a partnership between Microsoft Research, AI2 and others. “The more people working on this the better,” says Kuansan Wang, who is in charge of Microsoft’s effort. He says that Semantic Scholar is going deeper into natural-language processing—that is, understanding the meaning of full sentences in papers and queries—but that Microsoft’s tool, which is powered by the semantic search capabilities of the firm’s web-search engine Bing, covers more ground, with 160 million publications.

Like Semantic Scholar, Microsoft Academic provides useful (if less extensive) filters, including by author, journal or field of study. And it compiles a leaderboard of the most influential scientists in each subdiscipline. These are the people with the most ‘important’ publications in the field, as determined by a recursive algorithm (freely available) that judges papers to be important if they are cited by other important papers. The top neuroscientist for the past six months, according to Microsoft Academic, is Clifford Jack of the Mayo Clinic, in Rochester, Minnesota.
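Microsoft has not detailed the algorithm here, but that recursive description (papers are important if cited by other important papers) matches eigenvector-centrality schemes such as PageRank. A minimal sketch over a made-up four-paper citation graph, assuming a standard damped power iteration:

```python
# Toy citation graph: cites[p] lists the papers that p cites.
cites = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": ["C", "A"],
}

def importance(cites, damping=0.85, iters=50):
    """PageRank-style importance: a paper inherits rank from its citers."""
    papers = list(cites)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in papers}
        for p, refs in cites.items():
            if refs:  # split p's rank among the papers it cites
                share = rank[p] / len(refs)
                for q in refs:
                    new[q] += damping * share
            else:     # a paper citing nothing spreads its rank evenly
                for q in papers:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

ranks = importance(cites)
# "C" is cited by every other paper, so it ends up ranked highest.
```

The real ranking surely uses richer signals, but the self-referential core is the same: importance flows along citation edges until the scores stabilize.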

Other scholars say that they are impressed by Microsoft’s effort. The search engine is getting close to combining the advantages of Google Scholar’s massive scope with the more-structured results of subscription bibliometric databases such as Scopus and the Web of Science, says Anne-Wil Harzing, who studies science metrics at Middlesex University, UK, and has analysed the new product. “The Microsoft Academic phoenix is undeniably growing wings,” she says. Microsoft Research says it is working on a personalizable version—where users can sign in so that Microsoft can bring applicable new papers to their attention or notify them of citations to their own work—by early next year.

Other companies and academic institutions are also developing AI-driven software to delve more deeply into content found online. The Max Planck Institute for Informatics, based in Saarbrücken, Germany, for example, is developing an engine called DeepLife specifically for the health and life sciences. “These are research prototypes rather than sustainable long-term efforts,” says Etzioni.

In the long term, AI2 aims to create a system that will answer science questions, propose new experimental designs or throw up useful hypotheses. “In 20 years’ time, AI will be able to read—and more importantly, understand—scientific text,” Etzioni says.

This article is reproduced with permission and was first published on November 11, 2016.

What artificial intelligence will look like in 2030

November 14, 2016


Artificial intelligence (AI) has already transformed our lives — from the autonomous cars on the roads to the robotic vacuums and smart thermostats in our homes. Over the next 15 years, AI technologies will continue to make inroads in nearly every area of our lives, from education to entertainment, health care to security.

The question is, are we ready? Do we have the answers to the legal and ethical quandaries that will certainly arise from the increasing integration of AI into our daily lives? Are we even asking the right questions?

Now, a panel of academics and industry thinkers has looked ahead to 2030 to forecast how advances in AI might affect life in a typical North American city and spark discussion about how to ensure the safe, fair, and beneficial development of these rapidly developing technologies.

“Artificial Intelligence and Life in 2030” is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford University to inform debate and provide guidance on the ethical development of smart software, sensors, and machines. Every five years for the next 100 years, the AI100 project will release a report that evaluates the status of AI technologies and their potential impact on the world.

[Image: AI Landscape: Global Quarterly Financing History. Source: CB Insights]


“Now is the time to consider the design, ethical, and policy challenges that AI technologies raise,” said Barbara Grosz, the Harvard University computer scientist who chairs the AI100 standing committee. “If we tackle these issues now and take them seriously, we will have systems that are better designed in the future and more appropriate policies to guide their use.”

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas, Austin, and chair of the report. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The report investigates eight areas of human activity in which AI technologies are already affecting urban life and will be even more pervasive by 2030: transportation; home/service robots; health care; education; entertainment; low-resource communities; public safety and security; and employment and the workplace.

Some of the biggest challenges in the next 15 years will be creating safe and reliable hardware for autonomous cars and health care robots; gaining public trust for AI systems, especially in low-resource communities; and overcoming fears that the technology will marginalize humans in the workplace.

Issues of liability and accountability also arise with questions such as: Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can we prevent AI applications from being used for racial discrimination or financial cheating?

The report doesn’t offer solutions but rather is intended to start a conversation between scientists, ethicists, policymakers, industry leaders, and the general public.

Grosz said she hopes the AI100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”


Will the coming robot nanny era turn us into technophiles?

November 14, 2016

[Image: a vector illustration of a robot ironing clothes]

Robots intrigue us. We all like them. But most of us don’t love them. That may dramatically change over the next 10 years as the “robot nanny” makes its way into our households.

In as little as a decade, affordable robots that can bottle-feed babies, change diapers and put a child to sleep might be here. The human-machine bond that a new generation of kids grows up with may be unbreakable. We may end up literally loving our machines almost the way we do our mothers and fathers.

I’ve already seen some of this bonding in action. I have a four-foot interactive Meccanoid robot aboard my Immortality Bus, which I’ve occasionally used for my presidential campaign. The robot can do about 1,000 functions, including basic interaction with people, like talking, answering questions and making wisecracks. When my five-year-old rides with me on the bus, she adores it. After being introduced to it, she obsessively wanted to watch Inspector Gadget videos and read books on robots.

My two daughters (the other one is two years old) have always been near technology, and both were able to successfully navigate YouTube watching videos on iPhones by the time they were 12 months old. Yet, while my kids love the iPhone, and they want to use it regularly, it doesn’t bond them to technology in a maternal sense like the Meccanoid robot does. More importantly, the smartphone doesn’t bond them to technology in an anthropomorphic sense — where one gives technology human attributes, like personalities.

My kids instinctively know the iPhone is a tool. But Meccanoid is a friend. If you kick the robot, leave it in the rain or lock it away in the closet, my kids will freak out. To them, the robot is personal — and the love is real.

If some of this reminds you of Rosie the Robot—the cleaning, cooking nanny robot from The Jetsons—you’re not alone. Humans will soon regularly engage with machines as fellow companions in life, giving psychologists, anthropologists and Congress new ideas to consider. There is already chatter across the internet in the transhumanist community about humans wanting the right to marry machines—and all that goes with that. In fact, in the Transhumanist Bill of Rights I delivered to Washington, DC, we explicitly aim to give future conscious beings personhood—as well as other rights covered by the United Nations Universal Declaration of Human Rights, adopted in 1948.

Despite the thorniness of some of the issues between humans and robots, we are entering this robot age for one simple reason: functionality. Robots will make our lives far easier. In fact, the robot nanny is a prime example: It will be adored by parents—and likely much more so than the human nannies who are known to call in sick, show up to work late and, on occasion, sue their employers when they hurt themselves on the job. Robot nannies will replace human nannies the way the automobile replaced the horse and cart, giving parents new free time and the opportunity to pursue careers.

One major factor going for the development of robot nannies is their cost effectiveness. I’ve been either watching my kids or hiring nannies for the last five years. About $200,000 later (which is what 8-hour weekday childcare costs in San Francisco for five years), it’s safe to say a robot nanny is not going to cost as much as I’ve spent. And once my kids are old enough and no longer need immediate supervision, I’ll be left with the robot to sell or give to a family in need.

But essential questions remain: Will some robots be allowed to watch kids when parents go out for the night or off to work — and other robots not? Who will make that determination? The parent? The manufacturer? The government?

Will robots that can perform CPR, put out fires, squish poisonous spiders and perform the Heimlich maneuver on a choking child be authorized while others are not? Will robots that can detect smoke and carbon monoxide, where others can’t, make the “nanny-worthy” grade?

And then come the questions ethicists and programmers are already facing with driverless cars. If an autonomous vehicle is forced into a choice to hit a young family of five or an old man, what does it choose? Nanny robots may one day be programmed with similar instructions and values.

But what if a robot nanny is watching twins, and both start choking at the same time? Which child will it choose to help first? Will programmers allow parents to program which child should be helped first?

The questions are endless. I suspect, like the U.S. Department of Transportation’s National Highway Traffic Safety Administration’s Federal Motor Vehicle Safety Standards and Regulations, a robot equivalent will have to be established.

It’s been years since American households have gotten a new fixture that nearly all of them must have. One of the last major ones was the computer—and now nearly 85 percent of American households have one. I suspect nanny robots will be one of the next commonplace items in our homes. And our love for them will grow as they shape and play an integral part in the next generation’s upbringing.


Bill Gates talks about why artificial intelligence is nearly here and how to solve two big problems it creates

July 10, 2016


Bill Gates is excited about the rise of artificial intelligence but acknowledges that the arrival of machines with greater-than-human capabilities will create some unique challenges.

After years of working on the building blocks of speech recognition and computer vision, Gates said enough progress has been made to ensure that in the next 10 years there will be robots to do tasks like driving and warehouse work as well as machines that can outpace humans in certain areas of knowledge.

“The dream is finally arriving,” Gates said, speaking with wife Melinda Gates on Wednesday at the Code Conference. “This is what it was all leading up to.”

However, as he said in an interview with Recode last year, such machine capabilities will pose two big problems.

The first is that it will eliminate a lot of existing types of jobs. Gates said that creates a need for a lot of retraining, but noted that until schools have class sizes under 10 and people can retire at a reasonable age and take ample vacation, he isn’t worried about a lack of need for human labor.

The second issue is, of course, making sure humans remain in control of the machines. Gates has talked about that in the past, saying that he plans to spend time with people who have ideas on how to address that issue, noting work being done at Stanford, among other places.

And, in Gatesian fashion, he suggested a pair of books that people should read: Nick Bostrom’s “Superintelligence” and Pedro Domingos’ “The Master Algorithm.”

Melinda Gates noted that you can tell a lot about where her husband’s interest is by the books he has been reading. “There have been a lot of AI books,” she said.

Why Haven’t We Met Aliens Yet? Because They’ve Evolved into AI

June 18, 2016


While traveling in Western Samoa many years ago, I met a young Harvard University graduate student researching ants. He invited me on a hike into the jungles to assist with his search for the tiny insect. He told me his goal was to discover a new species of ant, in hopes it might be named after him one day.

Whenever I look up at the stars at night pondering the cosmos, I think of my ant collector friend, kneeling in the jungle with a magnifying glass, scouring the earth. I think of him, because I believe in aliens—and I’ve often wondered if aliens are doing the same to us.

Believing in aliens—or in insanely smart artificial intelligences existing in the universe—has become very fashionable in the last 10 years. And discussing its central dilemma, the Fermi paradox, has become even more so. The Fermi paradox starts from the observation that the universe is very big—with maybe a trillion galaxies that might contain 500 billion stars and planets each—and that out of that insanely large number, it would take only a tiny fraction of them to have habitable planets capable of bringing forth life.

Whatever you think, the numbers point to the insane fact that aliens don’t just exist, but probably billions of species of aliens exist. And the Fermi paradox asks: With so many alien civilizations out there, why haven’t we found them? Or why haven’t they found us?
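The arithmetic behind that claim is easy to check. A quick sketch, taking the article’s figures at face value and plugging in deliberately tiny, made-up fractions for habitability and life (both fractions are assumptions for illustration, not numbers from the article):

```python
galaxies = 1e12          # article's estimate: ~a trillion galaxies
stars_per_galaxy = 5e11  # article's estimate: ~500 billion stars each
total_stars = galaxies * stars_per_galaxy  # ~5e23 stars

# Illustrative assumptions, chosen to be pessimistic:
habitable_fraction = 1e-6  # one star in a million hosts a habitable planet
life_fraction = 1e-6       # one habitable planet in a million develops life

civilizations = total_stars * habitable_fraction * life_fraction
print(f"{civilizations:.0e}")  # ~5e11: hundreds of billions of living worlds
```

Even with one-in-a-trillion combined odds, the sheer number of stars leaves hundreds of billions of candidates, which is the tension the paradox turns on.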

The Fermi paradox’s Wikipedia page has dozens of answers about why we haven’t heard from superintelligent aliens, ranging from “it is too expensive to spread physically throughout the galaxy” to “intelligent civilizations are too far apart in space or time” to crazy talk like “it is the nature of intelligent life to destroy itself.”


Given that our planet is only 4.5 billion years old in a universe that many experts think is pushing 14 billion years, it’s safe to say most aliens are way smarter than us. After all, there is a massive divide between qualities of intelligence. There’s ant-level intelligence. There’s human intelligence. And then there’s the hypothetical intelligence of aliens—presumably ones who have reached the singularity.

The singularity, says Kevin Kelly, co-founder of Wired magazine, is the point at which “all the change in the last million years will be superseded by the change in the next five minutes.”

If Kelly is correct about how fast the singularity accelerates change—and I think he is—then in all probability many alien species will be trillions of times more intelligent than people.

Put yourself in the shoes of extraterrestrial intelligence and consider what that means. If you were a trillion times smarter than a human being, would you notice the human race at all? Or if you did, would you care? After all, do you notice the 100 trillion or more microbes in your body? No, not unless they happen to give you health problems, as E. coli and other pathogens do. More on that later.

One of the big problems with our understanding of aliens has to do with Hollywood. Movies and television have led us to think of aliens as green, slimy creatures traveling around in flying saucers. Nonsense. I think if advanced aliens have just 250 years more evolution than us, they almost certainly won’t be static physical beings anymore—at least not in the molecular sense. But they won’t be artificial intelligences living in machines either, which is what I believe humans are evolving into this century. No, becoming machine intelligence is just another passing phase of evolution—one that might only last a few decades for humans, if that.

Truly advanced intelligence will likely be organized on the atomic scale, and likely on scales far smaller. Aliens will evolve until they are pure, willful conscious energy—and maybe even something beyond that. They long ago realized that biology, and the ones and zeroes of machines, were simply too rudimentary to be very functional. Such advanced intelligence will be spirit-like—maybe even on par with some people’s ideas of ghosts.

On a long enough time horizon, every biological species would at some point evolve into machines, and then into intelligent energy with a consciousness. Such brilliant life might have the ability to span millions of light years nearly instantaneously throughout the universe, morphing into whatever form it wanted.

For all evolving life, the key to attaining the highest possible form of being and intelligence is to intimately become and control the best universal elements—those conducive to such goals, especially personal power over nature. Everything else in advanced alien evolution is discarded as nonfunctional and nonessential.

All intelligence in the universe, like all matter and energy, follows patterns based on the rules of physics. We engage—and often battle—those patterns and rules until we understand them and utilize them as best we can. Such is evolution. And the universe seems imbued with a drive for life to arise and evolve, as the work of MIT physicist Jeremy England suggests (see the Quanta Magazine article “A New Physics Theory of Life”).

Back to my ant collector friend in Western Samoa. It would be nice to believe that the difference between the ant collector and the ant’s intelligence was the same between humans and very sophisticated aliens. Sadly, that is not the case. Not even close.

Given the acceleration of intelligence, the gap between us and a species with just 100 more years of evolution could be a billion times the gap between an ant and a human. Now consider an added billion years of evolution. This is way beyond comparing apples and oranges.

The crux of the problem with aliens and humans is that we’re not hearing or seeing them because we don’t have ways to understand their language. It’s simply beyond our comprehension and physical abilities. Millions of singularities have already happened, but we’re similar to blind bacteria in our bodies running around cluelessly.

The good news, though, is we’re about to make contact with the best of the aliens out there. Or rather they’re about to school us. The reason: The universe is precious, and in approximately a century’s time, humans may be able to conduct physics experiments that could level the entire universe—such as building massive particle accelerators that make the God particle swallow the cosmos whole.

Like a grumpy landlord at the door, alien intelligence will make contact and let us know what we can and can’t do when it comes to messing with the real estate of the universe. Knock. Knock.

Zoltan Istvan is a futurist, journalist, and author of the novel The Transhumanist Wager. He writes an occasional column for Motherboard in which he ruminates on the future beyond natural human ability.