The meaning of life in a world without work

May 18, 2017

Most jobs that exist today might disappear within decades. As artificial intelligence outperforms humans in more and more tasks, it will replace humans in more and more jobs. Many new professions are likely to appear: virtual-world designers, for example. But such professions will probably require more creativity and flexibility, and it is unclear whether 40-year-old unemployed taxi drivers or insurance agents will be able to reinvent themselves as virtual-world designers (try to imagine a virtual world created by an insurance agent!). And even if the ex-insurance agent somehow makes the transition into a virtual-world designer, the pace of progress is such that within another decade he might have to reinvent himself yet again.

The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms. Consequently, by 2050 a new class of people might emerge – the useless class. People who are not just unemployed, but unemployable.

The same technology that renders humans useless might also make it feasible to feed and support the unemployable masses through some scheme of universal basic income. The real problem will then be to keep the masses occupied and content. People must engage in purposeful activities, or they go crazy. So what will the useless class do all day?

One answer might be computer games. Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the “real world” outside. This, in fact, is a very old solution. For thousands of years, billions of people have found meaning in playing virtual reality games. In the past, we have called these virtual reality games “religions”.

What is a religion if not a big virtual reality game played by millions of people together? Religions such as Islam and Christianity invent imaginary laws, such as “don’t eat pork”, “repeat the same prayers a set number of times each day”, “don’t have sex with somebody from your own gender” and so forth. These laws exist only in the human imagination. No natural law requires the repetition of magical formulas, and no natural law forbids homosexuality or eating pork. Muslims and Christians go through life trying to gain points in their favorite virtual reality game. If you pray every day, you get points. If you forget to pray, you lose points. If by the end of your life you gain enough points, then after you die you go to the next level of the game (aka heaven).

As religions show us, the virtual reality need not be encased inside an isolated box. Rather, it can be superimposed on the physical reality. In the past this was done with the human imagination and with sacred books, and in the 21st century it can be done with smartphones.

Some time ago I went with my six-year-old nephew Matan to hunt for Pokémon. As we walked down the street, Matan kept looking at his smartphone, which enabled him to spot Pokémon all around us. I didn’t see any Pokémon at all, because I didn’t carry a smartphone. Then we saw two other kids on the street who were hunting the same Pokémon, and we almost got into a fight with them. It struck me how similar the situation was to the conflict between Jews and Muslims about the holy city of Jerusalem. When you look at the objective reality of Jerusalem, all you see are stones and buildings. There is no holiness anywhere. But when you look through the medium of smart books (such as the Bible and the Qur’an), you see holy places and angels everywhere.

The idea of finding meaning in life by playing virtual reality games is of course common not just to religions, but also to secular ideologies and lifestyles. Consumerism too is a virtual reality game. You gain points by acquiring new cars, buying expensive brands and taking vacations abroad, and if you have more points than everybody else, you tell yourself you won the game.

You might object that people really enjoy their cars and vacations. That’s certainly true. But the religious really enjoy praying and performing ceremonies, and my nephew really enjoys hunting Pokémon. In the end, the real action always takes place inside the human brain. Does it matter whether the neurons are stimulated by observing pixels on a computer screen, by looking outside the windows of a Caribbean resort, or by seeing heaven in our mind’s eye? In all cases, the meaning we ascribe to what we see is generated by our own minds. It is not really “out there”. To the best of our scientific knowledge, human life has no meaning. The meaning of life is always a fictional story created by us humans.

In his groundbreaking essay, Deep Play: Notes on the Balinese Cockfight (1973), the anthropologist Clifford Geertz describes how on the island of Bali, people spent much time and money betting on cockfights. The betting and the fights involved elaborate rituals, and the outcomes had substantial impact on the social, economic and political standing of both players and spectators.

The cockfights were so important to the Balinese that when the Indonesian government declared the practice illegal, people ignored the law and risked arrest and hefty fines. For the Balinese, cockfights were “deep play” – a made-up game that is invested with so much meaning that it becomes reality. A Balinese anthropologist could arguably have written similar essays on football in Argentina or Judaism in Israel.

Indeed, one particularly interesting section of Israeli society provides a unique laboratory for how to live a contented life in a post-work world. In Israel, a significant percentage of ultra-orthodox Jewish men never work. They spend their entire lives studying holy scriptures and performing religious rituals. They and their families don’t starve to death partly because the wives often work, and partly because the government provides them with generous subsidies. Though they usually live in poverty, government support means that they never lack for the basic necessities of life.

That’s universal basic income in action. Though they are poor and never work, in survey after survey these ultra-orthodox Jewish men report higher levels of life satisfaction than any other section of Israeli society. In global surveys of life satisfaction, Israel consistently ranks near the top, thanks in part to the contribution of these unemployed deep players.

You don’t need to go all the way to Israel to see the world of post-work. If you have at home a teenage son who likes computer games, you can conduct your own experiment. Provide him with a minimum subsidy of Coke and pizza, and then remove all demands for work and all parental supervision. The likely outcome is that he will remain in his room for days, glued to the screen. He won’t do any homework or housework, will skip school, skip meals and even skip showers and sleep. Yet he is unlikely to suffer from boredom or a sense of purposelessness. At least not in the short term.

Hence virtual realities are likely to be key to providing meaning to the useless class of the post-work world. Maybe these virtual realities will be generated inside computers. Maybe they will be generated outside computers, in the shape of new religions and ideologies. Maybe it will be a combination of the two. The possibilities are endless, and nobody knows for sure what kind of deep plays will engage us in 2050.

In any case, the end of work will not necessarily mean the end of meaning, because meaning is generated by imagining rather than by working. Work is essential for meaning only according to some ideologies and lifestyles. Eighteenth-century English country squires, present-day ultra-orthodox Jews, and children in all cultures and eras have found a lot of interest and meaning in life even without working. People in 2050 will probably be able to play deeper games and to construct more complex virtual worlds than in any previous time in history.

But what about truth? What about reality? Do we really want to live in a world in which billions of people are immersed in fantasies, pursuing make-believe goals and obeying imaginary laws? Well, like it or not, that’s the world we have been living in for thousands of years already.

  • Yuval Noah Harari lectures at the Hebrew University of Jerusalem and is the author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow

https://www.theguardian.com/technology/2017/may/08/virtual-reality-religion-robots-sapiens-book#img-1


Artificial intelligence could build new drugs faster than any human team

May 7, 2017

Artificial intelligence algorithms are being taught to generate art, human voices, and even works of fiction all on their own—why not give them a shot at building new ways to treat disease?

Atomwise, a San Francisco-based startup and Y Combinator alum, has built a system it calls AtomNet (pdf), which attempts to generate potential drugs for diseases like Ebola and multiple sclerosis. The company has invited academic and non-profit researchers from around the country to detail which diseases they’re trying to generate treatments for, so AtomNet can take a shot. The academic labs will receive 72 different drugs that the neural network has found to have the highest probability of interacting with the disease, based on the molecular data it’s seen.

Atomwise’s system only generates potential drugs—the compounds created by the neural network aren’t guaranteed to be safe, and need to go through the same drug trials and safety checks as anything else on the market. The company believes that the speed at which it can generate trial-ready drugs based on previous safe molecular interactions is what sets it apart.

Atomwise touts two projects that show the potential of AtomNet, drugs for multiple sclerosis and Ebola. The MS drug has been licensed to an undisclosed UK pharmacology firm, according to Atomwise, and the Ebola drug is being prepared for submission to a peer-reviewed publication.

Alexander Levy, the company’s COO and cofounder, said that AtomNet learns the interactions between molecules much as artificial intelligence learns to recognize images. Image recognition reduces the patterns in images’ pixels to simpler representations, teaching itself the bounds of an idea like a horse or a desk lamp by seeing hundreds or thousands of examples.

“It turns out that the same thing that works in images also works in chemistry,” Levy says. “You can take an interaction between a drug and a huge biological system and you can decompose that to smaller and smaller interactive groups. If you study enough historical examples of molecules … and we’ve studied tens of millions of those, you can then make predictions that are extremely accurate yet also extremely fast.”

Atomwise isn’t the only company working on this technique. Startup BenevolentAI, working with Johnson & Johnson subsidiary Janssen, is also developing new ways to find drugs. TwoXAR is working on an AI-driven glaucoma medication, and Berg is working on algorithmically-built cancer treatments.

One of Atomwise’s advantages, Levy says, is that the network works with 3D models. To generate drug candidates, the system starts with a 3D model of a target molecule—for example, a protein that gives a cancer cell a growth advantage. The neural network then generates a series of synthetic compounds (simulated drugs) and predicts how likely each one is to interact with the target. Compounds that are likely to interact can then be synthesized and tested.
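As a rough illustration of the screening loop described above, the ranking step might be sketched as follows. This is not Atomwise’s actual pipeline: the feature vectors, the overlap-based scoring function, and all names here are invented stand-ins for a trained neural network scoring real 3D structures.

```python
# Toy sketch of a virtual-screening loop: score each candidate compound
# against a target "pocket" and return the best hits for wet-lab testing.
# The scoring function is a hypothetical stand-in for a learned model.

def toy_interaction_score(target_features, compound_features):
    """Stand-in for a learned binding predictor: normalized feature overlap."""
    overlap = sum(min(t, c) for t, c in zip(target_features, compound_features))
    total = sum(target_features) or 1  # avoid division by zero
    return overlap / total

def screen_candidates(target_features, candidates, top_n=3):
    """Rank candidate compounds by predicted interaction score."""
    scored = sorted(
        candidates.items(),
        key=lambda item: toy_interaction_score(target_features, item[1]),
        reverse=True,
    )
    # Only the top-scoring compounds would move on to synthesis and testing.
    return [name for name, _ in scored[:top_n]]

# Example: a fake binding pocket and three fake compounds.
pocket = [3, 0, 2, 1]
library = {
    "cmpd_A": [3, 0, 2, 1],   # matches the pocket exactly
    "cmpd_B": [0, 3, 0, 0],   # poor overlap
    "cmpd_C": [2, 0, 1, 1],   # partial overlap
}
print(screen_candidates(pocket, library, top_n=2))  # ['cmpd_A', 'cmpd_C']
```

In the real setting, the scoring function would be a deep network trained on tens of millions of known molecular interactions, and the feature vectors would encode 3D structure rather than toy integers.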

Levy likens the idea to the automated systems used to model airplane aerodynamics or computer chip design, where millions of scenarios are mapped out within software that accurately represents how the physical world works.

“Imagine if you knew what a biological mechanism looked like, atom by atom. Could you reason your way to a compound that did the thing that you wanted?” Levy says.


10 Tech Trends That Made the World Better in 2016

March 30, 2017

2016 was an incredible year for technology, and for humanity.

Despite all the negative political-related news, there were 10 tech trends this year that positively transformed humanity.

For this “2017 Kick-Off” post, I reviewed 52 weeks of science and technology breakthroughs, and categorized them into the top 10 tech trends changing our world.

I’m blown away by how palpable the feeling of exponential change has become.

I’m also certain that 99.99% of humanity doesn’t understand or appreciate the ramifications of what is coming.

In this post, enjoy the top 10 tech trends of the past 12 months and why they are important to you.

Let’s dive in…

1. We Are Hyper-Connecting the World

In 2010, 1.8 billion people were connected. Today, that number is about 3 billion, and by 2022 – 2025, that number will expand to include every human on the planet, approaching 8 billion humans.

Unlike when I was connected 20 years ago at 9,600 baud via AOL, the world today is coming online at one megabit per second or greater, with access to the world’s information on Google, access to the world’s products on Amazon, access to massive computing power on AWS and artificial intelligence with Watson… not to mention crowdfunding for capital and crowdsourcing for expertise.

Looking back at 2016, you can feel the acceleration. Here are five stories that highlight the major advances in our race for global connectivity:

a) Google’s 5G Solar Drones Internet Service: Project Skybender is Google’s secretive 5G Internet drone initiative. News broke this year that they have been testing these solar-powered drones at Spaceport America in New Mexico to explore ways to deliver high-speed Internet from the air. Their purported millimeter wave technology could deliver data from drones up to 40 times faster than 4G.

b) Facebook’s Solar Drone Internet Service: Even before Google, Facebook had been experimenting with a solar-powered drone, also for the express purpose of providing Internet to billions. The drone has the wingspan of an airliner and flies on roughly the power of three blowdryers.

c) ViaSat Plans 1 Terabit Internet Service: ViaSat, a U.S.-based satellite company, has teamed up with Boeing to launch three satellites to provide 1 terabit-per-second Internet connections to remote areas, aircraft and maritime vehicles. ViaSat is scheduled to launch its satellite ViaSat2 in early 2017.

d) OneWeb Raises $1.2B for 900 Satellite Constellation: An ambitious low-Earth orbit satellite system proposed by my friends Greg Wyler, Paul Jacobs and Richard Branson just closed $1.2 billion in financing. This 900-satellite system will offer global internet services as soon as 2019.

e) Musk Announces 4,425 Internet Satellite System: Perhaps the most ambitious plan for global internet domination was proposed this year by SpaceX founder Elon Musk, with plans for SpaceX to deploy a 4,425 low-Earth orbit satellite system to blanket the entire planet in broadband.

2. Solar/Renewables Cheaper Than Coal

We’ve just exceeded a historic inflection point. 2016 was the year solar and renewable energy became cheaper than coal.

In December, the World Economic Forum reported that solar and wind energy is now the same price or cheaper than new fossil fuel capacity in more than 30 countries.

“As prices for solar and wind power continue their precipitous fall, two-thirds of all nations will reach the point known as ‘grid parity’ within a few years, even without subsidies,” they added.

This is one of the most important developments in the history of humanity, and this year marked a number of major milestones for renewable energy.

Here are 10 data points (stories) I’ve hand-picked to hammer home the historic nature of this 2016 achievement.

a) 25 percent of the World’s Power Comes From Renewables: REN21, a global renewable energy policy network, published a report showing that a quarter of the world’s power now comes from renewable energy. International investment in renewable energy reached $286 billion last year (with solar accounting for over $160b of this), and it’s accelerating.

b) In India, Solar Is Now Cheaper Than Coal: An amazing milestone indeed, and India is now on track to deploy >100 gigawatts of solar power by 2022.

c) The UK Is Generating More Energy From Solar Than Coal: For the first time in history, the UK this year produced an estimated 6,964 GWh of electricity from solar cells, about 10% more than the 6,342 GWh generated from coal.

d) Coal Plants Being Replaced by Solar Farms: The Nanticoke Generating Station in Ontario, once North America’s largest coal plant, will be turned into a solar farm.

e) Coal Will Never Recover: The coal industry, once the backbone of U.S. energy, is fading fast on account of renewables like solar and wind. Official and expert reports now state that it will never recover (e.g., coal power generation in Texas is down from 39% in early 2015 to 24.8% in May 2016).

f) Scotland Generated 106% of Its Electricity From Wind: This year, high winds boosted renewable energy output to provide 106% of Scotland’s electricity needs for a day.

g) Costa Rica Ran on Renewables for 2+ Months: The country ran on 100% renewable energy for 76 days.

h) Google to Run 100% on Renewable Energy: Google has announced its entire global business will be powered by renewable energy in 2017.

i) Las Vegas’ City Government Meets Goal of 100% Power by Renewables: Las Vegas is now the largest city government in the country to run entirely on renewable energy.

j) Tesla’s Gigafactory: Tesla’s $5 billion structure in Nevada will produce 500,000 lithium-ion battery packs annually as well as Tesla’s Model 3 vehicle. It is now over 30 percent complete… the 10 million square foot structure is set to be done by 2020. Musk projected that a total of 100 Gigafactories could provide enough storage capacity to run the entire planet on renewables.

3. Glimpsing the End of Cancer and Disease

Though it may seem hard to believe, the end of cancer and disease is near.

Scientists and researchers have been working diligently to find novel approaches to combating these diseases, and 2016 saw some extraordinary progress in this regard.

Here are my top 10 picks that give me great faith in our ability to cure cancer and most diseases:

a) Cancer Immunotherapy Makes Strides (Extraordinary Results): Immunotherapy involves using a patient’s own immune system (in this case, T cells) to fight cancer. Doctors remove immune cells from patients, tag them with “receptor” molecules that target the specific cancer, and then infuse the cells back into the body. During the study, 94% of patients with acute lymphoblastic leukemia (ALL) saw symptoms vanish completely. Patients with other blood cancers had response rates greater than 80%, and more than half experienced complete remission.

b) In China, CRISPR/Cas9 used in First Human Trial: A team of scientists in China (Sichuan University) became the first to treat a human patient with an aggressive form of lung cancer with the groundbreaking CRISPR-Cas9 gene-editing technique.

c) NIH Approves Human Trials Using CRISPR: A team of physicians at the University of Pennsylvania’s School of Medicine had their project of modifying the immune cells of 18 different cancer patients with the CRISPR-Cas9 system approved by the National Institutes of Health. Results are TBD.

d) Giant Leap in Treatment of Diabetes from Harvard: For the first time, Harvard stem cell researchers created “insulin-producing” islet cells that cured diabetes in mice. This offers promise for a cure in humans as well.

e) HIV Genes Cut Out of Live Animals Using CRISPR: Scientists at the Comprehensive NeuroAIDS Center at Temple University were able to successfully cut out the HIV genes from live animals, and they had over a 50% success rate.

f) New Treatment Causes HIV Infected Cells to Vanish: A team of scientists in the U.K. discovered a new treatment for HIV. The patient was treated with vaccines that helped the body recognize the HIV-infected cells. Then, the drug Vorinostat was administered to activate the dormant cells so they could be spotted by the immune system.

g) CRISPR Cures Mice of Sickle Cell Disease: CRISPR was used to completely cure sickle cell by editing the errant DNA sequence in mice. The treatment may soon be used to cure this disease, which affects about 100,000 Americans.

h) Eliminating Measles (in the Americas): The World Health Organization (WHO) announced that, after more than 50 years of vaccination efforts, measles has been eliminated from the Americas. It remains one of the most contagious diseases in the world.

i) New Ebola Vaccine Proved to be 100% Effective: None of the nearly 6,000 individuals vaccinated with rVSV-ZEBOV in Guinea, a country with more than 3,000 confirmed cases of Ebola, showed any signs of contracting the disease.

j) Eradicating Polio: The World Health Organization has announced that it expects to fully eradicate polio worldwide by early 2017.

4. Progress on Extending Human Life

I am personally convinced that we are on the verge of significantly impacting human longevity. At a minimum, making “100 years old the new 60,” as we say at Human Longevity Inc.

This year, hundreds of millions of dollars were poured into research initiatives and companies focused on extending life.

Here are five of the top stories from 2016 in longevity research:

a) 500-Year-Old Shark Discovered: A Greenland shark that could have been over 500 years old was discovered this year, making the species the longest-lived vertebrate in the world.

b) Genetically Reversing Aging: With an experiment that replicated stem cell-like conditions, Salk Institute researchers made human skin cells in a dish look and behave young again, and mice with premature aging disease were rejuvenated with a 30% increase in lifespan. The Salk Institute expects to see this work in human trials in less than 10 years.

c) 25% Life Extension Based on Removal of Senescent Cells: In a study published in the journal Nature, cell biologists Darren Baker and Jan van Deursen found that systematically removing a category of living, stagnant (senescent) cells can extend the life of mice by 25 percent.

d) Funding for Anti-Aging Startups: Jeff Bezos and the Mayo Clinic backed the anti-aging startup Unity Biotechnology with $116 million. The company will focus on medicines that slow the effects of age-related diseases by removing senescent cells (as described in the previous item).

e) Young Blood Experiments Show Promising Results for Longevity: Sakura Minami and her colleagues at Alkahest, a company specializing in blood-derived therapies for neurodegenerative diseases, have found that simply injecting older mice with the plasma of young humans twice a week improved the mice’s cognitive functions as well as their physical performance. The practice produced a 30% increase in lifespan, along with increases in muscle tissue and cognitive function.

More at: https://singularityhub.com/2017/01/05/10-tech-trends-that-made-the-world-better-in-2016/

Exponential Growth Will Transform Humanity in the Next 30 Years

February 25, 2017


By Peter Diamandis

As we close out 2016, if you’ll allow me, I’d like to take a risk and venture into a topic I’m personally compelled to think about… a topic that will seem far out to most readers.

Today’s extraordinary rate of exponential growth may do much more than just disrupt industries. It may actually give birth to a new species, reinventing humanity over the next 30 years.

I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a “Meta-Intelligence,” a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions. In this post, I’m investigating the driving forces behind such an evolutionary step, the historical pattern we are about to repeat, and the implications thereof. Again, I acknowledge that this topic seems far-out, but the forces at play are huge and the implications are vast. Let’s dive in…

A Quick Recap: Evolution of Life on Earth in 4 Steps

About 4.6 billion years ago, our solar system, the sun and the Earth were formed.

Step 1: 3.5 billion years ago, the first simple life forms, called “prokaryotes,” came into existence. These prokaryotes were super-simple, microscopic single-celled organisms, basically a bag of cytoplasm with free-floating DNA. They had neither a distinct nucleus nor specialized organelles.

Step 2: Fast-forward one billion years to 2.5 billion years ago, when the next step in evolution created what we call “eukaryotes”—life forms that distinguished themselves by incorporating biological ‘technology’ that allowed them to manipulate energy (via mitochondria) and information (via chromosomes) far more efficiently. Fast forward another billion years for the next step.

Step 3: 1.5 billion years ago, these early eukaryotes began working collaboratively and formed the first “multi-cellular life,” of which you and I are the ultimate examples (a human is a multicellular creature of 10 trillion cells).

Step 4: The final step I want to highlight happened some 400 million years ago, when lungfish crawled out of the oceans onto the shores, and life evolved from the oceans onto land.

The Next Stages of Human Evolution: 4 Steps

Today, at a massively accelerated rate—some 100 million times faster than the steps I outlined above—life is undergoing a similar evolution. In this next stage of evolution, we are going from evolution by natural selection (Darwinism) to evolution by intelligent direction. Allow me to draw the analogy for you:

Step 1: Simple humans today are analogous to prokaryotes. Simple life, each life form independent of the others, competing and sometimes collaborating.

Step 2: Just as eukaryotes were created by ingesting technology, humans will incorporate technology into our bodies and brains that will allow us to make vastly more efficient use of information (BCI) and energy.

Step 3: Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.

Step 4: Finally, humanity is about to crawl out of the gravity well of Earth to become a multiplanetary species. Our journey to the moon, Mars, asteroids and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago.

The 4 Forces Driving the Evolution and Transformation of Humanity

Four primary driving forces are leading us towards our transformation of humanity into a meta-intelligence both on and off the Earth:

  1. We’re wiring our planet
  2. Emergence of brain-computer interface
  3. Emergence of AI
  4. Opening of the space frontier

Let’s take a look.

1. Wiring the Planet: Today, there are 2.9 billion people connected online. Within the next six to eight years, that number is expected to increase to nearly 8 billion, with each individual on the planet having access to a megabit-per-second connection or better. The wiring is taking place through the deployment of 5G on the ground, plus networks being deployed by Facebook, Google, Qualcomm, Samsung, Virgin, SpaceX and many others. Within a decade, every single human on the planet will have access to multi-megabit connectivity, the world’s information, and massive computational power on the cloud.

2. Brain-Computer Interface: A multitude of labs and entrepreneurs are working to create lasting, high-bandwidth connections between the digital world and the human neocortex (I wrote about that in detail here). Ray Kurzweil predicts we’ll see human-cloud connection by the mid-2030s, just 18 years from now. In addition, entrepreneurs like Bryan Johnson (and his company Kernel) are committing hundreds of millions of dollars towards this vision. The end results of connecting your neocortex with the cloud are twofold: first, you’ll have the ability to increase your memory capacity and/or cognitive function millions of fold; second, via a global mesh network, you’ll have the ability to connect your brain to anyone else’s brain and to emerging AIs, just like our cell phones, servers, watches, cars and all devices are becoming connected via the Internet of Things.

3. Artificial Intelligence/Human Intelligence: Next, and perhaps most significantly, we are on the cusp of an AI revolution. Artificial intelligence, powered by deep learning and funded by companies such as Google, Facebook, IBM, Samsung and Alibaba, will continue to rapidly accelerate and drive breakthroughs. Cumulative “intelligence” (both artificial and human) is the single greatest predictor of success for a company or a nation. For this reason, besides the emerging AI “arms race,” we will soon see a race focused on increasing overall human intelligence. Whatever challenges we might have in creating a vibrant brain-computer interface (e.g., designing long-term biocompatible sensors or nanobots that interface with your neocortex), those challenges will fall quickly over the next couple of decades as AI power tools give us ever-increasing problem-solving capability. It is an exponential atop an exponential. More intelligence gives us the tools to solve connectivity and mesh problems and in turn create greater intelligence.

4. Opening the Space Frontier: Finally, it’s important to note that the human race is on the verge of becoming a multiplanetary species. Thousands of years from now, whatever we’ve evolved into, we will look back at these next few decades as the moment in time when the human race moved off Earth irreversibly. Today, billions of dollars are being invested privately into the commercial space industry. Efforts led by SpaceX are targeting humans on Mars, while efforts by Blue Origin are looking at taking humanity back to the moon, and plans by my own company, Planetary Resources, strive to unlock near-infinite resources from the asteroids.

In Conclusion

The rate of human evolution is accelerating as we transition from the slow and random process of “Darwinian natural selection” to a hyper-accelerated and precisely-directed period of “evolution by intelligent direction.” In this post, I chose not to discuss the power being unleashed by such gene-editing techniques as CRISPR-Cas9. Consider this yet another tool able to accelerate evolution by our own hand.

The bottom line is that change is coming, faster than ever considered possible. All of us leaders, entrepreneurs and parents have a huge responsibility to inspire and guide the transformation of humanity on and off the Earth. What we do over the next 30 years—the bridges we build to abundance—will impact the future of the human race for millennia to come. We truly live during the most exciting time ever in human history.

https://singularityhub.com/2016/12/21/exponential-growth-will-transform-humanity-in-the-next-30-years/

The Fourth Industrial Revolution Is Here

February 25, 2017

The Fourth Industrial Revolution is upon us and now is the time to act.

Everything is changing by the day, and humans are making decisions today that will affect life for generations to come.

We have gone from steam engines to steel mills, to computers, and now to the Fourth Industrial Revolution: a digital economy built on artificial intelligence and big data, a new system that introduces a new story of our future and enables different economic and human models.

Will the Fourth Industrial Revolution put humans first and empower technologies to give humans a better quality of life with cleaner air, water, food, health, a positive mindset and happiness? HOPE…

http://www.huffingtonpost.com/craig-zamary/the-fourth-industrial-rev_3_b_12423658.html

Artificial intelligence to generate new cancer drugs on demand

December 18, 2016


Summary:

  • Clinical trial failure rates for small molecules in oncology exceed 94%, even for molecules previously tested in animals, and the cost of bringing a new drug to market exceeds $2.5 billion
  • There are around 2,000 drugs approved for therapeutic use by the regulators, with very few providing complete cures
  • Advances in deep learning have demonstrated superhuman accuracy in many areas and are expected to transform industries where large amounts of training data are available
  • Generative Adversarial Networks (GANs), a technology introduced in 2014, represent the “cutting edge” of artificial intelligence, in which deep neural networks produce new images, video and voice on demand
  • Here, for the first time, we demonstrate the application of Generative Adversarial Autoencoders (AAEs), a new type of GAN, to generating molecular fingerprints of molecules that kill cancer cells at specific concentrations
  • This work is a proof of concept that opens the door to a cornucopia of meaningful molecular leads created according to given criteria
  • The study was published in Oncotarget and the open-access manuscript is available in the Advance Open Publications section
  • The authors speculate that in 2017 the conservative pharmaceutical industry will undergo a transformation similar to the automotive industry’s, with deep-learned drug discovery pipelines integrated into many business processes
  • An extension of this work will be presented at the “4th Annual R&D Data Intelligence Leaders Forum” in Basel, Switzerland, Jan 24-26, 2017

Baltimore, MD, Thursday, December 22nd – Scientists at the Pharmaceutical Artificial Intelligence (pharma.AI) group of Insilico Medicine, Inc, today announced the publication of a seminal paper demonstrating the application of generative adversarial autoencoders (AAEs) to generating new molecular fingerprints on demand. The study was published in Oncotarget on December 22nd, 2016, and represents a proof of concept for applying Generative Adversarial Networks (GANs) to drug discovery. The authors have significantly extended this model to generate new leads according to multiple requested characteristics, and plan to launch a comprehensive GAN-based drug discovery engine producing promising therapeutic leads to significantly accelerate pharmaceutical R&D and improve success rates in clinical trials.

Since 2010, deep learning systems have demonstrated unprecedented results in image, voice and text recognition, in many cases surpassing human accuracy and enabling autonomous driving, automated creation of pleasant art and even composition of pleasant music.

GANs are a fresh direction in deep learning, invented by Ian Goodfellow in 2014. In recent years GANs have produced extraordinary results in generating meaningful images according to desired descriptions. Similar principles can be applied to drug discovery and biomarker development. This paper represents a proof of concept of an artificially intelligent drug discovery engine, where AAEs are used to generate new molecular fingerprints with desired molecular properties.

“At Insilico Medicine we want to be the supplier of meaningful, high-value drug leads in many disease areas with high probability of passing the Phase I/II clinical trials. While this publication is a proof of concept and only generates the molecular fingerprints with the very basic molecular properties, internally we can now generate entire molecular structures according to a large number of parameters. These structures can be fed into our multi-modal drug discovery pipeline, which predicts therapeutic class, efficacy, side effects and many other parameters. Imagine an intelligent system, which one can instruct to produce a set of molecules with specified properties that kill certain cancer cells at a specified dose in a specific subset of the patient population, then predict the age-adjusted and specific biomarker-adjusted efficacy, predict the adverse effects and evaluate the probability of passing the human clinical trials. This is our big vision”, said Alex Zhavoronkov, PhD, CEO of Insilico Medicine, Inc.

Previously, Insilico Medicine demonstrated the predictive power of its discovery systems in the nutraceutical industry. In 2017 Life Extension will launch a range of natural products developed using Insilico Medicine’s discovery pipelines. Earlier this year the pharmaceutical artificial intelligence division of Insilico Medicine published several seminal proof-of-concept papers demonstrating the applications of deep learning to drug discovery, biomarker development and aging research. Recently the authors published a tool in Nature Communications that is used for dimensionality reduction in transcriptomic data for training deep neural networks (DNNs). A paper published in Molecular Pharmaceutics, demonstrating the application of deep neural networks to predicting the therapeutic class of a molecule from transcriptional response data, received the American Chemical Society Editors’ Choice Award. Another paper, demonstrating the ability to predict the chronological age of a patient using a simple blood test, published in Aging, became the second most popular paper in the journal’s history.

“Generative AAE is a radically new way to discover drugs according to the required parameters. At Pharma.AI we have a comprehensive drug discovery pipeline with reasonably accurate predictors of efficacy and adverse effects that work on the structural data and transcriptional response data and utilize the advanced signaling pathway activation analysis and deep learning. We use this pipeline to uncover the prospective uses of molecules, where these types of data are available. But the generative models allow us to generate completely new molecular structures that can be run through our pipelines and then tested in vitro and in vivo. And while it is too early to make ostentatious claims before our predictions are validated in vivo, it is clear that generative adversarial networks coupled with the more traditional deep learning tools and biomarkers are likely to transform the way drugs are discovered”, said Alex Aliper, president, European R&D at the Pharma.AI group of Insilico Medicine.

Recent advances in deep learning, and specifically in generative adversarial networks, have demonstrated surprising results in generating new images and videos upon request, even when using natural language as input. In this study the group developed a 7-layer AAE architecture with the latent middle layer serving as a discriminator. As input and output, the AAE uses a vector of binary fingerprints and the concentration of the molecule. In the latent layer the group introduced a neuron responsible for the tumor growth inhibition index, which, when negative, indicates a reduction in the number of tumor cells after treatment. To train the AAE, the authors used NCI-60 cell line assay data for 6,252 compounds profiled on the MCF-7 cell line. The output of the AAE was used to screen 72 million compounds in PubChem and select candidate molecules with potential anti-cancer properties.
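The data flow described above can be sketched at the level of shapes. The sketch below is a minimal NumPy illustration with invented dimensions, untrained random weights, and an arbitrary choice of index 0 for the growth-inhibition neuron; the actual model is a seven-layer network trained adversarially, which this does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not the paper's exact ones.
FP_BITS = 166   # e.g. a MACCS-like binary fingerprint
LATENT = 5      # small latent code

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Encoder: [fingerprint bits, concentration] -> latent code
w_enc = rng.normal(0, 0.1, (FP_BITS + 1, LATENT))
# Decoder: latent code -> [reconstructed bits, concentration]
w_dec = rng.normal(0, 0.1, (LATENT, FP_BITS + 1))

def encode(fp, conc):
    x = np.concatenate([fp, [conc]])
    return np.tanh(x @ w_enc)

def decode(z):
    y = sigmoid(z @ w_dec)
    return (y[:FP_BITS] > 0.5).astype(int), y[FP_BITS]

# Encoding a molecule yields a latent code whose designated neuron
# (assumption: index 0) represents the growth-inhibition index.
fp = rng.integers(0, 2, FP_BITS)
gi = encode(fp, conc=0.5)[0]

# "Generation on demand": sample a latent code with the GI neuron
# forced negative (i.e. request an inhibitory molecule), then decode
# it into a candidate fingerprint to screen against a library.
z_new = rng.normal(0, 1, LATENT)
z_new[0] = -abs(z_new[0])
candidate_fp, candidate_conc = decode(z_new)
print(candidate_fp.shape)   # (166,)
```

In the trained model the decoder has learned the fingerprint distribution, so decoded vectors correspond to plausible molecules; here the point is only the input/output structure.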

“I am very happy to work alongside the Pharma.AI scientists at Insilico Medicine on getting the GANs to generate meaningful leads in cancer and, most importantly, age-related diseases and aging itself. This is humanity’s most pressing cause and everyone in machine learning and data science should be contributing. The pipelines these guys are developing will play a transformative role in the pharmaceutical industry and in extending human longevity and we will continue our collaboration and invite other scientists to follow this path”, said Artur Kadurin, the head of the segmentation group at Mail.Ru, one of the largest IT companies in Eastern Europe and the first author on the paper.

###

About Insilico Medicine, Inc

Insilico Medicine, Inc. is a bioinformatics company located at the Emerging Technology Centers at the Johns Hopkins University Eastern campus in Baltimore with Research and Development (“R&D”) resources in Belgium, UK and Russia hiring talent through hackathons and competitions. The company utilizes advances in genomics, big data analysis, and deep learning for in silico drug discovery and drug repurposing for aging and age-related diseases. The company pursues internal drug discovery programs in cancer, Parkinson’s Disease, Alzheimer’s Disease, sarcopenia, and geroprotector discovery. Through its Pharma.AI division, the company provides advanced machine learning services to biotechnology, pharmaceutical, and skin care companies. Brief company video: https://www.youtube.com/watch?v=l62jlwgL3v8

From: https://eurekalert.org/pub_releases/2016-12/imi-ait122016.php

What artificial intelligence will look like in 2030

November 14, 2016


Artificial intelligence (AI) has already transformed our lives — from the autonomous cars on the roads to the robotic vacuums and smart thermostats in our homes. Over the next 15 years, AI technologies will continue to make inroads in nearly every area of our lives, from education to entertainment, health care to security.

The question is, are we ready? Do we have the answers to the legal and ethical quandaries that will certainly arise from the increasing integration of AI into our daily lives? Are we even asking the right questions?

Now, a panel of academics and industry thinkers has looked ahead to 2030 to forecast how advances in AI might affect life in a typical North American city and spark discussion about how to ensure the safe, fair, and beneficial development of these rapidly developing technologies.

“Artificial Intelligence and Life in 2030” is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford University to inform debate and provide guidance on the ethical development of smart software, sensors, and machines. Every five years for the next 100 years, the AI100 project will release a report that evaluates the status of AI technologies and their potential impact on the world.

AI Landscape: Global Quarterly Financing History
Image: CB Insights

“Now is the time to consider the design, ethical, and policy challenges that AI technologies raise,” said Barbara Grosz, the Harvard University computer scientist who chairs the AI100 standing committee. “If we tackle these issues now and take them seriously, we will have systems that are better designed in the future and more appropriate policies to guide their use.”

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas, Austin, and chair of the report. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The report investigates eight areas of human activity in which AI technologies are already affecting urban life and will be even more pervasive by 2030: transportation, home/service robots, health care, education, entertainment, low-resource communities, public safety and security, and employment and the workplace.

Some of the biggest challenges in the next 15 years will be creating safe and reliable hardware for autonomous cars and health care robots; gaining public trust for AI systems, especially in low-resource communities; and overcoming fears that the technology will marginalize humans in the workplace.

Issues of liability and accountability also arise with questions such as: Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can we prevent AI applications from being used for racial discrimination or financial cheating?

The report doesn’t offer solutions but rather is intended to start a conversation between scientists, ethicists, policymakers, industry leaders, and the general public.

Grosz said she hopes the AI100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

https://www.weforum.org/agenda/2016/09/what-artificial-intelligence-will-look-like-in-2030

Read the report: https://ai100.stanford.edu/2016-report

The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near

June 04, 2016


In this blog post I will delve into the brain, explain its basic information processing machinery and compare it to deep learning. I do this by moving step by step along the brain’s electrochemical and biological information processing pipeline and relating it directly to the architecture of convolutional nets. Thereby we will see that a neuron and a convolutional net are very similar information processing machines. While performing this comparison, I will also discuss the computational complexity of these processes and thus derive an estimate for the brain’s overall computational power. I will use these estimates, along with knowledge from high performance computing, to show that it is unlikely that there will be a technological singularity in this century.
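To see what such an estimate involves, here is a quick back-of-envelope calculation. The neuron and synapse counts are round figures commonly cited in the literature; the firing rate and the per-event complexity factor are illustrative assumptions, not the post's own derivation, which is far more detailed and arrives at a considerably larger figure.

```python
# Naive estimate: count synaptic events per second, treating each
# event as a single operation.
neurons = 86e9               # ~86 billion neurons
synapses_per_neuron = 1e4    # ~10,000 synapses each
firing_rate_hz = 10          # sparse average firing rate (assumption)

naive_ops = neurons * synapses_per_neuron * firing_rate_hz
print(f"naive estimate: {naive_ops:.1e} synaptic ops/s")

# The post's central argument is that each synaptic event hides far
# more computation than one multiply-add (dendritic nonlinearities,
# biochemical signalling), so the naive figure must be multiplied by
# a per-event complexity factor (illustrative value below).
complexity_factor = 1e5
adjusted_ops = naive_ops * complexity_factor
print(f"adjusted estimate: {adjusted_ops:.1e} ops/s")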

This blog post is complex as it arcs over multiple topics in order to unify them into a coherent framework of thought. I have tried to make this article as readable as possible, but I might not have succeeded in all places. Thus, if you find yourself in an unclear passage, it might become clearer a few paragraphs down the road, where I pick up the thought again and integrate it with another discipline.

First I will give a brief overview of the predictions for a technological singularity and related topics. Then I will start the integration of ideas between the brain and deep learning. I finish by discussing high performance computing and how this all relates to predictions about a technological singularity.

The part that compares the brain’s information processing steps to deep learning is self-contained, and readers who are not interested in predictions for a technological singularity may skip to that part.

Part I: Evaluating current predictions of a technological singularity

There were a lot of headlines recently about predictions that artificial intelligence will reach super-human intelligence as early as 2030 and that this might herald the beginning of human extinction, or at least dramatically alter everyday life. How was this prediction made?

More at: http://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/

How Facebook will use artificial intelligence to organize insane amounts of data into the perfect News Feed and a personal assistant with superpowers

November 8, 2015


Facebook CTO Mike Schroepfer

Using some quick and dirty math, Facebook CTO Mike Schroepfer estimates that the amount of content that Facebook considers putting on your News Feed grows 40% to 50% year-over-year.

But because people aren’t gaining more time in the day, the company’s algorithms have to be much more selective about what they actually show you.

“We need systems that can help us understand the world and help us filter it better,” Schroepfer said at a press event prior to his appearance at the Dublin Web Summit Tuesday morning.

That’s why the company’s artificial intelligence team (called FAIR) has been hard at work training Facebook’s systems to make them understand the world more like humans, through language, images, planning, and prediction.

It already has trained its computer vision system to segment out individual objects from photos and then label them. The company plans to present a paper next month that shows how it can segment images 30 percent faster, using much less training data, than it previously could.

Ultimately, Schroepfer explains, this could have practical applications like helping you search through all your photos to surface any that contain ocean scenes or dogs. Or, you could tell your News Feed that you like seeing pictures with babies, but hate seeing photos of latte art.

It could also come in handy for photo editing. For example, you could tell the system to turn everything in a photo black-and-white, except one object.

These improving visual skills pair well with Facebook’s language recognition.

Schroepfer says that the company is in the early stages of building a product for the 285 million people around the world with low vision and the 40 million who are blind, which will let them communicate with an artificial intelligence system to find out details about what is in any photo in their feed.

“We’re getting closer to that magical experience that we’re all hoping for,” he says.

The team is also tackling predictive, unsupervised learning and planning.

Making M into a superpower

Both of these research areas will be important to powering M, the virtual personal assistant that Facebook launched earlier this summer in its chat app, Messenger. Right now it’s in limited beta in the Bay Area, but the goal, Schroepfer says, is to make it feel like M is a superpower bestowed upon every Messenger user on earth.

Right now, everything M can do is supervised by real human beings. However, those people are backed up by artificial intelligence. Facebook has hooked up its memory networks to M’s console to train on the data that it’s gotten from its beta testers.

It might sound obvious, but the memory networks have helped M realize what questions to ask first if someone tells M they want to order flowers: “What’s your budget?” and “Where do you want them sent?”

The AI system discovered this by watching a handful of interactions between users and the people currently powering M.
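The behaviour described above, ask for whichever required piece of information is still missing, can be illustrated with a toy slot-filling sketch. The intents, slot names and wording below are invented for illustration; Facebook's actual M system used learned memory networks rather than a hand-written table.

```python
# Each intent requires certain "slots" to be filled before the
# request can be fulfilled; the assistant asks for the first one
# that is still missing.
REQUIRED_SLOTS = {
    "order_flowers": ["budget", "delivery_address"],
}

PROMPTS = {
    "budget": "What's your budget?",
    "delivery_address": "Where do you want them sent?",
}

def next_question(intent, filled):
    """Return the first unanswered question for this intent, or None."""
    for slot in REQUIRED_SLOTS[intent]:
        if slot not in filled:
            return PROMPTS[slot]
    return None

print(next_question("order_flowers", {}))                 # What's your budget?
print(next_question("order_flowers", {"budget": "$50"}))  # Where do you want them sent?
```

What the memory-network approach adds over such a table is that the questions and their ordering are learned from watching human operators, rather than authored by hand.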

“There is already some percentage of responses that are coming straight from the AI, and we’re going to increase that percentage over time, so that it allows us to train up these systems,” Schroepfer says.

“The reason this is exciting is that it’s scalable. We cannot afford to hire operators for the entire world, to be their virtual assistant, but with the right AI technology, we could deploy that for the entire planet, so that everyone in the world would have an automated assistant that helps them manage their own online world. And that ends up being a kind of superpower deployed to the whole world.”

Schroepfer says that the team has made a lot of progress over the last year, and plans to accelerate that progress over time.

“The promise I made to all the AI folks that joined us, is that we’re going to be the best place to get your work to a billion people as fast as possible.”

http://www.businessinsider.com/facebook-outlines-its-artificial-and-machine-learning-ambitions-2015-11

Artificial intelligence: ‘Homo sapiens will be split into a handful of gods and the rest of us’

November 8, 2015


If you wanted relief from stories about tyre factories and steel plants closing, you could try relaxing with a new 300-page report from Bank of America Merrill Lynch which looks at the likely effects of a robot revolution.

But you might not end up reassured. Though it promises robot carers for an ageing population, it also forecasts huge numbers of jobs being wiped out: up to 35% of all workers in the UK and 47% of those in the US, including white-collar jobs, seeing their livelihoods taken away by machines.

Haven’t we heard all this before, though? From the Luddites of the 19th century to print unions protesting in the 1980s about computers, there have always been people fearful about the march of mechanisation. And yet we keep on creating new job categories.

However, there are still concerns that the combination of artificial intelligence (AI) – which is able to make logical inferences about its surroundings and experience – married to ever-improving robotics, will wipe away entire swaths of work and radically reshape society.

“The poster child for automation is agriculture,” says Calum Chace, author of Surviving AI and the novel Pandora’s Brain. “In 1900, 40% of the US labour force worked in agriculture. By 1960, the figure was a few per cent. And yet people had jobs; the nature of the jobs had changed.

“But then again, there were 21 million horses in the US in 1900. By 1960, there were just three million. The difference was that humans have cognitive skills – we could learn to do new things. But that might not always be the case as machines get smarter and smarter.”

What if we’re the horses to AI’s humans? To those who don’t watch the industry closely, it’s hard to see how quickly the combination of robotics and artificial intelligence is advancing. Last week a team from the Massachusetts Institute of Technology released a video showing a tiny drone flying through a lightly forested area at 30mph, avoiding the trees – all without a pilot, using only its onboard processors. Of course it can outrun a human-piloted one.

MIT has also built a “robot cheetah” which can jump over obstacles of up to 40cm without help. Add to that the standard progress of computing, where processing power doubles roughly every 18 months (or, equally, prices for capability halve), and you can see why people like Chace are getting worried.
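The quoted doubling rate compounds quickly; a one-line function shows the growth (or, equivalently, the price decline) implied over a given horizon:

```python
# Capability multiplies by 2**(months / doubling_months); with an
# 18-month doubling time, price per unit of capability falls by the
# same factor.
def growth_factor(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

print(growth_factor(3))    # 4.0   (two doublings in 3 years)
print(growth_factor(15))   # 1024.0 (ten doublings over 15 years)
```

A thousandfold gain in 15 years is why observers treat even currently clumsy robots as a near-term concern.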

Drone flies autonomously through a forested area

But the incursion of AI into our daily life won’t begin with robot cheetahs. In fact, it began long ago; the edge is thin, but the wedge is long. Cooking systems with vision processors can decide whether burgers are properly cooked. Restaurants can give customers access to tablets with the menu and let people choose without needing service staff.

Lawyers who used to slog through giant files for the “discovery” phase of a trial can turn it over to a computer. An “intelligent assistant” called Amy will, via email, set up meetings autonomously. Google announced last week that you can get Gmail to write appropriate responses to incoming emails. (You still have to act on your responses, of course.)

Further afield, Foxconn, the Taiwanese company which assembles devices for Apple and others, aims to replace much of its workforce with automated systems. The AP news agency gets news stories written automatically about sports and business by a system developed by Automated Insights. The longer you look, the more you find computers displacing simple work. And the harder it becomes to find jobs for everyone.

So how much impact will robotics and AI have on jobs, and on society? Carl Benedikt Frey, who with Michael Osborne in 2013 published the seminal paper The Future of Employment: How Susceptible Are Jobs to Computerisation? – on which the BoA report draws heavily – says that he doesn’t like to be labelled a “doomsday predictor”.

He points out that even while some jobs are replaced, new ones spring up that focus more on services and interaction with and between people. “The fastest-growing occupations in the past five years are all related to services,” he tells the Observer. “The two biggest are Zumba instructor and personal trainer.”

Frey observes that technology is leading to a rarefaction of leading-edge employment, where fewer and fewer people have the necessary skills to work in the frontline of its advances. “In the 1980s, 8.2% of the US workforce were employed in new technologies introduced in that decade,” he notes. “By the 1990s, it was 4.2%. For the 2000s, our estimate is that it’s just 0.5%. That tells me that, on the one hand, the potential for automation is expanding – but also that technology doesn’t create that many new jobs now compared to the past.”

This worries Chace. “There will be people who own the AI, and therefore own everything else,” he says. “Which means homo sapiens will be split into a handful of ‘gods’, and then the rest of us.

“I think our best hope going forward is figuring out how to live in an economy of radical abundance, where machines do all the work, and we basically play.”

Arguably, we might be part of the way there already; is a dance fitness programme like Zumba anything more than adult play? But, as Chace says, a workless lifestyle also means “you have to think about a universal income” – a basic, unconditional level of state support.

Perhaps the biggest problem is that there has been so little examination of the social effects of AI. Frey and Osborne are contributing to Oxford University’s programme on the future impacts of technology; at Cambridge, Observer columnist John Naughton and David Runciman are leading a project to map the social impacts of such change. But technology moves fast; it’s hard enough figuring out what happened in the past, let alone what the future will bring.

But some jobs probably won’t be vulnerable. Does Frey, now 31, think that he will still have a job in 20 years’ time? There’s a brief laugh. “Yes.” Academia, at least, looks safe for now – at least in the view of the academics.

Foxconn sign
Smartphone manufacturer Foxconn is aiming to automate much of its production facility. Photograph: Pichi Chuang/Reuters

The danger of change is not destitution, but inequality

Productivity is the secret ingredient in economic growth. In the late 18th century, the cleric and scholar Thomas Malthus notoriously predicted that a rapidly rising human population would result in misery and starvation.

But Malthus failed to anticipate the drastic technological changes – from the steam-powered loom to the combine harvester – that would allow the production of food and the other necessities of life to expand even more rapidly than the number of hungry mouths. The key to economic progress is this ability to do more with the same investment of capital and labour.

The latest round of rapid innovation, driven by the advance of robots and AI, is likely to power continued improvements.

Recent research led by Guy Michaels at the London School of Economics looked at detailed data across 14 industries and 17 countries over more than a decade, and found that the adoption of robots boosted productivity and wages without significantly undermining jobs.

Robotisation has reduced the number of working hours needed to make things; but at the same time as workers have been laid off from production lines, new jobs have been created elsewhere, many of them more creative and less dirty. So far, fears of mass layoffs as the machines take over have proven almost as unfounded as those that have always accompanied other great technological leaps forward.

There is an important caveat to this reassuring picture, however. The relatively low-skilled factory workers who have been displaced by robots are rarely the same people who land up as app developers or analysts, and technological progress is already being blamed for exacerbating inequality, a trend Bank of America Merrill Lynch believes may continue in future.

So the rise of the machines may generate huge economic benefits; but unless it is carefully managed, those gains may be captured by shareholders and highly educated knowledge workers, exacerbating inequality and leaving some groups out in the cold. Heather Stewart

http://www.theguardian.com/business/2015/nov/07/artificial-intelligence-homo-sapiens-split-handful-gods