Cancer ‘vaccine’ eliminates tumors in mice

February 05, 2018

Injecting minute amounts of two immune-stimulating agents directly into solid tumors in mice can eliminate all traces of cancer in the animals, including distant, untreated metastases, according to a study by researchers at the Stanford University School of Medicine.

The approach works for many different types of cancers, including those that arise spontaneously, the study found.

The researchers believe the local application of very small amounts of the agents could serve as a rapid and relatively inexpensive cancer therapy that is unlikely to cause the adverse side effects often seen with bodywide immune stimulation.

“When we use these two agents together, we see the elimination of tumors all over the body,” said Ronald Levy, MD, professor of oncology. “This approach bypasses the need to identify tumor-specific immune targets and doesn’t require wholesale activation of the immune system or customization of a patient’s immune cells.”

One of the agents is already approved for use in humans; the other has been tested for human use in several unrelated clinical trials. A clinical trial was launched in January to test the effect of the treatment in patients with lymphoma.

Levy, who holds the Robert K. and Helen K. Summy Professorship in the School of Medicine, is the senior author of the study, which was published Jan. 31 in Science Translational Medicine. Instructor of medicine Idit Sagiv-Barfi, PhD, is the lead author.

‘Amazing, bodywide effects’

Levy is a pioneer in the field of cancer immunotherapy, in which researchers try to harness the immune system to combat cancer. Research in his laboratory led to the development of rituximab, one of the first monoclonal antibodies approved for use as an anticancer treatment in humans.

Some immunotherapy approaches rely on stimulating the immune system throughout the body. Others target naturally occurring checkpoints that limit the anti-cancer activity of immune cells. Still others, like the CAR T-cell therapy recently approved to treat some types of leukemia and lymphomas, require a patient’s immune cells to be removed from the body and genetically engineered to attack the tumor cells. Many of these approaches have been successful, but they each have downsides — from difficult-to-manage side effects to high costs and lengthy preparation or treatment times.

“All of these immunotherapy advances are changing medical practice,” Levy said. “Our approach uses a one-time application of very small amounts of two agents to stimulate the immune cells only within the tumor itself. In the mice, we saw amazing, bodywide effects, including the elimination of tumors all over the animal.”

Cancers often exist in a strange kind of limbo with regard to the immune system. Immune cells like T cells recognize the abnormal proteins often present on cancer cells and infiltrate to attack the tumor. However, as the tumor grows, it often devises ways to suppress the activity of the T cells.

Levy’s method works to reactivate the cancer-specific T cells by injecting microgram amounts of two agents directly into the tumor site. (A microgram is one-millionth of a gram). One, a short stretch of DNA called a CpG oligonucleotide, works with other nearby immune cells to amplify the expression of an activating receptor called OX40 on the surface of the T cells. The other, an antibody that binds to OX40, activates the T cells to lead the charge against the cancer cells. Because the two agents are injected directly into the tumor, only T cells that have infiltrated it are activated. In effect, these T cells are “prescreened” by the body to recognize only cancer-specific proteins.

Cancer-destroying rangers

Some of these tumor-specific, activated T cells then leave the original tumor to find and destroy other identical tumors throughout the body.

The approach worked startlingly well in laboratory mice with transplanted mouse lymphoma tumors in two sites on their bodies. Injecting one tumor site with the two agents caused the regression not just of the treated tumor, but also of the second, untreated tumor. In this way, 87 of 90 mice were cured of the cancer. Although the cancer recurred in three of the mice, the tumors again regressed after a second treatment. The researchers saw similar results in mice bearing breast, colon and melanoma tumors.

Mice genetically engineered to spontaneously develop breast cancers in all 10 of their mammary pads also responded to the treatment. Treating the first tumor that arose often prevented the occurrence of future tumors and significantly increased the animals’ life span, the researchers found.

Finally, Sagiv-Barfi explored the specificity of the T cells by transplanting two types of tumors into the mice. She transplanted the same lymphoma cancer cells in two locations, and she transplanted a colon cancer cell line in a third location. Treatment of one of the lymphoma sites caused the regression of both lymphoma tumors but did not affect the growth of the colon cancer cells.

“This is a very targeted approach,” Levy said. “Only the tumor that shares the protein targets displayed by the treated site is affected. We’re attacking specific targets without having to identify exactly what proteins the T cells are recognizing.”

The current clinical trial is expected to recruit about 15 patients with low-grade lymphoma. If successful, Levy believes the treatment could be useful for many tumor types. He envisions a future in which clinicians inject the two agents into solid tumors in humans prior to surgical removal of the cancer as a way to prevent recurrence due to unidentified metastases or lingering cancer cells, or even to head off the development of future tumors in people with cancer-predisposing mutations in genes such as BRCA1 and BRCA2.

“I don’t think there’s a limit to the type of tumor we could potentially treat, as long as it has been infiltrated by the immune system,” Levy said.

The work is an example of Stanford Medicine’s focus on precision health, the goal of which is to anticipate and prevent disease in the healthy and precisely diagnose and treat disease in the ill.

The study’s other Stanford co-authors are senior research assistant and lab manager Debra Czerwinski; professor of medicine Shoshana Levy, PhD; postdoctoral scholar Israt Alam, PhD; graduate student Aaron Mayer; and professor of radiology Sanjiv Gambhir, MD, PhD.

Levy is a member of the Stanford Cancer Institute and Stanford Bio-X.

Gambhir is the founder of and an equity holder in CellSight Inc., which develops and translates multimodality strategies to image cell trafficking and transplantation.

The research was supported by the National Institutes of Health (grant CA188005), the Leukemia and Lymphoma Society, the Boaz and Varda Dotan Foundation and the Phil N. Allen Foundation.

Stanford’s Department of Medicine also supported the work.

Nick Bostrom: What happens when our computers get smarter than we are?

February 01, 2018

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?

What Happens If China Makes First Contact?

February 01, 2018

Last January, the Chinese Academy of Sciences invited Liu Cixin, China’s preeminent science-fiction writer, to visit its new state-of-the-art radio dish in the country’s southwest. Almost twice as wide as the dish at America’s Arecibo Observatory, in the Puerto Rican jungle, the new Chinese dish is the largest in the world, if not the universe. Though it is sensitive enough to detect spy satellites even when they’re not broadcasting, its main uses will be scientific, including an unusual one: The dish is Earth’s first flagship observatory custom-built to listen for a message from an extraterrestrial intelligence. If such a sign comes down from the heavens during the next decade, China may well hear it first.

In some ways, it’s no surprise that Liu was invited to see the dish. He has an outsize voice on cosmic affairs in China, and the government’s aerospace agency sometimes asks him to consult on science missions. Liu is the patriarch of the country’s science-fiction scene. Other Chinese writers I met attached the honorific Da, meaning “Big,” to his surname. In years past, the academy’s engineers sent Liu illustrated updates on the dish’s construction, along with notes saying how he’d inspired their work.

But in other ways Liu is a strange choice to visit the dish. He has written a great deal about the risks of first contact. He has warned that the “appearance of this Other” might be imminent, and that it might result in our extinction. “Perhaps in ten thousand years, the starry sky that humankind gazes upon will remain empty and silent,” he writes in the postscript to one of his books. “But perhaps tomorrow we’ll wake up and find an alien spaceship the size of the Moon parked in orbit.”

China’s new radio dish was custom-built to listen for an extraterrestrial message. (Liu Xu / Xinhua / Getty)

In recent years, Liu has joined the ranks of the global literati. In 2015, his novel The Three-Body Problem became the first work in translation to win the Hugo Award, science fiction’s most prestigious prize. Barack Obama told The New York Times that the book—the first in a trilogy—gave him cosmic perspective during the frenzy of his presidency. Liu told me that Obama’s staff asked him for an advance copy of the third volume.

At the end of the second volume, one of the main characters lays out the trilogy’s animating philosophy. No civilization should ever announce its presence to the cosmos, he says. Any other civilization that learns of its existence will perceive it as a threat to expand—as all civilizations do, eliminating their competitors until they encounter one with superior technology and are themselves eliminated. This grim cosmic outlook is called “dark-forest theory,” because it conceives of every civilization in the universe as a hunter hiding in a moonless woodland, listening for the first rustlings of a rival.
Liu’s trilogy begins in the late 1960s, during Mao’s Cultural Revolution, when a young Chinese woman sends a message to a nearby star system. The civilization that receives it embarks on a centuries-long mission to invade Earth, but she doesn’t care; the Red Guard’s grisly excesses have convinced her that humans no longer deserve to survive. En route to our planet, the extraterrestrial civilization disrupts our particle accelerators to prevent us from making advancements in the physics of warfare, such as the one that brought the atomic bomb into being less than a century after the invention of the repeating rifle.

Science fiction is sometimes described as a literature of the future, but historical allegory is one of its dominant modes. Isaac Asimov based his Foundation series on classical Rome, and Frank Herbert’s Dune borrows plot points from the past of the Bedouin Arabs. Liu is reluctant to make connections between his books and the real world, but he did tell me that his work is influenced by the history of Earth’s civilizations, “especially the encounters between more technologically advanced civilizations and the original settlers of a place.” One such encounter occurred during the 19th century, when the “Middle Kingdom” of China, around which all of Asia had once revolved, looked out to sea and saw the ships of Europe’s seafaring empires, whose ensuing invasion triggered a loss in status for China comparable to the fall of Rome.

This past summer, I traveled to China to visit its new observatory, but first I met up with Liu in Beijing. By way of small talk, I asked him about the film adaptation of The Three-Body Problem. “People here want it to be China’s Star Wars,” he said, looking pained. The pricey shoot ended in mid-2015, but the film is still in postproduction. At one point, the entire special-effects team was replaced. “When it comes to making science-fiction movies, our system is not mature,” Liu said.

I had come to interview Liu in his capacity as China’s foremost philosopher of first contact, but I also wanted to know what to expect when I visited the new dish. After a translator relayed my question, Liu stopped smoking and smiled. “It looks like something out of science fiction,” he said.

A week later, I rode a bullet train out of Shanghai, leaving behind its purple Blade Runner glow, its hip cafés and craft-beer bars. Rocketing along an elevated track, I watched high-rises blur by, each a tiny honeycomb piece of the rail-linked urban megastructure that has recently erupted out of China’s landscape. China poured more concrete from 2011 to 2013 than America did during the entire 20th century. The country has already built rail lines in Africa, and it hopes to fire bullet trains into Europe and North America, the latter by way of a tunnel under the Bering Sea.

The skyscrapers and cranes dwindled as the train moved farther inland. Out in the emerald rice fields, among the low-hanging mists, it was easy to imagine ancient China—the China whose written language was adopted across much of Asia; the China that introduced metal coins, paper money, and gunpowder into human life; the China that built the river-taming system that still irrigates the country’s terraced hills. Those hills grew steeper as we went west, stair-stepping higher and higher, until I had to lean up against the window to see their peaks. Every so often, a Hans Zimmer bass note would sound, and the glass pane would fill up with the smooth, spaceship-white side of another train, whooshing by in the opposite direction at almost 200 miles an hour.

Liu Cixin, China’s preeminent science-fiction writer, has written a great deal about the risks of first contact. (Han Wancheng / Shanxi Illustration)

It was mid-afternoon when we glided into a sparkling, cavernous terminal in Guiyang, the capital of Guizhou, one of China’s poorest, most remote provinces. A government-imposed social transformation appeared to be under way. Signs implored people not to spit indoors. Loudspeakers nagged passengers to “keep an atmosphere of good manners.” When an older man cut in the cab line, a security guard dressed him down in front of a crowd of hundreds.

The next morning, I went down to my hotel lobby to meet the driver I’d hired to take me to the observatory. Two hours into what was supposed to be a four-hour drive, he pulled over in the rain and waded 30 yards into a field where an older woman was harvesting rice, to ask for directions to a radio observatory more than 100 miles away. After much frustrated gesturing by both parties, she pointed the way with her scythe.

We set off again, making our way through a string of small villages, beep-beeping motorbike riders and pedestrians out of our way. Some of the buildings along the road were centuries old, with upturned eaves; others were freshly built, their residents having been relocated by the state to clear ground for the new observatory. A group of the displaced villagers had complained about their new housing, attracting bad press—a rarity for a government project in China. Western reporters took notice. “China Telescope to Displace 9,000 Villagers in Hunt for Extraterrestrials,” read a headline in The New York Times.

The search for extraterrestrial intelligence (SETI) is often derided as a kind of religious mysticism, even within the scientific community. Nearly a quarter century ago, the United States Congress defunded America’s SETI program with a budget amendment proposed by Senator Richard Bryan of Nevada, who said he hoped it would “be the end of Martian-hunting season at the taxpayer’s expense.” That’s one reason it is China, and not the United States, that has built the first world-class radio observatory with SETI as a core scientific goal.

SETI does share some traits with religion. It is motivated by deep human desires for connection and transcendence. It concerns itself with questions about human origins, about the raw creative power of nature, and about our future in this universe—and it does all this at a time when traditional religions have become unpersuasive to many. Why these aspects of SETI should count against it is unclear. Nor is it clear why Congress should find SETI unworthy of funding, given that the government has previously been happy to spend hundreds of millions of taxpayer dollars on ambitious searches for phenomena whose existence was still in question. The expensive, decades-long missions that found black holes and gravitational waves both commenced when their targets were mere speculative possibilities. That intelligent life can evolve on a planet is not a speculative possibility, as Darwin demonstrated. Indeed, SETI might be the most intriguing scientific project suggested by Darwinism.

Even without federal funding in the United States, SETI is now in the midst of a global renaissance. Today’s telescopes have brought the distant stars nearer, and in their orbits we can see planets. The next generation of observatories is now clicking on, and with them we will zoom into these planets’ atmospheres. SETI researchers have been preparing for this moment. In their exile, they have become philosophers of the future. They have tried to imagine what technologies an advanced civilization might use, and what imprints those technologies would make on the observable universe. They have figured out how to spot the chemical traces of artificial pollutants from afar. They know how to scan dense star fields for giant structures designed to shield planets from a supernova’s shock waves.

In 2015, the Russian billionaire Yuri Milner poured $100 million of his own cash into a new SETI program led by scientists at UC Berkeley. The team performs more SETI observations in a single day than took place during entire years just a decade ago. In 2016, Milner sank another $100 million into an interstellar-probe mission. A beam from a giant laser array, to be built in the Chilean high desert, will wallop dozens of wafer-thin probes more than four light-years to the Alpha Centauri system, to get a closer look at its planets. Milner told me the probes’ cameras might be able to make out individual continents. The Alpha Centauri team modeled the radiation that such a beam would send out into space, and noticed striking similarities to the mysterious “fast radio bursts” that Earth’s astronomers keep detecting, which suggests the possibility that they are caused by similar giant beams, powering similar probes elsewhere in the cosmos.

Andrew Siemion, the leader of Milner’s SETI team, is actively looking into this possibility. He visited the Chinese dish while it was still under construction, to lay the groundwork for joint observations and to help welcome the Chinese team into a growing network of radio observatories that will cooperate on SETI research, including new facilities in Australia, New Zealand, and South Africa. When I joined Siemion for overnight SETI observations at a radio observatory in West Virginia last fall, he gushed about the Chinese dish. He said it was the world’s most sensitive telescope in the part of the radio spectrum that is “classically considered to be the most probable place for an extraterrestrial transmitter.”

10 Things Children Born in 2018 Will Probably Never Experience

February 01, 2018

It’s All Coming Back to Me Now

2017 was a year filled with nostalgia thanks to a number of pop culture properties with ties to the past.

We got another official Alien film, and Blade Runner came back with new visuals to dazzle us. Meanwhile, “Stranger Things” hearkened back to the Spielbergian fantasy that wowed so many children of the ’80s, and “Twin Peaks” revived Agent Cooper so he could unravel yet another impenetrable mystery from the enigmatic mind of David Lynch.

As these films and TV shows remind us, a lot can change over the course of a few decades, and the experiences of one generation can be far different from those that follow closely behind thanks to advances in technology.

While the “Stranger Things” kids’ phone usage reminded 30-somethings of their own pre-mobile adolescences, children born in 2018 will probably never know the feeling of being tethered to a landline. A trip to the local megaplex to catch Blade Runner 2049 may have stirred up adults’ memories of seeing the original, but children born this year may never know what it’s like to watch a film on a smaller screen with a sound system that doesn’t rattle the brain.

Technology is currently advancing faster than ever before, so what else will kids born today only read about in books or, more likely, on computer screens? Here’s a list of the top 10 things that children born in 2018 will likely never experience.

Long, Boring Travel

Mobile devices and in-flight entertainment systems have made it fairly easy to stay distracted during a long trip. However, since the Concorde was decommissioned in 2003, humanity has done little to increase the speed of air travel for international jet-setters. Beyond sparsely utilized bullet trains, even the speed of our ground transportation has remained fairly limited.

However, recent developments in transportation will likely speed up the travel process, meaning today’s kids may never know the pain of seemingly endless flights and road trips.

Supersonic planes are making a comeback and could ferry passengers “across the pond” in as little as 3.5 hours. While these aircraft could certainly make travel faster for a small subset of travelers, physical and cost limitations will likely keep them out of the mainstream.

However, hyperloop technology could certainly prove to be an affordable way for travelers to subtract travel time from their itineraries.

Already, hyperloop test pods have reached speeds of up to 387 km/h (240 mph). If proposed routes come to fruition, they could significantly cut travel times between major cities. For example, a trip from New York to Washington, D.C., could take just 30 minutes as opposed to the current five hours.
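
The quoted times are easy to sanity-check with a constant-speed estimate. The sketch below uses assumed round numbers (a 360 km route, a 75 km/h car, and a hypothetical 1,000 km/h cruising speed), not official route figures, and shows that a 30-minute trip implies sustained speeds well beyond the fastest test runs so far.

```python
# Sanity check of the quoted travel times. The distance and speeds below are
# round-number assumptions for illustration, not official route figures.

def travel_time_minutes(distance_km: float, speed_kmh: float) -> float:
    """Point-to-point travel time in minutes at a constant cruising speed."""
    return distance_km / speed_kmh * 60

NYC_TO_DC_KM = 360  # approximate road distance, New York to Washington, D.C.

car = travel_time_minutes(NYC_TO_DC_KM, 75)        # highway driving
test_pod = travel_time_minutes(NYC_TO_DC_KM, 387)  # fastest hyperloop test speed
target = travel_time_minutes(NYC_TO_DC_KM, 1000)   # often-cited target speed

print(f"car: {car:.0f} min, test pod: {test_pod:.0f} min, target: {target:.0f} min")
```

Even at the 387 km/h test speed, the trip would take nearly an hour; the 30-minute figure assumes pods cruising at roughly triple the best speed demonstrated to date.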

Driver’s Licenses

Obtaining a driver’s license is currently a rite of passage for teenagers as they make that transition from the end of childhood to the beginning of adulthood. By the time today’s newborns are 16, self-driving cars may have already put an end to this unofficial ritual by completely removing the need for human operators of motor vehicles.

According to the Centers for Disease Control and Prevention (CDC), an average of six teens between the ages of 16 and 19 died every day in 2015 from injuries sustained in motor vehicle collisions. Since the vast majority of accidents are caused by human error, removing humans from the equation could help save the lives of people of all ages, so autonomous cars are a serious priority for many.

Elon Musk, CEO of Tesla, is confident that his electric car company, whose vehicles are currently only semi-autonomous, will produce fully autonomous vehicles within the next two years, and several ride-hailing services are already testing self-driving vehicles.

Biology’s Monopoly on Intelligence

Self-driving cars are just a single example of innovations made possible by the advancement of artificial intelligence (AI).

Today, we have AI systems that rival or even surpass human experts at specific tasks, such as playing chess or sorting recyclables. However, experts predict that conscious AI systems rivaling human intelligence could be just decades away.

Advanced robots like Hanson Robotics’ Sophia are already blurring the line between humanity and machines. The next few decades will continue to push boundaries as we inch closer and closer to the singularity.

Children born in 2018 may never know what it’s like to join the workforce or go to college at a time when humans are the smartest entities on the planet.

Language Barriers

Another promising use for AI is communication, and eventually, technology could end the language barrier on Earth.

Communication tools such as Skype have already incorporated instantaneous translation capabilities that allow speakers of a handful of languages to converse freely in real time, and Google has incorporated translation capabilities into its new headphones.

Other companies, such as Waverly Labs, are also working on perfecting the technology that will eventually rival the ability of the Babel fish, an alien species found in the book “The Hitchhiker’s Guide to the Galaxy” that can instantly translate alien languages for its host.

Children born in 2018 may find themselves growing up in a world in which anyone can talk to anyone, and the idea of a “foreign” language will seem, well, completely foreign.

Humanity as a Single-Planet Species

Technology that improves human communication could radically impact our world, but eventually, we may need to find a way to communicate with extraterrestrial species. Granted, the worlds we reach in the lifetimes of anyone born this year aren’t likely to contain intelligent life, but the first milestones on the path to such a future are likely to be reached in the next few decades.

When he’s not ushering in the future of autonomous transportation, Musk is pushing his space exploration company SpaceX to develop the technology to put humans on Mars. He thinks he’ll be able to get a crew to the Red Planet by 2024, so today’s children may have no memory of a time before humanity’s cosmic footprint extended beyond a single planet.

Quiet Spaces

Overpopulation is one of the factors that experts point to when they discuss the need for humanity to spread into the cosmos. Urban sprawl has been an issue on Earth for decades, bringing about continued deforestation and the elimination of farming space.

A less-discussed problem caused by the continuous spread of urbanization, however, is the increase in noise pollution.

Experts are concerned that noise is quickly becoming the next great public health crisis. United Nations projections estimate that by 2100, 84 percent of the world’s 10.8 billion people will live in cities, surrounded by a smorgasbord of sound.

This decline in the number of people who live in areas largely free from noise pollution means many of the babies born today will never know what it’s like to enjoy the sound of silence.

World Hunger

Urbanization may limit the space available for traditional farming, but thanks to innovations in agriculture, food shortages may soon become a relic of the past.

Urban farming is quickly developing into a major industry that is bringing fresh produce and even fish to many markets previously considered food deserts (areas cut off from access to fresh, unprocessed foods).

Vertical farming will bring greater access to underserved areas, making it more possible than ever to end hunger in urban areas. Meanwhile, companies are developing innovative ways to reduce food waste, such as by transforming food scraps into sweets or using coffee grounds to grow mushrooms.

If these innovations take hold, children born in 2018 could grow up in a world in which every person on Earth has access to all the food they need to live a healthy, happy life.

Paper Currency

The advent of credit cards may have been the first major blow to the utilization of cash, but it wasn’t the last. Today, paper currency must contend with PayPal, Venmo, Apple Pay, and a slew of other payment options.

By the time children born in 2018 are old enough to earn a paycheck, they will have access to even more payment options, and cash could be completely phased out.

In the race to dethrone paper currency, cryptocurrencies are a frontrunner. Blockchain technology is adding much-needed security to financial transactions, and while the crypto market is currently volatile, experts remain optimistic about its potential to permanently disrupt finance.

Digital Insecurity

Today, digital security is a major subject of concern. Hacking can occur on an international level, and with the growth of the Internet of Things (IoT), even household appliances can be points of weakness in the defenses guarding sensitive personal information.

Experts are feverishly trying to keep security development on pace with the ubiquity of digitalization, and technological advances such as biometrics and RFID tech are helping. Unfortunately, these defenses still rely largely on typical encryption software, which is breakable.

The advent of the quantum computer will exponentially increase computing power, and better security systems will follow suit. By the time children born in 2018 reach adulthood, high-speed quantum encryption could ensure that the digital world they navigate is virtually unhackable.

Single-Screen Computing

While most of our digital devices currently make use of a typical flat screen, tomorrow’s user interfaces will be far more dynamic, and children born in 2018 may not remember a time when they were limited to a single screen and a keyboard.

The development of virtual reality (VR) and augmented reality (AR) has shifted the paradigm, and as these technologies continue to advance, we will increasingly see new capabilities incorporated into our computing experience.

Gesture recognition, language processing, and other technologies will allow for a more holistic interaction with our devices, and eventually, we may find ourselves interacting with systems akin to what we saw in Minority Report.

Suicide molecules kill any cancer cell

January 05, 2018

CHICAGO – Small RNA molecules originally developed as a tool to study gene function can trigger a mechanism hidden in every cell that forces the cell to commit suicide, reports a new Northwestern Medicine study, the first to identify molecules that trigger a fail-safe mechanism that may protect us from cancer.

These RNA suicide molecules can potentially be developed into a novel form of cancer therapy, the study authors said.

Cancer cells treated with the RNA molecules never become resistant to them because they simultaneously eliminate multiple genes that cancer cells need for survival.

“It’s like committing suicide by stabbing yourself, shooting yourself and jumping off a building all at the same time,” said Northwestern scientist and lead study author Marcus Peter. “You cannot survive.”

The inability of cancer cells to develop resistance to the molecules is a first, Peter said.

“This could be a major breakthrough,” noted Peter, the Tom D. Spies Professor of Cancer Metabolism at Northwestern University Feinberg School of Medicine and a member of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.  

Peter and his team discovered sequences in the human genome that when converted into small double-stranded RNA molecules trigger what they believe to be an ancient kill switch in cells to prevent cancer. He has been searching for the phantom molecules with this activity for eight years.

“We think this is how multicellular organisms eliminated cancer before the development of the adaptive immune system, which is about 500 million years old,” he said. “It could be a fail-safe that forces rogue cells to commit suicide. We believe it is active in every cell, protecting us from cancer.”

This study, which will be published Oct. 24 in eLife, and two other new Northwestern studies in Oncotarget and Cell Cycle by the Peter group, describe the discovery of the assassin molecules present in multiple human genes and their powerful effect on cancer in mice.

Looking back hundreds of millions of years

Why are these molecules so powerful?

“Ever since life became multicellular, which could be more than 2 billion years ago, it had to deal with preventing or fighting cancer,” Peter said. “So nature must have developed a fail safe mechanism to prevent cancer or fight it the moment it forms. Otherwise, we wouldn’t still be here.”

Thus began his search for natural molecules coded in the genome that kill cancer.

“We knew they would be very hard to find,” Peter said. “The kill mechanism would only be active in a single cell the moment it becomes cancerous. It was a needle in a haystack.”

But he found them by testing a class of small RNAs, called small interfering RNAs (siRNAs), that scientists use to suppress gene activity. siRNAs are designed by taking short sequences of the gene to be targeted and converting them into double-stranded RNA. When introduced into cells, these siRNAs suppress the expression of the gene they are derived from.

Peter found that a large number of these small RNAs derived from certain genes did not, as expected, only suppress the gene they were designed against. They also killed all cancer cells. His team discovered that these special sequences are distributed throughout the human genome, embedded in multiple genes, as shown in the study in Cell Cycle.
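The basic design step described above, taking a short window of the target transcript and pairing it with its reverse complement, can be sketched roughly as follows. The sequence and window position here are invented for illustration; real siRNA design also screens for GC content, thermodynamic asymmetry, and off-target matches.

```python
def reverse_complement(seq):
    """Return the reverse complement of an RNA sequence."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))

def design_sirna(mrna, start, length=21):
    """Take a short window of the target mRNA (the sense strand) and
    pair it with its reverse complement (the antisense strand) to form
    the double-stranded siRNA core."""
    sense = mrna[start:start + length]
    antisense = reverse_complement(sense)
    return sense, antisense

# Toy target transcript, invented for illustration.
mrna = "AUGGCUAGCUAGGAUCCGAUCGAUCGUAGCUAGGCU"
sense, antisense = design_sirna(mrna, start=5)
print("sense:    ", sense)
print("antisense:", antisense)
```

The two returned strands, annealed together, form the double-stranded molecule that the cell's silencing machinery then loads and uses to degrade matching transcripts.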

When converted to siRNAs, these sequences all act as highly trained super assassins. They kill the cells by simultaneously eliminating the genes required for cell survival. By taking out these survival genes, the assassin molecules activate multiple cell death pathways in parallel.

The small RNA assassin molecules trigger a mechanism Peter calls DISE, for Death Induced by Survival gene Elimination.

Activating DISE in organisms with cancer might allow cancer cells to be eliminated. Peter’s group has evidence this form of cell death preferentially affects cancer cells with little effect on normal cells.

To test this in a treatment situation, Peter collaborated with Dr. Shad Thaxton, associate professor of urology at Feinberg, to deliver the assassin molecules via nanoparticles to mice bearing human ovarian cancer. The treatment strongly reduced tumor growth with no toxicity to the mice, reports the study in Oncotarget. Importantly, the tumors did not develop resistance to this form of cancer treatment. Peter and Thaxton are now refining the treatment to increase its efficacy.

Peter has long been frustrated with the lack of progress in solid cancer treatment.

“The problem is cancer cells are so diverse that even though the drugs, designed to target single cancer driving genes, often initially are effective, they eventually stop working and patients succumb to the disease,” Peter said. He thinks a number of cancer cell subsets are never really affected by most targeted anticancer drugs currently used.

Most advanced solid cancers, such as brain, lung, pancreatic or ovarian cancer, have not seen an improvement in survival, Peter said.

“If you had an aggressive, metastasizing form of the disease 50 years ago, you were busted back then and you are still busted today,” he said. “Improvements are often due to better detection methods and not to better treatments.”

Cancer scientists need to listen to nature more, Peter said. Immune therapy has been a success, he noted, because it is aimed at activating an anticancer mechanism that evolution developed. Unfortunately, few cancers respond to immune therapy and only a few patients with these cancers benefit, he said.

“Our research may be tapping into one of nature’s original kill switches, and we hope the impact will affect many cancers,” he said. “Our findings could be disruptive.”

Northwestern co-authors include first authors William Putzbach, Quan Q. Gao, and Monal Patel, and coauthors Ashley Haluck-Kangas, Elizabeth T. Bartom, Kwang-Youn A. Kim, Denise M. Scholtens, Jonathan C. Zhao and Andrea E. Murmann.

The research is funded by grants T32CA070085, T32CA009560, R50CA211271 and R35CA197450 from the National Cancer Institute of the National Institutes of Health.


We’re living in the Last Era Before Artificial General Intelligence

January 05, 2018

When we thought about preparing for our future, it used to mean going to a good college and moving for a good job that would put us on a solid career trajectory: a stable life in which we would prosper in a free-market meritocracy, competing against fellow humans.

However, over the course of the next few decades, Homo sapiens, including Generations Z and Alpha, may be among the last people to grow up in a pre-automation, pre-AGI world.

Considering the exponential levels of technological progress expected in the next 30 years, that’s hard to put into words or even historical context. Namely, because there’s no historical precedent and no words to describe what the next-gen AI might become.

Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.

Pre Singularity Years

In the years before wide-scale automation and sophisticated AI, we live believing things are changing fast. Retail is shifting to e-commerce and new modes of buying and convenience, self-driving and electric cars are coming, tech firms in specific verticals still rule the planet, and countries still vie for dominance with outdated military traditions, their own political bubbles, and entrenched modes of hierarchy, authority and economic privilege.

We live in a world where AI is gaining momentum in popular thought, but in practice is still at the level of ANI: Artificial Narrow Intelligence. Rudimentary NLP, computer vision, robotic movement, and so on and so forth. We’re beginning to interact with personal assistants via smart speakers, but not in any fluid way. The interactions are repetitive. Like Google searching the same thing, on different days.

In this reality, we think about AI in terms useful to us, such as trying to teach machines to learn so that they can do things that humans do, but in turn help humans. It is a kind of machine learning that’s more about coding and algorithms than any actual artificial intelligence. Our world here is starting to shift into something else: the internet is maturing, software is getting smarter on the cloud, and data is being collected, but no explosion takes place, even as more people on the planet get access to the web.

When Everything Changes

Between 2014 and 2021, an entire 20th century’s worth of progress will have occurred. Then something strange happens: progress begins to accelerate, with more of it packed into shorter and shorter periods. We have to remember that the fruit of this transformation won’t belong just to Facebook, Google, China or the U.S.; it will simply be the new normal for everyone.

Many believe that sometime between 2025 and 2050, AI will become natively self-learning, attaining an Artificial General Intelligence that completely changes the game.

After that point, not only does AI outperform human beings in tasks, problem solving and even human constructs of creativity, emotional intelligence, manipulating complex environments and predicting the future — it reaches Artificial Super Intelligence relatively quickly thereafter.

We live in Anticipation of the Singularity

As such, in 2017–18, we might be living in the last “human” era. Here we think of AI as “augmenting” our world, we think of smartphones as miniaturized supercomputers and the cloud as an expansion of our neocortex, in a self-serving existence where concepts such as wealth, consumption and human quality of life trump all other considerations.

Here we view computers as man-made tools, robots as slaves, and AI as a kind of “software magic” that’s obliged to do our bidding.

Whatever the bottle-necks of carbon based life forms might be, silicon based AGI may have many advantages. Machines that can self-learn, self-replicate and program themselves might come into being in part due to copying how the human brain works, but like the difference between Alpha Go and Alpha Go Zero, the real breakthrough might be made from a blank slate.

While humans appear destined to create AGI, it doesn’t stand to reason that AGI will think, behave or have motivations like those of people, cultures, or even our models of what super-intelligence might be like.

Artificial Intelligence with Creative Agency

For human beings, the Automation Economy only arrives after a point where AGI has come into being. Such an AGI would be able to program robots, facilitate smart cities and help humans govern themselves in a way that is impossible today.

AGI could also manipulate and advance STEM fields such as green tech, biotech, 3D-printing, nanotech, predictive algorithms, and quantum physics likely in ways humans up to that point could only achieve relatively slowly.

Everything pre singularity would feel like ancient history. A far more radical past than before the invention of computers or the internet. AGI could impact literally everything, as we are already seeing with primitive machine intelligence systems.

In such a world, AGI would not only be able to self-learn and surpass all of human knowledge and data collected up to that point, but also create its own fields, set its own goals and have its own interests (beyond what humans would likely be able to recognize). We might term this Artificially Intelligent Creative Agency (AICA).

AI Not as a Slave, but as a Legacy

Such a being would indeed feel like a God to us. Not a God that created man, but an entity that humanity made, in just a few thousand years since we were storytellers, explorers and then builders and traders.

A human brain consists of 86 billion neurons linked by trillions of synapses, but it’s not networked well to other nodes and external reality. It has to “experience” them in systems of relatedness and remain in relative isolation from them. AICA, would not have this constraint. It would be networked to all IoT devices, be able to hack into any human system, network or quantum computer. AICA would not be led by instincts of possession, mating, aggression or other emotive agencies of the mammalian brain. Whatever ethics, values and philosophical constraints it might have, could be refined over centuries, not mere months and years of an ordinary human lifetime.

AGI might not be humanity’s last invention, but symbolically, it would usher in the 4th industrial revolution and then some. There would be many grades and incidents of limited self-learning in deep learning algorithms. But AGI would represent a different quality. Likely it would instigate a self-aware separation between humanity and the descendent order of AI, whatever it might be.

High-Speed Quantum Evolution to AGI

The years before the Singularity

The road from ANI to AGI to ASI to some speculative AICA is not just a journey from narrow to general to super intelligence, but an evolutionary corridor for humanity across a distance of progress that could also be symbiotic. It’s not clear how this might work, but some human beings might undertake “alterations” to protect their species. However cybernetic, genetic or otherwise invasive these changes might be, AI is surely going to be there every step of the way.

In the corporate race to AI, governments like China’s and the U.S.’s also want to “own” and monetize this for their own purposes. Fleets of cars and semi-intelligent robots will make certain individuals and companies very rich. There might be no human uprising over wealth inequality before AGI because, comparatively speaking, the conditions under which AGI arises may be closer than we assume.

We Were Here

If the calculations per second (cps) of the human brain are static, at around 10¹⁶, or 10 quadrillion cps, how much does it take for AI to replicate some kind of AGI field? Certainly it’s not just processing power or exponentially faster super-computers or quantum computing, or improved deep learning algorithms, but a combination of all of these and perhaps many other factors as well. In late 2017, Alpha Go Zero “taught itself” Go without using human data, instead generating its own data by playing against itself.
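For a sense of the exponential arithmetic involved, here is a rough, illustrative calculation. The brain estimate comes from the text above; the starting hardware figure and the two-year doubling period are assumptions made purely for illustration, not measured facts.

```python
import math

# Illustrative only: the brain estimate is from the article; the starting
# hardware figure and the doubling period are assumptions.
BRAIN_CPS = 1e16          # ~10 quadrillion calculations per second
current_cps = 1e13        # hypothetical machine of today
doubling_years = 2.0      # assumed doubling period, Moore's-law style

doublings_needed = math.log2(BRAIN_CPS / current_cps)
years_to_parity = doublings_needed * doubling_years
print(f"{doublings_needed:.1f} doublings, about {years_to_parity:.0f} years")
```

Under these assumptions a thousand-fold gap closes in about ten doublings, roughly twenty years, which is why small changes in the assumed growth rate swing such forecasts so dramatically.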

Living in a world that can better imagine AGI will mean planning ahead, not just coping with change to human systems. In a world where democracy can be hacked, and where one-party socialism is a likely heir to future iterations of artificial intelligence (iterations in which concepts like freedom of speech, human rights or openness to a diversity of ideas are not practiced in the same way), it’s interesting to imagine the kinds of human-controlled AI systems that might occur before AGI arrives, if it ever arrives at all.

The Human Hybrid Dilemma

Considering our own violent history of annihilating biodiversity, modeling AI by plagiarizing the brain through some kind of whole-brain emulation might not be ethical. While it might mimic and lead to self-awareness, such an AGI might be dangerous, in the same sense that we are a danger to ourselves and to other life forms in the galaxy.

Moore’s Law might have sounded like an impressive analogy to the Singularity in the 1990s, but not today. More people working in the AI field are rightfully skeptical of AGI. It’s plausible, though, that most of them suffer from a linear-versus-exponential bias in their thinking. On the path toward the Singularity, we are still living in slow motion.

We Aren’t Ready for What’s Inevitable

We’re living in the last era before Artificial General Intelligence, and as usual, human civilization appears quite stupid. We don’t even actively know what’s coming.

While our simulations are improving, and we’re discovering exoplanets that are most likely to be life-like, our ability to predict the future in terms of the speed of technology is mortifyingly bad. Our understanding of the implications of AGI, and even of machine intelligence, on the planet is poor. Is it because this has never happened in recorded history and represents such a paradigm shift, or could there be another reason?

Amazon can create and monetize patents in a hyper business model, and Google, Facebook, Alibaba and Tencent can fight over AI talent, luring academics into corporate workaholic lifestyles with the power to name their own salaries, but in 2017, humanity’s vision of the future is still myopic.

We can barely imagine that our prime directive in the universe might not be to simply grow, explore and make babies and exploit all within our path. And, we certainly can’t imagine a world where intelligent machines aren’t simply our slaves, tools and algorithms designed to make our lives more pleasurable and convenient.


Here’s Everything You Need to Know about Elon Musk’s Human/AI Brain Merge

January 05, 2018

Neuralink Has Arrived

After weeks of anticipation, details on Elon Musk’s brain-computer interface company Neuralink have finally been revealed. In a detailed report on the website Wait But Why, Tim Urban recounts insights gleaned from his weeks meeting with Musk and his Neuralink team at their San Francisco headquarters. He offers an incredibly detailed and informative overview of both Musk’s latest venture and its place in humanity’s evolution, but for those of you interested in just the big picture, here’s what you really need to know about Neuralink.

Your Brain Will Get Another “Layer”

Right now, you have two primary “layers” to your brain: the limbic system, which controls things like your emotions, long-term memory, and behavior; and the cortex, which handles your complex thoughts, reasoning, and long-term planning. Musk wants his brain interface to be a third layer that will complement the other two. The weirdest thing about that goal may be that he thinks we actually already have this third layer — we just don’t have the best interface for it:

We already have a digital tertiary layer in a sense, in that you have your computer or your phone or your applications…The thing that people, I think, don’t appreciate right now is that they are already a cyborg…If you leave your phone behind, it’s like missing limb syndrome. I think people—they’re already kind of merged with their phone and their laptop and their applications and everything.

The goal of Neuralink, then, is eliminating the middleman and putting that power we currently have at our fingertips directly into our brains. Instead of one person using their phone to transmit a thought to another person (“Dinner at 8?”), the thought would just go from one brain to the other directly.

Thankfully, we’ll be able to control this completely, Musk tells Urban: “People won’t be able to read your thoughts — you would have to will it. If you don’t will it, it doesn’t happen. Just like if you don’t will your mouth to talk, it doesn’t talk.”

Musk Is Working with Some Very Smart People

Musk met with more than 1,000 people before deciding on the eight who would help him shape the future of humanity at Neuralink. He claims assembling the right team was a challenge in and of itself, as he needed to find people capable of working in a cross-disciplinary field that includes everything from brain surgery to microscopic electronics.

The crew he landed is a veritable supergroup of smarties. They have backgrounds from MIT, Duke, and IBM, and their bios include phrases like “neural dust,” “cortical physiology,” and “human psychophysics.” They’re engineers, neurosurgeons, and chip designers, and if anyone can bring Elon Musk’s vision to life, it’s them.

The Timeline For Adoption Is Hazy…

Neuralink won’t come out the gate with a BMI that transforms you into a walking computer. The first product the company will focus on releasing will be much more targeted. “We are aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years,” said Musk.

“I think we are about 8 to 10 years away from this being usable by people with no disability.” – Musk

In the same way that SpaceX was able to fund its research on reusable rockets by making deliveries to the ISS, or that Tesla was able to use profits from its early car sales to fund battery research, these earliest BMIs to treat diseases or disabilities will keep Neuralink afloat as it works on its truly mind-bending technologies.

As for when those technologies, the ones that allow healthy people to channel their inner telepaths, will arrive, Musk’s fairly optimistic timeline comes with several contingencies: “I think we are about 8 to 10 years away from this being usable by people with no disability…It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.”

…Because The Hurdles are Many

Those are just two of the hurdles Neuralink faces. Elon Musk might make innovation look easy, but even going to Mars seems relatively straightforward in comparison to his plans for his latest company.

First, there are the engineering hurdles to overcome. The company has to deal with the problems of biocompatibility, wirelessness, power, and — the big one — bandwidth. To date, we’ve never put more than roughly 200 electrodes in a person’s brain at one time. When talking about a world-changing interface, the Neuralink team told Urban they were thinking something like “one million simultaneously recorded neurons.” Not only would they need to find a way to ensure that the brain could effectively communicate with that many electrodes, they would also need to overcome the very practical problem of where to physically put them.

The engineering is only half the battle, though. Like Musk mentioned, regulatory approval will be a big factor in the development and adoption of Neuralink’s tech. The company also faces potential skepticism and even fear from a public that doesn’t want anyone cutting into their brains to install some high-tech machinery — according to a recent Pew survey, the public is even more worried about brain computer interfaces than gene editing. There’s also the not-entirely-unfounded fear that these computers could be hacked.

Add to all that our still very, very incomplete understanding of exactly how the brain ticks, and you can see that the Neuralink team has its work cut out for it.

Neuralink Won’t Exist in a Vacuum

Thankfully, they won’t be working to remake our minds alone — many other universities and research institutes are pushing brain interface technology forward. Facebook’s Building 8 is working on its own BCI, MIT is creating super-thin wires for use in brain implants, and other cyborg devices are already in the works to help the paralyzed walk again and the blind regain their sight. Each new development will push the field forward, and the team at Neuralink will be able to learn from the mistakes and successes of others in the field.

Just like other electric cars were on the road before Tesla came along, brain computer interfaces are not new — the tech might just need a visionary like Musk to elevate it (and us) to the next level.


Canadian province trials basic income for thousands of residents

January 05, 2018

Canada is testing a basic income to discover what impact the policy has on unemployed people and those on low incomes.

The province of Ontario is planning to give 4,000 citizens thousands of dollars a month and assess how it affects their health, wellbeing, earnings and productivity.

It is among a number of regions and countries across the globe that are now piloting the scheme, which sees residents given a certain amount of money each month regardless of whether or not they are in work.

Although it is too early for the Ontario pilot to deliver clear results, some of those involved have already reported a significant change.

One recipient, Tim Button, said the monthly payments were making a “huge difference” to his life. He worked as a security guard until a fall from a roof left him unable to work.

“It takes me out of depression”, he told the Associated Press. “I feel more sociable.”

The basic income payments have boosted his income by almost 60 per cent and have allowed him to make plans to visit his family for Christmas for the first time in years. He has also been able to buy healthier food, see a dentist and look into taking an educational course to help him find work.

Under the Ontario experiment, unemployed people or those with a low income can receive up to C$17,000 (£9,900) and are allowed to also keep half of what they earn at work, meaning there is still an incentive to work. Couples are entitled to C$24,000 (£13,400).
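The payment structure described above reduces the maximum annual benefit by fifty cents for every dollar earned, so recipients always keep half of their earnings. A minimal sketch of that formula, simplified from the article’s description (the actual pilot had additional rules, for example around disability):

```python
def ontario_pilot_payment(earned_income, couple=False):
    """Annual basic income payment under the pilot's 50% clawback:
    a maximum of C$17,000 (single) or C$24,000 (couple), reduced by
    half of earned income, never going below zero."""
    maximum = 24_000 if couple else 17_000
    return max(0.0, maximum - 0.5 * earned_income)

print(ontario_pilot_payment(0))        # no earnings: full benefit
print(ontario_pilot_payment(10_000))   # partial top-up
print(ontario_pilot_payment(40_000))   # earnings too high: no payment
```

The 50 percent clawback is what preserves the work incentive: earning an extra dollar always leaves the recipient 50 cents better off overall.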

If the trial proves successful, the scheme could be expanded to more of the province’s 14.2 million residents and may inspire more regions of Canada and other nations to adopt the policy.

Support for a basic income has grown in recent years, fuelled in part by fears about the impact that new technology will have on jobs. As machines and robots are able to complete an increasing number of tasks, attention has turned to how people will live when there are not enough jobs to go round.

Ontario’s Premier, Kathleen Wynne, said this was a major factor in the decision to trial a basic income in the province.

She said: “I see it on a daily basis. I go into a factory and the floor plant manager can tell me where there were 20 people and there is one machine. We need to understand what it might look like if there is, in fact, the labour disruption that some economists are predicting.”

Ontario officials have found that many people are reluctant to sign up to the scheme, fearing there is a catch or that they will be left without money once the pilot finishes.

Many of those who are receiving payments, however, say their lives have already been changed for the better.

Dave Cherkewski, 46, said the extra C$750 (£436) a month he receives has helped him to cope with the mental illness that has kept him out of work since 2002.

“I’ve never been better after 14 years of living in poverty,” he said.

He hopes to soon find work helping other people with mental health challenges.

He said: “With basic income I will be able to clarify my dream and actually make it a reality, because I can focus all my effort on that and not worry about, ‘Well, I need to pay my $520 rent, I need to pay my $50 cellphone, I need to eat and do other things’.”

Finland is also trialling a basic income, as are the state of Hawaii, Oakland in California and the Dutch city of Utrecht.


There’s a major long-term trend in the economy that isn’t getting enough attention

January 05, 2018

As the December Federal Reserve (Fed) meeting nears, discussions and speculation about the precise timing of Fed liftoff are certain to take center stage.

But while I’ve certainly weighed in on this debate many times, I believe it’s just one example of a topic that receives far too much attention from investors and market watchers alike.

The Fed has been abundantly clear that the forthcoming rate-hiking cycle, likely to begin this month, will be incredibly gradual and sensitive to how economic data evolves. That means the central bank is likely to be extraordinarily cautious about derailing the recovery, and rates will likely remain historically low for an extended period of time. In other words, when the Fed does begin rate normalization, not much is likely to change.

Shifting the Focus to Other Economic Trends

In contrast, there are a number of important longer-term trends more worthy of our focus, as they’re likely to have a bigger, longer-sustaining impact on markets than the Fed’s first rate move. One such market influence that I believe should be getting more attention: the advances in technology happening all around us, innovations already having a huge disruptive influence on the economy and markets. These three charts help explain why.

[Chart: pace of U.S. adoption of new technologies, from television to tablets and smartphones]

As the chart above shows, people in the U.S. today are adopting new technologies, including tablets and smartphones, at the swiftest pace we’ve seen since the advent of the television. However, while television arguably detracted from U.S. productivity, today’s advances in technology are generally geared toward greater efficiency at lower costs. Indeed, when you take into account technology’s downward influence on price, U.S. consumption and productivity figures look much better than headline numbers would suggest.

[Chart: percentage of the top 1,500 U.S. companies by market capitalization reporting effectively zero inventory, past 35 years]

Technology isn’t just transforming the consumer story. It’s having a similarly dramatic influence on industry, resulting in efficiency gains not reflected in traditional productivity measurements.

For instance, based on corporate capital expenditure data accessible via Bloomberg, it’s clear that U.S. investment is generally accelerating. However, the cost of that investment is going down, allowing companies to become dramatically more efficient in order to better compete. Similarly, with the help of new technologies, many corporations have refined inventory management practices, or have adopted business models that are purposefully asset-light, causing average inventory levels to decline over the past few decades. As the chart above shows, among the top 1500 U.S. stocks by market capitalization over the past 35 years, the percentage of companies reporting effectively zero inventory levels has increased to more than 20 percent from fewer than 5 percent, an extraordinary four-fold rise.

Above all, if there’s one common theme in all three of these charts, it’s this: Technology is advancing so fast that traditional economic metrics haven’t kept up. This has serious implications. It helps to explain widespread misconceptions about the state of the U.S. economy, including the assertion that we reside in a period of low productivity growth, despite the many remarkable advances we see around us. It also makes monetary policy evolution more difficult, and is one reason why I’ve found recent policy debates somewhat myopic and distorted from reality.

So, let’s all make this New Year’s resolution: Instead of focusing so much on the Fed, let’s give some attention to how technology is changing the entire world in ways never before witnessed, and let’s focus on education and training policies that can help our workforce adapt. Such initiatives are more important and durable, and should have fewer unintended negative economic consequences, than policies designed to distort the real rates of interest.


Eugenics 2.0: We’re at the Dawn of Choosing Embryos by Health, Height, and More

November 18, 2017

Nathan Treff was diagnosed with type 1 diabetes at 24. It’s a disease that runs in families, but it has complex causes. More than one gene is involved. And the environment plays a role too.

So you don’t know who will get it. Treff’s grandfather had it, and lost a leg. But Treff’s three young kids are fine, so far. He’s crossing his fingers they won’t develop it later.

Now Treff, an in vitro fertilization specialist, is working on a radical way to change the odds. Using a combination of computer models and DNA tests, the startup company he’s working with, Genomic Prediction, thinks it has a way of predicting which IVF embryos in a laboratory dish would be most likely to develop type 1 diabetes or other complex diseases. Armed with such statistical scorecards, doctors and parents could huddle and choose to avoid embryos with failing grades.

IVF clinics already test the DNA of embryos to spot rare diseases, like cystic fibrosis, caused by defects in a single gene. But these “preimplantation” tests are poised for a dramatic leap forward as it becomes possible to peer more deeply at an embryo’s genome and create broad statistical forecasts about the person it would become.

The advance is occurring, say scientists, thanks to a growing flood of genetic data collected from large population studies. As statistical models known as predictors gobble up DNA and health information about hundreds of thousands of people, they’re getting more accurate at spotting the genetic patterns that foreshadow disease risk. But they have a controversial side, since the same techniques can be used to project the eventual height, weight, skin tone, and even intelligence of an IVF embryo.

In addition to Treff, who is the company’s chief scientific officer, the founders of Genomic Prediction are Stephen Hsu, a physicist who is vice president for research at Michigan State University, and Laurent Tellier, a Danish bioinformatician who is CEO. Both Hsu and Tellier have been closely involved with a project in China that aims to sequence the genomes of mathematical geniuses, hoping to shed light on the genetic basis of IQ.

Spotting outliers

The company’s plans rely on a tidal wave of new knowledge showing how small genetic differences can add up to put one person, but not another, at high odds for diabetes, a neurotic personality, or a taller or shorter height. Already, such “polygenic risk scores” are used in direct-to-consumer gene tests, such as reports from 23andMe that tell customers their genetic chance of being overweight.
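The idea behind a polygenic risk score is simple even if the models behind it are not: each variant’s small contribution is weighted and summed. A toy sketch of that sum follows; the variant IDs, weights and genotypes here are invented for illustration, and real scores draw on thousands to millions of variants with weights estimated from large population studies.

```python
# Hypothetical SNP -> effect-size weight, as a predictor model might supply.
effect_weights = {
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.30,
}

def polygenic_score(genotype):
    """Weighted sum over variants, where genotype maps each SNP to a
    risk-allele count in {0, 1, 2}."""
    return sum(effect_weights[snp] * count for snp, count in genotype.items())

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(person))  # weighted sum of this person's allele counts
```

An embryo (or adult) whose score lands far out on the resulting distribution is the kind of statistical “outlier” the company says its reports would flag.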

For adults, risk scores are little more than a novelty or a source of health advice they can ignore. But if the same information is generated about an embryo, it could lead to existential consequences: who will be born, and who stays in a laboratory freezer.

“I remind my partners, ‘You know, if my parents had this test, I wouldn’t be here,’” says Treff, a prize-winning expert on diagnostic technology who is the author of more than 90 scientific papers.

Genomic Prediction was founded this year and has raised funds from venture capitalists in Silicon Valley, though it declines to say who they are. Tellier, whose inspiration is the science fiction film Gattaca, says the company plans to offer reports to IVF doctors and parents identifying “outliers”—those embryos whose genetic scores put them at the wrong end of a statistical curve for disorders such as diabetes, late-life osteoporosis, schizophrenia, and dwarfism, depending on whether models for those problems prove accurate.

A days-old human embryo in an IVF clinic. Some cells can be removed to perform DNA tests.

The company’s concept, which it calls expanded preimplantation genetic testing, or ePGT, would effectively add a range of common disease risks to the menu of rare ones already available, which it also plans to test for. Its promotional material uses a picture of a mostly submerged iceberg to get the idea across. “We believe it will become a standard part of the IVF process,” says Tellier, just as a test for Down syndrome is a standard part of pregnancy.

Some experts contacted by MIT Technology Review said they believed it’s premature to introduce polygenic scoring technology into IVF clinics—though perhaps not by very much. Matthew Rabinowitz, CEO of the prenatal-testing company Natera, based in California, says he thinks predictions obtained today could be “largely misleading” because DNA models don’t function well enough. But Rabinowitz agrees that the technology is coming.

“You are not going to stop the modeling in genetics, and you are not going to stop people from accessing it,” he says. “It’s going to get better and better.”

Sharp questions

Testing embryos for disease risks, including risks for diseases that develop only late in life, is considered ethically acceptable by U.S. fertility doctors. But the new DNA scoring models mean parents might be able to choose their kids on the basis of traits like IQ or adult weight. That’s because, just like type 1 diabetes, these traits are the result of complex genetic influences the predictor algorithms are designed to find.

“It’s the camel’s nose under the tent. Because if you are doing it for something more serious, then it’s trivially easy to look for anything else,” says Michelle Meyer, a bioethicist at the Geisinger Health System who analyzes issues in reproductive genetics. “Here is the genomic dossier on each embryo. And you flip through the book.” Imagine picking the embryo most likely to get into Harvard like Mom, or to be tall like Dad.

For Genomic Prediction, a tiny startup based out of a tech incubator in New Jersey, such questions will be especially sharply drawn. That is because of Hsu’s long-standing interest in genetic selection for superior intelligence.

In 2014, Hsu authored an essay titled “Super-Intelligent Humans Are Coming,” in which he argued that selecting embryos for intelligence could boost the resulting child’s IQ by 15 points.

Genomic Prediction says it will only report diseases—that is, identify those embryos it thinks would develop into people with serious medical problems. Even so, on his blog and in public statements, Hsu has for years been developing a vision that goes far beyond that.

“Suppose I could tell you embryo four is going to be the tallest, embryo three is going to be the smartest, embryo two is going to be very antisocial. Suppose that level of granularity was available in the reports,” he told the conservative radio and YouTube personality Stefan Molyneux this spring. “That is the near-term future that we as a civilization face. This is going to be here.”

Measuring height

The fuel for the predictive models is a deluge of new data, most recently genetic readouts and medical records for 500,000 middle-aged Britons that were released in July by the U.K. Biobank, a national precision-medicine project in that country.

The data trove included, for each volunteer, a map of about 800,000 single-nucleotide polymorphisms, or SNPs—points where their DNA differs slightly from another person’s. The release caused a pell-mell rush by geneticists to update their calculations about exactly how much of human disease, or even routine behaviors like bread consumption, these genetic differences could explain.

Armed with the U.K. data, Hsu and Tellier claimed a breakthrough. For one easily measured trait, height, they used machine-learning techniques to create a strikingly accurate predictor. They reported that the model could, for the most part, predict people’s height from their DNA data to within three or four centimeters.
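The paper doesn’t spell out the pipeline here, but the general idea behind such predictors can be sketched as a linear model fit over SNP genotypes. The sketch below uses simulated data and ordinary least squares rather than the authors’ actual method or dataset; sample sizes, effect sizes, and the noise level are all made up for illustration.

```python
# Toy sketch (not the authors' actual pipeline): height prediction as a
# linear model over SNP genotypes, trained on simulated data.
import numpy as np

rng = np.random.default_rng(0)

n_people, n_snps, n_causal = 2000, 500, 50

# Genotypes: 0, 1, or 2 copies of the minor allele at each SNP.
genotypes = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)

# Only a subset of SNPs truly affect height, each with a small effect.
true_effects = np.zeros(n_snps)
true_effects[:n_causal] = rng.normal(0, 0.5, n_causal)

# Height (cm) = baseline + genetic component + environmental noise.
height = 170 + genotypes @ true_effects + rng.normal(0, 2, n_people)

# Estimate per-SNP effect sizes on a training split, then predict
# held-out people's heights from genotype alone.
train, test = slice(0, 1500), slice(1500, None)
coef, *_ = np.linalg.lstsq(genotypes[train], height[train] - 170, rcond=None)

predicted = 170 + genotypes[test] @ coef
error = np.abs(predicted - height[test]).mean()
print(f"mean absolute error: {error:.1f} cm")
```

Even in this toy setup, the error floor is set by the environmental noise: no genetic predictor, however well trained, can see past the non-genetic contribution to the trait.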

Height is currently the easiest trait to predict. It’s determined mostly by genes, and it’s always recorded in population databases. But Tellier says genetic databases are “rapidly approaching” the size needed to make accurate predictions about other human features, including risk for diseases whose true causes aren’t even known.

Tellier says Genomic Prediction will zero in on disease traits for which the predictors already perform fairly well, or will soon. Those include autoimmune disorders like the illness Treff suffers from. In those conditions, a smaller set of genes dominates the predictions, sometimes making them more reliable.

A report from Germany in 2014, for instance, found it was possible to distinguish fairly accurately, from a polygenic DNA score alone, between a person with type 1 diabetes and a person without it. While the scores aren’t perfectly accurate, consider how they might influence a prospective parent. On average, children of a man with type 1 diabetes have a one in 17 chance of developing the ailment. Picking the best of several embryos made in an IVF clinic, even with an error-prone predictor, could lower the odds.
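That last claim is an arithmetic one, and a toy simulation makes it concrete. The numbers below (five embryos, a particular noise level, a liability-threshold model of disease) are illustrative assumptions, not figures from the German study; the point is only that even an error-prone ranking shifts the odds.

```python
# Toy illustration (numbers are illustrative, not from the study cited):
# picking the lowest-scoring of several embryos with a noisy risk
# predictor lowers the expected disease rate below the 1-in-17 baseline.
import random
from statistics import NormalDist

random.seed(1)

BASE_RISK = 1 / 17   # baseline chance the child is affected
N_EMBRYOS = 5        # embryos scored per IVF cycle
NOISE = 1.0          # predictor noise, relative to the liability's spread
TRIALS = 20000

# Liability-threshold model: disease occurs when a latent N(0, 1)
# "liability" exceeds the threshold corresponding to a 1-in-17 risk.
THRESHOLD = NormalDist().inv_cdf(1 - BASE_RISK)

def affected_rate(select):
    affected = 0
    for _ in range(TRIALS):
        liabilities = [random.gauss(0, 1) for _ in range(N_EMBRYOS)]
        if select:
            # The predictor sees liability plus noise; pick the lowest score.
            scores = [l + random.gauss(0, NOISE) for l in liabilities]
            chosen = liabilities[scores.index(min(scores))]
        else:
            chosen = liabilities[0]  # no selection: implant any embryo
        if chosen > THRESHOLD:
            affected += 1
    return affected / TRIALS

baseline = affected_rate(select=False)
selected = affected_rate(select=True)
print(f"without selection: {baseline:.3f}")
print(f"with selection:    {selected:.3f}")
```

Because only the rank order of the scores matters for choosing an embryo, the predictor can be quite noisy and still beat picking at random.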

In the case of height, Genomic Prediction hopes to use the model to help identify embryos that would grow into adults shorter than 4’10”, the medical definition of dwarfism, says Tellier. There are many physical and psychological disadvantages to being so short. Eventually the company could also have the ability to identify intellectual problems, such as embryos with a predicted IQ of less than 70.

The company doesn’t intend to give out raw trait scores to parents, only to flag embryos likely to be abnormal. That is because the product has to be “ethically defensible,” says Hsu: “We would only reveal the negative outlier state. We don’t report, ‘This guy is going to be in the NBA.’”

Some scientists doubt the scores will prove useful at picking better people from IVF dishes. Even if they’re accurate on the average, for individuals there’s no guarantee of pinpoint precision. What’s more, environment has as big an impact on most traits as genes do. “There is a high probability that you will get it wrong—that would be my concern,” says Manuel Rivas, a professor at Stanford University who studies the genetics of Crohn’s disease. “If someone is using that information to make decisions about embryos, I don’t know what to make of it.”

Efforts to introduce this type of statistical scoring into reproduction have, in the past, drawn criticism. In 2013, 23andMe provoked outrage when it won a patent on the idea of drop-down menus parents could use to pick sperm or egg donors—say, to try to get a specific eye color. The company, funded by Google, quickly backpedaled.

But since then, polygenic scores have become a routine aspect of novelty DNA tests. A company called HumanCode sells a $199 test online that uses SNP scores to tell two people about how tall their kids might be. In the dairy cattle industry, polygenic tests are widely used to rate young animals for how much milk they’ll produce.

“At a broad level, our understanding of complex traits has evolved. It’s not that there are a few genes contributing to complex traits; it’s tens, or thousands, or even all genes,” says Meyer, the Geisinger bioethicist. “That has led to polygenic risk scores. It’s many variants, each with small contributions of their own, but which have a significant contribution together. You add them up.” In his predictor for height, Hsu eventually made use of 20,000 variants to guess how tall each person was.
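“You add them up” is meant literally: a polygenic score is a weighted sum of allele counts. A minimal sketch of the arithmetic follows; the SNP IDs and effect sizes are invented for illustration and do not come from any real genome-wide study.

```python
# A polygenic score: for each SNP, multiply the number of risk alleles a
# person carries (0, 1, or 2) by that SNP's effect size, then add the
# products. SNP IDs and weights below are hypothetical, not real results.

# Effect size per risk allele (hypothetical)
effect_sizes = {
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.30,
    "rs0000004": 0.02,
}

# One person's allele counts at those SNPs
genotype = {
    "rs0000001": 2,   # two copies of the risk allele
    "rs0000002": 1,
    "rs0000003": 0,
    "rs0000004": 1,
}

def polygenic_score(genotype, effect_sizes):
    """Sum of (allele count x effect size) over all scored SNPs."""
    return sum(genotype[snp] * beta for snp, beta in effect_sizes.items())

score = polygenic_score(genotype, effect_sizes)
print(f"polygenic score: {score:.2f}")  # prints: polygenic score: 0.21
```

A production predictor like Hsu’s height model does the same sum over tens of thousands of variants, with effect sizes estimated from biobank-scale data rather than written by hand.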

Measuring embryos

Around the world, a million couples undergo IVF each year; in the U.S., test-tube babies account for 1 percent of births. Preimplantation genetic diagnosis, or PGD, has been part of the technology since the 1990s. In that procedure, a few cells are plucked from a days-old embryo growing in a laboratory so they can be tested.

Until now, doctors have used PGD to detect embryos with major abnormalities, such as missing chromosomes, as well as those with “single gene” defects. Parents who carry the defective gene that causes Huntington’s disease, for instance, can use embryo tests to avoid having a child with the fatal brain ailment.

The obstacle to polygenic tests has been that with so few cells, it’s been difficult to get the broad, accurate view of an embryo’s genome necessary to perform the needed calculations. “It’s very hard to make reliable measurements on that little DNA,” says Rabinowitz, the Natera CEO.

Tellier says Genomic Prediction has developed an improved method for analyzing embryonic DNA, which he says will first be used to improve on traditional PGD, combining many single-gene tests into one. Tellier says the same technique is what will permit it to collect polygenic scores on embryos, although the company did not describe the method in detail. But other scientists have already demonstrated ways to overcome the accuracy barrier.

In 2015, a team led by Rabinowitz and Jay Shendure of the University of Washington did it by sequencing in detail the genomes of two parents undergoing IVF. That let them infer the embryo’s genome sequence, even though the embryo test itself was no more accurate than before. When the babies were born, they found they’d been right.

“We do have the technology to reconstruct the genome of an embryo and create a polygenic model,” says Rabinowitz, whose publicly traded company is worth about $600 million, and who says he has been mulling whether to enter the embryo-scoring business. “The problem is that the models have not quite been ready for prime time.”

That’s because despite Hsu’s success with height, the scoring algorithms have significant limitations. One is that they’re built using data mostly from Northern Europeans. That means they may not be useful for people from Asia or Africa, where the pattern of SNPs is different, or for people of mixed ancestry. Even their performance for specific families of European background can’t be taken for granted unless the procedure is carefully tested in a clinical study, something that’s never been done, says Akash Kumar, a Stanford resident physician who was lead author of the Natera study.

Kumar, who treats young patients with rare disorders, says the genetic predictors raise some “big issues.” One is that the sheer amount of genetic data becoming available could make it temptingly easy to assess nonmedical traits. “We’ve seen such a crazy change in the number of people we are able to study,” he says. “Not many have schizophrenia, but they all have a height and a body-mass index. So the number of people you can use to build the trait models is much larger. It’s a very unique place to be, thinking what we should do with this technology.”

Smarter kids

This week, Genomic Prediction manned a booth at the annual meeting of the American Society for Reproductive Medicine. That organization, which represents fertility doctors and scientists, has previously said it thinks testing embryos for late-life conditions, like Alzheimer’s, would be “ethically justified.” It cited, among other reasons, the “reproductive liberty” of parents.

The society has been more ambivalent about choosing the sex of embryos (something that conventional PGD allows), leaving it to the discretion of doctors. Combined, the society’s positions seem to open the door to any kind of measurement, perhaps so long as the test is justified for a medical reason.

Hsu has previously said he thinks intelligence is “the most interesting phenotype,” or trait, of all. But when he tried his predictor to see what it could say about how far along in school the 500,000 British subjects from the U.K. Biobank had gotten (years of schooling is a proxy for IQ), he found that DNA couldn’t predict it nearly as well as it could predict height.

Yet DNA did explain some of the difference. Daniel Benjamin, a geno-economist at the University of Southern California, says that for large populations, gene scores are already as predictive of educational attainment as whether someone grew up in a rich or poor family. He adds that the accuracy of the scores has been steadily improving. Scoring embryos for high IQ, however, would be “premature” and “ethically contentious,” he says.

Hsu’s prediction is that “billionaires and Silicon Valley types” will be the early adopters of embryo selection technology, becoming among the first “to do IVF even though they don’t need IVF.” As they start producing fewer unhealthy children, and more exceptional ones, the rest of society could follow suit.

“I fully predict it will be possible,” says Hsu of selecting embryos with higher IQ scores. “But we’ve said that we as a company are not going to do it. It’s a difficult issue, like nuclear weapons or gene editing. There will be some future debate over whether this should be legal, or made illegal. Countries will have referendums on it.”
