Scientists at Wake Forest Baptist Medical Center and the University of Southern California (USC) have demonstrated the successful implementation of a prosthetic system that uses a person’s own memory patterns to facilitate the brain’s ability to encode and recall memory.
In the pilot study, published today in the Journal of Neural Engineering, participants’ short-term memory performance showed a 35 to 37 percent improvement over baseline measurements.
“This is the first time scientists have been able to identify a patient’s own brain cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better, an important first step in potentially restoring memory loss,” said the study’s lead author Robert Hampson, Ph.D., professor of physiology/pharmacology and neurology at Wake Forest Baptist.
The study focused on improving episodic memory, which is the most common type of memory loss in people with Alzheimer’s disease, stroke and head injury. Episodic memory is information that is new and useful for a short period of time, such as where you parked your car on any given day. Reference memory is information that is held and used for a long time, such as what is learned in school.
The researchers enrolled epilepsy patients at Wake Forest Baptist who were participating in a diagnostic brain-mapping procedure that used surgically implanted electrodes placed in various parts of the brain to pinpoint the origin of the patients’ seizures. Using the team’s electronic prosthetic system, based on a multi-input multi-output (MIMO) nonlinear mathematical model, the researchers influenced the firing patterns of multiple neurons in the hippocampus, a part of the brain involved in making new memories, in eight of those patients.
First, they recorded the neural patterns, or ‘codes,’ while the study participants were performing a computerized memory task. The patients were shown a simple image, such as a color block, and after a brief delay during which the screen was blank, were asked to identify the initial image out of four or five shown on the screen.
The USC team led by biomedical engineers Theodore Berger, Ph.D., and Dong Song, Ph.D., analyzed the recordings from the correct responses and synthesized a MIMO-based code for correct memory performance. The Wake Forest Baptist team played back that code to the patients while they performed the image recall task. In this test, the patients’ episodic memory performance showed a 37 percent improvement over baseline.
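The MIMO framework described above maps recorded “input” spiking to “output” spiking within the hippocampus, using each neuron’s recent firing history. The toy sketch below illustrates only that input-history-to-output structure on synthetic data; the neuron counts, lag length, and the simple least-squares fit are all assumptions for illustration, not the study’s actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spike data standing in for recorded hippocampal activity.
n_samples, n_in, n_out, n_lags = 500, 4, 2, 3
X = rng.binomial(1, 0.2, size=(n_samples, n_in))   # "input" spike trains

def lagged_design(X, n_lags):
    """Stack the last n_lags time steps of input into one row per time step."""
    return np.array([X[t - n_lags:t].ravel() for t in range(n_lags, len(X))])

D = lagged_design(X, n_lags)                       # (n_samples - n_lags, n_in * n_lags)

# Synthetic "ground truth": output spiking is a nonlinear (logistic)
# function of recent input history.
true_W = rng.normal(size=(D.shape[1], n_out))
p = 1.0 / (1.0 + np.exp(-(D @ true_W - 1.0)))
Y = rng.binomial(1, p)                             # "output" spike trains

# Fit a linear approximation by least squares, then threshold its prediction.
W, *_ = np.linalg.lstsq(D, Y, rcond=None)
accuracy = (((D @ W) > 0.5) == Y).mean()
print(f"toy decoding accuracy: {accuracy:.2f}")
```

The study’s actual MIMO model is far more sophisticated (nonlinear, fit to many simultaneously recorded neurons); this sketch only conveys the general idea of predicting one population’s firing from another’s recent history.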
In a second test, participants were shown a highly distinctive photographic image, followed by a short delay, and asked to identify the first photo out of four or five others on the screen. The memory trials were repeated with different images while the neural patterns were recorded during the testing process to identify and deliver correct-answer codes.
After another, longer delay, Hampson’s team showed the participants sets of three pictures at a time, each set mixing original and new photos, and asked the patients to identify the original photos, which had been seen up to 75 minutes earlier. When stimulated with the correct-answer codes, study participants showed a 35 percent improvement in memory over baseline.
“We showed that we could tap into a patient’s own memory content, reinforce it and feed it back to the patient,” Hampson said. “Even when a person’s memory is impaired, it is possible to identify the neural firing patterns that indicate correct memory formation and separate them from the patterns that are incorrect. We can then feed in the correct patterns to assist the patient’s brain in accurately forming new memories, not as a replacement for innate memory function, but as a boost to it.
“To date we’ve been trying to determine whether we can improve the memory skill people still have. In the future, we hope to be able to help people hold onto specific memories, such as where they live or what their grandkids look like, when their overall memory begins to fail.”
The current study is built on more than 20 years of preclinical research on memory codes led by Sam Deadwyler, Ph.D., professor of physiology and pharmacology at Wake Forest Baptist, along with Hampson, Berger and Song. The preclinical work applied the same type of stimulation to restore and facilitate memory in animal models using the MIMO system, which was developed at USC.
The research was funded by the U.S. Defense Advanced Research Projects Agency (DARPA).
Injecting minute amounts of two immune-stimulating agents directly into solid tumors in mice can eliminate all traces of cancer in the animals, including distant, untreated metastases, according to a study by researchers at the Stanford University School of Medicine.
The approach works for many different types of cancers, including those that arise spontaneously, the study found.
The researchers believe the local application of very small amounts of the agents could serve as a rapid and relatively inexpensive cancer therapy that is unlikely to cause the adverse side effects often seen with bodywide immune stimulation.
“When we use these two agents together, we see the elimination of tumors all over the body,” said Ronald Levy, MD, professor of oncology. “This approach bypasses the need to identify tumor-specific immune targets and doesn’t require wholesale activation of the immune system or customization of a patient’s immune cells.”
One agent is already approved for use in humans; the other has been tested for human use in several unrelated clinical trials. A clinical trial was launched in January to test the effect of the treatment in patients with lymphoma.
Levy, who holds the Robert K. and Helen K. Summy Professorship in the School of Medicine, is the senior author of the study, which was published Jan. 31 in Science Translational Medicine. Instructor of medicine Idit Sagiv-Barfi, PhD, is the lead author.
‘Amazing, bodywide effects’
Levy is a pioneer in the field of cancer immunotherapy, in which researchers try to harness the immune system to combat cancer. Research in his laboratory led to the development of rituximab, one of the first monoclonal antibodies approved for use as an anticancer treatment in humans.
Some immunotherapy approaches rely on stimulating the immune system throughout the body. Others target naturally occurring checkpoints that limit the anti-cancer activity of immune cells. Still others, like the CAR T-cell therapy recently approved to treat some types of leukemia and lymphomas, require a patient’s immune cells to be removed from the body and genetically engineered to attack the tumor cells. Many of these approaches have been successful, but they each have downsides — from difficult-to-handle side effects to high-cost and lengthy preparation or treatment times.
“All of these immunotherapy advances are changing medical practice,” Levy said. “Our approach uses a one-time application of very small amounts of two agents to stimulate the immune cells only within the tumor itself. In the mice, we saw amazing, bodywide effects, including the elimination of tumors all over the animal.”
Cancers often exist in a strange kind of limbo with regard to the immune system. Immune cells like T cells recognize the abnormal proteins often present on cancer cells and infiltrate to attack the tumor. However, as the tumor grows, it often devises ways to suppress the activity of the T cells.
Levy’s method works to reactivate the cancer-specific T cells by injecting microgram amounts of two agents directly into the tumor site. (A microgram is one-millionth of a gram). One, a short stretch of DNA called a CpG oligonucleotide, works with other nearby immune cells to amplify the expression of an activating receptor called OX40 on the surface of the T cells. The other, an antibody that binds to OX40, activates the T cells to lead the charge against the cancer cells. Because the two agents are injected directly into the tumor, only T cells that have infiltrated it are activated. In effect, these T cells are “prescreened” by the body to recognize only cancer-specific proteins.
Some of these tumor-specific, activated T cells then leave the original tumor to find and destroy other identical tumors throughout the body.
The approach worked startlingly well in laboratory mice with transplanted mouse lymphoma tumors in two sites on their bodies. Injecting one tumor site with the two agents caused the regression not just of the treated tumor, but also of the second, untreated tumor. In this way, 87 of 90 mice were cured of the cancer. Although the cancer recurred in three of the mice, the tumors again regressed after a second treatment. The researchers saw similar results in mice bearing breast, colon and melanoma tumors.
Mice genetically engineered to spontaneously develop breast cancers in all 10 of their mammary pads also responded to the treatment. Treating the first tumor that arose often prevented the occurrence of future tumors and significantly increased the animals’ life span, the researchers found.
Finally, Sagiv-Barfi explored the specificity of the T cells by transplanting two types of tumors into the mice. She transplanted the same lymphoma cancer cells in two locations, and she transplanted a colon cancer cell line in a third location. Treatment of one of the lymphoma sites caused the regression of both lymphoma tumors but did not affect the growth of the colon cancer cells.
“This is a very targeted approach,” Levy said. “Only the tumor that shares the protein targets displayed by the treated site is affected. We’re attacking specific targets without having to identify exactly what proteins the T cells are recognizing.”
The current clinical trial is expected to recruit about 15 patients with low-grade lymphoma. If successful, Levy believes the treatment could be useful for many tumor types. He envisions a future in which clinicians inject the two agents into solid tumors in humans prior to surgical removal of the cancer as a way to prevent recurrence due to unidentified metastases or lingering cancer cells, or even to head off the development of future tumors that arise due to mutations in genes such as BRCA1 and BRCA2.
“I don’t think there’s a limit to the type of tumor we could potentially treat, as long as it has been infiltrated by the immune system,” Levy said.
The work is an example of Stanford Medicine’s focus on precision health, the goal of which is to anticipate and prevent disease in the healthy and precisely diagnose and treat disease in the ill.
The study’s other Stanford co-authors are senior research assistant and lab manager Debra Czerwinski; professor of medicine Shoshana Levy, PhD; postdoctoral scholar Israt Alam, PhD; graduate student Aaron Mayer; and professor of radiology Sanjiv Gambhir, MD, PhD.
Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
Last January, the Chinese Academy of Sciences invited Liu Cixin, China’s preeminent science-fiction writer, to visit its new state-of-the-art radio dish in the country’s southwest. Almost twice as wide as the dish at America’s Arecibo Observatory, in the Puerto Rican jungle, the new Chinese dish is the largest in the world, if not the universe. Though it is sensitive enough to detect spy satellites even when they’re not broadcasting, its main uses will be scientific, including an unusual one: The dish is Earth’s first flagship observatory custom-built to listen for a message from an extraterrestrial intelligence. If such a sign comes down from the heavens during the next decade, China may well hear it first.
In some ways, it’s no surprise that Liu was invited to see the dish. He has an outsize voice on cosmic affairs in China, and the government’s aerospace agency sometimes asks him to consult on science missions. Liu is the patriarch of the country’s science-fiction scene. Other Chinese writers I met attached the honorific Da, meaning “Big,” to his surname. In years past, the academy’s engineers sent Liu illustrated updates on the dish’s construction, along with notes saying how he’d inspired their work.
But in other ways Liu is a strange choice to visit the dish. He has written a great deal about the risks of first contact. He has warned that the “appearance of this Other” might be imminent, and that it might result in our extinction. “Perhaps in ten thousand years, the starry sky that humankind gazes upon will remain empty and silent,” he writes in the postscript to one of his books. “But perhaps tomorrow we’ll wake up and find an alien spaceship the size of the Moon parked in orbit.”
In recent years, Liu has joined the ranks of the global literati. In 2015, his novel The Three-Body Problem became the first work in translation to win the Hugo Award, science fiction’s most prestigious prize. Barack Obama told The New York Times that the book—the first in a trilogy—gave him cosmic perspective during the frenzy of his presidency. Liu told me that Obama’s staff asked him for an advance copy of the third volume.
At the end of the second volume, one of the main characters lays out the trilogy’s animating philosophy. No civilization should ever announce its presence to the cosmos, he says. Any other civilization that learns of its existence will perceive it as a threat to expand—as all civilizations do, eliminating their competitors until they encounter one with superior technology and are themselves eliminated. This grim cosmic outlook is called “dark-forest theory,” because it conceives of every civilization in the universe as a hunter hiding in a moonless woodland, listening for the first rustlings of a rival.
Liu’s trilogy begins in the late 1960s, during Mao’s Cultural Revolution, when a young Chinese woman sends a message to a nearby star system. The civilization that receives it embarks on a centuries-long mission to invade Earth, but she doesn’t care; the Red Guard’s grisly excesses have convinced her that humans no longer deserve to survive.
En route to our planet, the extraterrestrial civilization disrupts our particle accelerators to prevent us from making advancements in the physics of warfare, such as the one that brought the atomic bomb into being less than a century after the invention of the repeating rifle.
Science fiction is sometimes described as a literature of the future, but historical allegory is one of its dominant modes. Isaac Asimov based his Foundation series on classical Rome, and Frank Herbert’s Dune borrows plot points from the past of the Bedouin Arabs. Liu is reluctant to make connections between his books and the real world, but he did tell me that his work is influenced by the history of Earth’s civilizations, “especially the encounters between more technologically advanced civilizations and the original settlers of a place.” One such encounter occurred during the 19th century, when the “Middle Kingdom” of China, around which all of Asia had once revolved, looked out to sea and saw the ships of Europe’s seafaring empires, whose ensuing invasion triggered a loss in status for China comparable to the fall of Rome.
This past summer, I traveled to China to visit its new observatory, but first I met up with Liu in Beijing. By way of small talk, I asked him about the film adaptation of The Three-Body Problem. “People here want it to be China’s Star Wars,” he said, looking pained. The pricey shoot ended in mid-2015, but the film is still in postproduction. At one point, the entire special-effects team was replaced. “When it comes to making science-fiction movies, our system is not mature,” Liu said.
I had come to interview Liu in his capacity as China’s foremost philosopher of first contact, but I also wanted to know what to expect when I visited the new dish. After a translator relayed my question, Liu stopped smoking and smiled.
“It looks like something out of science fiction,” he said.
A week later, I rode a bullet train out of Shanghai, leaving behind its purple Blade Runner glow, its hip cafés and craft-beer bars. Rocketing along an elevated track, I watched high-rises blur by, each a tiny honeycomb piece of the rail-linked urban megastructure that has recently erupted out of China’s landscape. China poured more concrete from 2011 to 2013 than America did during the entire 20th century. The country has already built rail lines in Africa, and it hopes to fire bullet trains into Europe and North America, the latter by way of a tunnel under the Bering Sea.
The skyscrapers and cranes dwindled as the train moved farther inland. Out in the emerald rice fields, among the low-hanging mists, it was easy to imagine ancient China—the China whose written language was adopted across much of Asia; the China that introduced metal coins, paper money, and gunpowder into human life; the China that built the river-taming system that still irrigates the country’s terraced hills. Those hills grew steeper as we went west, stair-stepping higher and higher, until I had to lean up against the window to see their peaks. Every so often, a Hans Zimmer bass note would sound, and the glass pane would fill up with the smooth, spaceship-white side of another train, whooshing by in the opposite direction at almost 200 miles an hour.
It was mid-afternoon when we glided into a sparkling, cavernous terminal in Guiyang, the capital of Guizhou, one of China’s poorest, most remote provinces. A government-imposed social transformation appeared to be under way. Signs implored people not to spit indoors. Loudspeakers nagged passengers to “keep an atmosphere of good manners.” When an older man cut in the cab line, a security guard dressed him down in front of a crowd of hundreds.
The next morning, I went down to my hotel lobby to meet the driver I’d hired to take me to the observatory. Two hours into what was supposed to be a four-hour drive, he pulled over in the rain and waded 30 yards into a field where an older woman was harvesting rice, to ask for directions to a radio observatory more than 100 miles away. After much frustrated gesturing by both parties, she pointed the way with her scythe.
We set off again, making our way through a string of small villages, beep-beeping motorbike riders and pedestrians out of our way. Some of the buildings along the road were centuries old, with upturned eaves; others were freshly built, their residents having been relocated by the state to clear ground for the new observatory. A group of the displaced villagers had complained about their new housing, attracting bad press—a rarity for a government project in China. Western reporters took notice. “China Telescope to Displace 9,000 Villagers in Hunt for Extraterrestrials,” read a headline in The New York Times.
The search for extraterrestrial intelligence (SETI) is often derided as a kind of religious mysticism, even within the scientific community. Nearly a quarter century ago, the United States Congress defunded America’s SETI program with a budget amendment proposed by Senator Richard Bryan of Nevada, who said he hoped it would “be the end of Martian-hunting season at the taxpayer’s expense.” That’s one reason it is China, and not the United States, that has built the first world-class radio observatory with SETI as a core scientific goal.
SETI does share some traits with religion. It is motivated by deep human desires for connection and transcendence. It concerns itself with questions about human origins, about the raw creative power of nature, and about our future in this universe—and it does all this at a time when traditional religions have become unpersuasive to many. Why these aspects of SETI should count against it is unclear. Nor is it clear why Congress should find SETI unworthy of funding, given that the government has previously been happy to spend hundreds of millions of taxpayer dollars on ambitious searches for phenomena whose existence was still in question. The expensive, decades-long missions that found black holes and gravitational waves both commenced when their targets were mere speculative possibilities. That intelligent life can evolve on a planet is not a speculative possibility, as Darwin demonstrated. Indeed, SETI might be the most intriguing scientific project suggested by Darwinism.
Even without federal funding in the United States, SETI is now in the midst of a global renaissance. Today’s telescopes have brought the distant stars nearer, and in their orbits we can see planets. The next generation of observatories is now clicking on, and with them we will zoom into these planets’ atmospheres. SETI researchers have been preparing for this moment. In their exile, they have become philosophers of the future. They have tried to imagine what technologies an advanced civilization might use, and what imprints those technologies would make on the observable universe. They have figured out how to spot the chemical traces of artificial pollutants from afar. They know how to scan dense star fields for giant structures designed to shield planets from a supernova’s shock waves.
In 2015, the Russian billionaire Yuri Milner poured $100 million of his own cash into a new SETI program led by scientists at UC Berkeley.
The team performs more SETI observations in a single day than took place during entire years just a decade ago. In 2016, Milner sank another $100 million into an interstellar-probe mission. A beam from a giant laser array, to be built in the Chilean high desert, will wallop dozens of wafer-thin probes more than four light-years to the Alpha Centauri system, to get a closer look at its planets. Milner told me the probes’ cameras might be able to make out individual continents. The Alpha Centauri team modeled the radiation that such a beam would send out into space, and noticed striking similarities to the mysterious “fast radio bursts” that Earth’s astronomers keep detecting, which suggests the possibility that they are caused by similar giant beams, powering similar probes elsewhere in the cosmos.
Andrew Siemion, the leader of Milner’s SETI team, is actively looking into this possibility. He visited the Chinese dish while it was still under construction, to lay the groundwork for joint observations and to help welcome the Chinese team into a growing network of radio observatories that will cooperate on SETI research, including new facilities in Australia, New Zealand, and South Africa. When I joined Siemion for overnight SETI observations at a radio observatory in West Virginia last fall, he gushed about the Chinese dish. He said it was the world’s most sensitive telescope in the part of the radio spectrum that is “classically considered to be the most probable place for an extraterrestrial transmitter.”
More on: https://www.theatlantic.com/magazine/archive/2017/12/what-happens-if-china-makes-first-contact/544131/
2017 was a year filled with nostalgia thanks to a number of pop culture properties with ties to the past.
We got another official Alien film, and Blade Runner came back with new visuals to dazzle us. Meanwhile, “Stranger Things” hearkened back to the Spielbergian fantasy that wowed so many children of the ’80s, and “Twin Peaks” revived Agent Cooper so he could unravel yet another impenetrable mystery from the enigmatic mind of David Lynch.
As these films and TV shows remind us, a lot can change over the course of a few decades, and the experiences of one generation can be far different from those that follow closely behind thanks to advances in technology.
While the “Stranger Things” kids’ phone usage reminded 30-somethings of their own pre-mobile adolescences, children born in 2018 will probably never know the feeling of being tethered to a landline. A trip to the local megaplex to catch Blade Runner 2049 may have stirred up adults’ memories of seeing the original, but children born this year may never know what it’s like to watch a film on a smaller screen with a sound system that doesn’t rattle the brain.
Technology is currently advancing faster than ever before, so what else will kids born today only read about in books or, more likely, on computer screens? Here’s a list of the top 10 things that children born in 2018 will likely never experience.
Long, Boring Travel
Mobile devices and in-flight entertainment systems have made it pretty easy to stay distracted during the course of a long trip. However, aside from the Concorde, which was retired in 2003, humanity has done little to increase the speed of air travel for international jet-setters. Beyond sparsely utilized bullet trains, even the speed of our ground transportation has remained fairly limited.
However, recent developments in transportation will likely speed up the travel process, meaning today’s kids may never know the pain of seemingly endless flights and road trips.
Supersonic planes are making a comeback and could ferry passengers “across the pond” in as little as 3.5 hours. While these aircraft could certainly make travel faster for a small subset of travelers, physical and cost limitations will likely prevent them from reaching the mainstream.
However, hyperloop technology could certainly prove to be an affordable way for travelers to subtract travel time from their itineraries.
Already, these super-fast systems have reached test speeds of up to 387 km/h (240 mph). If proposed routes come to fruition, they could significantly cut the time of travel between major cities. For example, a trip from New York to Washington D.C. could take just 30 minutes as opposed to the current five hours.
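A quick back-of-the-envelope check shows why the 30-minute estimate depends on proposed cruise speeds well beyond what test pods have demonstrated so far. The distance (roughly 360 km between New York and Washington D.C.) and the roughly 1,000 km/h cruise speed are assumptions for illustration, not figures from this article:

```python
# Travel-time arithmetic for the New York–Washington D.C. example.
distance_km = 360          # approximate NYC–D.C. distance (assumption)
proposed_kmh = 1000        # commonly proposed hyperloop cruise speed (assumption)
test_speed_kmh = 387       # demonstrated test speed cited above

t_proposed = distance_km / proposed_kmh * 60    # minutes at proposed speed
t_test = distance_km / test_speed_kmh * 60      # minutes at demonstrated speed

print(f"at proposed cruise speed: ~{t_proposed:.0f} min")
print(f"at demonstrated test speed: ~{t_test:.0f} min")
```

Under these assumptions, the trip takes roughly 22 minutes at the proposed cruise speed but closer to an hour at the speed demonstrated so far.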
Obtaining a driver’s license is currently a rite of passage for teenagers as they make that transition from the end of childhood to the beginning of adulthood. By the time today’s newborns are 16, self-driving cars may have already put an end to this unofficial ritual by completely removing the need for human operators of motor vehicles.
According to the Centers for Disease Control and Prevention (CDC), an average of six teens between the ages of 16 and 19 died every day in 2015 from injuries sustained in motor vehicle collisions. Since the vast majority of accidents are caused by human error, removing the human from the equation could help to save the lives of people of all ages, so autonomous cars are a serious priority for many.
Elon Musk, CEO of Tesla, is confident that his electric and (currently) semi-autonomous car manufacturing company will produce fully autonomous vehicles within the next two years, and several ride-hailing services are already testing self-driving vehicles.
Biology’s Monopoly on Intelligence
Self-driving cars are just a single example of innovations made possible by the advancement of artificial intelligence (AI).
Today, we have AI systems that rival or even surpass human experts at specific tasks, such as playing chess or sorting recyclables. However, experts predict that conscious AI systems that rival human intelligence could be just decades away.
Companies such as Waverly Labs are also working on perfecting technology that will eventually rival the ability of the Babel fish, an alien species from “The Hitchhiker’s Guide to the Galaxy” that can instantly translate alien languages for its host.
Children born in 2018 may find themselves growing up in a world in which anyone can talk to anyone, and the idea of a “foreign” language will seem, well, completely foreign.
Humanity as a Single-Planet Species
Technology that improves human communication could radically impact our world, but eventually, we may need to find a way to communicate with extraterrestrial species. Granted, the worlds we reach in the lifetimes of anyone born this year aren’t likely to contain intelligent life, but the first milestones on the path to such a future are likely to be reached in the next few decades.
When he’s not ushering in the future of autonomous transportation, Musk is pushing his space exploration company SpaceX to develop the technology to put humans on Mars. He thinks he’ll be able to get a crew to the Red Planet by 2024, so today’s children may have no memory of a time before humanity’s cosmic footprint extended beyond a single planet.
Overpopulation is one of the factors that experts point to when they discuss the need for humanity to spread into the cosmos. Urban sprawl has been an issue on Earth for decades, bringing about continued deforestation and the elimination of farming space.
A less-discussed problem caused by the continuous spread of urbanization, however, is the increase in noise pollution.
Experts are concerned that noise is quickly becoming the next great public health crisis. United Nations data suggest that by 2100, 84 percent of the world’s projected 10.8 billion people will live in cities, surrounded by a smorgasbord of sound.
This decline in the number of people who live in areas largely free from noise pollution means many of the babies born today will never know what it’s like to enjoy the sound of silence.
Urbanization may limit the space available for traditional farming, but thanks to innovations in agriculture, food shortages may soon become a relic of the past.
Urban farming is quickly developing into a major industry that is bringing fresh produce and even fish to many markets previously considered food deserts (areas cut off from access to fresh, unprocessed foods).
Vertical farming will bring greater access to underserved areas, making it more possible than ever to end hunger in urban areas. Meanwhile, companies are developing innovative ways to reduce food waste, such as by transforming food scraps into sweets or using coffee grounds to grow mushrooms.
If these innovations take hold, children born in 2018 could grow up in a world in which every person on Earth has access to all the food they need to live a healthy, happy life.
The advent of credit cards may have been the first major blow to the utilization of cash, but it wasn’t the last. Today, paper currency must contend with PayPal, Venmo, Apple Pay, and a slew of other payment options.
By the time children born in 2018 are old enough to earn a paycheck, they will have access to even more payment options, and cash could be completely phased out.
In the race to dethrone paper currency, cryptocurrencies are a frontrunner. Blockchain technology is adding much-needed security to financial transactions, and while the crypto market is currently volatile, experts are still optimistic about its potential to permanently disrupt finance.
Today, digital security is a major subject of concern. Hacking can occur on an international level, and with the growth of the Internet of Things (IoT), even household appliances can be points of weakness in the defenses guarding sensitive personal information.
Experts are feverishly trying to keep security development on pace with the ubiquity of digitalization, and technological advances such as biometrics and RFID tech are helping. Unfortunately, these defenses still rely largely on typical encryption software, which is breakable.
The advent of the quantum computer will dramatically increase computing power for certain classes of problems, and better security systems will follow suit. By the time children born in 2018 reach adulthood, high-speed quantum encryption could ensure that the digital world they navigate is virtually unhackable.
While most of our digital devices currently make use of a typical flat screen, tomorrow’s user interfaces will be far more dynamic, and children born in 2018 may not remember a time when they were limited to a single screen and a keyboard.
The development of virtual reality (VR) and augmented reality (AR) has shifted the paradigm, and as these technologies continue to advance, we will increasingly see new capabilities incorporated into our computing experience.
Gesture recognition, language processing, and other technologies will allow for a more holistic interaction with our devices, and eventually, we may find ourselves interacting with systems akin to what we saw in Minority Report.
CHICAGO – Small RNA molecules originally developed as a tool to study gene function trigger a mechanism hidden in every cell that forces the cell to commit suicide, reports a new Northwestern Medicine study, the first to identify molecules to trigger a fail-safe mechanism that may protect us from cancer.
The mechanism — RNA suicide molecules — can potentially be developed into a novel form of cancer therapy, the study authors said.
Cancer cells treated with the RNA molecules never become resistant to them, because the molecules simultaneously eliminate multiple genes that cancer cells need for survival.
“It’s like committing suicide by stabbing yourself, shooting yourself and jumping off a building all at the same time,” said Northwestern scientist and lead study author Marcus Peter. “You cannot survive.”
The inability of cancer cells to develop resistance to the molecules is a first, Peter said.
“This could be a major breakthrough,” noted Peter, the Tom D. Spies Professor of Cancer Metabolism at Northwestern University Feinberg School of Medicine and a member of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.
Peter and his team discovered sequences in the human genome that, when converted into small double-stranded RNA molecules, trigger what they believe to be an ancient kill switch in cells that prevents cancer. He had been searching for the phantom molecules with this activity for eight years.
“We think this is how multicellular organisms eliminated cancer before the development of the adaptive immune system, which is about 500 million years old,” he said. “It could be a fail safe that forces rogue cells to commit suicide. We believe it is active in every cell protecting us from cancer.”
This study, which will be published Oct. 24 in eLife, and two other new Northwestern studies in Oncotarget and Cell Cycle by the Peter group, describe the discovery of the assassin molecules present in multiple human genes and their powerful effect on cancer in mice.
Looking back hundreds of millions of years
Why are these molecules so powerful?
“Ever since life became multicellular, which could be more than 2 billion years ago, it had to deal with preventing or fighting cancer,” Peter said. “So nature must have developed a fail safe mechanism to prevent cancer or fight it the moment it forms. Otherwise, we wouldn’t still be here.”
Thus began his search for natural molecules coded in the genome that kill cancer.
“We knew they would be very hard to find,” Peter said. “The kill mechanism would only be active in a single cell the moment it becomes cancerous. It was a needle in a haystack.”
But he found them by testing a class of small RNAs, called small interfering RNAs (siRNAs), that scientists use to suppress gene activity. siRNAs are designed by taking short sequences of the gene to be targeted and converting them into double-stranded RNA. When introduced into cells, these siRNAs suppress the expression of the gene they are derived from.

Peter found that a large number of these small RNAs derived from certain genes did not, as expected, only suppress the gene they were designed against. They also killed all cancer cells. His team discovered that these special sequences are distributed throughout the human genome, embedded in multiple genes, as shown in the study in Cell Cycle.
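The derivation step described above, taking a short window of a target gene and pairing it with its reverse complement, can be sketched in a few lines. This is only an illustration of the general siRNA design principle: the example sequence, the 21-nucleotide window, and the function names are all hypothetical, and real siRNA design involves many additional rules (GC content, overhangs, off-target screening) that are omitted here.

```python
# Illustrative sketch of siRNA duplex derivation. All sequences and
# parameters here are hypothetical examples, not from the study.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def sirna_duplex(target_mrna: str, start: int, length: int = 21):
    """Take a short window of the target mRNA as the sense strand and
    pair it with its reverse complement (the antisense/guide strand)."""
    sense = target_mrna[start:start + length]
    antisense = reverse_complement(sense)
    return sense, antisense

# A hypothetical region of a target transcript:
sense, antisense = sirna_duplex("AUGGCUACGUUAGCCGAUACGCAUGGC", start=3)
print(sense)      # 21-nt sense strand drawn from the target
print(antisense)  # complementary guide strand
```

The surprise in the study was that duplexes built this way from certain genes did more than silence their source gene, which is exactly the off-target behavior routine siRNA screens are normally designed to avoid.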
When converted to siRNAs, these sequences all act as highly trained super assassins. They kill the cells by simultaneously eliminating the genes required for cell survival. By taking out these survivor genes, the assassin molecules activate multiple cell death pathways in parallel.
The small RNA assassin molecules trigger a mechanism Peter calls DISE, for Death Induced by Survival gene Elimination.
Activating DISE in organisms with cancer might allow cancer cells to be eliminated. Peter’s group has evidence this form of cell death preferentially affects cancer cells with little effect on normal cells.
To test this in a treatment situation, Peter collaborated with Dr. Shad Thaxton, associate professor of urology at Feinberg, to deliver the assassin molecules via nanoparticles to mice bearing human ovarian cancer. The treatment strongly reduced tumor growth with no toxicity to the mice, reports the study in Oncotarget. Importantly, the tumors did not develop resistance to this form of cancer treatment. Peter and Thaxton are now refining the treatment to increase its efficacy.
Peter has long been frustrated with the lack of progress in solid cancer treatment.
“The problem is cancer cells are so diverse that even though the drugs, designed to target single cancer driving genes, often initially are effective, they eventually stop working and patients succumb to the disease,” Peter said. He thinks a number of cancer cell subsets are never really affected by most targeted anticancer drugs currently used.
Most of the advanced solid cancers such as brain, lung, pancreatic or ovarian cancer have not seen an improvement in survival, Peter said.
“If you had an aggressive, metastasizing form of the disease 50 years ago, you were busted back then and you are still busted today,” he said. “Improvements are often due to better detection methods and not to better treatments.”
Cancer scientists need to listen to nature more, Peter said. Immune therapy has been a success, he noted, because it is aimed at activating an anticancer mechanism that evolution developed. Unfortunately, few cancers respond to immune therapy and only a few patients with these cancers benefit, he said.
“Our research may be tapping into one of nature’s original kill switches, and we hope the impact will affect many cancers,” he said. “Our findings could be disruptive.”
Northwestern co-authors include first authors William Putzbach, Quan Q. Gao, and Monal Patel, and coauthors Ashley Haluck-Kangas, Elizabeth T. Bartom, Kwang-Youn A. Kim, Denise M. Scholtens, Jonathan C. Zhao and Andrea E. Murmann.
The research is funded by grants T32CA070085, T32CA009560, R50CA211271 and R35CA197450 from the National Cancer Institute of the National Institutes of Health.
When we thought about preparing for the future, we used to picture going to a good college and moving for a good job that would put us on a solid career trajectory: a stable life in which we would prosper in a free-market meritocracy, competing against fellow humans.
However, over the course of the next few decades, Homo sapiens, including Generations Z and Alpha, may be among the last people to grow up in a pre-automation, pre-AGI world.
Considering the exponential levels of technological progress expected in the next 30 years, that’s hard to put into words or even historical context, precisely because there’s no historical precedent and no words to describe what next-gen AI might become.
Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.
Pre Singularity Years
In the years before wide-scale automation and sophisticated AI, we live believing things are changing fast. Retail is shifting to e-commerce and new modes of buying and convenience, self-driving and electric cars are coming, tech firms in specific verticals still rule the planet, and countries still vie for dominance with outdated military traditions, their own political bubbles, and outdated modes of hierarchy, authority and economic privilege.
We live in a world where AI is gaining momentum in popular thought, but in practice is still at the level of ANI: Artificial Narrow Intelligence. Rudimentary NLP, computer vision, robotic movement, and so on. We’re beginning to interact with personal assistants via smart speakers, but not in any fluid way. The interactions are repetitive, like Google-searching the same thing on different days.
In this reality, we think about AI in terms useful to us, such as trying to teach machines to learn so that they can do the things humans do, and in turn help humans. It is a kind of machine learning that is more about coding and algorithms than any actual artificial intelligence. Our world here is starting to shift into something else: the internet is maturing, software is getting smarter in the cloud, data is being collected, but no explosion takes place, even as more people on the planet get access to the Web.
When Everything Changes
Between 2014 and 2021, an entire 20th century’s worth of progress will have occurred. Then something strange happens: progress begins to accelerate, with more of it made in shorter and shorter time periods. We have to remember, the fruits of this transformation won’t belong just to Facebook, Google, China or the U.S.; they will simply be the new normal for everyone.
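The arithmetic behind this kind of claim is worth making explicit. The sketch below is a toy model of Kurzweil-style accelerating returns, in which the rate of progress itself grows exponentially; the doubling period is an assumed parameter chosen for illustration, not a measured value, and the function name is mine.

```python
import math

def cumulative_progress(years: float, doubling_years: float = 10.0) -> float:
    """Total progress accrued when the *rate* of progress doubles every
    `doubling_years`. In units where the year-0 rate is 1 unit/year, the
    rate at time t is 2**(t/d), and integrating from 0 to T gives the
    closed form d * (2**(T/d) - 1) / ln(2)."""
    d = doubling_years
    return d * (2 ** (years / d) - 1) / math.log(2)

# A century at the constant year-0 rate yields 100 units; a century of
# exponentially accelerating progress yields vastly more:
print(cumulative_progress(100) / 100)
```

The point of the model is qualitative: under any exponential rate of progress, each successive "century's worth" of progress arrives in a shorter window than the last, which is the pattern the paragraph above describes.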
Many believe that sometime between 2025 and 2050, AI will become capable of genuine self-learning, reaching an Artificial General Intelligence that completely changes the game.
After that point, not only does AI outperform human beings in tasks, problem solving and even human constructs of creativity, emotional intelligence, manipulating complex environments and predicting the future — it reaches Artificial Super Intelligence relatively quickly thereafter.
We live in Anticipation of the Singularity
As such, in 2017–18 we might be living in the last “human” era. Here we think of AI as “augmenting” our world, of smartphones as miniaturized supercomputers, and of the cloud as an expansion of our neocortex, in a self-serving existence where concepts such as wealth, consumption and human quality of life trump all other considerations.
Here we view computers as man-made tools, robots as slaves, and AI as a kind of “software magic” that’s obliged to our bidding.
Whatever the bottlenecks of carbon-based life forms might be, silicon-based AGI may have many advantages. Machines that can self-learn, self-replicate and program themselves might come into being partly by copying how the human brain works, but, as with the difference between AlphaGo and AlphaGo Zero, the real breakthrough might be made from a blank slate.
While humans appear destined to create AGI, it doesn’t stand to reason that AGI will think, behave or have motivations the way people or cultures do, or even the way our models of super-intelligence predict.
Artificial Intelligence with Creative Agency
For human beings, the Automation Economy only arrives after a point where AGI has come into being. Such an AGI would be able to program robots, facilitate smart cities and help humans govern themselves in a way that is impossible today.
AGI could also manipulate and advance STEM fields such as green tech, biotech, 3D-printing, nanotech, predictive algorithms, and quantum physics likely in ways humans up to that point could only achieve relatively slowly.
Everything pre singularity would feel like ancient history. A far more radical past than before the invention of computers or the internet. AGI could impact literally everything, as we are already seeing with primitive machine intelligence systems.
In such a world, AGI would not only be able to self-learn and surpass all of the human knowledge and data collected up to that point, but also create its own fields, set its own goals and have its own interests (beyond what humans would likely be able to recognize). We might term this Artificially Intelligent Creative Agency (AICA).
AI Not as a Slave, but as a Legacy
Such a being would indeed feel like a God to us. Not a God that created man, but an entity that humanity made, in just a few thousand years since we were storytellers, explorers and then builders and traders.
A human brain consists of 86 billion neurons linked by trillions of synapses, but it’s not networked well to other nodes and external reality. It has to “experience” them in systems of relatedness and remain in relative isolation from them. AICA would not have this constraint. It would be networked to all IoT devices and able to hack into any human system, network or quantum computer. AICA would not be led by instincts of possession, mating, aggression or other emotive agencies of the mammalian brain. Whatever ethics, values and philosophical constraints it might have could be refined over centuries, not the mere months and years of an ordinary human lifetime.
AGI might not be humanity’s last invention, but symbolically, it would usher in the 4th industrial revolution and then some. There would be many grades and incidents of limited self-learning in deep learning algorithms. But AGI would represent a different quality. Likely it would instigate a self-aware separation between humanity and the descendent order of AI, whatever it might be.
High-Speed Quantum Evolution to AGI
The years before the Singularity
The road from ANI to AGI to ASI to some speculative AICA is not just a journey from narrow to general to super intelligence, but an evolutionary corridor for humanity across a span of progress that could also be symbiotic. It’s not clear how this might work, but some human beings, to protect their species, might undertake “alterations”. However cybernetic, genetic or otherwise invasive these changes might be, AI is surely going to be there every step of the way.
In the corporate race to AI, governments like China and the U.S. also want to “own” and monetize this for their own purposes. Fleets of cars and semi-intelligent robots will make certain individuals and companies very rich. There might be no human revolution from wealth inequality until AGI, because comparatively speaking, the conditions for which AGI arises may be closer than we might assume.
We Were Here
If the calculations per second (cps) of the human brain are static, at around 10¹⁶, or 10 quadrillion cps, how much does it take for AI to replicate some kind of AGI field? Certainly it’s not just processing power, exponentially faster supercomputers, quantum computing or improved deep learning algorithms, but a combination of all of these and perhaps many other factors as well. In late 2017, AlphaGo Zero “taught itself” Go without using human data, generating its own data by playing against itself.
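The 10¹⁶ cps figure invites a quick back-of-envelope calculation: given some starting hardware speed and a doubling period, how long until raw machine throughput matches it? The starting speed and doubling period below are assumptions for illustration only, and matching raw cps says nothing about actually achieving general intelligence, as the paragraph above stresses.

```python
import math

# The article's estimate for the human brain, in calculations per second.
BRAIN_CPS = 1e16

def years_to_match(start_cps: float, doubling_years: float = 2.0) -> float:
    """Years of steady exponential doubling needed for hardware starting
    at start_cps to reach BRAIN_CPS. Both inputs are assumptions."""
    if start_cps >= BRAIN_CPS:
        return 0.0
    return doubling_years * math.log2(BRAIN_CPS / start_cps)

# e.g. a hypothetical 10**13 cps machine doubling every 2 years:
print(round(years_to_match(1e13), 1))  # ≈ 19.9 years
```

The sensitivity is worth noting: halving the doubling period halves the answer, which is one reason forecasts of this kind diverge so widely.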
Living in a world that can better imagine AGI will mean planning ahead, not just coping with change to human systems. In a world where democracy can be hacked, and where one-party socialism is the likely heir apparent to future iterations of artificial intelligence in places where concepts like freedom of speech, human rights or openness to a diversity of ideas are not practiced in the same way, it’s interesting to imagine the kinds of human-controlled AI systems that might arise before AGI arrives (if it ever arrives).
The Human Hybrid Dilemma
Considering our own violent history of the annihilation of biodiversity, modeling AI by plagiarizing the brain through some kind of whole brain emulation, might not be ethical. While it might mimic and lead to self-awareness, such an AGI might be dangerous. In the same sense we are a danger to ourselves and to other life forms in the galaxy.
Moore’s Law might have sounded like an impressive analogy to the Singularity in the 1990s, but not today. More people working in the AI field are rightfully skeptical of AGI. It’s plausible that even most of them suffer from a linear-versus-exponential bias of thinking. On the path toward the Singularity, we are still living in slow motion.
We Aren’t Ready for What’s Inevitable
We’re living in the last era before Artificial General Intelligence, and as usual, human civilization appears quite stupid. We don’t even actively know what’s coming.
While our simulations are improving, and we’re discovering exoplanets that are most likely to be life-like, our ability to predict the future in terms of the speed of technology is mortifyingly bad. Our understanding of the implications of AGI, and even of machine intelligence, for the planet is poor. Is it because this has never happened in recorded history and represents such a paradigm shift, or could there be another reason?
Amazon can create and monetize patents in a hyper business model; Google, Facebook, Alibaba and Tencent can fight over AI talent, luring academics to corporate workaholic lifestyles with the ability to name their salaries; but in 2017, humanity’s vision of the future is still myopic.
We can barely imagine that our prime directive in the universe might not be to simply grow, explore and make babies and exploit all within our path. And, we certainly can’t imagine a world where intelligent machines aren’t simply our slaves, tools and algorithms designed to make our lives more pleasurable and convenient.
After weeks of anticipation, details on Elon Musk’s brain-computer interface company Neuralink have finally been revealed. In a detailed report on the website Wait But Why, Tim Urban recounts insights gleaned from his weeks meeting with Musk and his Neuralink team at their San Francisco headquarters. He offers an incredibly detailed and informative overview of both Musk’s latest venture and its place in humanity’s evolution, but for those of you interested in just the big picture, here’s what you really need to know about Neuralink.
Your Brain Will Get Another “Layer”
Right now, you have two primary “layers” to your brain: the limbic system, which controls things like your emotions, long-term memory, and behavior; and the cortex, which handles your complex thoughts, reasoning, and long-term planning. Musk wants his brain interface to be a third layer that will complement the other two. The weirdest thing about that goal may be that he thinks we actually already have this third layer — we just don’t have the best interface for it:
We already have a digital tertiary layer in a sense, in that you have your computer or your phone or your applications…The thing that people, I think, don’t appreciate right now is that they are already a cyborg…If you leave your phone behind, it’s like missing limb syndrome. I think people—they’re already kind of merged with their phone and their laptop and their applications and everything.
The goal of Neuralink, then, is eliminating the middleman and putting that power we currently have at our fingertips directly into our brains. Instead of one person using their phone to transmit a thought to another person (“Dinner at 8?”), the thought would just go from one brain to the other directly.
Thankfully, we’ll be able to control this completely, Musk tells Urban: “People won’t be able to read your thoughts — you would have to will it. If you don’t will it, it doesn’t happen. Just like if you don’t will your mouth to talk, it doesn’t talk.”
Musk Is Working with Some Very Smart People
Musk met with more than 1,000 people before deciding on the eight who would help him shape the future of humanity at Neuralink. He claims assembling the right team was a challenge in and of itself, as he needed to find people capable of working in a cross-disciplinary field that includes everything from brain surgery to microscopic electronics.
The crew he landed is a veritable supergroup of smarties. They have backgrounds from MIT, Duke, and IBM, and their bios include phrases like “neural dust,” “cortical physiology,” and “human psychophysics.” They’re engineers, neurosurgeons, and chip designers, and if anyone can bring Elon Musk’s vision to life, it’s them.
The Timeline For Adoption Is Hazy…
Neuralink won’t come out the gate with a BMI that transforms you into a walking computer. The first product the company will focus on releasing will be much more targeted. “We are aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years,” said Musk.
“I think we are about 8 to 10 years away from this being usable by people with no disability.” – Musk
In the same way SpaceX was able to fund its research on reusable rockets by making deliveries to the ISS, and Tesla was able to use profits from its early car sales to fund battery research, these earliest BMIs for treating disease and disability will keep Neuralink afloat as it works on its truly mind-bending technologies.
As for when those technologies, the ones that allow healthy people to channel their inner telepaths, will arrive, Musk’s fairly optimistic timeline comes with several contingencies: “I think we are about 8 to 10 years away from this being usable by people with no disability…It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.”
…Because The Hurdles are Many
Those are just two of the hurdles Neuralink faces. Elon Musk might make innovation look easy, but even going to Mars seems relatively straightforward in comparison to his plans for his latest company.
First, there are the engineering hurdles to overcome. The company has to deal with the problems of biocompatibility, wirelessness, power, and — the big one — bandwidth. To date, we’ve never put more than roughly 200 electrodes in a person’s brain at one time. When talking about a world-changing interface, the Neuralink team told Urban they were thinking something like “one million simultaneously recorded neurons.” Not only would they need to find a way to ensure that the brain could effectively communicate with that many electrodes, they also need to overcome the very practical problem of where to physically put them.
The engineering is only half the battle, though. Like Musk mentioned, regulatory approval will be a big factor in the development and adoption of Neuralink’s tech. The company also faces potential skepticism and even fear from a public that doesn’t want anyone cutting into their brains to install some high-tech machinery — according to a recent Pew survey, the public is even more worried about brain computer interfaces than gene editing. There’s also the not-entirely-unfounded fear that these computers could be hacked.
Add to all that our still very, very incomplete understanding of how the brain ticks exactly, and you can see that the Neuralink team has its work cut out for them.
Neuralink Won’t Exist in a Vacuum
Thankfully, they won’t be working to remake our minds alone — many other universities and research institutes are pushing brain interface technology forward. Facebook’s Building 8 is working on its own BCI, MIT is creating super-thin wires for use in brain implants, and other cyborg devices are already in the works to help the paralyzed walk again and the blind regain their sight. Each new development will push the field forward, and the team at Neuralink will be able to learn from the mistakes and successes of others in the field.
Just like other electric cars were on the road before Tesla came along, brain computer interfaces are not new — the tech might just need a visionary like Musk to elevate it (and us) to the next level.
Canada is testing a basic income to discover what impact the policy has on unemployed people and those on low incomes.
The province of Ontario is planning to give 4,000 citizens thousands of dollars a month and assess how it affects their health, wellbeing, earnings and productivity.
It is among a number of regions and countries across the globe that are now piloting the scheme, which sees residents given a certain amount of money each month regardless of whether or not they are in work.
Although it is too early for the Ontario pilot to deliver clear results, some of those involved have already reported a significant change.
One recipient, Tim Button, said the monthly payments were making a “huge difference” to his life. He worked as a security guard before having to quit after a fall from a roof left him unable to work.
“It takes me out of depression”, he told the Associated Press. “I feel more sociable.”
The basic income payments have boosted his income by almost 60 per cent and have allowed him to make plans to visit his family for Christmas for the first time in years. He has also been able to buy healthier food, see a dentist and look into taking an educational course to help him find work.
Under the Ontario experiment, unemployed people or those with a low income can receive up to C$17,000 (£9,900) and are allowed to also keep half of what they earn at work, meaning there is still an incentive to work. Couples are entitled to C$24,000 (£13,400).
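The payment structure described above can be sketched numerically. The sketch assumes the standard negative-income-tax design implied by the article, a maximum benefit reduced by 50 cents for each dollar earned; the real pilot's rules may differ in detail, and the function names are mine.

```python
# Minimal sketch of the Ontario pilot's payment structure as described.
# The exact 50%-clawback formula is an assumption based on the article;
# the actual pilot rules may differ.

MAX_BENEFIT_SINGLE = 17_000  # C$ per year
MAX_BENEFIT_COUPLE = 24_000  # C$ per year

def annual_benefit(earnings: float, max_benefit: float = MAX_BENEFIT_SINGLE) -> float:
    """Benefit after a 50-cent reduction per dollar of earned income."""
    return max(0.0, max_benefit - 0.5 * earnings)

def total_income(earnings: float, max_benefit: float = MAX_BENEFIT_SINGLE) -> float:
    """Earnings plus benefit: each extra dollar earned still raises
    total income by 50 cents, preserving the work incentive."""
    return earnings + annual_benefit(earnings, max_benefit)

print(total_income(0))       # 17000.0 (no earnings: full benefit)
print(total_income(10_000))  # 22000.0 (earn 10,000, keep half via clawback)
```

Under this design the benefit phases out entirely once earnings reach twice the maximum benefit, which is what distinguishes the pilot from an unconditional payment to every resident.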
If the trial proves successful, the scheme could be expanded to more of the province’s 14.2 million residents and may inspire more regions of Canada and other nations to adopt the policy.
Support for a basic income has grown in recent years, fuelled in part by fears about the impact that new technology will have on jobs. As machines and robots are able to complete an increasing number of tasks, attention has turned to how people will live when there are not enough jobs to go round.
Ontario’s Premier, Kathleen Wynne, said this was a major factor in the decision to trial a basic income in the province.
She said: “I see it on a daily basis. I go into a factory and the floor plant manager can tell me where there were 20 people and there is one machine. We need to understand what it might look like if there is, in fact, the labour disruption that some economists are predicting.”
Ontario officials have found that many people are reluctant to sign up to the scheme, fearing there is a catch or that they will be left without money once the pilot finishes.
Many of those who are receiving payments, however, say their lives have already been changed for the better.
Dave Cherkewski, 46, said the extra C$750 (£436) a month he receives has helped him to cope with the mental illness that has kept him out of work since 2002.
“I’ve never been better after 14 years of living in poverty,” he said.
He hopes to soon find work helping other people with mental health challenges.
He said: “With basic income I will be able to clarify my dream and actually make it a reality, because I can focus all my effort on that and not worry about, ‘Well, I need to pay my $520 rent, I need to pay my $50 cellphone, I need to eat and do other things’.”
Finland is also trialling a basic income, as are the state of Hawaii, Oakland in California and the Dutch city of Utrecht.
As the December Federal Reserve (Fed) meeting nears, discussions and speculation about the precise timing of Fed liftoff are certain to take center stage.
But while I’ve certainly weighed in on this debate many times, I believe it’s just one example of a topic that receives far too much attention from investors and market watchers alike.
The Fed has been abundantly clear that the forthcoming rate hiking cycle, likely to begin this month, will be incredibly gradual and sensitive to how economic data evolves, meaning the central bank is likely to be extraordinarily cautious about derailing the recovery and rates will likely remain historically low for an extended period of time. In other words, when the Fed does begin rate normalization, not much is likely to change.
Shifting the Focus to Other Economic Trends
In contrast, there are a number of important longer-term trends more worthy of our focus, as they’re likely to have a bigger, longer-sustaining impact on markets than the Fed’s first rate move. One such market influence that I believe should be getting more attention: The advances in technology happening all around us; innovations already having a huge disruptive influence on the economy and markets. These three charts help explain why.
1. ADOPTION OF TECHNOLOGY IN THE U.S., 1900 TO PRESENT
As the chart above shows, people in the U.S. today are adopting new technologies, including tablets and smartphones, at the swiftest pace we’ve seen since the advent of the television. However, while television arguably detracted from U.S. productivity, today’s advances in technology are generally geared toward greater efficiency at lower costs. Indeed, when you take into account technology’s downward influence on price, U.S. consumption and productivity figures look much better than headline numbers would suggest.
2. PERCENTAGE OF TOP 1,500 U.S. STOCKS WITH ZERO INVENTORY, THROUGH Q2 2015
Meanwhile, on the labor market front, greater utilization of technology in business has placed a premium on high-skilled workers who can navigate and innovate alongside that technology. As such, over the past 15 years, we’ve seen considerably faster jobs growth in skilled positions than in lesser skilled ones, as shown in the chart above.
This shift reflects some of the significant influences of technological innovation on the labor market: Highly-skilled labor is rewarded for compatibility with new technologies and is less likely to be replaced by automation or robotics, while the opposite is true for lower-skilled workers, a trend that has kept job growth from being even more robust. This skills-divide also highlights the need for fiscal policies that emphasize education and retraining. In my view, the adoption of such policies will ultimately be much more important to the trajectory of the U.S. labor market and economy than whether the Fed moves away from emergency-rate levels this year or next.
Above all, if there’s one common theme in all three of these charts, it’s this: Technology is advancing so fast that traditional economic metrics haven’t kept up. This has serious implications. It helps to explain widespread misconceptions about the state of the U.S. economy, including the assertion that we reside in a period of low productivity growth, despite the many remarkable advances we see around us. It also makes monetary policy evolution more difficult, and is one reason why I’ve found recent policy debates somewhat myopic and distorted from reality.
So, let’s all make this New Year’s resolution: Instead of focusing so much on the Fed, let’s give some attention to how technology is changing the entire world in ways never before witnessed, and let’s focus on education and training policies that can help our workforce adapt. Such initiatives are more important and durable, and should have fewer unintended negative economic consequences, than policies designed to distort real interest rates.