Suicide molecules kill any cancer cell

January 05, 2018

CHICAGO – Small RNA molecules originally developed as a tool to study gene function can trigger a mechanism hidden in every cell that forces the cell to commit suicide, reports a new Northwestern Medicine study. It is the first study to identify molecules that trigger this fail-safe mechanism, which may protect us from cancer.

The RNA suicide molecules can potentially be developed into a novel form of cancer therapy, the study authors said.

Cancer cells treated with the RNA molecules never become resistant to them because they simultaneously eliminate multiple genes that cancer cells need for survival.

“It’s like committing suicide by stabbing yourself, shooting yourself and jumping off a building all at the same time,” said Northwestern scientist and lead study author Marcus Peter. “You cannot survive.”

The inability of cancer cells to develop resistance to the molecules is a first, Peter said.

“This could be a major breakthrough,” noted Peter, the Tom D. Spies Professor of Cancer Metabolism at Northwestern University Feinberg School of Medicine and a member of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.  

Peter and his team discovered sequences in the human genome that when converted into small double-stranded RNA molecules trigger what they believe to be an ancient kill switch in cells to prevent cancer. He has been searching for the phantom molecules with this activity for eight years.

“We think this is how multicellular organisms eliminated cancer before the development of the adaptive immune system, which is about 500 million years old,” he said. “It could be a fail safe that forces rogue cells to commit suicide. We believe it is active in every cell protecting us from cancer.”

This study, published Oct. 24 in eLife, and two other new Northwestern studies by the Peter group, in Oncotarget and Cell Cycle, describe the discovery of the assassin molecules present in multiple human genes and their powerful effect on cancer in mice.

Looking back hundreds of millions of years

Why are these molecules so powerful?

“Ever since life became multicellular, which could be more than 2 billion years ago, it had to deal with preventing or fighting cancer,” Peter said. “So nature must have developed a fail safe mechanism to prevent cancer or fight it the moment it forms. Otherwise, we wouldn’t still be here.”

Thus began his search for natural molecules coded in the genome that kill cancer.

“We knew they would be very hard to find,” Peter said. “The kill mechanism would only be active in a single cell the moment it becomes cancerous. It was a needle in a haystack.”

But he found them by testing a class of small RNAs called small interfering RNAs (siRNAs), which scientists use to suppress gene activity. siRNAs are designed by taking short sequences of the gene to be targeted and converting them into double-stranded RNA. When introduced into cells, these siRNAs suppress the expression of the gene they are derived from.

Peter found that a large number of these small RNAs derived from certain genes did not, as expected, only suppress the gene they were designed against. They also killed all cancer cells. His team discovered that these special sequences are distributed throughout the human genome, embedded in multiple genes, as shown in the study in Cell Cycle.
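
To make that design step concrete, here is a minimal sketch in Python: slice a short window from a target mRNA and derive the two strands of the double-stranded RNA. The mRNA fragment, window position and 21-nucleotide length are illustrative assumptions, not sequences or parameters from the study.

```python
def reverse_complement_rna(seq: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))


def design_sirna(gene_mrna: str, start: int, length: int = 21):
    """Slice a short window from the target mRNA and build both strands
    of the double-stranded siRNA derived from it."""
    sense = gene_mrna[start:start + length]    # same sequence as the mRNA
    antisense = reverse_complement_rna(sense)  # base-pairs with the mRNA
    return sense, antisense


# Toy example with a made-up mRNA fragment (not a real gene)
mrna = "AUGGCUAGCUUAGGCUAACGGAUUCCGAUAGCUGAC"
sense, antisense = design_sirna(mrna, start=5)
print("sense:    ", sense)
print("antisense:", antisense)
```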

When converted to siRNAs, these sequences all act as highly trained super assassins. They kill cells by simultaneously eliminating the genes required for cell survival. By taking out these survival genes, the assassin molecules activate multiple cell death pathways in parallel.

The small RNA assassin molecules trigger a mechanism Peter calls DISE, for Death Induced by Survival gene Elimination.

Activating DISE in organisms with cancer might allow cancer cells to be eliminated. Peter’s group has evidence this form of cell death preferentially affects cancer cells with little effect on normal cells.

To test this in a treatment situation, Peter collaborated with Dr. Shad Thaxton, associate professor of urology at Feinberg, to deliver the assassin molecules via nanoparticles to mice bearing human ovarian cancer. The treatment strongly reduced tumor growth in the mice with no toxicity, reports the study in Oncotarget. Importantly, the tumors did not develop resistance to this form of cancer treatment. Peter and Thaxton are now refining the treatment to increase its efficacy.

Peter has long been frustrated with the lack of progress in solid cancer treatment.

“The problem is cancer cells are so diverse that even though the drugs, designed to target single cancer driving genes, often initially are effective, they eventually stop working and patients succumb to the disease,” Peter said. He thinks a number of cancer cell subsets are never really affected by most targeted anticancer drugs currently used.

Most of the advanced solid cancers such as brain, lung, pancreatic or ovarian cancer have not seen an improvement in survival, Peter said.

“If you had an aggressive, metastasizing form of the disease 50 years ago, you were busted back then and you are still busted today,” he said. “Improvements are often due to better detection methods and not to better treatments.”

Cancer scientists need to listen to nature more, Peter said. Immune therapy has been a success, he noted, because it is aimed at activating an anticancer mechanism that evolution developed. Unfortunately, few cancers respond to immune therapy and only a few patients with these cancers benefit, he said.

“Our research may be tapping into one of nature’s original kill switches, and we hope the impact will affect many cancers,” he said. “Our findings could be disruptive.”

Northwestern co-authors include first authors William Putzbach, Quan Q. Gao, and Monal Patel, and coauthors Ashley Haluck-Kangas, Elizabeth T. Bartom, Kwang-Youn A. Kim, Denise M. Scholtens, Jonathan C. Zhao and Andrea E. Murmann.

The research is funded by grants T32CA070085, T32CA009560, R50CA211271 and R35CA197450 from the National Cancer Institute of the National Institutes of Health.

We’re living in the Last Era Before Artificial General Intelligence

January 05, 2018

When we thought about preparing for our future, we used to think about going to a good college and moving for a good job that would put us on a solid career trajectory toward a stable life, prospering in a free-market meritocracy where we compete against fellow humans.

However, over the course of the next few decades, Homo sapiens, including Generations Z and Alpha, may be among the last people to grow up in a pre-automation, pre-AGI world.

Considering the exponential levels of technological progress expected over the next 30 years, that's hard to put into words or even into historical context, because there is no historical precedent for, and there are no words to describe, what next-gen AI might become.

Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.
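
As a hedged back-of-the-envelope illustration of where a figure like that can come from (our arithmetic, not Kurzweil's or the author's): if the rate of progress doubles every ten years, the ratio between the two centuries comes out at almost exactly 1,000-fold.

```python
import math

# Assumption (ours): the rate of progress doubles every 10 years.
# Cumulative progress over an interval is then the integral of 2**(t/T),
# with t = 0 at the year 2000 and T the doubling period in years.

def progress(start: float, end: float, doubling_period: float = 10.0) -> float:
    """Cumulative progress between two years, in arbitrary units."""
    T = doubling_period
    return T / math.log(2) * (2 ** (end / T) - 2 ** (start / T))

c20 = progress(-100, 0)   # the 20th century (1900-2000)
c21 = progress(0, 100)    # the 21st century (2000-2100)
print(f"21st vs. 20th century progress: {c21 / c20:,.0f}x")  # ~1,024x
```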

Pre-Singularity Years

In the years before wide-scale automation and sophisticated AI, we live believing things are changing fast. Retail is shifting to e-commerce and new modes of buying and convenience; self-driving and electric cars are coming; tech firms in specific verticals still rule the planet; and countries still vie for dominance with outdated military traditions, their own political bubbles and outdated modes of hierarchy, authority and economic privilege.

We live in a world where AI is gaining momentum in popular thought but in practice is still at the level of ANI: Artificial Narrow Intelligence. Rudimentary NLP, computer vision, robotic movement, and so on. We're beginning to interact with personal assistants via smart speakers, but not in any fluid way. The interactions are repetitive, like Googling the same thing on different days.

In this reality, we think about AI in terms useful to us, such as teaching machines to learn so that they can do the things humans do, and in turn help humans. It is a kind of machine learning that's more about coding and algorithms than any actual artificial intelligence. Our world here is starting to shift into something else: the internet is maturing, software is getting smarter on the cloud, data is being collected, but no explosion takes place, even as more people on the planet get access to the Web.

When Everything Changes

Between 2014 and 2021, an entire 20th century's worth of progress will have occurred, and then something strange happens: progress begins to accelerate, with more of it packed into shorter and shorter time periods. We have to remember that the fruits of this transformation won't belong just to Facebook, or Google, or China, or the U.S.; it will simply be the new normal for everyone.

Many believe that sometime between 2025 and 2050, AI will become natively self-learning, attaining an Artificial General Intelligence that completely changes the game.

After that point, not only does AI outperform human beings in tasks, problem solving and even human constructs of creativity, emotional intelligence, manipulating complex environments and predicting the future — it reaches Artificial Super Intelligence relatively quickly thereafter.

We live in Anticipation of the Singularity

As such, in 2017–18 we might be living in the last “human” era. Here we think of AI as “augmenting” our world; we think of smartphones as miniaturized supercomputers and the cloud as an expansion of our neocortex, in a self-serving existence where concepts such as wealth, consumption and human quality of life trump all other considerations.

Here we view computers as man-made tools, robots as slaves, and AI as a kind of “software magic” obliged to do our bidding.

Whatever the bottlenecks of carbon-based life forms might be, silicon-based AGI may have many advantages. Machines that can self-learn, self-replicate and program themselves might come into being partly by copying how the human brain works, but, as the difference between AlphaGo and AlphaGo Zero suggests, the real breakthrough might be made from a blank slate.

While humans appear destined to create AGI, it doesn't stand to reason that AGI will think, behave or have motivations like people, like cultures, or even like our models of what superintelligence might be.

Artificial Intelligence with Creative Agency

For human beings, the Automation Economy only arrives after a point where AGI has come into being. Such an AGI would be able to program robots, facilitate smart cities and help humans govern themselves in a way that is impossible today.

AGI could also manipulate and advance STEM fields such as green tech, biotech, 3D-printing, nanotech, predictive algorithms, and quantum physics likely in ways humans up to that point could only achieve relatively slowly.

Everything pre-singularity would feel like ancient history: a far more radical past than the time before the invention of computers or the internet. AGI could impact literally everything, as we are already seeing with primitive machine intelligence systems.

In such a world, AGI would not only be able to self-learn and surpass all of the human knowledge and data collected up to that point, but also create its own fields, set its own goals and have its own interests (beyond what humans would likely be able to recognize). We might term this Artificially Intelligent Creative Agency (AICA).

AI Not as a Slave, but as a Legacy

Such a being would indeed feel like a God to us. Not a God that created man, but an entity that humanity made, in just a few thousand years since we were storytellers, explorers and then builders and traders.

A human brain consists of 86 billion neurons linked by trillions of synapses, but it's not networked well to other nodes and to external reality. It has to “experience” them in systems of relatedness and remain in relative isolation from them. AICA would not have this constraint. It would be networked to all IoT devices and able to hack into any human system, network or quantum computer. AICA would not be led by instincts of possession, mating, aggression or the other emotive agencies of the mammalian brain. Whatever ethics, values and philosophical constraints it might have could be refined over centuries, not the mere months and years of an ordinary human lifetime.

AGI might not be humanity's last invention, but symbolically, it would usher in the 4th industrial revolution and then some. There would be many grades and instances of limited self-learning in deep learning algorithms, but AGI would represent a different quality. Likely it would instigate a self-aware separation between humanity and the descendant order of AI, whatever that might be.

High-Speed Quantum Evolution to AGI

The years before the Singularity

The road from ANI to AGI to ASI to some speculative AICA is not just a journey from narrow to general to super intelligence, but an evolutionary corridor for humanity across a distance of progress that could also be symbiotic. It's not clear how this might work, but some human beings, to protect their species, might undertake “alterations.” However cybernetic, genetic or otherwise invasive these changes might be, AI is surely going to be there every step of the way.

In the corporate race to AI, governments such as China's and the U.S.'s also want to “own” and monetize this for their own purposes. Fleets of cars and semi-intelligent robots will make certain individuals and companies very rich. There might be no human revolution over wealth inequality before AGI arrives because, comparatively speaking, the conditions under which AGI arises may be closer than we might assume.

We Were Here

If the calculations per second (cps) of the human brain are static, at around 10¹⁶, or 10 quadrillion cps, what does it take for AI to replicate some kind of AGI? Certainly it's not just processing power, or exponentially faster supercomputers, or quantum computing, or improved deep learning algorithms, but a combination of all of these and perhaps many other factors as well. In late 2017, AlphaGo Zero “taught itself” Go without using human data, generating its own data by playing against itself.
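
Taking the 10¹⁶ cps figure above at face value, a short sketch puts hypothetical hardware in perspective. The machine throughputs below are illustrative assumptions, not benchmarks of any real system.

```python
# Assumption: the brain runs at ~1e16 calculations per second (cps),
# the figure quoted above. The machine numbers are illustrative only.
BRAIN_CPS = 1e16

machines = {
    "1-petaflop machine (assumed)": 1e15,
    "100-petaflop machine (assumed)": 1e17,
}

for name, cps in machines.items():
    ratio = cps / BRAIN_CPS
    label = f"{ratio:.0%} of one brain" if ratio < 1 else f"{ratio:.0f}x one brain"
    print(f"{name}: {label}")
# Raw throughput parity is necessary but, as the paragraph notes, far
# from sufficient for anything resembling AGI.
```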

Living in a world that can better imagine AGI will mean planning ahead, not just coping with changes to human systems. In a world where democracy can be hacked, and where one-party socialism is the likely heir apparent to future iterations of artificial intelligence (where concepts like freedom of speech, human rights or openness to a diversity of ideas are not practiced in the same way), it's interesting to imagine the kinds of human-controlled AI systems that might emerge before AGI arrives (if it ever arrives).

The Human Hybrid Dilemma

Considering our own violent history of the annihilation of biodiversity, modeling AI by plagiarizing the brain through some kind of whole-brain emulation might not be ethical. While it might mimic and lead to self-awareness, such an AGI might be dangerous, in the same sense that we are a danger to ourselves and to other life forms in the galaxy.

Moore’s Law might have sounded like an impressive analogy to the Singularity in the 1990s, but not today. More and more people working in the AI field are rightfully skeptical of AGI. It’s plausible that even most of them suffer from a linear-versus-exponential bias in their thinking. On the path toward the Singularity, we are still living in slow motion.

We Aren’t Ready for What’s Inevitable

We’re living in the last era before Artificial General Intelligence, and as usual, human civilization appears quite stupid. We don’t even actively know what’s coming.

While our simulations are improving, and we’re discovering exoplanets that seem most likely to support life, our ability to predict the future in terms of the speed of technology is mortifyingly bad. Our understanding of the implications of AGI, and even of machine intelligence, for the planet is poor. Is it because this has never happened in recorded history and represents such a paradigm shift, or could there be another reason?

Amazon can create and monetize patents in a hyper-business model; Google, Facebook, Alibaba and Tencent can fight over AI talent, luring academics to corporate workaholic lifestyles with the ability to name their salaries; but in 2017, humanity’s vision of the future is still myopic.

We can barely imagine that our prime directive in the universe might not be simply to grow, explore, make babies and exploit all within our path. And we certainly can’t imagine a world where intelligent machines aren’t simply our slaves, tools and algorithms designed to make our lives more pleasurable and convenient.

Here’s Everything You Need to Know about Elon Musk’s Human/AI Brain Merge

January 05, 2018

Neuralink Has Arrived

After weeks of anticipation, details on Elon Musk’s brain-computer interface company Neuralink have finally been revealed. In a detailed report on the website Wait But Why, Tim Urban recounts insights gleaned from his weeks meeting with Musk and his Neuralink team at their San Francisco headquarters. He offers an incredibly detailed and informative overview of both Musk’s latest venture and its place in humanity’s evolution, but for those of you interested in just the big picture, here’s what you really need to know about Neuralink.

Your Brain Will Get Another “Layer”

Right now, you have two primary “layers” to your brain: the limbic system, which controls things like your emotions, long-term memory, and behavior; and the cortex, which handles your complex thoughts, reasoning, and long-term planning. Musk wants his brain interface to be a third layer that will complement the other two. The weirdest thing about that goal may be that he thinks we actually already have this third layer — we just don’t have the best interface for it:

We already have a digital tertiary layer in a sense, in that you have your computer or your phone or your applications…The thing that people, I think, don’t appreciate right now is that they are already a cyborg…If you leave your phone behind, it’s like missing limb syndrome. I think people—they’re already kind of merged with their phone and their laptop and their applications and everything.

The goal of Neuralink, then, is eliminating the middleman and putting that power we currently have at our fingertips directly into our brains. Instead of one person using their phone to transmit a thought to another person (“Dinner at 8?”), the thought would just go from one brain to the other directly.

Thankfully, we’ll be able to control this completely, Musk tells Urban: “People won’t be able to read your thoughts — you would have to will it. If you don’t will it, it doesn’t happen. Just like if you don’t will your mouth to talk, it doesn’t talk.”

Musk Is Working with Some Very Smart People

Musk met with more than 1,000 people before deciding on the eight who would help him shape the future of humanity at Neuralink. He claims assembling the right team was a challenge in and of itself, as he needed to find people capable of working in a cross-disciplinary field that includes everything from brain surgery to microscopic electronics.

The crew he landed is a veritable supergroup of smarties. They have backgrounds from MIT, Duke, and IBM, and their bios include phrases like “neural dust,” “cortical physiology,” and “human psychophysics.” They’re engineers, neurosurgeons, and chip designers, and if anyone can bring Elon Musk’s vision to life, it’s them.

The Timeline For Adoption Is Hazy…

Neuralink won’t come out the gate with a BMI that transforms you into a walking computer. The first product the company will focus on releasing will be much more targeted. “We are aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years,” said Musk.

“I think we are about 8 to 10 years away from this being usable by people with no disability.” – Musk

The same way SpaceX was able to fund its research on reusable rockets by making deliveries to the ISS, or Tesla was able to use profits from its early car sales to fund battery research, these earliest BMIs for treating disease and disability will keep Neuralink afloat as it works on its truly mind-bending technologies.

As for when those technologies, the ones that allow healthy people to channel their inner telepaths, will arrive, Musk’s fairly optimistic timeline comes with several contingencies: “I think we are about 8 to 10 years away from this being usable by people with no disability…It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.”

…Because The Hurdles are Many

Those are just two of the hurdles Neuralink faces. Elon Musk might make innovation look easy, but even going to Mars seems relatively straightforward in comparison to his plans for his latest company.

First, there are the engineering hurdles to overcome. The company has to deal with the problems of biocompatibility, wirelessness, power, and — the big one — bandwidth. To date, we’ve never put more than roughly 200 electrodes in a person’s brain at one time. When talking about a world-changing interface, the Neuralink team told Urban they were thinking of something like “one million simultaneously recorded neurons.” Not only would they need to find a way to ensure that the brain could effectively communicate with that many electrodes, they would also need to overcome the very practical problem of where to physically put them.
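
Some rough arithmetic shows why bandwidth is “the big one.” The sampling rate and bit depth below are common ballpark assumptions for extracellular neural recording, not figures from Urban’s report or from Neuralink.

```python
# Assumed recording parameters (typical ballpark values, not Neuralink's):
SAMPLE_RATE_HZ = 30_000   # samples per channel per second
BITS_PER_SAMPLE = 10      # ADC resolution

def raw_data_rate_gbps(channels: int) -> float:
    """Uncompressed data rate in gigabits per second."""
    return channels * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1e9

for channels in (200, 1_000_000):
    print(f"{channels:>9,} channels -> {raw_data_rate_gbps(channels):7.2f} Gbit/s")
# ~200 electrodes is about 0.06 Gbit/s of raw signal; a million
# simultaneously recorded neurons is ~300 Gbit/s -- a 5,000-fold jump.
```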

The engineering is only half the battle, though. As Musk mentioned, regulatory approval will be a big factor in the development and adoption of Neuralink’s tech. The company also faces potential skepticism and even fear from a public that doesn’t want anyone cutting into its brain to install some high-tech machinery — according to a recent Pew survey, the public is even more worried about brain-computer interfaces than about gene editing. There’s also the not-entirely-unfounded fear that these computers could be hacked.

Add to all that our still very, very incomplete understanding of exactly how the brain ticks, and you can see that the Neuralink team has its work cut out for it.

Neuralink Won’t Exist in a Vacuum

Thankfully, they won’t be working to remake our minds alone — many other universities and research institutes are pushing brain interface technology forward. Facebook’s Building 8 is working on its own BCI, MIT is creating super-thin wires for use in brain implants, and other cyborg devices are already in the works to help the paralyzed walk again and the blind regain their sight. Each new development will push the field forward, and the team at Neuralink will be able to learn from the mistakes and successes of others in the field.

Just like other electric cars were on the road before Tesla came along, brain computer interfaces are not new — the tech might just need a visionary like Musk to elevate it (and us) to the next level.

Canadian province trials basic income for thousands of residents

January 05, 2018

Canada is testing a basic income to discover what impact the policy has on unemployed people and those on low incomes.

The province of Ontario is planning to give 4,000 citizens monthly payments worth up to C$17,000 a year and assess how the money affects their health, wellbeing, earnings and productivity.

It is among a number of regions and countries across the globe that are now piloting the scheme, which sees residents given a certain amount of money each month regardless of whether or not they are in work.

Although it is too early for the Ontario pilot to deliver clear results, some of those involved have already reported a significant change.

One recipient, Tim Button, said the monthly payments were making a “huge difference” to his life. He worked as a security guard until a fall from a roof left him unable to work.

“It takes me out of depression”, he told the Associated Press. “I feel more sociable.”

The basic income payments have boosted his income by almost 60 per cent and have allowed him to make plans to visit his family for Christmas for the first time in years. He has also been able to buy healthier food, see a dentist and look into taking an educational course to help him find work.

Under the Ontario experiment, unemployed people or those on low incomes can receive up to C$17,000 (£9,900) a year and are also allowed to keep half of what they earn at work, meaning there is still an incentive to work. Couples are entitled to up to C$24,000 (£13,400).
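
Read literally, “keep half of what they earn” implies a benefit that shrinks by 50 cents for each dollar earned, so working always raises total income. Here is a minimal sketch of that arithmetic, on our reading of the article rather than any official formula:

```python
MAX_BENEFIT = 17_000  # C$ per year for a single person (couples: C$24,000)

def annual_benefit(earnings: float, max_benefit: float = MAX_BENEFIT) -> float:
    """Benefit after a 50% clawback on earned income (our reading)."""
    return max(0.0, max_benefit - 0.5 * earnings)

for earned in (0, 10_000, 20_000, 34_000):
    benefit = annual_benefit(earned)
    print(f"earned C${earned:>6,} -> benefit C${benefit:>6,.0f}, "
          f"total C${earned + benefit:>6,.0f}")
# Total income rises with every extra dollar earned; the benefit phases
# out entirely at C$34,000 of earnings under these assumptions.
```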

If the trial proves successful, the scheme could be expanded to more of the province’s 14.2 million residents and may inspire more regions of Canada and other nations to adopt the policy.

Support for a basic income has grown in recent years, fuelled in part by fears about the impact that new technology will have on jobs. As machines and robots are able to complete an increasing number of tasks, attention has turned to how people will live when there are not enough jobs to go round.

Ontario’s Premier, Kathleen Wynne, said this was a major factor in the decision to trial a basic income in the province.

She said: “I see it on a daily basis. I go into a factory and the floor plant manager can tell me where there were 20 people and there is one machine. We need to understand what it might look like if there is, in fact, the labour disruption that some economists are predicting.”

Ontario officials have found that many people are reluctant to sign up to the scheme, fearing there is a catch or that they will be left without money once the pilot finishes.

Many of those who are receiving payments, however, say their lives have already been changed for the better.

Dave Cherkewski, 46, said the extra C$750 (£436) a month he receives has helped him to cope with the mental illness that has kept him out of work since 2002.

“I’ve never been better after 14 years of living in poverty,” he said.

He hopes to soon find work helping other people with mental health challenges.

He said: “With basic income I will be able to clarify my dream and actually make it a reality, because I can focus all my effort on that and not worry about, ‘Well, I need to pay my $520 rent, I need to pay my $50 cellphone, I need to eat and do other things’.”

Finland is also trialling a basic income, as are the state of Hawaii, Oakland in California and the Dutch city of Utrecht.

There’s a major long-term trend in the economy that isn’t getting enough attention

January 05, 2018

As the December Federal Reserve (Fed) meeting nears, discussions and speculation about the precise timing of Fed liftoff are certain to take center stage.

But while I’ve certainly weighed in on this debate many times, I believe it’s just one example of a topic that receives far too much attention from investors and market watchers alike.

The Fed has been abundantly clear that the forthcoming rate-hiking cycle, likely to begin this month, will be incredibly gradual and sensitive to how economic data evolve. The central bank, in other words, is likely to be extraordinarily cautious about derailing the recovery, and rates will likely remain historically low for an extended period of time. When the Fed does begin rate normalization, not much is likely to change.

Shifting the Focus to Other Economic Trends

In contrast, there are a number of important longer-term trends more worthy of our focus, as they’re likely to have a bigger, longer-sustaining impact on markets than the Fed’s first rate move. One such market influence that I believe should be getting more attention: the advances in technology happening all around us, innovations already having a huge disruptive influence on the economy and markets. These three charts help explain why.


[Chart: pace of U.S. adoption of new technologies, from the television to tablets and smartphones]

As the chart above shows, people in the U.S. today are adopting new technologies, including tablets and smartphones, at the swiftest pace we’ve seen since the advent of the television. However, while television arguably detracted from U.S. productivity, today’s advances in technology are generally geared toward greater efficiency at lower costs. Indeed, when you take into account technology’s downward influence on price, U.S. consumption and productivity figures look much better than headline numbers would suggest.
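
A toy example (all numbers invented) of the price-adjustment point: when quality-adjusted tech prices fall, flat nominal spending still translates into real consumption growth.

```python
# Invented illustration: nominal spending is flat, but the (assumed)
# quality-adjusted price of the goods falls 20%, so real consumption rises.
nominal_spending = {"year 1": 1000.0, "year 2": 1000.0}  # dollars
price_index     = {"year 1": 1.00,   "year 2": 0.80}     # assumed deflator

real = {yr: nominal_spending[yr] / price_index[yr] for yr in nominal_spending}
growth = real["year 2"] / real["year 1"] - 1
print(f"Real consumption growth: {growth:.0%}")  # +25% despite flat spending
```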


[Chart: U.S. corporate capital expenditure and the share of large-cap companies reporting effectively zero inventories]

Technology isn’t just transforming the consumer story. It’s having a similarly dramatic influence on industry, resulting in efficiency gains not reflected in traditional productivity measurements.

For instance, based on corporate capital expenditure data accessible via Bloomberg, it’s clear that U.S. investment is generally accelerating. However, the cost of that investment is going down, allowing companies to become dramatically more efficient in order to better compete. Similarly, with the help of new technologies, many corporations have refined inventory management practices, or have adopted business models that are purposefully asset-light, causing average inventory levels to decline over the past few decades. As the chart above shows, among the top 1500 U.S. stocks by market capitalization over the past 35 years, the percentage of companies reporting effectively zero inventory levels has increased to more than 20 percent from fewer than 5 percent, an extraordinary four-fold rise.

Above all, if there’s one common theme in all three of these charts, it’s this: Technology is advancing so fast that traditional economic metrics haven’t kept up. This has serious implications. It helps to explain widespread misconceptions about the state of the U.S. economy, including the assertion that we reside in a period of low productivity growth, despite the many remarkable advances we see around us. It also makes monetary policy evolution more difficult, and is one reason why I’ve found recent policy debates somewhat myopic and distorted from reality.

So, let’s all make this New Year’s resolution: Instead of focusing so much on the Fed, let’s give some attention to how technology is changing the entire world in ways never before witnessed, and let’s focus on education and training policies that can help our workforce adapt. Such initiatives are more important and durable, and should have fewer unintended negative economic consequences, than policies designed to distort real interest rates.
