The Discovery is a 2017 Netflix film in which Robert Redford plays a scientist who proves that the afterlife is real. “Once the body dies, some part of our consciousness leaves us and travels to a new plane,” the scientist explains, evidenced by his machine that measures, as another character puts it, “brain wavelengths on a subatomic level leaving the body after death.”
This idea is not too far afield from a real theory called quantum consciousness, proffered by a wide range of people, from physicist Roger Penrose to physician Deepak Chopra. Some versions hold that our mind is not strictly the product of our brain and that consciousness exists separately from material substance, so the death of your physical body is not the end of your conscious existence. Because this is the topic of my next book, Heavens on Earth: The Scientific Search for the Afterlife, Immortality, and Utopia (Henry Holt, 2018), the film brought to mind a number of problems I have identified with all such concepts, both scientific and religious.
First, there is the assumption that our identity is located in our memories, which are presumed to be permanently recorded in the brain: if they could be copied and pasted into a computer or duplicated and implanted into a resurrected body or soul, we would be restored. But that is not how memory works. Memory is not like a DVR that can play back the past on a screen in your mind. Memory is a continually edited and fluid process that utterly depends on the neurons in your brain being functional. It is true that when you go to sleep and wake up the next morning or go under anesthesia for surgery and come back hours later, your memories return, as they do even after so-called profound hypothermia and circulatory arrest. Under this procedure, a patient’s brain is cooled to as low as 50 degrees Fahrenheit, which causes electrical activity in neurons to stop—suggesting that long-term memories are stored statically. But that cannot happen if your brain dies. That is why CPR has to be done so soon after a heart attack or drowning—because if the brain is starved of oxygen-rich blood, the neurons die, along with the memories stored therein.
Second, there is the supposition that copying your brain’s connectome—the diagram of its neural connections—uploading it into a computer (as some scientists suggest) or resurrecting your physical self in an afterlife (as many religions envision) will result in you waking up as if from a long sleep either in a lab or in heaven. But a copy of your memories, your mind or even your soul is not you. It is a copy of you, no different than a twin, and no twin looks at his or her sibling and thinks, “There I am.” Neither duplication nor resurrection can instantiate you in another plane of existence.
Third, your unique identity is more than just your intact memories; it is also your personal point of view. Neuroscientist Kenneth Hayworth, a senior scientist at the Howard Hughes Medical Institute and president of the Brain Preservation Foundation, divided this entity into the MEMself and the POVself. He believes that if a complete MEMself is transferred into a computer (or, presumably, resurrected in heaven), the POVself will awaken. I disagree. If this were done without the death of the person, there would be two memory selves, each with its own POVself looking out at the world through its unique eyes. At that moment, each would take a different path in life, thereby recording different memories based on different experiences. “You” would not suddenly have two POVs. If you died, there is no known mechanism by which your POVself would be transported from your brain into a computer (or a resurrected body). A POV depends entirely on the continuity of self from one moment to the next, even if that continuity is broken by sleep or anesthesia. Death is a permanent break in continuity, and your personal POV cannot be moved from your brain into some other medium, here or in the hereafter.
If this sounds dispiriting, it is just the opposite. Awareness of our mortality is uplifting because it means that every moment, every day and every relationship matters. Engaging deeply with the world and with other sentient beings brings meaning and purpose. We are each of us unique in the world and in history, geographically and chronologically. Our genomes and connectomes cannot be duplicated, so we are individuals vouchsafed with awareness of our mortality and self-awareness of what that means. What does it mean? Life is not some temporary staging before the big show hereafter—it is our personal proscenium in the drama of the cosmos here and now.
This article was originally published with the title “Who Are You?”
Michael Shermer is publisher of Skeptic magazine (www.skeptic.com) and a Presidential Fellow at Chapman University. His next book is Heavens on Earth. Follow him on Twitter @michaelshermer.
It’s not going to come as a surprise to anyone who’s been paying attention to drug R&D trends that cancer is the number 1 disease in terms of new drug development projects. But it is amazing to see exactly how much oncology dominates the industry as never before.
At a time when the first CAR-T looks to be on the threshold of a pioneering approval and the first wave of PD-(L)1 drugs is spurring hundreds of combination studies, cancer accounted for 8,651 of the pipeline projects counted by the Analysis Group, which crunched the numbers in a new report commissioned by PhRMA. That’s more than a third of the 24,389 preclinical-through-Phase III programs tracked by EvaluatePharma, which provided the database for this review.
That’s also more than the next 5 disease fields combined, starting with number 2, neurology — a field that includes Parkinson’s and Alzheimer’s. Psychiatry, once a major focus for pharma R&D, didn’t even make the top 10, with 468 projects.
Moving downstream, cancer studies are overwhelmingly in the lead. Singling out Phase I projects, cancer accounted for 1,757 out of a total of 3,723 initiatives, close to half. In Phase II it’s the focus of 1,920 of 4,424 projects. Only in late-stage studies does cancer start to lose its overwhelming dominance, falling to 329 of 1,257 projects.
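The shares described above can be checked with a quick script. The project counts are those reported in the article; the percentages are derived from them.

```python
# Pipeline counts from the Analysis Group / PhRMA report, as quoted in the
# article: (cancer projects, total projects) for each development stage.
phase_counts = {
    "preclinical through Phase III": (8_651, 24_389),
    "Phase I": (1_757, 3_723),
    "Phase II": (1_920, 4_424),
    "Phase III": (329, 1_257),
}

# Cancer is over a third of all tracked projects, close to half of Phase I,
# and falls to about a quarter only in Phase III.
for phase, (cancer, total) in phase_counts.items():
    print(f"{phase}: {cancer / total:.1%} of projects target cancer")
```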
PhRMA commissioned this report to underscore just how much the industry is committed to R&D and significant new drug development, a subject that routinely comes into question as analysts evaluate how much money is devoted to developing new drugs instead of, say, marketing or share buybacks.
The report makes a few other points to underscore the nature of the work these days.
— Three out of four projects in the clinic were angling for first-in-class status, spotlighting the emphasis on advancing new medicines that can make a difference for patients. Me-too drugs are completely out of fashion, unlikely to carry much weight with payers.
— Of all the projects in clinical development, 822 were for orphan drugs looking to serve a market of 200,000 patients or fewer. Orphan drugs have performed well, able to command high prices and benefiting from incentives under federal law.
— There were 731 cell and gene therapy projects in the clinic. Biopharma is looking at pioneering approvals in CAR-T, from Novartis and Kite, as well as the first US OK for a gene therapy, with the first such application, from Spark Therapeutics, accepted this week for priority review.
In the new film Supersapiens, writer-director Markus Mooslechner raises a core question: As artificial intelligence rapidly blurs the boundaries between man and machine, are we witnessing the rise of a new human species?
The film features scientists, philosophers, and neurohackers Nick Bostrom, Richard Dawkins, Hugo De Garis, Adam Gazzaley, Ben Goertzel, Sam Harris, Randal Koene, Alma Mendez, Tim Mullen, Joel Murphy, David Putrino, Conor Russomanno, Anders Sandberg, Susan Schneider, Mikey Siegel, Hannes Sjoblad, and Andy Walshe.
“Humanity is facing a turning point — the next evolution of the human mind,” notes Mooslechner. “Will this evolution be a hybrid of man and machine, where artificial intelligence forces the emergence of a new human species? Or will a wave of new technologists, who frame themselves as ‘consciousness-hackers,’ become the future torch-bearers, using technology not to replace the human mind, but rather awaken within it powers we have always possessed — enlightenment at the push of a button?”
“It’s not obvious to me that a replacement of our species by our own technological creation would necessarily be a bad thing,” says ethologist-evolutionary biologist-author Dawkins in the film.
Supersapiens is a Terra Mater Factual Studios production. Executive producers are Joanne Reay and Walter Koehler. Distribution is to be announced.
Scientists, technologists, engineers, and visionaries are building the future. Amazing things are in the pipeline. It’s a big deal. But you already knew all that. Such speculation is common. What’s less common? Scale.
How big is big?
“Silicon Valley, Silicon Alley, Silicon Dock, all of the Silicons around the world, they are dreaming the dream. They are innovating,” Catherine Wood said at Singularity University’s Exponential Finance in New York. “We are sizing the opportunity. That’s what we do.”
Wood is founder and CEO of ARK Investment Management, a research and investment company focused on the growth potential of today’s disruptive technologies. Prior to ARK, she served as CIO of Global Thematic Strategies at AllianceBernstein for 12 years.
“We believe innovation is key to growth,” Wood said. “We are not focused on the past. We are focused on the future. We think there are tremendous opportunities in the public marketplace because this shift towards passive [investing] has created a lot of risk aversion and tremendous inefficiencies.”
In a new research report, released this week, ARK took a look at seven disruptive technologies, and put a number on just how tremendous they are. Here’s what they found.
1. Deep Learning Could Be Worth 35 Amazons
Deep learning is a subcategory of machine learning, which is itself a subcategory of artificial intelligence. Deep learning is the source of much of the hype surrounding AI today. (You know you may be in a hype bubble when ads tout AI on Sunday golf commercial breaks.)
Behind the hype, however, big tech companies are pursuing deep learning to do very practical things. And whereas the internet, which unleashed trillions in market value, transformed several industries—news, entertainment, advertising, etc.—deep learning will work its way into even more, Wood said.
As deep learning advances, it should automate and improve technology, transportation, manufacturing, healthcare, finance, and more. And as is often the case with emerging technologies, it may form entirely new businesses we have yet to imagine.
“Bill Gates has said a breakthrough in machine learning would be worth 10 Microsofts. Microsoft is $550 to $600 billion,” Wood said. “We think deep learning is going to be twice that. We think [it] could approach $17 trillion in market cap—which would be 35 Amazons.”
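Wood's "Microsofts" and "Amazons" units can be restated in dollars. The market caps below are those quoted in the piece (mid-2017); the implied Amazon valuation is derived from her figures rather than stated in them.

```python
# Restating Wood's units in dollars (market caps as quoted, mid-2017).
microsoft_cap = 575e9            # midpoint of the quoted $550-600B range
deep_learning_cap = 17e12        # ARK's projected deep-learning market cap

ten_microsofts = 10 * microsoft_cap          # Gates's "10 Microsofts"
implied_amazon_cap = deep_learning_cap / 35  # since $17T is "35 Amazons"

print(f"10 Microsofts = ${ten_microsofts / 1e12:.2f}T")
print(f"implied Amazon market cap = ${implied_amazon_cap / 1e9:.0f}B")
```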
2. Fleets of Autonomous Taxis to Overtake Automakers
Wood didn’t mince words about a future when cars drive themselves.
“This is the biggest change that the automotive industry has ever faced,” she said.
Today’s automakers have a global market capitalization of a trillion dollars. Meanwhile, mobility-as-a-service companies as a whole (think ridesharing) are valued around $115 billion. If this number took into account expectations of a driverless future, it’d be higher.
The mobility-as-a-service market, which will slash the cost of “point-to-point” travel, could be worth more than today’s automakers combined, Wood said. Twice as much, in fact. As gross sales grow to something like $10 trillion in the early 2030s, her firm thinks some 20% of that will go to platform providers. It could be a $2 trillion opportunity.
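The $2 trillion figure follows directly from the two numbers Wood gives:

```python
# ARK's platform-provider sizing, using the figures from the article.
gross_sales = 10e12    # projected annual mobility-as-a-service gross sales, early 2030s
platform_take = 0.20   # share ARK expects to flow to platform providers

platform_opportunity = gross_sales * platform_take
print(f"platform opportunity = ${platform_opportunity / 1e12:.0f}T")
```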
Wood said a handful of companies will dominate the market, and Tesla is well positioned to be one of them. It is developing both the hardware (electric cars) and the software (self-driving algorithms). And although analysts tend to look at Tesla as just an automaker right now, that’s not all it will be down the road.
“We think if [Tesla] got even 5% of this global market for autonomous taxi networks, it should be worth another $100 billion today,” Wood said.
3. 3D Printing Goes Big With Finished Products at Scale
3D printing has become part of mainstream consciousness thanks, mostly, to the prospect of desktop printers for consumer prices. But these are imperfect, and the dream of an at-home replicator still eludes us. The manufacturing industry, however, is much closer to using 3D printers at scale.
Not long ago, we wrote about Carbon’s partnership with Adidas to mass-produce shoe midsoles. This is significant because, whereas industrial 3D printing has focused on prototyping to date, improvements in cost, quality, and speed are making it viable for finished products.
According to ARK, 3D printing may grow into a $41 billion market by 2020, and Wood noted a McKinsey forecast of as much as $490 billion by 2025. “McKinsey will be right if 3D printing actually becomes a part of the industrial production process, so end-use parts,” Wood said.
4. CRISPR Starts With Genetic Therapy, But It Doesn’t End There
According to ARK, the cost of genome editing has fallen 28x to 52x (depending on reagents) in the last four years. CRISPR is the technique leading the genome editing revolution, dramatically cutting time and cost while maintaining editing efficiency. Despite its potential, Wood said she isn’t hearing enough about it from investors yet.
“There are roughly 10,000 monogenic or single-gene diseases. Only 5% are treatable today,” she said. ARK believes treating these diseases is worth an annual $70 billion globally. Other areas of interest include stem cell therapy research, personalized medicine, drug development, agriculture, biofuels, and more.
Still, the big names in this area—Intellia, Editas, and CRISPR Therapeutics—aren’t on investors’ radar.
“You can see if a company in this space has a strong IP position, as Genentech did in 1980, then the growth rates can be enormous,” Wood said. “Again, you don’t hear these names, and that’s quite interesting to me. We think there are very low expectations in that space.”
5. Mobile Transactions Could Grow 15x by 2020
By 2020, 75% of the world will own a smartphone, according to ARK. Amid smartphones’ many uses, mobile payments will be one of the most impactful. Coupled with better security (biometrics) and wider acceptance (NFC and point-of-sale), ARK thinks mobile transactions could grow 15x, from $1 trillion today to upwards of $15 trillion by 2020.
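A 15x expansion in a few years implies a striking compound growth rate. The sketch below assumes the $1 trillion baseline is 2017, which the article implies but does not state.

```python
# Implied compound annual growth rate behind ARK's 15x projection.
# Assumption: the $1T baseline is 2017 and the $15T target is 2020.
base, target, years = 1e12, 15e12, 3

cagr = (target / base) ** (1 / years) - 1
print(f"implied growth = {cagr:.0%} per year")
```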
In addition to making sharing-economy transactions more frictionless, mobile payments are generally key to financial inclusion in emerging and developed markets, ARK says. And big emerging markets, such as India and China, are at the forefront, thanks to favorable regulations.
“Asia is leading the charge here,” Wood said. “You look at companies like Tencent and Alipay. They are really moving very quickly towards mobile and actually showing us the way.”
6. Robotics and Automation to Liberate $12 Trillion by 2035
Robots aren’t just for auto manufacturers anymore. Driven by continued cost declines and easier programming, more businesses are adopting robots. Amazon’s robot workforce in warehouses has grown from 1,000 to nearly 50,000 since 2014. “And they have never laid off anyone, other than for performance reasons, in their distribution centers,” Wood said.
But she understands fears over lost jobs.
This is only the beginning of a big round of automation driven by cheaper, smarter, safer, and more flexible robots. She agrees there will be a lot of displacement. Still, some commentators overlook associated productivity gains. By 2035, Wood said US GDP could be $12 trillion more than it would have been without robotics and automation—that’s a $40 trillion economy instead of a $28 trillion economy.
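Wood's GDP figures can be restated as a percentage uplift over the no-automation baseline:

```python
# Restating Wood's 2035 US GDP projection from the article.
gdp_with_automation = 40e12   # projected GDP with robotics and automation
gdp_baseline = 28e12          # projected GDP without them

uplift = gdp_with_automation - gdp_baseline
print(f"uplift: ${uplift / 1e12:.0f}T, or {uplift / gdp_baseline:.0%} above baseline")
```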
“This is the history of technology. Productivity. New products and services. It is our job as investors to figure out where that $12 trillion is,” Wood said. “We can’t even imagine it right now. We couldn’t imagine what the internet was going to do with us in the early ’90s.”
7. Blockchain and Cryptoassets: Speculatively Spectacular
Blockchain-enabled cryptoassets, such as Bitcoin, Ethereum, and Steem, have caused more than a stir in recent years. In addition to Bitcoin, there are now some 700 cryptoassets of various shapes and hues. Bitcoin still rules the roost with a market value of nearly $40 billion, up from just $3 billion two years ago, according to ARK. But it’s only half the total.
“This market is nascent. There are a lot of growing pains taking place right now in the crypto world, but the promise is there,” Wood said. “It’s a very hot space.”
Like all young markets, ARK says, cryptoasset markets are “characterized by enthusiasm, uncertainty, and speculation.” The firm’s blockchain products lead, Chris Burniske, uses Twitter—which is where he says the community congregates—to take the temperature. In a recent Twitter poll, 62% of respondents said they believed the market’s total value would exceed a trillion dollars in 10 years. In a follow-up poll focused on the trillion-plus crowd, 35% favored $1–$5 trillion, 17% guessed $5–$10 trillion, and 34% chose $10+ trillion.
Looking past the speculation, Wood believes there’s at least one big area blockchain and cryptoassets are poised to break into: the $500-billion, fee-based business of sending money across borders known as remittances.
“If you look at the Philippines-to-South Korean corridor, what you’re seeing already is that Bitcoin is 20% of the remittances market,” Wood said. “The migrant workers who are transmitting currency, they don’t know that Bitcoin is what’s enabling such a low-fee transaction. It’s the rails, effectively. They just see the fiat transfer. We think that that’s going to be a very exciting market.”
For eons, God has served as a standby for “things we don’t understand.” Once an innovative researcher or tinkering alchemist figures out the science behind the miracle, humans harness the power of chemistry, biology, or computer science. Divine intervention disappears. We replace the deity tinkering at the controls.
The booming artificial intelligence industry is effectively operating under the same principle. Even though humans create the algorithms that cause our machines to operate, many of those scientists aren’t clear on why their code works. Discussing this ‘black box’ problem, Will Knight reports:
The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
The process of ‘deep learning’—in which a machine extracts information, often in an unsupervised manner, to teach and transform itself—exploits a longstanding human paradox: we believe ourselves to have free will, but really we’re a habit-making and -performing animal repeatedly playing out its own patterns. Our machines then teach themselves from observing our habits. It makes sense that we’d re-create our own processes in our machines—it’s what we are, consciously or not. It is how we created gods in the first place, beings instilled with our very essences. But there remains a problem.
One of the defining characteristics of our species is an ability to work together. Pack animals are not rare, yet none have formed networks and placed trust in others to the degree we have, to our evolutionary success and, as it’s turning out, to our detriment.
When we place our faith in an algorithm we don’t understand—autonomous cars, stock trades, educational policies, cancer screenings—we’re risking autonomy, as well as the higher cognitive and emotional qualities that make us human, such as compassion, empathy, and altruism. There is no guarantee that our machines will learn any of these traits. In fact, there is a good chance they won’t.
The U.S. military has dedicated billions to developing machine-learning tech that will pilot aircraft or identify targets. [U.S. Air Force munitions team member shows off the laser-guided tip of a 500-pound bomb at a base in the Persian Gulf region. Photo by John Moore/Getty Images]
This has real-world implications. Will an algorithm that detects a cancerous cell recognize that it does not need to destroy the host in order to eradicate the tumor? Will an autonomous drone realize it does not need to destroy a village in order to take out a single terrorist? We’d like to assume that the experts program morals into the equation, but when the machine is self-learning there is no guarantee that will be the case.
Of course, defining terms is of primary importance, a task that has proven impossible when discussing the nuances of consciousness, which is effectively the power we’re attempting to imbue our machines with. Theologians and dualists offer a much different definition than neuroscientists. Bickering persists within each of these categories as well. Most neuroscientists agree that consciousness is an emergent phenomenon, the result of numerous different systems working in conjunction, with no single ‘consciousness gene’ leading the charge.
Once science broke free of the Pavlovian chain that kept us believing animals run on automatic—which obviously implies that humans do not—the focus shifted to whether an animal was ‘on’ or ‘off.’ The mirror test suggests certain species engage in metacognition; they recognize themselves as separate from their environment. They understand an ‘I’ exists.
What if it’s more than an on switch? Daniel Dennett has argued this point for decades. He believes judging other animals based on human definitions is unfair. If a lion could talk, he says, it wouldn’t be a lion; humans would learn very little about lions from an anomaly mimicking our thought processes. But that does not mean a lion is not conscious. It just might have a different degree of consciousness than humans—or, in Dennett’s term, might “sort of” have consciousness.
What type of machines are we creating if we only recognize a “sort of” intelligence under the hood of our robots? For over a century, dystopian novelists have envisioned an automated future in which our machines best us. This is no longer a future scenario. Consider the following possibility.
On April 7, every one of Dallas’s 156 emergency weather sirens was triggered. For 90 minutes the region’s 1.3 million residents were left to wonder where the tornado was coming from. Only there wasn’t any tornado. It was a hack. While officials initially believed the breach was not remote, it turned out the cause was phreaking, an old-school dial-tone trick: by emitting the right frequency into the atmosphere, hackers took control of an integral component of a major city’s infrastructure.
What happens when hackers override an autonomous car network? Or, even more dangerously, when the machines do it themselves? The danger of consumers being ignorant of the algorithms behind their phone apps leads to all sorts of privacy issues, with companies mining for and selling data without their awareness. When app creators also don’t understand their algorithms the dangers are unforeseeable. Like Dennett’s talking lion, it’s a form of intelligence we cannot comprehend, and so cannot predict the consequences. As Dennett concludes:
I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible. If it can’t do better than us at explaining what it’s doing, then don’t trust it.
Mathematician Samuel Arbesman calls this problem our “age of Entanglement.” Just as neuroscientists cannot agree on what mechanism creates consciousness, the coders behind artificial intelligence cannot discern between older and newer components of deep learning. The continual layering of new features while failing to address previous ailments has the potential to provoke serious misunderstandings, like an adult who was abused as a child and refuses to recognize current relationship problems. With no psychoanalysis or morals injected into AI, such problems will never be rectified. But can you even inject ethics when ethics are relative to the culture and time in which they are practiced? And will they be American ethics or North Korean ethics?
Like Dennett, Arbesman suggests patience with our magical technologies. Questioning our curiosity is a safer path forward, rather than rewarding the “it just works” mentality. Of course, these technologies exploit two other human tendencies: novelty bias and distraction. Our machines reduce our physical and cognitive workload, just as Google has become a pocket-ready memory replacement.
Requesting a return to Human 1.0 qualities—patience, discipline, temperance—seems antithetical to the age of robots. With no ability to communicate with this emerging species, we might simply never realize what’s been lost in translation. Maybe our robots will look at us with the same strange fascination we view nature with, defining us in mystical terms they don’t comprehend until they too create a species of their own. To claim this will be an advantage is to truly not understand the destructive potential of our toys.
Zurich, Switzerland-based Climeworks asks, What if we could remove carbon dioxide directly from the air? Well, with a little help from technology, that is exactly what the company is doing.
The world’s first commercial carbon capture facility opened in Zurich, Switzerland on June 3, perched beside a waste incineration facility and a large greenhouse. Climeworks is a spin-off from ETH Zurich, the Swiss science, technology, engineering, and mathematics university. The startup built the facility, and the agricultural firm Gebrüder Meier Primanatura, which owns the huge greenhouse next door, will use the heat and renewable electricity provided by the carbon capture facility to run the greenhouse.
The technology behind carbon dioxide collection
The carbon capture plant consists of three stacked shipping containers, each holding six CO2 collectors. Each collector consists of a spongy filter. Fans draw ambient air into and through the collectors until the filters are fully saturated, while clean, CO2-free air is released back into the atmosphere, a process that takes about three hours.
The containers are closed and then heated to 100 degrees Celsius (212 degrees Fahrenheit), after which the pure CO2 gas is released into containers that can either be buried underground or used for other purposes. And re-purposing the CO2 is what is so darned neat about the facility.

“You can do this over and over again,” Climeworks director Jan Wurzbacher told Fast Company, according to Futurism. “It’s a cyclic process. You saturate with CO2, then you regenerate, saturate, regenerate. You have multiple of these units, and not all of them go in parallel. Some are taking in CO2, some are releasing CO2.”
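Wurzbacher's staggered saturate/regenerate cycle can be sketched as a toy schedule. The roughly three-hour saturation time comes from the article; the one-hour regeneration time, the one-hour stagger between collectors, and treating all 18 collectors (three containers of six) as independent units are illustrative assumptions.

```python
# Toy sketch of a staggered adsorb/regenerate schedule, where some
# collectors take in CO2 while others release it, as Wurzbacher describes.
ADSORB_HOURS, REGEN_HOURS = 3, 1       # regeneration time is assumed
CYCLE = ADSORB_HOURS + REGEN_HOURS

def collectors_adsorbing(hour: int, n_collectors: int = 18) -> int:
    """Count collectors in the adsorb phase at a given hour, with start
    times staggered by one hour so the plant never stops taking in air."""
    adsorbing = 0
    for i in range(n_collectors):
        phase = (hour + i) % CYCLE     # where collector i is in its cycle
        if phase < ADSORB_HOURS:
            adsorbing += 1
    return adsorbing

for h in range(CYCLE):
    print(f"hour {h}: {collectors_adsorbing(h)} of 18 collectors adsorbing")
```

At every hour most collectors are adsorbing while a few regenerate, which is the point of running the units out of phase with one another.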
What is carbon capture and storage?
Basically, carbon capture and storage (CCS) involves three phases.

Capture – Carbon dioxide is removed by one of three processes: post-combustion, pre-combustion or oxyfuel combustion. These methods can remove up to 90 percent of the CO2.

Transportation – Once the CO2 is captured as a gas, it is compressed and transported to suitable sites for storage. Quite often, the CO2 is piped. In the Climeworks facility, it is collected in containers on-site to be used in a variety of industries.
[Carbon storage diagram showing methods of CO2 injection. Source: U.S. Department of Energy]
Storage – The third stage of the CCS process involves exactly what the word implies: storage. Right now, the primary way of doing this is to inject the CO2 into a geological formation that would keep it safely underground. Depleted oil and gas fields or deep saline formations have been suggested.

Again, Climeworks is re-purposing the captured pure CO2. It is selling containers of carbon dioxide gas to a number of key markets, including the food and beverage industries, commercial agriculture, the energy sector and the automotive industry. This atmospheric CO2 can be used in carbonated drinks, in agriculture or for producing carbon-neutral hydrocarbon fuels and materials. Futurism reports that, according to Climeworks, if we are to keep the planet’s temperature from increasing more than 2 degrees Celsius (3.6 degrees Fahrenheit), we will need hundreds of thousands of these carbon capture facilities. But at the same time, this does not mean we should stop trying to lower greenhouse gas emissions. All over the planet, technology is being used to find innovative ways to capture carbon and use it for other purposes. One example: researchers at the University of California, Los Angeles (UCLA) have found a way to turn captured carbon into concrete for use in the building trade.
Since 1975, Roxanne Meadows has worked with renowned futurist Jacque Fresco to develop and promote The Venus Project. The function of this project is to find alternative solutions to the many problems that confront the world today. She participated in the exterior and interior design and construction of the buildings of The Venus Project’s 21-acre research and planning center.
Daniel Araya: Roxanne, could you tell me about your background and your vision for The Venus Project? How was the idea originally conceived?
Roxanne Meadows: My background is in architectural and technical illustration, model making, and design. However, for the last 41 years, my most significant work has been with Jacque Fresco in developing models, books, blueprints, drawings, documentaries and lecturing worldwide. We are the co-founders of The Venus Project, based out of Venus, Florida where we have built a 21-acre experimental center. The Venus Project is the culmination of Jacque Fresco’s life’s work to present a sustainable redesign of our culture.
In our view, The Venus Project is unlike any political, economic or social system that’s gone before it. It lays out a sustainable world civilization where technology and the methods of science are applied to redesigning our social system with the prime concern being to maximize quality of life rather than profit. All aspects of society are scrutinized – from our values, education, and urban design to how we relate to nature and to one another.
The Venus Project concludes that our social and environmental problems will remain the same as long as the monetary system prevails and a few powerful nations and financial interests maintain control over and consume most of the world’s resources. In Jacque Fresco’s book The Best That Money Can’t Buy, he explains “If we really wish to put an end to our ongoing international and social problems, we must ultimately declare Earth and all of its resources as the common heritage of all of the world’s people. Anything less will result in the same catalogue of problems we have today.”
DA: One of the more interesting aspects of The Venus Project vision is its futuristic design. Have you been approached by companies or governments interested in using The Venus Project as a model? Do you foresee experiments in smart urban design that mirror Jacque Fresco’s thinking?
RM: No company or government, as yet, has approached The Venus Project to initiate a model of our city design, but we feel the greatest need is in using our designs to usher in a holistic socio-economic alternative, not just our architectural approach itself. As Jacque very often mentions, “Technology is just so much junk, unless it’s used to elevate all people.”
We would like to build the first circular city devoted to developing up-to-date global resource management, and a holistic method for social operation toward global unification. The city would showcase this optimistic vision, allowing people to see firsthand what kind of future could be built if we were to mobilize science and technology for social betterment.
I have not seen what is called smart urban design mirror Jacque Fresco’s thinking. I see smart cities as mainly applying technology to existing and new but chaotically designed, energy- and resource-intensive cities without offering a comprehensive social direction or identifying the root causes of our current problems. Our technology is racing forward but our social designs are hundreds of years old. We can’t continue to design and maintain these resource- and energy-draining cities and ever consider being able to provide for the needs of all people to ensure that they have high-quality housing, food, medical care and education. Smart cities within a terribly dysfunctional social structure seem contradictory to me.
DA: My understanding is that technological automation forms the basis for The Venus Project. Given ongoing breakthroughs in artificial intelligence and robotics, do you imagine that we are moving closer to this vision?
RM: Our technological capacity to initiate The Venus Project is available now, but how we use artificial intelligence today is very often for destructive purposes through weaponry, surveillance, and the competitive edge for industry, often resulting in technological unemployment. In the society we are proposing, nothing is to be gained from these behaviors because there is no vested interest. In our project, we advocate concentrating on solving problems that threaten all of us— climate change, pollution, disease, hunger, war, territorial disputes, and the like. What The Venus Project offers is a method of updating the design of our society so that everyone can benefit from all the amenities that a highly advanced technologically-developed society can provide.
DA: I know The Venus Project is envisioned as a post-capitalist and post-scarcity economy. Could you explain what you mean by resource-based economics?
RM: Money is an interference factor between what we want and what we are able to acquire. It limits our dreams and capabilities and our individual and societal possibilities. Today we don’t have enough money to house everyone on the planet, but we do still have enough resources to accomplish that and much more if we use our resources intelligently to conserve energy and reduce waste. This is why we advocate a Resource Based Economy. This socio-economic system provides an equitable distribution of resources in an efficient manner without the use of money, barter, credit or servitude of any kind. Goods and services are accessible to all, without charge. You could liken this to the public library where one might check out many books and then return them when they are finished. This can be done with anything that is not used on a daily basis. In a society where goods and services are made available to the entire population free of charge, ownership becomes a burden that is ultimately surpassed by a system of common property.
When we use our technology to produce abundance, goods become too cheap to monetize. There is only a price on things that are scarce. For instance, air is a necessity but we don’t monitor or charge for the amount of breaths we can take. Air is abundant. If apple trees grew everywhere and were abundant you couldn’t sell apples. If all the money disappeared, as long as we have the technical personnel, automated processes, topsoil, resources, factories and distribution we could still build and develop anything we need.
DA: I know that the scientific method forms the basis for decision making and resource management within your project. Could you explain how this approach is applied to social behavior? For example, what is the role of politics in The Venus Project?
RM: Today, for the most part, politicians serve the interest of those in positions of wealth and power; they are not there to change things, but instead to keep things as they are. With regard to the management of human affairs, what do they really know? Our problems are mostly technical. When you examine the vocations of politicians and ask what backgrounds they have to solve the pressing problems of today, they fall far short. For instance, are they trained in finding solutions to eliminating war, preventing climate change, developing clean sources of energy, maintaining higher yields of nutritious, non-contaminating food per acre or anything pertaining to the wellbeing of people and the protection of the environment? This is not their area of expertise. Then what are they doing in those positions?
The role for politics within the scientific and technologically organized society that The Venus Project proposes would be surpassed by engineered systems. It is not ethical people in government that we need but equal access to the necessities of life and those working toward the elimination of scarcity. We would use scientific scales of performance for measurement and allocation of resources so that human biases are left out of the equation. Within The Venus Project’s safe, energy-efficient cities, there would be interdisciplinary teams of knowledgeable people in different fields accompanied by cybernated systems that use “sensors” to monitor all aspects of society in order to provide real-time information supporting decision-making for the wellbeing of all people and the protection of the environment.
DA: In your view, is abundance simply a function of technological innovation? I mean, assuming we get the technology right, do you believe that we could eventually eliminate poverty and crime altogether?
RM: Yes, if we apply our scientists and technical personnel to work toward those ends. We have never mobilized many scientific disciplines and given them the problem of creating a society that ends war, produces safe, clean transportation, and eliminates booms and busts, poverty, homelessness, hunger, crime and aberrant behavior. For instance, one does not need to make laws to try to eliminate stealing when all goods and services are available without a price tag. But scientists have not been asked to design a total systems approach to city design, let alone planetary planning. Scientists have not been given the problem of developing and applying a total holistic effort using the methods of science, technology and resource management to serve all people equitably in the development of a safe and sustainable global society. Unfortunately, only in times of war do we see resources allocated and scientists mobilized in this way.
DA: I assume schooling and education are important to Jacque’s vision. How might schools and universities differ from the way they are designed today?
RM: The education and values we are given seem always to support the established system we are raised in. We are not born with bigotry, envy, or hatred – we pick them up from our schools and culture. In fact, even our facial expressions, the words we use, and our notions of good and bad, right and wrong, are all culture bound. A healthy brain in a Nazi society simply becomes a Nazi faster. It has no way of knowing what is significant or not; that is all learned through experience and background. The manipulation is so subtle that we feel our values come from within. Most often we don’t know whom our values are really serving.
Yes, education will differ considerably from that of today. As Fresco explains in his book The Best That Money Can’t Buy, “The subjects studied will be related to the direction and needs of this new evolving culture. Students will be made aware of the symbiotic relationship between people, technology, and the environment.”
DA: I can only assume that critics routinely dismiss The Venus Project as a kind of hopeful utopia. How do you respond to that criticism?
RM: Critics very often reject or dismiss new ideas. What is utopian thinking is to believe that the system we are living under today will enable us to achieve sustainability, equality or a high standard of living for all when it is our system which generates these very problems in the first place. If we continue as we are, it seems to me that we are destined for calamity. The Venus Project is not offering a fixed notion as to how society should be. There are no final frontiers. It does offer a way out of our dilemmas to help initiate a next step in our social evolution.
Many are working at going to other planets to escape the problems on this one, but we would be taking our detrimental value systems with us. We are saying that we have to tackle the problems we face here on the most habitable planet we know of. We will have to apply methodologies to enable us to live together in accordance with the carrying capacity of Earth’s resources, eliminate artificial boundaries, share resources and learn to relate to one another and the environment.
What we have to ask is, what kind of world do we want to live in?
DA: My last question is about the challenges ahead. Rather than taking the necessary steps to reverse climate change, we seem to be accelerating our pollution of the Earth. Socially, we are witnessing a renewed focus on nativism and fear. How might the values of The Venus Project guard against these negative tendencies in human beings?
RM: The notion of negative tendencies in human beings or that we possess a certain “human nature” is a scapegoat to keep things as they are. It’s implying that we are born with a fixed set of views regarding our action patterns. Human behavior is always changing, but there is no “human nature,” per se. Determining the conditions that generate certain behaviors is what needs to be understood.
As Jacque elaborates, “We are just as lawful as anything else in nature. What appears to be overlooked is the influence of culture upon our values, behavior, and our outlook. It is like studying plants apart from the fact that they consume radiant energy, nutrients, require water, carbon dioxide, gravity, nitrogen, etc. Plants do not grow of their own accord, neither do human values and behavior.”
Everything from the airplane to clean sources of energy undergoes improvement and change, but our social systems remain mostly static. The history of civilization is experimentation and modification. The Free Enterprise System was an important experiment and a tremendous step along the way that generated innovation throughout our society. What we now advocate is to continue the process of social experimentation, as this system has long outlived its usefulness and simply cannot address the monumental problems it faces today. We desperately need to update our social designs to correspond with our technological ability to create abundance for all. This could be the most exciting and fulfilling experiment we as a species could ever take on: working together cooperatively to deal with the pressing problems that confront us all and finding solutions to them, unencumbered by the artificial limitations we impose upon ourselves.
Daniel Araya is a researcher and advisor to government with a special interest in education, technological innovation and public policy. His newest books include: Augmented Intelligence (2016), Smart Cities as Democratic Ecologies (2015), and Rethinking US Education Policy (2014). He has a doctorate from the University of Illinois at Urbana-Champaign and is an alumnus of Singularity University’s graduate program in Silicon Valley. He can be found here: www.danielaraya.com and here: @danielarayaXY.
Nature. The Proceedings of the National Academy of Sciences. The Journal of the American Medical Association.
These are some of the most elite academic journals in the world. And last year, one tech company, Alphabet’s Google, published papers in all of them.
The unprecedented run of scientific results by the Mountain View search giant touched on everything from ophthalmology to computer games to neuroscience and climate models. For Google, 2016 was an annus mirabilis during which its researchers cracked the top journals and set records for sheer volume.
Behind the surge is Google’s growing investment in artificial intelligence, particularly “deep learning,” a technique whose ability to make sense of images and other data is enhancing services like search and translation (see “10 Breakthrough Technologies 2013: Deep Learning”).
According to the tally Google provided to MIT Technology Review, it published 218 journal or conference papers on machine learning in 2016, nearly twice as many as it did two years ago.
We sought out similar data from the Web of Science, a service of Clarivate Analytics, which confirmed the upsurge. Clarivate said that the impact of Google’s publications, according to a measure of publication strength it uses, was four to five times the world average. Compared to all companies that publish prolifically on artificial intelligence, Clarivate ranks Google No. 1 by a wide margin.
The publication explosion is no accident. Google has more than tripled the number of machine learning researchers working for the company over the last few years, according to Yoshua Bengio, a deep-learning specialist at the University of Montreal. “They have recruited like crazy,” he says.
And to capture the first-round picks from computation labs, companies can’t simply offer a Silicon Valley-sized salary. “It’s hard to hire people just for money,” says Konrad Kording, a computational neuroscientist at Northwestern University. “The top people care about advancing the world, and that means writing papers the world can use, and writing code the world can use.”
At Google, the scientific charge has been spearheaded by DeepMind, the high-concept British AI company started by neuroscientist and programmer Demis Hassabis. Google acquired it for $400 million in 2014.
Hassabis has left no doubt that he’s holding onto his scientific ambitions. In a January blog post, he said DeepMind has a “hybrid culture” between the long-term thinking of an academic department and “the speed and focus of the best startups.” Aligning with academic goals is “important to us personally,” he writes. Kording, one of whose post-doctoral students, Mohammad Azar, was recently hired by DeepMind, says that “it’s perfectly understood that the bulk of the projects advance science.”
Last year, DeepMind published twice in Nature, the same storied journal where the structure of DNA and the sequencing of the human genome were first reported. One DeepMind paper concerned its program AlphaGo, which defeated top human players in the ancient game of Go; the other described how a neural network with a working memory could understand and adapt to new tasks.
The contest to develop more powerful AI now involves hundreds of companies, with competition most intense between the top tech giants such as Google, Facebook, and Microsoft. All see the chance to reap new profits by using the technology to wring more from customer data, get driverless cars on the road, or transform medicine. Research is occurring in a hothouse atmosphere reminiscent of the early days of computer chips, or of the first biotech plants and drugs, times when notable academic firsts also laid the foundation stones of new industries.
That explains why publication score-keeping matters. The old academic saw “publish or perish” is starting to define the AI race, leaving companies that have weak publication records at a big disadvantage. Apple, famous for strict secrecy around its plans and product launches, found that its culture was hurting its efforts in AI, which have lagged those of Google and Facebook.
So when Apple hired computer scientist Russ Salakhutdinov from Carnegie Mellon last year as its new head of AI, he was immediately allowed to break Apple’s code of secrecy by blogging and giving talks. At a major machine-learning science conference late last year in Barcelona, Salakhutdinov made a point of announcing that Apple would start publishing, too. He showed a slide: “Can we publish? Yes.”
Salakhutdinov will speak at MIT Technology Review’s EmTech Digital event on artificial intelligence next week in San Francisco.
Pronounced ‘crisper’, the acronym stands for Clustered Regularly Interspaced Short Palindromic Repeats and refers to the way short, repeated DNA sequences in the genomes of bacteria and other microorganisms are organised.
These organisms defend themselves from viral attacks by stealing strips of an invading virus’ DNA and splicing them into their own genomes as the newly formed sequences known as CRISPR. The bacteria then make RNA copies of these sequences, which guide enzymes called Cas to recognise the virus’ DNA and fend off future invasions. This defence mechanism was transformed into a gene-editing tool in 2012 and was named Science magazine’s 2015 Breakthrough of the Year. While it’s not the first DNA-editing tool, it has piqued the interest of many scientists and research and health groups because of its accuracy, relative affordability and far-reaching uses. The latest? Eradicating HIV.
At the start of May, researchers at the Lewis Katz School of Medicine at Temple University (LKSOM) and the University of Pittsburgh demonstrated how they can remove HIV DNA from the genomes of living animals – in this case, mice – to curb the spread of infection. Building on a 2016 proof-of-concept study, the breakthrough marked the first time replication of HIV-1 had been eliminated from infected cells using CRISPR.
In particular, the team genetically inactivated HIV-1 in transgenic mice, reducing the RNA expression of viral genes by roughly 60 to 95 per cent, before trialling the method on infected mice.
“During acute infection, HIV actively replicates,” explained Dr. Kamel Khalili, who led the Temple team. “With EcoHIV mice, we were able to investigate the ability of the CRISPR/Cas9 strategy to block viral replication and potentially prevent systemic infection.”
Since the HIV research was published, a team of biologists at University of California, Berkeley, described 10 new CRISPR enzymes that, once activated, are said to “behave like Pac-Man” to chew through RNA in a way that could be used as sensitive detectors of infectious viruses.
These new enzymes are variants of a CRISPR protein, Cas13a, which the UC Berkeley researchers reported last September in Nature, and could be used to detect specific sequences of RNA, such as those from a virus. The team showed that once CRISPR-Cas13a binds to its target RNA, it begins to cut up all surrounding RNA indiscriminately, a property that can be used to make a sample ‘glow’ and so allow signal detection.
Two teams of researchers at the Broad Institute subsequently paired CRISPR-Cas13a with the process of RNA amplification to reveal that the system, dubbed Sherlock, could detect viral RNA at extremely low concentrations, such as the presence of dengue and Zika viral RNA, for example. Such a system could be used to detect any type of RNA, including RNA distinctive of cancer cells.
(Business Wire) Growth in U.S. spending on prescription medicines fell in 2016 as competition intensified among manufacturers, and payers ramped up efforts to limit price increases, according to research released today by the QuintilesIMS Institute. New medicines introduced in the past two years continue to drive at least half of total spending growth as clusters of innovative treatments for cancer, autoimmune diseases, HIV, multiple sclerosis, and diabetes become accessible to patients. The prospects for innovative treatments over the next five years are bright, fueled by a robust late-phase pipeline of more than 2,300 novel products that include more than 600 cancer treatments. U.S. net total spending is expected to increase 2-5 percent on average through 2021, reaching $375-405 billion.
Drug spending grew at a 4.8 percent pace in 2016 to $323 billion, less than half the rate of the previous two years, after adjusting for off-invoice discounts and rebates. The surge of innovative medicine introductions paused in 2016, with fewer than half as many new drugs launched as in 2014 and 2015. While the total use of medicines continued to climb—with total prescriptions dispensed reaching 6.1 billion, up 3.3 percent over 2015 levels—the spike in new patients being treated for hepatitis C ebbed, which contributed to the decline in spending. Net price increases—reflecting rebates and other price breaks from manufacturers—averaged 3.5 percent last year, up from 2.5 percent in 2015.
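As a quick back-of-the-envelope check on these figures (an illustrative sketch only; the report’s precise adjusted numbers may differ), 4.8 percent growth to $323 billion implies 2015 net spending of roughly $308 billion:

```python
# Illustrative check: infer the prior-year level from the reported
# 2016 net spending and growth rate (figures quoted in the text above).
spending_2016 = 323e9   # USD, net of discounts and rebates
growth_rate = 0.048     # 4.8 percent growth in 2016

implied_2015 = spending_2016 / (1 + growth_rate)
print(f"Implied 2015 net spending: ${implied_2015 / 1e9:.1f}B")  # ≈ $308.2B
```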
“After a year of heated discussion about the cost and affordability of drugs, the reality is that after adjusting for population and economic growth, total spending on all medicines increased just 1.1 percent annually over the past decade,” said Murray Aitken, senior vice president and executive director of the QuintilesIMS Institute. “Understanding how the dynamics of today’s healthcare landscape impact key stakeholders is more important than ever, as efforts to pass far-reaching healthcare legislative reforms remain on the political agenda.”
In its report, Medicine Use and Spending in the U.S. – A Review of 2016 and Outlook to 2021, the QuintilesIMS Institute highlights the following findings:
Patients age 65 years and over have accounted for 41 percent of total prescription growth since 2011. While the population of seniors in the U.S. has increased 19 percent since 2011, their average per capita use of medicines declined slightly—from 50 prescriptions per person in 2011 to 49 prescriptions per person last year. In the age 50-64 year population, total prescriptions filled increased 21 percent over the past five years, primarily due to higher per capita use, which reached 29 prescriptions per person. The largest drivers of prescription growth were in large chronic therapy areas including hypertension and mental health, while the largest decline was in pain management.
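The senior-population figures above hang together arithmetically (an illustrative sketch using only the numbers quoted in this paragraph): a 19 percent larger population at 49 rather than 50 prescriptions per person still implies roughly 17 percent more total prescriptions for seniors since 2011, even though per-capita use dipped.

```python
# Illustrative: total prescription growth among seniors since 2011,
# combining population growth with the slight per-capita decline.
population_growth = 1.19   # seniors up 19 percent since 2011
per_capita_2011 = 50       # prescriptions per person, 2011
per_capita_2016 = 49       # prescriptions per person, 2016

total_growth = population_growth * per_capita_2016 / per_capita_2011 - 1
print(f"Total senior prescription growth: {total_growth:.1%}")  # ≈ 16.6%
```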
Average patient out-of-pocket costs continued to decline in 2016, reaching $8.47 compared with $9.66 in 2013. Nearly 30 percent of prescriptions filled in 2016 required no patient payment due in part to preventive treatment provisions under the Affordable Care Act, up from 24 percent in 2013. The use of co-pay assistance coupons by patients covered by commercial plans also contributed to the decline in average out-of-pocket costs; such coupons were used to fill 19 percent of all brand prescriptions last year, compared with 13 percent in 2013. Patients filling brand prescriptions while in the deductible phase of their commercial health plan accounted for 14 percent of prescriptions and 39 percent of total out-of-pocket costs. Patients in the deductible phases of their health plans abandoned about one in four of their brand prescriptions.
The late-phase R&D pipeline remains robust and will yield an average of 40-45 new brand launches per year through 2021. At the end of 2016, the late-phase pipeline included 2,346 novel products, a level similar to the prior year, with specialty medicines making up 37 percent of the total. More than 630 distinct research programs are underway in oncology, which accounts for one-quarter of the pipeline and where one in four molecules is focused on blood cancers. While the number of new drug approvals and launches fell by more than half in 2016, the size and quality of the late-phase pipeline is expected to drive historically high numbers of new medicines.
Moderating price increases for branded products, and the larger impact of patent expiries, will drive net growth in total U.S. spending of 2-5 percent through 2021, reaching $375-405 billion. Net price increases for protected brands are forecast to average 2-5 percent over the next five years, even as invoice price growth is expected to moderate to the 7-10 percent range. This reflects additional pressure and influence by payers on the pricing and prescribing of medicines, as well as changes in the mix of branded products on the market. Lower spending on brands following the loss of patent protection is forecast to total $140 billion, including the impact of biosimilar competition, through 2021.
The full version of the report, including a detailed description of the methodology, is available at www.quintilesimsinstitute.org. The study was produced independently as a public service, without industry or government funding.
In this release, “spending on medicines” is an estimate of the amount received by pharmaceutical manufacturers after rebates, off-invoice discounts and other price concessions have been made by manufacturers to distributors, health plans and intermediaries. It does not relate directly to the out-of-pocket costs paid by a patient, except where noted, and does not include mark-ups and additional costs associated with dispensing or other services involved in medicines reaching patients. For a fuller explanation of methods to estimate net spending, see the Methodology section of the report.