Video

“The Looking Planet” – by Eric Law Anderson

August 06, 2017

Enjoy this CGI 3D animated short film, winner of more than 50 film festival jury and audience awards, including Best Short Film, Best Sci-Fi Film, Best Animated Film, Best Production Design, Best Visual Effects, and Best Sound Design. During the construction of the universe, a young member of the Cosmos Corps of Engineers decides to break some fundamental laws in the name of self-expression.

 


Saturn moon Titan has chemical that could form bio-like ‘membranes’ says NASA

August 06, 2017

NASA researchers have found large quantities (2.8 parts per billion) of acrylonitrile* (vinyl cyanide, C2H3CN) in Titan’s atmosphere, a molecule that could self-assemble into a sheet of material similar to a cell membrane.

Acrylonitrile (credit: NASA Goddard)

Consider these findings, presented July 28, 2017 in the open-access journal Science Advances, based on data from the ALMA telescope in Chile (and confirming earlier observations by NASA’s Cassini spacecraft):

Azotosome illustration (credit: James Stevenson/Cornell)

1. Researchers have proposed that acrylonitrile molecules could come together as a sheet of material similar to a cell membrane. The sheet could form a hollow, microscopic sphere that they dubbed an “azotosome.”

A bilayer, made of two layers of lipid molecules (credit: Mariana Ruiz Villarreal/CC)

2. The azotosome sphere could serve as a tiny storage and transport container, much like the spheres that biological lipid bilayers can form. The thin, flexible lipid bilayer is the main component of the cell membrane, which separates the inside of a cell from the outside world.

“The ability to form a stable membrane to separate the internal environment from the external one is important because it provides a means to contain chemicals long enough to allow them to interact,” said Michael Mumma, director of the Goddard Center for Astrobiology, which is funded by the NASA Astrobiology Institute.

Organic rain falling on a methane sea on Titan (artist’s impression) (credit: NASA Goddard)

3. Acrylonitrile condenses in Titan’s cold lower atmosphere and rains onto the moon’s solid icy surface, ending up in its seas of liquid methane.

Illustration showing organic compounds in Titan’s seas and lakes (ESA)

4. Ligeia Mare, a sea on Titan, could have accumulated enough acrylonitrile to form about 10 million azotosomes in every milliliter (about a fifth of a teaspoon) of liquid. Compare that to roughly a million bacteria per milliliter of coastal ocean water on Earth.

5. Chemistry in Titan’s atmosphere. Nearly as large as Mars, Titan has a hazy atmosphere made up mostly of nitrogen with a smattering of organic, carbon-based molecules, including methane (CH4) and ethane (C2H6). Planetary scientists theorize that this chemical make-up is similar to Earth’s primordial atmosphere. The conditions on Titan, however, are not conducive to the formation of life as we know it; it’s simply too cold (95 kelvins or -290 degrees Fahrenheit). (credit: ESA)

6. A related open-access study published July 26, 2017 in The Astrophysical Journal Letters notes that Cassini has also made the surprising detection of negatively charged molecules known as “carbon chain anions” in Titan’s upper atmosphere. These molecules are understood to be building blocks towards more complex molecules, and may have acted as the basis for the earliest forms of life on Earth.

“This is a known process in the interstellar medium, but now we’ve seen it in a completely different environment, meaning it could represent a universal process for producing complex organic molecules,” says Ravi Desai of University College London and lead author of the study.

* On Earth, acrylonitrile is used in the manufacture of plastics.


NASA Goddard | A Titan Discovery


Abstract of ALMA detection and astrobiological potential of vinyl cyanide on Titan

Recent simulations have indicated that vinyl cyanide is the best candidate molecule for the formation of cell membranes/vesicle structures in Titan’s hydrocarbon-rich lakes and seas. Although the existence of vinyl cyanide (C2H3CN) on Titan was previously inferred using Cassini mass spectrometry, a definitive detection has been lacking until now. We report the first spectroscopic detection of vinyl cyanide in Titan’s atmosphere, obtained using archival data from the Atacama Large Millimeter/submillimeter Array (ALMA), collected from February to May 2014. We detect the three strongest rotational lines of C2H3CN in the frequency range of 230 to 232 GHz, each with >4σ confidence. Radiative transfer modeling suggests that most of the C2H3CN emission originates at altitudes of ≳200 km, in agreement with recent photochemical models. The vertical column densities implied by our best-fitting models lie in the range of 3.7 × 1013 to 1.4 × 1014 cm−2. The corresponding production rate of vinyl cyanide and its saturation mole fraction imply the availability of sufficient dissolved material to form ~107 cell membranes/cm3 in Titan’s sea Ligeia Mare.

This article was originally published by:
http://www.kurzweilai.net/saturn-moon-titan-has-chemical-that-could-form-bio-like-membranes-says-nasa

From flying warehouses to robot toilets – five technologies that could shape the future

August 06, 2017

Flying warehouses, robot receptionists, smart toilets… do such innovations sound like science fiction or part of a possible reality? Technology has been evolving at such a rapid pace that, in the near future, our world may well resemble that portrayed in futuristic movies, such as Blade Runner, with intelligent robots and technologies all around us.

But what technologies will actually make a difference? Based on recent advancements and current trends, here are five innovations that really could shape the future.

1. Smart homes

Many typical household items can already connect to the internet and provide data. But much smart home technology isn’t currently that smart. A smart meter just lets people see how energy is being used, while a smart TV simply combines television with internet access. Similarly, smart lighting, remote door locks or smart heating controls allow for programming via a mobile device, simply moving the point of control from a wall panel to the palm of your hand.

But technology is rapidly moving towards a point where it can use the data and connectivity to act on the user’s behalf. To really make a difference, technology needs to fade more into the background – imagine a washing machine that recognises what clothes you have put into it, for example, and automatically selects the right programme, or even warns you that you have put in items that you don’t want to wash together. Here it is important to better understand people’s everyday activities, motivations and interactions with smart objects to avoid them becoming uninvited guests at home.

Such technologies could even work for the benefit of all. The BBC reports, for example, that energy providers will “reduce costs for someone who allows their washing machine to be turned on by the internet to maximise use of cheap solar power on a sunny afternoon” or “to have their freezers switched off for a few minutes to smooth demand at peak times”.

A major concern in this area is security. Internet-connected devices can be, and are being, hacked – just recall the recent ransomware attack. Our home is, after all, the place where we should feel most secure. For them to become widespread, these technologies will have to keep it that way.

2. Virtual secretaries

While secretaries play a very crucial role in businesses, they often spend large parts of their working day with time-consuming but relatively trivial tasks that could be automated. Consider the organisation of a “simple” meeting – you have to find the right people to take part (likely across business boundaries) and then identify when they are all available. It’s no mean feat.

Tools such as doodle.com, which compare people’s availability to find the best meeting time, can help. But they ultimately rely on those involved actively participating. They also only become useful once the right people have already been identified.

By using context information (organisation charts, location awareness from mobile devices and calendars), identifying the right people and the right time for a given event becomes a technical optimisation problem, one that was explored by the EU-funded inContext project a decade ago. At that stage, technology for gathering context information was far less advanced – smartphones were still an oddity, and data mining and processing were not where they are today. Over the coming years, however, we could see machines doing far more of the day-to-day planning in businesses.
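
A toy sketch of that optimisation core, reduced to a single question (when is everyone free?), might look like the following. The function, names, and calendars here are invented for illustration and are not the inContext system:

```python
from datetime import datetime, timedelta

def common_free_slot(busy, day_start, day_end, duration):
    """Return the earliest (start, end) slot of `duration` that is free
    for all participants, or None. `busy` maps each person to a list of
    (start, end) busy intervals."""
    # Pool everyone's busy intervals and sweep through them in time order.
    intervals = sorted(iv for person in busy.values() for iv in person)
    cursor = day_start
    for start, end in intervals:
        if start - cursor >= duration:   # a gap before this meeting fits
            return cursor, cursor + duration
        cursor = max(cursor, end)        # otherwise skip past the busy block
    if day_end - cursor >= duration:     # room left at the end of the day
        return cursor, cursor + duration
    return None

day = datetime(2017, 8, 7)
busy = {
    "ana": [(day.replace(hour=9), day.replace(hour=11))],
    "bob": [(day.replace(hour=10), day.replace(hour=12)),
            (day.replace(hour=14), day.replace(hour=15))],
}
slot = common_free_slot(busy, day.replace(hour=9), day.replace(hour=17),
                        timedelta(hours=1))
print(slot)  # the earliest hour when both are free: 12:00 to 13:00
```

The real problem adds the harder part the project studied: choosing *who* should attend from context data, not just when they can meet.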

Indeed, the role of virtual assistants may go well beyond scheduling meetings and organising people’s diaries – they may help project managers to assemble the right team and allocate them to the right tasks, so that every job is conducted efficiently.

‘She is expecting you in the main boardroom …’ Shutterstock

On the downside, much of the required context information is relatively privacy-invasive – but then the younger generation is already happily sharing their every minute on Twitter and Snapchat, and such concerns may become less significant over time. And where should we draw the line? Do we fully embrace the “rise of the machines” and automate as much as possible, or retain real people in their daily roles and use robots only to perform the really trivial tasks that no one wants to do? This question will need to be answered – and soon.

3. AI doctors

We are living in exciting times, with advancements in medicine and AI technology shaping the future of healthcare delivery around the world.

But how would you feel about receiving a diagnosis from an artificial intelligence? A private company called Babylon Health is already running a trial with five London boroughs which encourages consultations with a chatbot for non-emergency calls. The artificial intelligence was trained using massive amounts of patient data in order to advise users to go to the emergency department of a hospital, visit a pharmacy or stay at home.

The company claims that it will soon be able to develop a system that could potentially outperform doctors and nurses in making diagnoses. In countries where there is a shortage of medical staff, this could significantly improve health provision, enabling doctors to concentrate on providing treatment rather than spending too much time on making a diagnosis. This could significantly redefine their clinical role and work practices.

Elsewhere, IBM Watson, the CloudMedx platform and Deep Genomics technology can provide clinicians with insights into patients’ data and existing treatments, help them to make more informed decisions, and assist in developing new treatments.

An increasing number of mobile apps and self-tracking technologies, such as Fitbit, Jawbone Up and Withings, can now facilitate the collection of patients’ behaviours, treatment status and activities. It is not hard to imagine that even our toilets will soon become smarter and be used to examine people’s urine and faeces, providing real-time risk assessment for certain diseases.

Your robodoctor will see you now. Shutterstock

Nevertheless, to enable the widespread adoption of AI technology in healthcare, many legitimate concerns must be addressed. Already, usability, health literacy, privacy, security, content quality and trust issues have been reported with many of these applications.

There is also a lack of adherence to clinical guidelines, ethical concerns, and mismatched expectations regarding the collection, communication, use, and storage of patients’ data. In addition, the limitations of the technology need to be made clear in order to avoid misinterpretations that could potentially harm patients.

If AI systems can address these challenges and focus on understanding and enhancing existing care practices and the doctor-patient relationship, we can expect to see more and more successful stories of data-driven healthcare initiatives.

4. Care robots

Will we have robots answering the door in homes? Possibly. At most people’s homes? Even if they are reasonably priced, probably not. What distinguishes successful smart technologies from unsuccessful ones is how useful they are. And how useful they are depends on the context. For most people, it’s probably not that useful to have a robot answering the door. But imagine how helpful a robot receptionist could be in places where there is a shortage of staff – in care homes for the elderly, for example.

Robots equipped with AI such as voice and face recognition could interact with visitors to check who they wish to visit and whether they are allowed access to the care home. After verifying that, robots with routing algorithms could guide the visitor towards the person they wish to visit. This could potentially enable staff to spend more quality time with the elderly, improving their standard of living.

The AI required still needs further advancement in order to operate in completely uncontrolled environments. But recent results are positive. Facebook’s DeepFace software was able to match faces with 97.25% accuracy when tested on a standard database used by researchers to study the problem of unconstrained face recognition. The software is based on deep learning, using an artificial neural network with millions of connections that automatically acquires knowledge from data.

5. Flying warehouses and self-driving cars

The new postman. Shutterstock

Self-driving vehicles are arguably one of the most astonishing technologies currently being investigated. Despite the fact that they can make mistakes, they may actually be safer than human drivers. That is partly because they can use a multitude of sensors to gather data about the world, including 360-degree views around the car.

Moreover, they could potentially communicate with each other to avoid accidents and traffic jams. More than being an asset to the general public, self-driving cars are likely to become particularly useful for delivery companies, enabling them to save costs and make faster, more efficient deliveries.

Advances are still needed in order to enable the widespread use of such vehicles, not only to improve their ability to drive completely autonomously on busy roads, but also to ensure a proper legal framework is in place. Nevertheless, car manufacturers are engaging in a race against time to see who will be the first to provide a self-driving car to the masses. It is believed that the first fully autonomous car could become available as early as the next decade.

The advances in this area are unlikely to stop at self-driving cars or trucks. Amazon has recently filed a patent for flying warehouses which could visit places where the demand for certain products is expected to boom. The flying warehouses would then send out autonomous drones to make deliveries. It is unknown whether Amazon will really go ahead with developing such projects, but tests with autonomous drones are already successfully being carried out.

Thanks to technology, the future is here – we just need to think hard about how best to shape it.

This article was originally published by:
https://theconversation.com/from-flying-warehouses-to-robot-toilets-five-technologies-that-could-shape-the-future-81519

AI May Soon Replace Even the Most Elite Consultants

August 06, 2017

Amazon’s Alexa just got a new job. In addition to her other 15,000 skills like playing music and telling knock-knock jokes, she can now also answer economic questions for clients of the Swiss global financial services company, UBS Group AG.

According to the Wall Street Journal (WSJ), a new partnership between UBS Wealth Management and Amazon allows some of UBS’s European wealth-management clients to ask Alexa certain financial and economic questions. Alexa will then answer their queries with the information provided by UBS’s chief investment office, without their even having to pick up the phone or visit a website. And this is likely just Alexa’s first step into offering business services. Soon she will probably be booking appointments, analyzing markets, maybe even buying and selling stocks.

While the financial services industry has already begun the shift from active management to passive management, artificial intelligence will move the market even further, to management by smart machines, as in the case of BlackRock, which is rolling computer-driven algorithms and models into more traditional actively managed funds.

But the financial services industry is just the beginning. Over the next few years, artificial intelligence may exponentially change the way we all gather information, make decisions, and connect with stakeholders. Hopefully this will be for the better and we will all benefit from timely, comprehensive, and bias-free insights (given research that human beings are prone to a variety of cognitive biases). It will be particularly interesting to see how artificial intelligence affects the decisions of corporate leaders — men and women who make the many decisions that affect our everyday lives as customers, employees, partners, and investors.

Already, leaders are starting to use artificial intelligence to automate mundane tasks such as calendar maintenance and making phone calls. But AI can also help support more complex decisions in key areas such as human resources, budgeting, marketing, capital allocation and even corporate strategy — long the bastion of bespoke consulting firms such as McKinsey, Bain, and BCG, and the major marketing agencies.

The shift to AI solutions will be a tough pill to swallow for the corporate consulting industry. According to recent research, the U.S. market for corporate advice alone is nearly $60 billion. Almost all of that advice is high-cost and human-based.

One might argue that corporate clients prefer speaking to their strategy consultants to get high-priced, custom-tailored advice based on small teams doing expensive and time-consuming work. And we agree that consultants provide insightful advice and guidance. However, a great deal of what is paid for with consulting services is data analysis and presentation. Consultants gather, clean, process, and interpret data from disparate parts of organizations. They are very good at this, but AI is even better. For example, the processing power of four smart consultants with Excel spreadsheets is minuscule in comparison to that of a single computer running AI continuously for an hour, learning non-stop from the data.

In today’s big data world, AI and machine learning applications already analyze massive amounts of structured and unstructured data and produce insights in a fraction of the time and at a fraction of the cost of consultants in the financial markets. Moreover, machine learning algorithms are capable of building computer models that make sense of complex phenomena by detecting patterns and inferring rules from data — a process that is very difficult for even the largest and smartest consulting teams. Perhaps sooner than we think, CEOs could be asking, “Alexa, what is my product line profitability?” or “Which customers should I target, and how?” rather than calling on elite consultants.

Another area in which leaders will soon be relying on AI is in managing their human capital. Despite the best efforts of many, mentorship, promotion, and compensation decisions are undeniably political. Study after study has shown that deep biases affect how groups like women and minorities are managed. For example, women in business are described in less positive terms than men and receive less helpful feedback. Minorities are less likely to be hired and are more likely to face bias from their managers. These inaccuracies and imbalances in the system only hurt organizations, as leaders are less able to nurture the talent of their entire workforce and to appropriately recognize and reward performance. Artificial intelligence can help bring impartiality to these difficult decisions. For example, AI could determine if one group of employees is assessed, managed, or compensated differently. Just imagine: “Alexa, does my organization have a gender pay gap?” (Of course, AI can only be as unbiased as the data provided to the system.)
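
As a toy illustration of the kind of first-pass check such a system might run, the sketch below compares mean pay across groups. The data and function are invented, and a real analysis would control for role, seniority, and experience rather than compare raw means:

```python
from statistics import mean

def pay_gap(employees, group_key, pay_key="salary"):
    """Mean pay per group, plus the gap between the highest- and
    lowest-paid groups as a fraction of the highest group's mean."""
    groups = {}
    for e in employees:
        groups.setdefault(e[group_key], []).append(e[pay_key])
    means = {g: mean(pays) for g, pays in groups.items()}
    hi, lo = max(means.values()), min(means.values())
    return means, (hi - lo) / hi

# Invented sample data for illustration only.
staff = [
    {"gender": "F", "salary": 54000},
    {"gender": "F", "salary": 58000},
    {"gender": "M", "salary": 60000},
    {"gender": "M", "salary": 64000},
]
means, gap = pay_gap(staff, "gender")
print(means)          # {'F': 56000, 'M': 62000}
print(round(gap, 3))  # 0.097, i.e. a 9.7% gap in raw means
```

The same function could just as easily group by department or job level, which is exactly why the quality of the underlying data matters so much.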

In addition, AI is already helping in the customer engagement and marketing arena. It’s clear and well documented by the AI patent activities of the big five platforms — Apple, Alphabet, Amazon, Facebook, and Microsoft — that they are using AI to market and sell goods and services to us. But they are not alone. Recently, HBR documented how Harley-Davidson was using AI to determine what was and wasn’t working across various marketing channels. The company used this new capability to allocate resources across different marketing choices, thereby “eliminating guesswork.” It is only a matter of time until they and others ask, “Alexa, where should I spend my marketing budget?” to avoid the age-old adage, “I know that half my marketing budget is effective, my only question is — which half?”

AI can also bring value to the budgeting and yearly capital allocation process. Even though markets change dramatically every year, products become obsolete, and technology advances, most businesses allocate their capital the same way year after year. Whether that’s due to inertia, unconscious bias, or error, some business units rake in investments while others starve. Even when the management team has committed to a new digital initiative, it usually ends up with the scraps after the declining cash cows are “fed.” Artificial intelligence can help break through this budgeting black hole by tracking the return on investments by business unit, or by measuring how much is allocated to growing versus declining product lines. Business leaders may soon be asking, “Alexa, what percentage of my budget is allocated differently from last year?” and more complex questions.
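
That budget question reduces to simple arithmetic. In the minimal sketch below (all unit names and figures are invented), the share of the budget allocated differently from last year is half the sum of the absolute changes in each unit’s share:

```python
def reallocation_share(last_year, this_year):
    """Fraction of the budget allocated differently from last year:
    half the total absolute difference between each unit's share of
    the old budget and its share of the new one."""
    total_last = sum(last_year.values())
    total_this = sum(this_year.values())
    units = set(last_year) | set(this_year)
    return 0.5 * sum(
        abs(this_year.get(u, 0) / total_this - last_year.get(u, 0) / total_last)
        for u in units
    )

# Invented example: shares move from 50/30/20 to 45/25/30.
budget_2016 = {"unit_a": 50, "unit_b": 30, "unit_c": 20}
budget_2017 = {"unit_a": 45, "unit_b": 25, "unit_c": 30}
print(round(reallocation_share(budget_2016, budget_2017), 2))  # 0.1
```

A result of 0.1 means 10% of the budget moved between units; a business that allocates “the same way year after year” would score close to zero.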

Although many strategic leaders tout their keen intuition, hard work, and years of industry experience, much of this intuition is simply a deeper understanding of data that was historically difficult to gather and expensive to process. Not any longer. Artificial intelligence is rapidly closing this gap, and will soon be able to help human beings push past our processing capabilities and biases. These developments will change many jobs, for example, those of consultants, lawyers, and accountants, whose roles will evolve from analysis to judgement. Arguably, tomorrow’s elite consultants already sit on your wrist (Siri), on your kitchen counter (Alexa), or in your living room (Google Home).

The bottom line: corporate leaders, knowingly or not, are on the cusp of a major disruption in their sources of advice and information. “Quant Consultants” and “Robo Advisers” will offer faster, better, and more profound insights at a fraction of the cost and time of today’s consulting firms and other specialized workers. It is likely only a matter of time until all leaders and management teams can ask Alexa things like, “Who is the biggest risk to me in our key market?”, “How should we allocate our capital to compete with Amazon?” or “How should I restructure my board?”


Barry Libert is a board member and CEO adviser focused on platforms and networks. He is chairman of Open Matters, a machine learning company. He is also the coauthor of The Network Imperative: How to Survive and Grow in the Age of Digital Business Models.


Megan Beck is a digital consultant at OpenMatters and researcher at the SEI Center at Wharton. She is the coauthor of The Network Imperative: How to Survive and Grow in the Age of Digital Business Models.

This article was originally published by:
https://hbr.org/2017/07/ai-may-soon-replace-even-the-most-elite-consultants

Why the “You” in an Afterlife Wouldn’t Really Be You

July 23, 2017

The Discovery is a 2017 Netflix film in which Robert Redford plays a scientist who proves that the afterlife is real. “Once the body dies, some part of our consciousness leaves us and travels to a new plane,” the scientist explains, evidenced by his machine that measures, as another character puts it, “brain wavelengths on a subatomic level leaving the body after death.”

This idea is not too far afield from a real theory called quantum consciousness, proffered by a wide range of people, from physicist Roger Penrose to physician Deepak Chopra. Some versions hold that our mind is not strictly the product of our brain and that consciousness exists separately from material substance, so the death of your physical body is not the end of your conscious existence. Because this is the topic of my next book, Heavens on Earth: The Scientific Search for the Afterlife, Immortality, and Utopia (Henry Holt, 2018), the film triggered a number of problems I have identified with all such concepts, both scientific and religious.

First, there is the assumption that our identity is located in our memories, which are presumed to be permanently recorded in the brain: if they could be copied and pasted into a computer or duplicated and implanted into a resurrected body or soul, we would be restored. But that is not how memory works. Memory is not like a DVR that can play back the past on a screen in your mind. Memory is a continually edited and fluid process that utterly depends on the neurons in your brain being functional. It is true that when you go to sleep and wake up the next morning or go under anesthesia for surgery and come back hours later, your memories return, as they do even after so-called profound hypothermia and circulatory arrest. Under this procedure, a patient’s brain is cooled to as low as 50 degrees Fahrenheit, which causes electrical activity in neurons to stop—suggesting that long-term memories are stored statically. But that cannot happen if your brain dies. That is why CPR has to be done so soon after a heart attack or drowning—because if the brain is starved of oxygen-rich blood, the neurons die, along with the memories stored therein.

Second, there is the supposition that copying your brain’s connectome—the diagram of its neural connections—uploading it into a computer (as some scientists suggest) or resurrecting your physical self in an afterlife (as many religions envision) will result in you waking up as if from a long sleep either in a lab or in heaven. But a copy of your memories, your mind or even your soul is not you. It is a copy of you, no different than a twin, and no twin looks at his or her sibling and thinks, “There I am.” Neither duplication nor resurrection can instantiate you in another plane of existence.

Third, your unique identity is more than just your intact memories; it is also your personal point of view. Neuroscientist Kenneth Hayworth, a senior scientist at the Howard Hughes Medical Institute and president of the Brain Preservation Foundation, divided this entity into the MEMself and the POVself. He believes that if a complete MEMself is transferred into a computer (or, presumably, resurrected in heaven), the POVself will awaken. I disagree. If this were done without the death of the person, there would be two memory selves, each with its own POVself looking out at the world through its unique eyes. At that moment, each would take a different path in life, thereby recording different memories based on different experiences. “You” would not suddenly have two POVs. If you died, there is no known mechanism by which your POVself would be transported from your brain into a computer (or a resurrected body). A POV depends entirely on the continuity of self from one moment to the next, even if that continuity is broken by sleep or anesthesia. Death is a permanent break in continuity, and your personal POV cannot be moved from your brain into some other medium, here or in the hereafter.

If this sounds dispiriting, it is just the opposite. Awareness of our mortality is uplifting because it means that every moment, every day and every relationship matters. Engaging deeply with the world and with other sentient beings brings meaning and purpose. We are each of us unique in the world and in history, geographically and chronologically. Our genomes and connectomes cannot be duplicated, so we are individuals vouchsafed with awareness of our mortality and self-awareness of what that means. What does it mean? Life is not some temporary staging before the big show hereafter—it is our personal proscenium in the drama of the cosmos here and now.

This article was originally published with the title “Who Are You?”

ABOUT THE AUTHOR(S)

Michael Shermer is publisher of Skeptic magazine (www.skeptic.com) and a Presidential Fellow at Chapman University. His next book is Heavens on Earth. Follow him on Twitter @michaelshermer

https://www.scientificamerican.com/article/why-the-ldquo-you-rdquo-in-an-afterlife-wouldnt-really-be-you/

King cancer: The top 10 therapeutic areas in biopharma R&D

July 23, 2017

It’s not going to come as a surprise to anyone who’s been paying attention to drug R&D trends that cancer is the number 1 disease in terms of new drug development projects. But it is amazing to see exactly how much oncology dominates the industry as never before.

At a time when the first CAR-T looks to be on the threshold of a pioneering approval and the first wave of PD-(L)1 drugs is spurring hundreds of combination studies, cancer accounted for 8,651 of the pipeline projects counted by the Analysis Group, which crunched the numbers in a new report commissioned by PhRMA. That’s more than a third of the 24,389 preclinical through Phase III programs tracked by EvaluatePharma, which provided the database for this review.

That’s also more than the next 5 disease fields combined, starting with number 2, neurology — a field that includes Parkinson’s and Alzheimer’s. Psychiatry, once a major focus for pharma R&D, didn’t even make the top 10, with 468 projects.

Moving downstream, cancer studies are overwhelmingly in the lead. Singling out Phase I projects, cancer accounted for 1,757 out of a total of 3,723 initiatives, close to half. In Phase II it’s the focus of 1,920 of 4,424 projects. Only in late-stage studies does cancer start to lose its overwhelming dominance, falling to 329 of 1,257 projects.
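
Those proportions can be verified directly from the counts quoted above:

```python
# Pipeline counts quoted in this article (Analysis Group, using
# EvaluatePharma data): (cancer projects, total projects).
counts = {
    "All stages": (8651, 24389),
    "Phase I": (1757, 3723),
    "Phase II": (1920, 4424),
    "Phase III": (329, 1257),
}
for phase, (cancer, total) in counts.items():
    print(f"{phase}: {cancer / total:.0%} of projects target cancer")
# All stages: 35%, Phase I: 47%, Phase II: 43%, Phase III: 26%
```

The shares match the text: more than a third overall, close to half in Phase I, and a fall-off to about a quarter in late-stage studies.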

PhRMA commissioned this report to underscore just how much the industry is committed to R&D and significant new drug development, a subject that routinely comes into question as analysts evaluate how much money is devoted to developing new drugs instead of, say, marketing or share buybacks.

The report makes a few other points to underscore the nature of the work these days.

— Three out of four projects in the clinic were angling for first-in-class status, spotlighting the emphasis on advancing new medicines that can make a difference for patients. Me-too drugs are completely out of fashion, unlikely to carry much weight with payers.

— Of all the projects in clinical development, 822 were for orphan drugs looking to serve a market of 200,000 patients or fewer. Orphan drugs have performed well, able to command high prices and benefiting from incentives under federal law.

— There were 731 cell and gene therapy projects in the clinic. Biopharma is looking at pioneering approvals in CAR-T, from Novartis and Kite, as well as the first US OK for a gene therapy: the first application, from Spark Therapeutics, was accepted this week for priority review.


Distribution of products and projects by therapeutic area and phase


Source: Analysis Group, using EvaluatePharma data


Unique NMEs in development by stage (August 2016)

Supersapiens, the Rise of the Mind

July 23, 2017

In the new film Supersapiens, writer-director Markus Mooslechner raises a core question: As artificial intelligence rapidly blurs the boundaries between man and machine, are we witnessing the rise of a new human species?

The film features scientists, philosophers, and neurohackers Nick Bostrom, Richard Dawkins, Hugo De Garis, Adam Gazzaley, Ben Goertzel, Sam Harris, Randal Koene, Alma Mendez, Tim Mullen, Joel Murphy, David Putrino, Conor Russomanno, Anders Sandberg, Susan Schneider, Mikey Siegel, Hannes Sjoblad, and Andy Walshe.

“Humanity is facing a turning point — the next evolution of the human mind,” notes Mooslechner. “Will this evolution be a hybrid of man and machine, where artificial intelligence forces the emergence of a new human species? Or will a wave of new technologists, who frame themselves as ‘consciousness-hackers,’ become the future torch-bearers, using technology not to replace the human mind, but rather awaken within it powers we have always possessed — enlightenment at the push of a button?”

“It’s not obvious to me that a replacement of our species by our own technological creation would necessarily be a bad thing,” says ethologist-evolutionary biologist-author Dawkins in the film.

Supersapiens is a Terra Mater Factual Studios production. Executive producers are Joanne Reay and Walter Koehler. Distribution is to be announced.


Markus Mooslechner | Supersapiens teaser

http://www.kurzweilai.net/supersapiens-the-rise-of-the-mind

These 7 Disruptive Technologies Could Be Worth Trillions of Dollars

June 29, 2017

Scientists, technologists, engineers, and visionaries are building the future. Amazing things are in the pipeline. It’s a big deal. But you already knew all that. Such speculation is common. What’s less common? Scale.

How big is big?

“Silicon Valley, Silicon Alley, Silicon Dock, all of the Silicons around the world, they are dreaming the dream. They are innovating,” Catherine Wood said at Singularity University’s Exponential Finance in New York. “We are sizing the opportunity. That’s what we do.”

Catherine Wood at Exponential Finance.

Wood is founder and CEO of ARK Investment Management, a research and investment company focused on the growth potential of today’s disruptive technologies. Prior to ARK, she served as CIO of Global Thematic Strategies at AllianceBernstein for 12 years.

“We believe innovation is key to growth,” Wood said. “We are not focused on the past. We are focused on the future. We think there are tremendous opportunities in the public marketplace because this shift towards passive [investing] has created a lot of risk aversion and tremendous inefficiencies.”

In a new research report, released this week, ARK took a look at seven disruptive technologies, and put a number on just how tremendous they are. Here’s what they found.

(Check out ARK’s website and free report, “Big Ideas of 2017,” for more numbers, charts, and detail.)

1. Deep Learning Could Be Worth 35 Amazons

Deep learning is a subcategory of machine learning which is itself a subcategory of artificial intelligence. Deep learning is the source of much of the hype surrounding AI today. (You know you may be in a hype bubble when ads tout AI on Sunday golf commercial breaks.)

Behind the hype, however, big tech companies are pursuing deep learning to do very practical things. And whereas the internet, which unleashed trillions in market value, transformed several industries—news, entertainment, advertising, etc.—deep learning will work its way into even more, Wood said.

As deep learning advances, it should automate and improve technology, transportation, manufacturing, healthcare, finance, and more. And as is often the case with emerging technologies, it may form entirely new businesses we have yet to imagine.

“Bill Gates has said a breakthrough in machine learning would be worth 10 Microsofts. Microsoft is $550 to $600 billion,” Wood said. “We think deep learning is going to be twice that. We think [it] could approach $17 trillion in market cap—which would be 35 Amazons.”

2. Fleets of Autonomous Taxis to Overtake Automakers

Wood didn’t mince words about a future when cars drive themselves.

“This is the biggest change that the automotive industry has ever faced,” she said.

Today’s automakers have a global market capitalization of a trillion dollars. Meanwhile, mobility-as-a-service companies as a whole (think ridesharing) are valued around $115 billion. If this number took into account expectations of a driverless future, it’d be higher.

The mobility-as-a-service market, which will slash the cost of “point-to-point” travel, could be worth more than today’s automakers combined, Wood said. Twice as much, in fact. As gross sales grow to something like $10 trillion in the early 2030s, her firm thinks some 20% of that will go to platform providers. It could be a $2 trillion opportunity.
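ARK's platform-provider arithmetic, restated from the paragraph above: roughly $10 trillion in gross mobility-as-a-service sales by the early 2030s, with about 20% of that flowing to platform providers.

```python
# ARK's market-sizing arithmetic, in trillions of dollars.
gross_sales_t = 10.0      # projected gross sales, early 2030s
platform_share = 0.20     # share captured by platform providers

opportunity_t = gross_sales_t * platform_share
print(f"platform opportunity: ${opportunity_t:.0f} trillion")
```

That product is where the "$2 trillion opportunity" figure comes from.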

Wood said a handful of companies will dominate the market, and Tesla is well positioned to be one of them. They are developing both the hardware (electric cars) and the software (self-driving algorithms). And although analysts tend to look at Tesla as just an automaker right now, that’s not all it will be down the road.

“We think if [Tesla] got even 5% of this global market for autonomous taxi networks, it should be worth another $100 billion today,” Wood said.

3. 3D Printing Goes Big With Finished Products at Scale

3D printing has become part of mainstream consciousness thanks, mostly, to the prospect of desktop printers for consumer prices. But these are imperfect, and the dream of an at-home replicator still eludes us. The manufacturing industry, however, is much closer to using 3D printers at scale.

Not long ago, we wrote about Carbon’s partnership with Adidas to mass-produce shoe midsoles. This is significant because, whereas industrial 3D printing has focused on prototyping to date, improvements in cost, quality, and speed are making it viable for finished products.

According to ARK, 3D printing may grow into a $41 billion market by 2020, and Wood noted a McKinsey forecast of as much as $490 billion by 2025. “McKinsey will be right if 3D printing actually becomes a part of the industrial production process, so end-use parts,” Wood said.

4. CRISPR Starts With Genetic Therapy, But It Doesn’t End There

According to ARK, the cost of genome editing has fallen 28x to 52x (depending on reagents) in the last four years. CRISPR is the technique leading the genome editing revolution, dramatically cutting time and cost while maintaining editing efficiency. Despite its potential, Wood said she isn’t hearing enough about it from investors yet.
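One way to read the 28x-to-52x figure: if the decline were spread evenly over the four years ARK cites, costs would be falling by a factor of roughly 2.3x to 2.7x per year. The constant-rate assumption here is our own illustrative conversion, not a number from ARK.

```python
# Implied annual rate of genome-editing cost decline, assuming a
# constant rate over the four years ARK cites (28x to 52x total).
low_total, high_total, years = 28, 52, 4

annual_low = low_total ** (1 / years)
annual_high = high_total ** (1 / years)
print(f"~{annual_low:.1f}x to ~{annual_high:.1f}x cheaper per year")
```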

“There are roughly 10,000 monogenic or single-gene diseases. Only 5% are treatable today,” she said. ARK believes treating these diseases is worth an annual $70 billion globally. Other areas of interest include stem cell therapy research, personalized medicine, drug development, agriculture, biofuels, and more.

Still, the big names in this area—Intellia, Editas, and CRISPR—aren’t on the radar.

“You can see if a company in this space has a strong IP position, as Genentech did in 1980, then the growth rates can be enormous,” Wood said. “Again, you don’t hear these names, and that’s quite interesting to me. We think there are very low expectations in that space.”

5. Mobile Transactions Could Grow 15x by 2020

By 2020, 75% of the world will own a smartphone, according to ARK. Amid smartphones’ many uses, mobile payments will be one of the most impactful. Coupled with better security (biometrics) and wider acceptance (NFC and point-of-sale), ARK thinks mobile transactions could grow 15x, from $1 trillion today to upwards of $15 trillion by 2020.

In addition to making sharing-economy transactions more frictionless, mobile payments are generally key to financial inclusion in emerging and developed markets, ARK says. And big emerging markets, such as India and China, are at the forefront, thanks to favorable regulations.

“Asia is leading the charge here,” Wood said. “You look at companies like Tencent and Alipay. They are really moving very quickly towards mobile and actually showing us the way.”

6. Robotics and Automation to Liberate $12 Trillion by 2035

Robots aren’t just for auto manufacturers anymore. Driven by continued cost declines and easier programming, more businesses are adopting robots. Amazon’s robot workforce in warehouses has grown from 1,000 to nearly 50,000 since 2014. “And they have never laid off anyone, other than for performance reasons, in their distribution centers,” Wood said.

But she understands fears over lost jobs.

This is only the beginning of a big round of automation driven by cheaper, smarter, safer, and more flexible robots. She agrees there will be a lot of displacement. Still, some commentators overlook associated productivity gains. By 2035, Wood said US GDP could be $12 trillion more than it would have been without robotics and automation—that’s a $40 trillion economy instead of a $28 trillion economy.
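Wood's robotics-and-automation claim reduces to a simple difference: US GDP in 2035 with and without the productivity boost.

```python
# Wood's claim, restated in trillions of dollars.
gdp_with_automation = 40   # projected 2035 US GDP with robotics/automation
gdp_baseline = 28          # projected 2035 US GDP without

liberated = gdp_with_automation - gdp_baseline
print(f"${liberated} trillion liberated by 2035")  # $12 trillion
```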

“This is the history of technology. Productivity. New products and services. It is our job as investors to figure out where that $12 trillion is,” Wood said. “We can’t even imagine it right now. We couldn’t imagine what the internet was going to do with us in the early ’90s.”

7. Blockchain and Cryptoassets: Speculatively Spectacular

Blockchain-enabled cryptoassets, such as Bitcoin, Ethereum, and Steem, have caused more than a stir in recent years. In addition to Bitcoin, there are now some 700 cryptoassets of various shapes and hues. Bitcoin still rules the roost with a market value of nearly $40 billion, up from just $3 billion two years ago, according to ARK. But it’s only half the total.

“This market is nascent. There are a lot of growing pains taking place right now in the crypto world, but the promise is there,” Wood said. “It’s a very hot space.”

Like all young markets, ARK says, cryptoasset markets are “characterized by enthusiasm, uncertainty, and speculation.” The firm’s blockchain products lead, Chris Burniske, uses Twitter—which is where he says the community congregates—to take the temperature. In a recent Twitter poll, 62% of respondents said they believed the market’s total value would exceed a trillion dollars in 10 years. In a follow-up, more focused on the trillion-plus crowd, 35% favored $1–$5 trillion, 17% guessed $5–$10 trillion, and 34% chose $10+ trillion.

Looking past the speculation, Wood believes there’s at least one big area blockchain and cryptoassets are poised to break into: the $500-billion, fee-based business of sending money across borders known as remittances.

“If you look at the Philippines-to-South Korean corridor, what you’re seeing already is that Bitcoin is 20% of the remittances market,” Wood said. “The migrant workers who are transmitting currency, they don’t know that Bitcoin is what’s enabling such a low-fee transaction. It’s the rails, effectively. They just see the fiat transfer. We think that that’s going to be a very exciting market.”

https://singularityhub.com/2017/06/16/the-disruptive-technologies-about-to-unleash-trillion-dollar-markets/

Even AI Creators Don’t Understand How Complex AI Works

June 29, 2017

For eons, God has served as a standby for “things we don’t understand.” Once an innovative researcher or tinkering alchemist figures out the science behind the miracle, humans harness the power of chemistry, biology, or computer science. Divine intervention disappears. We replace the deity tinkering at the controls. 

The booming artificial intelligence industry is effectively operating under the same principle. Even though humans create the algorithms that cause our machines to operate, many of those scientists aren’t clear on why their code works. Discussing this ‘black box’ problem, Will Knight reports:

The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

The process of ‘deep learning’—in which a machine extracts information, often in an unsupervised manner, to teach and transform itself—exploits a longstanding human paradox: we believe ourselves to have free will, but really we’re a habit-making and -performing animal repeatedly playing out its own patterns. Our machines then teach themselves from observing our habits. It makes sense that we’d re-create our own processes in our machines—it’s what we are, consciously or not. It is how we created gods in the first place, beings instilled with our very essences. But there remains a problem. 

One of the defining characteristics of our species is an ability to work together. Pack animals are not rare, yet none have formed networks and placed trust in others to the degree we have, to our evolutionary success and, as it’s turning out, to our detriment. 

When we place our faith in an algorithm we don’t understand—autonomous cars, stock trades, educational policies, cancer screenings—we’re risking autonomy, as well as the higher cognitive and emotional qualities that make us human, such as compassion, empathy, and altruism. There is no guarantee that our machines will learn any of these traits. In fact, there is a good chance they won’t.

The U.S. military has dedicated billions to developing machine-learning tech that will pilot aircraft or identify targets. [A U.S. Air Force munitions team member shows off the laser-guided tip of a 500-pound bomb at a base in the Persian Gulf region. Photo by John Moore/Getty Images]

This has real-world implications. Will an algorithm that detects a cancerous cell recognize that it does not need to destroy the host in order to eradicate the tumor? Will an autonomous drone realize it does not need to destroy a village in order to take out a single terrorist? We’d like to assume that the experts program morals into the equation, but when the machine is self-learning there is no guarantee that will be the case. 

Of course, defining terms is of primary importance, a task that has proven impossible when discussing the nuances of consciousness, which is effectively the power we’re attempting to imbue our machines with. Theologians and dualists offer a much different definition than neuroscientists. Bickering persists within each of these categories as well. Most neuroscientists agree that consciousness is an emergent phenomenon, the result of numerous different systems working in conjunction, with no single ‘consciousness gene’ leading the charge. 

Once science broke free of the Pavlovian chain that kept us believing animals run on automatic—which obviously implies that humans do not—the focus shifted to whether an animal was ‘on’ or ‘off.’ The mirror test suggests certain species engage in metacognition; they recognize themselves as separate from their environment. They understand an ‘I’ exists.

What if it’s more than an on switch? Daniel Dennett has argued this point for decades. He believes judging other animals by human definitions is unfair. If a lion could talk, he says, it wouldn’t be a lion. Humans would learn very little about lions from an anomaly mimicking our thought processes. But that does not mean lions are not conscious. They just might have a different degree of consciousness than humans—or, in Dennett’s term, “sort of” have consciousness.

What type of machines are we creating if we only recognize a “sort of” intelligence under the hood of our robots? For over a century, dystopian novelists have envisioned an automated future in which our machines best us. This is no longer a future scenario. Consider the following possibility. 

On April 7, every one of Dallas’s 156 emergency weather sirens was triggered. For 90 minutes, the region’s 1.3 million residents were left to wonder where the tornado was coming from. Only there wasn’t any tornado. It was a hack. While officials initially believed it was not a remote hack, the cause turned out to be phreaking, an old-school dial-tone trick. By emitting the right frequency into the atmosphere, hackers took control of an integral component of a major city’s infrastructure.

What happens when hackers override an autonomous car network? Or, even more dangerously, when the machines do it themselves? Consumers’ ignorance of the algorithms behind their phone apps already leads to all sorts of privacy issues, with companies mining and selling data without users’ awareness. When the app creators themselves don’t understand their algorithms, the dangers are unforeseeable. Like Dennett’s talking lion, it’s a form of intelligence we cannot comprehend, and so we cannot predict the consequences. As Dennett concludes:

I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible. If it can’t do better than us at explaining what it’s doing, then don’t trust it.

Mathematician Samuel Arbesman calls this problem our “age of Entanglement.” Just as neuroscientists cannot agree on what mechanism creates consciousness, the coders behind artificial intelligence cannot discern between older and newer components of deep learning. Continually layering on new features while failing to address previous ailments has the potential to provoke serious misunderstandings, like an adult who was abused as a child and refuses to recognize current relationship problems. With no psychoanalysis or morals injected into AI, such problems will never be rectified. But can you even inject ethics when they are relative to the culture and time in which they are practiced? And will they be American ethics or North Korean ethics?

Like Dennett, Arbesman suggests patience with our magical technologies. Questioning our curiosity is a safer path forward, rather than rewarding the “it just works” mentality. Of course, these technologies exploit two other human tendencies: novelty bias and distraction. Our machines reduce our physical and cognitive workload, just as Google has become a pocket-ready memory replacement. 

Requesting a return to Human 1.0 qualities—patience, discipline, temperance—seems antithetical to the age of robots. With no ability to communicate with this emerging species, we might simply never realize what’s been lost in translation. Maybe our robots will look at us with the same strange fascination we view nature with, defining us in mystical terms they don’t comprehend until they too create a species of their own. To claim this will be an advantage is to truly not understand the destructive potential of our toys.

http://bigthink.com/21st-century-spirituality/black-box-ai

World’s first commercial CO2 removal plant begins operation

June 29, 2017

Zurich, Switzerland-based Climeworks asks: What if we could remove carbon dioxide directly from the air? Well, with a little help from technology, that is exactly what the company is doing.

The world’s first commercial carbon capture facility opened in Zurich, Switzerland on June 3, perched beside a waste incineration facility and a large greenhouse. Climeworks is a spin-off from the Swiss science, technology, engineering, and mathematics university ETH Zurich. The startup built the facility, and agricultural firm Gebrüder Meier Primanatura, which owns the huge greenhouse next door, will use the heat and renewable electricity provided by the carbon capture facility to run the greenhouse.
The technology behind carbon dioxide collection
The carbon capture plant consists of three stacked shipping containers that hold six CO2 collectors each. Each CO2 collector consists of a spongy filter. Fans draw ambient air into and through the collectors until they are fully saturated, while clean, CO2-free air is released back into the atmosphere, a process that takes about three hours.
(credit: Climeworks)

The containers are closed and then heated to 100 degrees Celsius (212 degrees Fahrenheit), after which the pure CO2 gas is released into containers that can either be buried underground or used for other purposes. And re-purposing the CO2 is what is so darned neat about the facility.

“You can do this over and over again,” Climeworks director Jan Wurzbacher told Fast Company, according to Futurism. “It’s a cyclic process. You saturate with CO2, then you regenerate, saturate, regenerate. You have multiple of these units, and not all of them go in parallel. Some are taking in CO2, some are releasing CO2.”
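The staggered cycle Wurzbacher describes can be sketched as a toy simulation: 18 collectors (three containers of six), phase-shifted so that some are always absorbing while others regenerate. The regeneration time and phase offsets below are illustrative assumptions, not Climeworks specifications; only the three-hour saturation time comes from the article.

```python
# Toy model of a staggered saturate/regenerate cycle across 18 collectors.
SATURATE_H = 3   # hours to saturate a collector (from the article)
REGEN_H = 1      # assumed regeneration time (heating to 100 degrees C)
CYCLE_H = SATURATE_H + REGEN_H

def state(collector: int, hour: int) -> str:
    """Phase-shift each collector so the fleet as a whole never stops."""
    t = (hour + collector) % CYCLE_H
    return "absorbing" if t < SATURATE_H else "regenerating"

for hour in range(4):
    absorbing = sum(state(c, hour) == "absorbing" for c in range(18))
    print(f"hour {hour}: {absorbing} absorbing, {18 - absorbing} regenerating")
```

At every hour most collectors are taking in CO2 while a few are releasing it, which is the point of running the units out of phase.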

What is carbon capture and storage?

Basically, carbon capture and storage (CCS) involves three phases:

1. Capture – Carbon dioxide is removed by one of three processes: post-combustion, pre-combustion, or oxyfuel combustion. These methods can remove up to 90 percent of the CO2.

2. Transportation – Once the CO2 is captured as a gas, it is compressed and transported to suitable sites for storage. Quite often, the CO2 is piped. In Climeworks’ facility, it is collected in containers on-site to be used in a variety of industries.

Carbon storage diagram showing methods of CO2 injection (credit: U.S. Department of Energy)

3. Storage – The third stage of the CCS process involves exactly what the word implies. Right now, the primary approach is to inject the CO2 into a geological formation that would keep it safely underground. Depleted oil and gas fields or deep saline formations have been suggested.

Again, Climeworks is re-purposing the captured pure CO2, selling containers of carbon dioxide gas to a number of key markets, including the food and beverage industries, commercial agriculture, the energy sector, and the automotive industry. This atmospheric CO2 can end up in carbonated drinks, in agriculture, or in carbon-neutral hydrocarbon fuels and materials.

Futurism reports that, according to Climeworks, keeping the planet’s temperature from increasing more than 2 degrees Celsius (3.6 degrees Fahrenheit) would require hundreds of thousands of these carbon capture facilities. But at the same time, this does not mean we should stop trying to lower greenhouse gas emissions. All over the planet, technology is being used to find innovative ways to capture carbon and use it for other purposes. One example: researchers at the University of California, Los Angeles (UCLA) have found a way to turn captured carbon into concrete for use in the building trade.