MIT’s AlterEgo headset can read words you say in your head

May 20, 2018

I don’t want to alarm you, but robots can now read your mind. Kind of.

AlterEgo is a new headset developed by MIT Media Lab. You strap it to your face. You talk to it. It talks to you. But no words are said. You say things in your head, like “what street am I on,” and it reads the signals your brain sends to your mouth and jaw, and answers the question for you.
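
MIT hasn’t published AlterEgo’s full recognition pipeline here, but the core idea — turning a short window of neuromuscular signals from the face and jaw into a predicted word — can be sketched with a toy classifier. Everything in the sketch below (the channel count, the features, the nearest-centroid model, the tiny vocabulary) is an illustrative assumption, not the lab’s actual code.

```python
import numpy as np

# Toy sketch of silent-speech recognition: classify a window of electrode
# readings into one of a few known words. Purely illustrative; AlterEgo's
# real electrode layout, features, and model may differ entirely.

rng = np.random.default_rng(0)
VOCAB = ["what", "street", "am", "I", "on"]   # hypothetical vocabulary
N_CHANNELS, WINDOW = 7, 250                   # assumed channel count and samples per window

def features(window):
    """Collapse a (channels, samples) window into per-channel amplitude and energy."""
    return np.concatenate([np.abs(window).mean(axis=1), (window ** 2).mean(axis=1)])

# Pretend training data: 20 labeled windows per word (synthetic, for illustration).
train = {w: [rng.normal(i, 1.0, (N_CHANNELS, WINDOW)) for _ in range(20)]
         for i, w in enumerate(VOCAB)}

# Nearest-centroid "model": one average feature vector per word.
centroids = {w: np.mean([features(x) for x in xs], axis=0) for w, xs in train.items()}

def predict(window):
    f = features(window)
    return min(centroids, key=lambda w: np.linalg.norm(f - centroids[w]))

test_window = rng.normal(1, 1.0, (N_CHANNELS, WINDOW))   # synthetic signal resembling "street"
print(predict(test_window))                               # -> "street"
```

In practice the lab pairs carefully placed electrodes with a far more capable model, but the shape of the problem is the same: signals in, a word out.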

Check out this handy explainer video MIT Media Lab made that shows some of the potential of AlterEgo:

So yes, according to MIT Media Lab, you may soon be able to control your TV with your mind.

The institution explained in its announcement that AlterEgo communicates with you through bone-conduction headphones, which circumvent the ear canal by transmitting sound vibrations through your face bones. Freaky. This, MIT Media Lab said, makes it easier for AlterEgo to talk to you while you’re talking to someone else.

Plus, in trials involving 15 people, AlterEgo transcribed the silently spoken words with 92 percent accuracy.

Arnav Kapur, the graduate student who led AlterEgo’s development, describes it as an “intelligence-augmentation device.”

“We basically can’t live without our cellphones, our digital devices,” said Pattie Maes, Kapur’s thesis advisor at MIT Media Lab. “But at the moment, the use of those devices is very disruptive.

“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

This article was originally published by: https://www.cnet.com/news/mit-alterego-headset-can-read-words-you-say-in-your-head/

Revolutionary 3D nanohybrid lithium-ion battery could allow for charging in just seconds

May 20, 2018

Cornell University engineers have designed a revolutionary 3D lithium-ion battery that could be charged in just seconds.

In a conventional battery, the battery’s anode and cathode* (the two sides of a battery connection) are stacked in separate columns (the black and red columns in the left illustration above). For the new design, the engineers instead used thousands of nanoscale (ultra-tiny) anodes and cathodes (shown in the illustration on the right above).

Putting those thousands of anodes and cathodes just 20 nanometers (billionths of a meter) apart allows for extremely fast charging (in seconds or less) and also lets the battery store more energy for longer.
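
A rough back-of-the-envelope calculation shows why the 20-nanometer spacing matters: the characteristic time for an ion to cross the gap scales with the square of the distance divided by its diffusion coefficient. The diffusion coefficient below is an assumed, illustrative value, not a number from the Cornell paper.

```python
# Illustrative only: characteristic ion-diffusion time t ~ L^2 / D.
# D below is an assumed ballpark for a solid electrolyte, not a measured value
# from the Cornell work.
D = 1e-12                         # cm^2/s, assumed ion diffusion coefficient

for gap_nm in (20, 100_000):      # 20 nm vs. a ~100-micrometer conventional separation
    L = gap_nm * 1e-7             # nanometers -> centimeters
    print(f"{gap_nm:>7} nm gap -> ~{L**2 / D:.0e} s to diffuse across")

# With the same (assumed) D, 20 nm gives a few seconds while 100 micrometers gives ~1e8 s,
# which is why conventional cells rely on far more mobile liquid electrolytes instead.
```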

Left to right: The anode was made of a self-assembling (automatically grown) thin-film carbon material with thousands of regularly spaced pores (openings), each about 40 nanometers wide. The pores were coated with a 10-nanometer-thick electrolyte* material (the blue layer covering the black anode layer, as shown in the “Electrolyte coating” illustration), which is electronically insulating but conducts ions (an ion is an atom or molecule that carries an electrical charge; ions, rather than electrons, are what flow inside a battery). The cathode was made from sulfur. (credit: Cornell University)

In addition, unlike traditional batteries, the new battery’s electrolyte material has no pinholes (tiny holes), which can short-circuit a battery and cause fires in mobile devices such as cellphones and laptops.

The engineers are still perfecting the technique, but they have applied for patent protection on the proof-of-concept work, which was funded by the U.S. Department of Energy and in part by the National Science Foundation.

Reference: Energy & Environmental Science (open access with registration), March 9, 2018. Source: Cornell University, May 16, 2018.

* How batteries work

Batteries have three parts: an anode (-) and a cathode (+) — the negative and positive terminals at either end of a traditional battery — which are hooked up to an electrical circuit (green), and the electrolyte, which keeps the anode and cathode apart and allows ions (electrically charged atoms or molecules) to flow between them. (credit: Northwestern University Qualitative Reasoning Group)

This article was originally published by:  http://www.kurzweilai.net/revolutionary-3d-nanohybrid-lithium-ion-battery-could-allow-for-charging-in-just-seconds?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=50d67a312d-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-50d67a312d-282129417

A new generation of brain-like computers comes of age

May 17, 2018

Conventional computer chips aren’t up to the challenges posed by next-generation autonomous drones and medical implants. Kwabena Boahen has laid out a way forward.

For five decades, Moore’s law held up pretty well: Roughly every two years, the number of transistors one could fit on a chip doubled, all while costs steadily declined.

Today, however, transistors and other electronic components are so small they’re beginning to bump up against fundamental physical limits on their size. Moore’s law has reached its end, and it’s going to take something different to meet the need for computing that is ever faster, cheaper and more efficient.
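
It is easy to under-appreciate just how much that five-decade run delivered. A quick compounding calculation, using the rough doubling-every-two-years rule and an illustrative early-1970s baseline rather than any specific chip’s figures, makes the point.

```python
# Rough compounding under Moore's law: one doubling roughly every two years.
# The baseline transistor count is an illustrative early-1970s figure.
baseline_transistors = 2_300      # assumed starting point (early microprocessor era)
years = 50
doublings = years // 2

print(f"{doublings} doublings -> a factor of {2 ** doublings:,}")
print(f"roughly {baseline_transistors * 2 ** doublings:,} transistors per chip after {years} years")
```

Under those assumptions you end up in the tens of billions of transistors per chip, which is roughly where today’s largest processors sit.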

As it happens, Kwabena Boahen, a professor of bioengineering and of electrical engineering, has a pretty good idea what that something more is: brain-like, or neuromorphic, computers that are vastly more efficient than the conventional digital computers we’ve grown accustomed to.

This is not a vision of the future, Boahen said. As he lays out in the latest issue of Computing in Science and Engineering, the future is now.

30 years in the making

It’s a moment Boahen has been working toward his entire adult life, and then some. He first got interested in computers as a teenager growing up in Ghana. But the more he learned, the more traditional computers looked like a giant, inelegant mess of memory chips and processors connected by weirdly complicated wiring.

Both the need for something new and the first ideas for what that would look like crystalized in the mid-1980s. Even then, Boahen said, some researchers could see the end of Moore’s law on the horizon. As transistors continued to shrink, they would bump up against fundamental physical limits on their size. Eventually, they’d get so small that only a single lane of electron traffic could get through under the best circumstances. What had once been electron superfreeways would soon be tiny mountain roads, and while that meant engineers could fit more components on a chip, those chips would become more and more unreliable.

At around the same time, Boahen and others came to understand that the brain had enormous computing power – orders of magnitude more than what people have built, even today – while using vastly less energy and relying on remarkably unreliable components: neurons.

How does the brain do it?

While others have built brain-inspired computers, Boahen said, he and his collaborators have developed a five-point prospectus – manifesto might be the better word – for how to build neuromorphic computers that directly mimic in silicon what the brain does in flesh and blood.

The first two points of the prospectus concern neurons themselves, which, unlike digital computers, operate in a mix of digital and analog modes. In their digital mode, neurons send discrete, all-or-nothing signals in the form of electrical spikes, akin to the ones and zeros of digital computers. But they process incoming signals by adding them all up and firing only once a threshold is reached – more akin to a dial than a switch.
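
That “add up inputs like a dial, fire like a switch” behaviour is what the textbook leaky integrate-and-fire model captures. The sketch below uses that standard model with made-up constants; it is not the circuit Boahen’s chips implement, but it shows how analog accumulation and digital spiking coexist in one unit.

```python
# Leaky integrate-and-fire neuron: analog accumulation of input, digital
# all-or-nothing spike at a threshold. Constants are made up for illustration;
# this is the textbook model, not Neurogrid's circuitry.
dt, tau, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0

def simulate(input_current, steps=1000):
    v, spike_times = 0.0, []
    for t in range(steps):
        v += (dt / tau) * (-v + input_current)   # analog: leaky integration ("the dial")
        if v >= v_thresh:                        # digital: threshold crossing ("the switch")
            spike_times.append(t * dt)
            v = v_reset                          # reset and start integrating again
    return spike_times

for current in (0.8, 1.2, 2.0):
    print(f"input {current}: {len(simulate(current))} spikes in one second")
```

A weak input never crosses the threshold, while stronger inputs produce steadily higher firing rates – the graded, dial-like response riding on top of all-or-nothing spikes.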

That observation led Boahen to try using transistors in a mixed digital-analog mode. Doing so, it turns out, makes chips both more energy efficient and more robust when the components do fail, as about 4 percent of the smallest transistors are expected to do.

From there, Boahen builds on neurons’ hierarchical organization, distributed computation and feedback loops to create a vision of an even more energy efficient, powerful and robust neuromorphic computer.

The future of the future

But it’s not just a vision. Over the last 30 years, Boahen’s lab has actually implemented most of their ideas in physical devices, including Neurogrid, one of the first truly neuromorphic computers. In another two or three years, Boahen said, he expects they will have designed and built computers implementing all of the prospectus’s five points.

Don’t expect those computers to show up in your laptop anytime soon, however. Indeed, that’s not really the point – most personal computers operate nowhere near the limits on conventional chips. Neuromorphic computers would be most useful in embedded systems that have extremely tight energy requirements, such as very low-power neural implants or on-board computers in autonomous drones.

“It’s complementary,” Boahen said. “It’s not going to replace current computers.”

The other challenge: getting others, especially chip manufacturers, on board. Boahen is not the only one thinking about what to do about the end of Moore’s law or looking to the brain for ideas. IBM’s TrueNorth, for example, takes cues from neural networks to produce a radically more efficient computer architecture. On the other hand, it remains fully digital, and, Boahen said, 20 times less efficient than Neurogrid would be had it been built with TrueNorth’s 28-nanometer transistors.

This article was originally published by: https://engineering.stanford.edu/magazine/article/new-generation-brain-computers-comes-age

10 Things Children Born in 2018 Will Probably Never Experience

February 01, 2018

It’s All Coming Back to Me Now

2017 was a year filled with nostalgia thanks to a number of pop culture properties with ties to the past.

We got another official Alien film, and Blade Runner came back with new visuals to dazzle us. Meanwhile, “Stranger Things” hearkened back to the Spielbergian fantasy that wowed so many children of the ’80s, and “Twin Peaks” revived Agent Cooper so he could unravel yet another impenetrable mystery from the enigmatic mind of David Lynch.

As these films and TV shows remind us, a lot can change over the course of a few decades, and the experiences of one generation can be far different from those that follow closely behind thanks to advances in technology.

While the “Stranger Things” kids’ phone usage reminded 30-somethings of their own pre-mobile adolescences, children born in 2018 will probably never know the feeling of being tethered to a landline. A trip to the local megaplex to catch Blade Runner 2049 may have stirred up adults’ memories of seeing the original, but children born this year may never know what it’s like to watch a film on a smaller screen with a sound system that doesn’t rattle the brain.

Technology is currently advancing faster than ever before, so what else will kids born today only read about in books or, more likely, on computer screens? Here’s a list of the top 10 things that children born in 2018 will likely never experience.

Long, Boring Travel

Mobile devices and in-flight entertainment systems have made it pretty easy to stay distracted during a long trip. The trips themselves, however, haven’t gotten much shorter: aside from the Concorde, which was retired in 2003, commercial air travel is no faster than it was decades ago, and beyond sparsely used bullet trains, the speed of our ground transportation has remained fairly limited.

However, recent developments in transportation will likely speed up the travel process, meaning today’s kids may never know the pain of seemingly endless flights and road trips.

Supersonic planes are making a comeback and could ferry passengers “across the pond” in as little as 3.5 hours. While these aircraft could certainly make travel faster for a small subset of travelers, physical and cost limitations will likely prevent them from reaching the mainstream.

However, hyperloop technology could certainly prove to be an affordable way for travelers to subtract travel time from their itineraries.

Already, these super-fast systems have reached test speeds of up to 387 km/h (240 mph). If proposed routes come to fruition, they could significantly cut travel time between major cities. For example, a trip from New York to Washington, D.C., could take just 30 minutes as opposed to the current five hours.
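
The 30-minute figure is straightforward distance-over-speed arithmetic once you assume a cruise speed well above today’s test runs. Both numbers below are assumptions chosen only to show the calculation, not Hyperloop specifications.

```python
# Back-of-the-envelope trip time, ignoring acceleration, deceleration, and stops.
# Both figures are assumptions used only to show the arithmetic, not route specs.
route_km = 360        # roughly the New York -> Washington, D.C. corridor
cruise_kmh = 1000     # an assumed future hyperloop cruise speed

print(f"~{route_km / cruise_kmh * 60:.0f} minutes at an assumed {cruise_kmh} km/h cruise")
print(f"~{route_km / 387 * 60:.0f} minutes even at the 387 km/h already reached in testing")
```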

Driver’s Licenses

Obtaining a driver’s license is currently a rite of passage for teenagers as they make that transition from the end of childhood to the beginning of adulthood. By the time today’s newborns are 16, self-driving cars may have already put an end to this unofficial ritual by completely removing the need for human operators of motor vehicles.

According to the Centers for Disease Control and Prevention (CDC), an average of six teens between the ages of 16 and 19 died every day in 2015 from injuries sustained in motor vehicle collisions. Since the vast majority of accidents are caused by human error, removing the human from the equation could help save lives across all age groups, which is why autonomous cars are a serious priority for many.

Elon Musk, CEO of Tesla, is confident that his electric and (currently) semi-autonomous car manufacturing company will produce fully autonomous vehicles within the next two years, and several ride-hailing services are already testing self-driving vehicles.

Biology’s Monopoly on Intelligence

Self-driving cars are just a single example of innovations made possible by the advancement of artificial intelligence (AI).

Today, we have AI systems that rival or even surpass human experts at specific tasks, such as playing chess or sorting recyclables. However, some experts predict that conscious AI systems rivaling human intelligence could be just decades away.

Advanced robots like Hanson Robotics’ Sophia are already blurring the line between humanity and machines. The next few decades will continue to push boundaries as we inch closer and closer to the singularity.

Children born in 2018 may never know what it’s like to join the workforce or go to college at a time when humans are the smartest entities on the planet.

Language Barriers

Another promising use for AI is communication, and eventually, technology could end the language barrier on Earth.

Communication tools such as Skype have already incorporated instantaneous translation capabilities that allow speakers of a handful of languages to converse freely in real time, and Google has built translation capabilities into its new headphones.

Other companies, such as Waverly Labs, are also working on perfecting the technology that will eventually rival the ability of the Babel fish, an alien species found in the book “The Hitchhiker’s Guide to the Galaxy” that can instantly translate alien languages for its host.
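
Under the hood, these tools chain the same three stages: speech recognition, machine translation, and speech synthesis. The sketch below shows that composition with hypothetical placeholder functions; it is not Skype’s, Google’s, or Waverly Labs’ actual API.

```python
# Minimal sketch of a speech-to-speech translation pipeline. The three functions
# are hypothetical placeholders for real speech-recognition, machine-translation,
# and speech-synthesis services; no vendor's actual API is implied.

def recognize(audio: bytes, lang: str) -> str:
    """Speech -> text (placeholder)."""
    return "where is the train station"

def translate(text: str, source: str, target: str) -> str:
    """Text in the source language -> text in the target language (placeholder)."""
    return "wo ist der Bahnhof"

def synthesize(text: str, lang: str) -> bytes:
    """Text -> spoken audio (placeholder)."""
    return text.encode()

def interpret(audio: bytes, source: str = "en", target: str = "de") -> bytes:
    text = recognize(audio, source)
    translated = translate(text, source, target)
    return synthesize(translated, target)   # played back through the listener's earpiece

print(interpret(b"...", "en", "de"))
```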

Children born in 2018 may find themselves growing up in a world in which anyone can talk to anyone, and the idea of a “foreign” language will seem, well, completely foreign.

Humanity as a Single-Planet Species

Technology that improves human communication could radically impact our world, but eventually, we may need to find a way to communicate with extraterrestrial species. Granted, the worlds we reach in the lifetimes of anyone born this year aren’t likely to contain intelligent life, but the first milestones on the path to such a future are likely to be reached in the next few decades.

When he’s not ushering in the future of autonomous transportation, Musk is pushing his space exploration company SpaceX to develop the technology to put humans on Mars. He thinks he’ll be able to get a crew to the Red Planet by 2024, so today’s children may have no memory of a time before humanity’s cosmic footprint extended beyond a single planet.

Quiet Spaces

Overpopulation is one of the factors that experts point to when they discuss the need for humanity to spread into the cosmos. Urban sprawl has been an issue on Earth for decades, bringing about continued deforestation and the elimination of farming space.

A less-discussed problem caused by the continuous spread of urbanization, however, is the increase in noise pollution.

Experts are concerned that noise is quickly becoming the next great public health crisis. United Nations projections estimate that by 2100, 84 percent of the world’s 10.8 billion people will live in cities, surrounded by a smorgasbord of sound.

This decline in the number of people who live in areas largely free from noise pollution means many of the babies born today will never know what it’s like to enjoy the sound of silence.

World Hunger

Urbanization may limit the space available for traditional farming, but thanks to innovations in agriculture, food shortages may soon become a relic of the past.

Urban farming is quickly developing into a major industry that is bringing fresh produce and even fish to many markets previously considered food deserts (areas cut off from access to fresh, unprocessed foods).

Vertical farming will bring greater access to underserved areas, making it more possible than ever to end hunger in urban areas. Meanwhile, companies are developing innovative ways to reduce food waste, such as by transforming food scraps into sweets or using coffee grounds to grow mushrooms.

If these innovations take hold, children born in 2018 could grow up in a world in which every person on Earth has access to all the food they need to live a healthy, happy life.

Paper Currency

The advent of credit cards may have been the first major blow to cash, but it wasn’t the last. Today, paper currency must contend with PayPal, Venmo, Apple Pay, and a slew of other payment options.

By the time children born in 2018 are old enough to earn a paycheck, they will have access to even more payment options, and cash could be completely phased out.

In the race to dethrone paper currency, cryptocurrencies are a frontrunner. Blockchain technology is adding much-needed security to financial transactions, and while the crypto market is currently volatile, experts remain optimistic about its potential to permanently disrupt finance.

Digital Insecurity

Today, digital security is a major subject of concern. Hacking can occur on an international level, and with the growth of the Internet of Things (IoT), even household appliances can be points of weakness in the defenses guarding sensitive personal information.

Experts are feverishly trying to keep security development on pace with the ubiquity of digitalization, and technological advances such as biometrics and RFID tech are helping. Unfortunately, these defenses still rely largely on typical encryption software, which is breakable.

Quantum computers will dramatically increase computing power, enough to break much of today’s public-key encryption, but they will also enable stronger defenses. By the time children born in 2018 reach adulthood, high-speed quantum encryption could ensure that the digital world they navigate is virtually unhackable.

Single-Screen Computing

While most of our digital devices currently make use of a typical flat screen, tomorrow’s user interfaces will be far more dynamic, and children born in 2018 may not remember a time when they were limited to a single screen and a keyboard.

The development of virtual reality (VR) and augmented reality (AR) has shifted the paradigm, and as these technologies continue to advance, we will increasingly see new capabilities incorporated into our computing experience.

Gesture recognition, language processing, and other technologies will allow for a more holistic interaction with our devices, and eventually, we may find ourselves interacting with systems akin to what we saw in Minority Report.

Here’s Everything You Need to Know about Elon Musk’s Human/AI Brain Merge

January 05, 2018

Neuralink Has Arrived

After weeks of anticipation, details on Elon Musk’s brain-computer interface company Neuralink have finally been revealed. In a detailed report on the website Wait But Why, Tim Urban recounts insights gleaned from his weeks meeting with Musk and his Neuralink team at their San Francisco headquarters. He offers an incredibly detailed and informative overview of both Musk’s latest venture and its place in humanity’s evolution, but for those of you interested in just the big picture, here’s what you really need to know about Neuralink.

Your Brain Will Get Another “Layer”

Right now, you have two primary “layers” to your brain: the limbic system, which controls things like your emotions, long-term memory, and behavior; and the cortex, which handles your complex thoughts, reasoning, and long-term planning. Musk wants his brain interface to be a third layer that will complement the other two. The weirdest thing about that goal may be that he thinks we actually already have this third layer — we just don’t have the best interface for it:

We already have a digital tertiary layer in a sense, in that you have your computer or your phone or your applications…The thing that people, I think, don’t appreciate right now is that they are already a cyborg…If you leave your phone behind, it’s like missing limb syndrome. I think people—they’re already kind of merged with their phone and their laptop and their applications and everything.

The goal of Neuralink, then, is eliminating the middleman and putting that power we currently have at our fingertips directly into our brains. Instead of one person using their phone to transmit a thought to another person (“Dinner at 8?”), the thought would just go from one brain to the other directly.

Thankfully, we’ll be able to control this completely, Musk tells Urban: “People won’t be able to read your thoughts — you would have to will it. If you don’t will it, it doesn’t happen. Just like if you don’t will your mouth to talk, it doesn’t talk.”

Musk Is Working with Some Very Smart People

Musk met with more than 1,000 people before deciding on the eight who would help him shape the future of humanity at Neuralink. He claims assembling the right team was a challenge in and of itself, as he needed to find people capable of working in a cross-disciplinary field that includes everything from brain surgery to microscopic electronics.

The crew he landed is a veritable supergroup of smarties. They have backgrounds from MIT, Duke, and IBM, and their bios include phrases like “neural dust,” “cortical physiology,” and “human psychophysics.” They’re engineers, neurosurgeons, and chip designers, and if anyone can bring Elon Musk’s vision to life, it’s them.

The Timeline For Adoption Is Hazy…

Neuralink won’t come out of the gate with a brain-machine interface (BMI) that transforms you into a walking computer. The first product the company will focus on releasing will be much more targeted. “We are aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years,” said Musk.

“I think we are about 8 to 10 years away from this being usable by people with no disability.” – Musk

Just as SpaceX funded its reusable-rocket research by making deliveries to the ISS, and Tesla used profits from its early car sales to fund battery research, these early BMIs for treating disease and disability will keep Neuralink afloat as it works on its truly mind-bending technologies.

As for when those technologies, the ones that allow healthy people to channel their inner telepaths, will arrive, Musk’s fairly optimistic timeline comes with several contingencies: “I think we are about 8 to 10 years away from this being usable by people with no disability…It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.”

…Because The Hurdles are Many

Those are just two of the hurdles Neuralink faces. Elon Musk might make innovation look easy, but even going to Mars seems relatively straightforward in comparison to his plans for his latest company.

First, there are the engineering hurdles to overcome. The company has to deal with the problems of biocompatibility, wirelessness, power, and — the big one — bandwidth. To date, we’ve never put more than roughly 200 electrodes in a person’s brain at one time. When talking about a world-changing interface, the Neuralink team told Urban they were thinking something like “one million simultaneously recorded neurons.” Not only would they need to find a way to ensure that the brain could effectively communicate with that many electrodes, they would also need to overcome the very practical problem of where to physically put them.

The engineering is only half the battle, though. As Musk mentioned, regulatory approval will be a big factor in the development and adoption of Neuralink’s tech. The company also faces potential skepticism and even fear from a public that doesn’t want anyone cutting into their brains to install high-tech machinery — according to a recent Pew survey, the public is even more worried about brain-computer interfaces than about gene editing. There’s also the not-entirely-unfounded fear that these computers could be hacked.

Add to all that our still very, very incomplete understanding of exactly how the brain works, and you can see that the Neuralink team has its work cut out for it.

Neuralink Won’t Exist in a Vacuum

Thankfully, they won’t be working to remake our minds alone — many other universities and research institutes are pushing brain interface technology forward. Facebook’s Building 8 is working on its own BCI, MIT is creating super-thin wires for use in brain implants, and other cyborg devices are already in the works to help the paralyzed walk again and the blind regain their sight. Each new development will push the field forward, and the team at Neuralink will be able to learn from the mistakes and successes of others in the field.

Just like other electric cars were on the road before Tesla came along, brain computer interfaces are not new — the tech might just need a visionary like Musk to elevate it (and us) to the next level.

This article was originally published by:
https://futurism.com/heres-everything-you-need-to-know-about-elon-musks-humanai-brain-merge/

There’s a major long-term trend in the economy that isn’t getting enough attention

January 05, 2018

As the December Federal Reserve (Fed) meeting nears, discussions and speculation about the precise timing of Fed liftoff are certain to take center stage.

But while I’ve certainly weighed in on this debate many times, I believe it’s just one example of a topic that receives far too much attention from investors and market watchers alike.

The Fed has been abundantly clear that the forthcoming rate-hiking cycle, likely to begin this month, will be incredibly gradual and sensitive to how economic data evolves. That means the central bank is likely to be extraordinarily cautious about derailing the recovery, and rates will likely remain historically low for an extended period. In other words, when the Fed does begin rate normalization, not much is likely to change.

Shifting the Focus to Other Economic Trends

In contrast, there are a number of important longer-term trends more worthy of our focus, as they’re likely to have a bigger, longer-lasting impact on markets than the Fed’s first rate move. One market influence that I believe should be getting more attention: the advances in technology happening all around us, innovations that are already having a huge disruptive influence on the economy and markets. The charts below help explain why.

1. ADOPTION OF TECHNOLOGY IN THE U.S., 1900 TO PRESENT

As the chart above shows, people in the U.S. today are adopting new technologies, including tablets and smartphones, at the swiftest pace we’ve seen since the advent of the television. However, while television arguably detracted from U.S. productivity, today’s advances in technology are generally geared toward greater efficiency at lower costs. Indeed, when you take into account technology’s downward influence on price, U.S. consumption and productivity figures look much better than headline numbers would suggest.

2. PERCENTAGE OF TOP 1500 U.S. STOCKS WITH ZERO INVENTORY THROUGH Q2 2015

Technology isn’t just transforming the consumer story. It’s having a similarly dramatic influence on industry, resulting in efficiency gains not reflected in traditional productivity measurements.

For instance, based on corporate capital expenditure data accessible via Bloomberg, it’s clear that U.S. investment is generally accelerating. However, the cost of that investment is going down, allowing companies to become dramatically more efficient and better able to compete. Similarly, with the help of new technologies, many corporations have refined inventory management practices or have adopted business models that are purposefully asset-light, causing average inventory levels to decline over the past few decades. As the chart above shows, among the top 1500 U.S. stocks by market capitalization over the past 35 years, the percentage of companies reporting effectively zero inventory has risen from less than 5 percent to more than 20 percent, an extraordinary four-fold increase.

Above all, if there’s one common theme in these charts, it’s this: technology is advancing so fast that traditional economic metrics haven’t kept up. This has serious implications. It helps to explain widespread misconceptions about the state of the U.S. economy, including the assertion that we reside in a period of low productivity growth, despite the many remarkable advances we see around us. It also makes monetary policy evolution more difficult, and is one reason why I’ve found recent policy debates somewhat myopic and detached from reality.

So, let’s all make this New Year’s resolution: instead of focusing so much on the Fed, let’s give some attention to how technology is changing the entire world in ways never before witnessed, and let’s focus on education and training policies that can help our workforce adapt. Such initiatives are more important and durable, and should have fewer unintended negative economic consequences, than policies designed to distort real interest rates.

This article was originally published by: http://www.businessinsider.com/blackrock-topic-we-should-be-paying-attention-charts-2015-12/#3-highly-skilled-labor-versus-lower-skilled-labor-trends-2000-2015-3

Bionic Contacts: Goodbye Glasses. Hello Vision That’s 3x Better Than 20/20

October 18, 2017

A Clear Problem

Most of us take our vision for granted, and with it the ability to read, write, drive, and complete a multitude of other tasks. Unfortunately, sight does not come so easily to everyone.

For many people, simply seeing is a struggle. In fact, more than 285 million people worldwide have vision problems, according to the World Health Organization (WHO).

Cataracts account for about a third of these. The National Eye Institute reports that more than half of all Americans will have cataracts or will have had cataract surgery by the time they are 80, and in low- and middle-income countries, they’re the leading cause of blindness.

But now, people with vision problems may have new hope.

A Welcome Sight

Soon, cataracts may be a thing of the past, and even better, it may be possible to see a staggering three times better than 20/20 vision. Oh, and you could do it all without wearing glasses or contacts.

So what exactly does having three times better vision mean? If you can currently read a text that is 10 feet away, you would be able to read the same text from 30 feet away. What’s more, people who currently can’t see properly might be able to see a lot better than the average person.

This development comes thanks to the Ocumetics Bionic Lens. This dynamic lens essentially replaces a person’s natural eye lens. It’s placed into the eye via a saline-filled syringe, after which it unravels itself in under 10 seconds.

 

It may sound painful, but Dr. Garth Webb, the optometrist who invented the Ocumetics Bionic Lens, says that the procedure is identical to cataract surgery and would take just about eight minutes. He adds that people who have the specialized lenses surgically inserted would never get cataracts and that the lenses feel natural and won’t cause headaches or eyestrain.

The Bionic Lens may sound like a fairy tale (or sci-fi dream), but it’s not. It is actually the end result of years and years of research and more than a little funding — so far, the lens has taken nearly a decade to develop and has cost US$3 million.

There is still some way to go before you will be able to buy them. According to the timeline Webb offered in an interview with Eye Design Optometry, human studies were slated to begin in July 2017, with the bionic lenses becoming available to the public in March 2018.

Original source: https://futurism.com/bionic-contacts-goodbye-glasses-hello-vision-thats-3x-better-than-2020/

Is our world a simulation? Why some scientists say it’s more likely than not

October 18, 2017

When Elon Musk isn’t outlining plans to use his massive rocket to leave a decaying Planet Earth and colonize Mars, he sometimes talks about his belief that Earth isn’t even real and we probably live in a computer simulation.

“There’s a billion to one chance we’re living in base reality,” he said at a conference in June.

Musk is just one of the people in Silicon Valley to take a keen interest in the “simulation hypothesis”, which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence. If it sounds a lot like The Matrix, that’s because it is.

According to this week’s New Yorker profile of Y Combinator venture capitalist Sam Altman, there are two tech billionaires secretly engaging scientists to work on breaking us out of the simulation. But what does this mean? And what evidence is there that we are, in fact, living in The Matrix?

One popular argument for the simulation hypothesis, outside of acid trips, came from Oxford University’s Nick Bostrom in 2003 (although the idea dates back as far as the 17th-century philosopher René Descartes). In a paper titled “Are You Living in a Computer Simulation?”, Bostrom suggested that members of an advanced “posthuman” civilization with vast computing power might choose to run simulations of their ancestors in the universe.

This argument is extrapolated from observing current trends in technology, including the rise of virtual reality and efforts to map the human brain.

If we believe that there is nothing supernatural about what causes consciousness and it’s merely the product of a very complex architecture in the human brain, we’ll be able to reproduce it. “Soon there will be nothing technical standing in the way to making machines that have their own consciousness,” said Rich Terrile, a scientist at Nasa’s Jet Propulsion Laboratory.

At the same time, videogames are becoming more and more sophisticated and in the future we’ll be able to have simulations of conscious entities inside them.

“Forty years ago we had Pong – two rectangles and a dot. That’s where we were. Now 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality,” said Musk. “If you assume any rate of improvement at all, then the games will become indistinguishable from reality.”

It’s a view shared by Terrile. “If one progresses at the current rate of technology a few decades into the future, very quickly we will be a society where there are artificial entities living in simulations that are much more abundant than human beings.”

If there are many more simulated minds than organic ones, then the chances of us being among the real minds start to look more and more unlikely. As Terrile puts it: “If in the future there are more digital people living in simulated environments than there are today, then what is to say we are not part of that already?”
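
At its core, that argument is a counting exercise: if simulated minds vastly outnumber biological ones, a randomly chosen mind is almost certainly simulated. The head counts below are arbitrary assumptions used only to show the arithmetic.

```python
# Bostrom-style counting argument with made-up head counts.
biological_minds = 1e11            # assumed: roughly every human who has ever lived
sims_per_civilization = 1_000      # assumed: ancestor simulations a "posthuman" civilization runs
simulated_minds = biological_minds * sims_per_civilization

p_biological = biological_minds / (biological_minds + simulated_minds)
print(f"chance a randomly chosen mind is biological: {p_biological:.2%}")   # ~0.10% under these assumptions
```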

Reasons to believe that the universe is a simulation include the fact that it behaves mathematically and is broken up into pieces (subatomic particles) like a pixelated video game. “Even things that we think of as continuous – time, energy, space, volume – all have a finite limit to their size. If that’s the case, then our universe is both computable and finite. Those properties allow the universe to be simulated,” Terrile said.

“Quite frankly, if we are not living in a simulation, it is an extraordinarily unlikely circumstance,” he added.

So who has created this simulation? “Our future selves,” said Terrile.

Not everyone is so convinced by the hypothesis. “Is it logically possible that we are in a simulation? Yes. Are we probably in a simulation? I would say no,” said Max Tegmark, a professor of physics at MIT.

“In order to make the argument in the first place, we need to know what the fundamental laws of physics are where the simulations are being made. And if we are in a simulation then we have no clue what the laws of physics are. What I teach at MIT would be the simulated laws of physics,” he said.

Harvard theoretical physicist Lisa Randall is even more skeptical. “I don’t see that there’s really an argument for it,” she said. “There’s no real evidence.”

“It’s also a lot of hubris to think we would be what ended up being simulated.”

Terrile believes that recognizing that we are probably living in a simulation is as game-changing as Copernicus realizing that the Earth was not the center of the universe. “It was such a profound idea that it wasn’t even thought of as an assumption,” he said.

Before Copernicus, scientists had tried to explain the peculiar behaviour of the planets’ motion with complex mathematical models. “When they dropped the assumption, everything else became much simpler to understand.”

That we might be in a simulation is, Terrile argues, a simpler explanation for our existence than the idea that we are the first generation to rise up from primordial ooze and evolve into molecules, biology and eventually intelligence and self-awareness. The simulation hypothesis also accounts for peculiarities in quantum mechanics, particularly the measurement problem, whereby things only become defined when they are observed.

“For decades it’s been a problem. Scientists have bent over backwards to eliminate the idea that we need a conscious observer. Maybe the real solution is you do need a conscious entity like a conscious player of a video game,” he said.

For Tegmark, this doesn’t make sense. “We have a lot of problems in physics and we can’t blame our failure to solve them on simulation.”

How can the hypothesis be put to the test? On one hand, neuroscientists and artificial intelligence researchers can check whether it’s possible to simulate the human mind. So far, machines have proven to be good at playing chess and Go and putting captions on images. But can a machine achieve consciousness? We don’t know.

On the other hand, scientists can look for hallmarks of simulation. “Suppose someone is simulating our universe – it would be very tempting to cut corners in ways that make the simulation cheaper to run. You could look for evidence of that in an experiment,” said Tegmark.

For Terrile, the simulation hypothesis has “beautiful and profound” implications.

First, it provides a scientific basis for some kind of afterlife or larger domain of reality above our world. “You don’t need a miracle, faith or anything special to believe it. It comes naturally out of the laws of physics,” he said.

Second, it means we will soon have the same ability to create our own simulations.

“We will have the power of mind and matter to be able to create whatever we want and occupy those worlds.”

Original source: https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix#img-1

Video

“The Looking Planet” – by Eric Law Anderson

August 06, 2017

Enjoy this CGI 3D animated short film, winner of more than 50 film festival jury and audience awards, including Best Short Film, Best Sci-Fi Film, Best Animated Film, Best Production Design, Best Visual Effects, and Best Sound Design. During the construction of the universe, a young member of the Cosmos Corps of Engineers decides to break some fundamental laws in the name of self-expression.

 

From flying warehouses to robot toilets – five technologies that could shape the future

August 06, 2017

Flying warehouses, robot receptionists, smart toilets… do such innovations sound like science fiction or part of a possible reality? Technology has been evolving at such a rapid pace that, in the near future, our world may well resemble that portrayed in futuristic movies, such as Blade Runner, with intelligent robots and technologies all around us.

But what technologies will actually make a difference? Based on recent advancements and current trends, here are five innovations that really could shape the future.

1. Smart homes

Many typical household items can already connect to the internet and provide data. But much smart home technology isn’t currently that smart. A smart meter just lets people see how energy is being used, while a smart TV simply combines television with internet access. Similarly, smart lighting, remote door locks or smart heating controls allow for programming via a mobile device, simply moving the point of control from a wall panel to the palm of your hand.

But technology is rapidly moving towards a point where it can use the data and connectivity to act on the user’s behalf. To really make a difference, technology needs to fade more into the background – imagine a washing machine that recognises what clothes you have put into it, for example, and automatically selects the right programme, or even warns you that you have put in items that you don’t want to wash together. Here it is important to better understand people’s everyday activities, motivations and interactions with smart objects to avoid them becoming uninvited guests at home.

Such technologies could even work for the benefit of all. The BBC reports, for example, that energy providers will “reduce costs for someone who allows their washing machine to be turned on by the internet to maximise use of cheap solar power on a sunny afternoon” or “to have their freezers switched off for a few minutes to smooth demand at peak times”.
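
That kind of behind-the-scenes decision ultimately reduces to a simple rule evaluated against live data. Here is a minimal sketch, assuming a hypothetical price feed and appliance interface rather than any real vendor’s API.

```python
# Minimal sketch of a price-aware appliance rule. The tariff threshold, the
# price feed, and the SmartPlug class are all hypothetical, not a vendor API.

CHEAP_TARIFF = 0.05   # assumed threshold, currency units per kWh

def current_price() -> float:
    """Placeholder for a live electricity-price feed (cheap when solar is plentiful)."""
    return 0.03

class SmartPlug:
    def __init__(self, name: str):
        self.name = name
    def turn_on(self) -> None:
        print(f"{self.name}: starting cycle")

def run_if_cheap(appliance: SmartPlug, loaded: bool) -> None:
    # Start only when the machine is actually loaded and the spot price has
    # dropped below the threshold, e.g. on a sunny afternoon.
    if loaded and current_price() < CHEAP_TARIFF:
        appliance.turn_on()

run_if_cheap(SmartPlug("washing machine"), loaded=True)
```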

A major concern in this area is security. Internet-connected devices can be and are being hacked – just recall the recent ransomware attack. Our home is, after all, the place where we should feel most secure. For them to become widespread, these technologies will have to keep it that way.

2. Virtual secretaries

While secretaries play a very crucial role in businesses, they often spend large parts of their working day with time-consuming but relatively trivial tasks that could be automated. Consider the organisation of a “simple” meeting – you have to find the right people to take part (likely across business boundaries) and then identify when they are all available. It’s no mean feat.

Tools such as doodle.com, which compare people’s availability to find the best meeting time, can help. But they ultimately rely on those involved actively participating. They also only become useful once the right people have already been identified.

By using context information (organisational charts, location awareness from mobile devices, and calendars), identifying the right people and the right time for a given event becomes a technical optimisation problem – one explored by the EU-funded inContext project a decade ago. At that stage, technology for gathering context information was far less advanced – smartphones were still an oddity, and data mining and processing were not where they are today. Over the coming years, however, we could see machines doing far more of the day-to-day planning in businesses.
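
Stripped of the context-gathering, the scheduling core is an intersection problem over attendees’ free time. A minimal sketch with hypothetical calendar data (not the inContext project’s code):

```python
# Find the earliest slot in which every required attendee is free.
# The calendars are hypothetical hour slots; a real assistant would pull
# availability from calendar services and organisational charts instead.
free_slots = {
    "alice":   {9, 10, 11, 14, 15},
    "bob":     {10, 11, 13, 15},
    "charlie": {11, 12, 15, 16},
}

def earliest_common_slot(people):
    common = set.intersection(*(free_slots[p] for p in people))
    return min(common) if common else None

print(earliest_common_slot(["alice", "bob", "charlie"]))   # -> 11 (i.e. 11:00)
```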

Indeed, the role of virtual assistants may go well beyond scheduling meetings and organising people’s diaries – they may help project managers to assemble the right team and allocate them to the right tasks, so that every job is conducted efficiently.

‘She is expecting you in the main boardroom …’ Shutterstock

On the downside, much of the required context information is relatively privacy-invasive – but then the younger generation is already happily sharing their every minute on Twitter and Snapchat and such concerns may become less significant over time. And where should we draw the line? Do we fully embrace the “rise of the machines” and automate as much as possible, or retain real people in their daily roles and only use robots to perform the really trivial tasks that no one wants to do? This question will need to be answered – and soon.

3. AI doctors

We are living in exciting times, with advancements in medicine and AI technology shaping the future of healthcare delivery around the world.

But how would you feel about receiving a diagnosis from an artificial intelligence? A private company called Babylon Health is already running a trial with five London boroughs which encourages consultations with a chatbot for non-emergency calls. The artificial intelligence was trained using massive amounts of patient data in order to advise users to go to the emergency department of a hospital, visit a pharmacy or stay at home.

The company claims that it will soon be able to develop a system that could potentially outperform doctors and nurses in making diagnoses. In countries where there is a shortage of medical staff, this could significantly improve health provision, enabling doctors to concentrate on providing treatment rather than spending too much time on making a diagnosis. This could significantly redefine their clinical role and work practices.

Elsewhere, IBM Watson, the CloudMedx platform and Deep Genomics technology can provide clinicians with insights into patients’ data and existing treatments, help them to make more informed decisions, and assist in developing new treatments.

An increasing number of mobile apps and self-tracking technologies, such as Fitbit, Jawbone Up and Withings, can now facilitate the collection of patients’ behaviours, treatment status and activities. It is not hard to imagine that even our toilets will soon become smarter and be used to examine people’s urine and faeces, providing real-time risk assessment for certain diseases.

Your robodoctor will see you now. Shutterstock

Nevertheless, to enable the widespread adoption of AI technology in healthcare, many legitimate concerns must be addressed. Already, usability, health literacy, privacy, security, content quality and trust issues have been reported with many of these applications.

There is also a lack of adherence to clinical guidelines, ethical concerns, and mismatched expectations regarding the collection, communication, use, and storage of patients’ data. In addition, the limitations of the technology need to be made clear in order to avoid misinterpretations that could potentially harm patients.

If AI systems can address these challenges and focus on understanding and enhancing existing care practices and the doctor-patient relationship, we can expect to see more and more successful stories of data-driven healthcare initiatives.

4. Care robots

Will we have robots answering the door in homes? Possibly. At most people’s homes? Even if they are reasonably priced, probably not. What distinguishes successful smart technologies from unsuccessful ones is how useful they are. And how useful they are depends on the context. For most, it’s probably not that useful to have a robot answering the door. But imagine how helpful a robot receptionist could be in places where there is shortage of staff – in care homes for the elderly, for example.

Robots equipped with AI such as voice and face recognition could interact with visitors to check who they wish to visit and whether they are allowed access to the care home. After verifying that, robots with routing algorithms could guide the visitor towards the person they wish to visit. This could potentially enable staff to spend more quality time with the elderly, improving their standard of living.
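
The routing part is standard shortest-path search over a map of the building. Below is a minimal breadth-first-search sketch on a hypothetical floor plan, not any particular robot’s navigation stack.

```python
from collections import deque

# Hypothetical floor plan for a care home: rooms connected by corridors.
floor_plan = {
    "lobby": ["corridor A"],
    "corridor A": ["lobby", "room 101", "corridor B"],
    "corridor B": ["corridor A", "room 102", "lounge"],
    "room 101": ["corridor A"],
    "room 102": ["corridor B"],
    "lounge": ["corridor B"],
}

def route(start, goal):
    """Breadth-first search: shortest sequence of rooms from start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in floor_plan[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("lobby", "room 102"))   # ['lobby', 'corridor A', 'corridor B', 'room 102']
```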

The AI required still needs further advancement in order to operate in completely uncontrolled environments. But recent results are positive. Facebook’s DeepFace software was able to match faces with 97.25% accuracy when tested on a standard database used by researchers to study the problem of unconstrained face recognition. The software is based on deep learning, using an artificial neural network composed of millions of neuronal connections that automatically acquires knowledge from data.

5. Flying warehouses and self-driving cars

The new postman. Shutterstock

Self-driving vehicles are arguably one of the most astonishing technologies currently being investigated. Despite the fact that they can make mistakes, they may actually be safer than human drivers. That is partly because they can use a multitude of sensors to gather data about the world, including 360-degree views around the car.

Moreover, they could potentially communicate with each other to avoid accidents and traffic jams. More than being an asset to the general public, self-driving cars are likely to become particularly useful for delivery companies, enabling them to save costs and make faster, more efficient deliveries.

Advances are still needed in order to enable the widespread use of such vehicles, not only to improve their ability to drive completely autonomously on busy roads, but also to ensure a proper legal framework is in place. Nevertheless, car manufacturers are engaging in a race against time to see who will be the first to provide a self-driving car to the masses. It is believed that the first fully autonomous car could become available as early as the next decade.

The advances in this area are unlikely to stop at self-driving cars or trucks. Amazon has recently filed a patent for flying warehouses which could visit places where the demand for certain products is expected to boom. The flying warehouses would then send out autonomous drones to make deliveries. It is unknown whether Amazon will really go ahead with developing such projects, but tests with autonomous drones are already successfully being carried out.

Thanks to technology, the future is here – we just need to think hard about how best to shape it.

This article was originally published by:
https://theconversation.com/from-flying-warehouses-to-robot-toilets-five-technologies-that-could-shape-the-future-81519