10 Things Children Born in 2018 Will Probably Never Experience

February 01, 2018

It’s All Coming Back to Me Now

2017 was a year filled with nostalgia thanks to a number of pop culture properties with ties to the past.

We got another official Alien film, and Blade Runner came back with new visuals to dazzle us. Meanwhile, “Stranger Things” hearkened back to the Spielbergian fantasy that wowed so many children of the ’80s, and “Twin Peaks” revived Agent Cooper so he could unravel yet another impenetrable mystery from the enigmatic mind of David Lynch.

As these films and TV shows remind us, a lot can change over the course of a few decades, and the experiences of one generation can be far different from those of the generations that follow close behind, thanks to advances in technology.

While the “Stranger Things” kids’ phone usage reminded 30-somethings of their own pre-mobile adolescences, children born in 2018 will probably never know the feeling of being tethered to a landline. A trip to the local megaplex to catch Blade Runner 2049 may have stirred up adults’ memories of seeing the original, but children born this year may never know what it’s like to watch a film on a smaller screen with a sound system that doesn’t rattle the brain.

Technology is currently advancing faster than ever before, so what else will kids born today only read about in books or, more likely, on computer screens? Here’s a list of the top 10 things that children born in 2018 will likely never experience.

Long, Boring Travel

Mobile devices and in-flight entertainment systems have made it pretty easy to stay distracted during the course of a long trip. However, aside from the Concorde, which was decommissioned in 2003, humanity hasn’t done nearly as much to increase the speed of air travel for international jet-setters. Beyond sparsely utilized bullet trains, even the speed of our ground transportation has remained fairly limited.

However, recent developments in transportation will likely speed up the travel process, meaning today’s kids may never know the pain of seemingly endless flights and road trips.

Supersonic planes are making a comeback and could ferry passengers “across the pond” in as little as 3.5 hours. While these aircraft could certainly make travel faster for a small subset of travelers, physical and cost limitations will likely prevent them from reaching the mainstream.

However, hyperloop technology could prove to be an affordable way for travelers to shave significant time off their itineraries.

Already, these super-fast systems are capable of traveling at speeds of up to 387 km/h (240 mph). If proposed routes come to fruition, they could significantly cut travel times between major cities. For example, a trip from New York to Washington, D.C., could take just 30 minutes as opposed to the current five hours.
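
As a rough sanity check on those numbers, the time savings follow directly from distance divided by average speed. In the sketch below, the ~360 km route length and the driving and cruise speeds are illustrative assumptions, not figures from any actual proposal.

```python
def travel_minutes(distance_km, avg_speed_kmh):
    """Point-to-point travel time, ignoring acceleration, stops, and boarding."""
    return distance_km / avg_speed_kmh * 60

ROUTE_KM = 360  # assumed New York-to-Washington, D.C. route length

print(travel_minutes(ROUTE_KM, 70))    # ~309 min: roughly today's five-hour drive
print(travel_minutes(ROUTE_KM, 387))   # ~56 min at the demonstrated test speed
print(travel_minutes(ROUTE_KM, 720))   # ~30 min: the average speed the 30-minute claim implies
```

In other words, hitting the 30-minute mark means sustaining average speeds well beyond what has been demonstrated so far, which is exactly what the proposed production routes are counting on.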

Driver’s Licenses

Obtaining a driver’s license is currently a rite of passage for teenagers as they make the transition from childhood to adulthood. By the time today’s newborns are 16, self-driving cars may have already put an end to this unofficial ritual by completely removing the need for human operators of motor vehicles.

According to the Centers for Disease Control and Prevention (CDC), an average of six teens between the ages of 16 and 19 died every day in 2015 from injuries sustained in motor vehicle collisions. Since the vast majority of accidents are caused by human error, removing humans from the equation could help save the lives of people of all ages, so autonomous cars are a serious priority for many.

Elon Musk, CEO of Tesla, is confident that his company, which currently manufactures electric, semi-autonomous cars, will produce fully autonomous vehicles within the next two years, and several ride-hailing services are already testing self-driving vehicles.

Biology’s Monopoly on Intelligence

Self-driving cars are just a single example of innovations made possible by the advancement of artificial intelligence (AI).

Today, we have AI systems that rival or even surpass human experts at specific tasks, such as playing chess or sorting recyclables. However, experts predict that conscious AI systems that rival human intelligence could be just decades away.

Advanced robots like Hanson Robotics’ Sophia are already blurring the line between humanity and machines. The next few decades will continue to push boundaries as we inch closer and closer to the singularity.

Children born in 2018 may never know what it’s like to join the workforce or go to college at a time when humans are the smartest entities on the planet.

Language Barriers

Another promising use for AI is communication, and eventually, technology could end the language barrier on Earth.

Communication tools such as Skype have already incorporated instantaneous translation capabilities that allow speakers of a handful of languages to converse freely in real time, and Google has built translation features into its new headphones.

Other companies, such as Waverly Labs, are also working on perfecting the technology that will eventually rival the ability of the Babel fish, an alien species found in the book “The Hitchhiker’s Guide to the Galaxy” that can instantly translate alien languages for its host.

Children born in 2018 may find themselves growing up in a world in which anyone can talk to anyone, and the idea of a “foreign” language will seem, well, completely foreign.

Humanity as a Single-Planet Species

Technology that improves human communication could radically impact our world, but eventually, we may need to find a way to communicate with extraterrestrial species. Granted, the worlds we reach in the lifetimes of anyone born this year aren’t likely to contain intelligent life, but the first milestones on the path to such a future are likely to be reached in the next few decades.

When he’s not ushering in the future of autonomous transportation, Musk is pushing his space exploration company SpaceX to develop the technology to put humans on Mars. He thinks he’ll be able to get a crew to the Red Planet by 2024, so today’s children may have no memory of a time before humanity’s cosmic footprint extended beyond a single planet.

Quiet Spaces

Overpopulation is one of the factors that experts point to when they discuss the need for humanity to spread into the cosmos. Urban sprawl has been an issue on Earth for decades, bringing about continued deforestation and the elimination of farming space.

A less-discussed problem caused by the continuous spread of urbanization, however, is the increase in noise pollution.

Experts are concerned that noise is quickly becoming the next great public health crisis. Based on data collected by the United Nations, an estimated 84 percent of the world’s 10.8 billion people will live in cities by 2100, surrounded by a smorgasbord of sound.

This decline in the number of people who live in areas largely free from noise pollution means many of the babies born today will never know what it’s like to enjoy the sound of silence.

World Hunger

Urbanization may limit the space available for traditional farming, but thanks to innovations in agriculture, food shortages may soon become a relic of the past.

Urban farming is quickly developing into a major industry that is bringing fresh produce and even fish to many markets previously considered food deserts (areas cut off from access to fresh, unprocessed foods).

Vertical farming will bring greater access to underserved areas, making it more possible than ever to end hunger in urban areas. Meanwhile, companies are developing innovative ways to reduce food waste, such as by transforming food scraps into sweets or using coffee grounds to grow mushrooms.

If these innovations take hold, children born in 2018 could grow up in a world in which every person on Earth has access to all the food they need to live a healthy, happy life.

Paper Currency

The advent of credit cards may have been the first major blow to the utilization of cash, but it wasn’t the last. Today, paper currency must contend with PayPal, Venmo, Apple Pay, and a slew of other payment options.

By the time children born in 2018 are old enough to earn a paycheck, they will have access to even more payment options, and cash could be completely phased out.

In the race to dethrone paper currency, cryptocurrencies are the frontrunner. Blockchain technology is adding much-needed security to financial transactions, and while the crypto market is currently volatile, experts remain optimistic about its potential to permanently disrupt finance.
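
The security claim rests on how a blockchain links records together: each block’s hash covers the previous block’s hash, so tampering with any past transaction breaks every link that follows. Below is a minimal toy sketch of that chaining; it illustrates the idea and is not any real cryptocurrency’s block format.

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """A toy block whose hash covers the previous block's hash,
    so altering an earlier block invalidates every later one."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(["Alice pays Bob 5"], prev_hash="0" * 64)
second = make_block(["Bob pays Carol 2"], prev_hash=genesis["hash"])
print(second["prev_hash"] == genesis["hash"])  # True while the chain is intact
```

Real systems layer consensus rules such as proof-of-work on top of this linkage, which is where much of the security actually comes from.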

Digital Insecurity

Today, digital security is a major subject of concern. Hacking can occur on an international level, and with the growth of the Internet of Things (IoT), even household appliances can be points of weakness in the defenses guarding sensitive personal information.

Experts are feverishly trying to keep security development on pace with the ubiquity of digitalization, and technological advances such as biometrics and RFID tech are helping. Unfortunately, these defenses still rely largely on typical encryption software, which is breakable.

The advent of the quantum computer will exponentially increase computing power, and better security systems will follow suit. By the time children born in 2018 reach adulthood, high-speed quantum encryption could ensure that the digital world they navigate is virtually unhackable.

Single-Screen Computing

While most of our digital devices currently make use of a typical flat screen, tomorrow’s user interfaces will be far more dynamic, and children born in 2018 may not remember a time when they were limited to a single screen and a keyboard.

The development of virtual reality (VR) and augmented reality (AR) has shifted the paradigm, and as these technologies continue to advance, we will increasingly see new capabilities incorporated into our computing experience.

Gesture recognition, language processing, and other technologies will allow for a more holistic interaction with our devices, and eventually, we may find ourselves interacting with systems akin to what we saw in Minority Report.


We’re Living in the Last Era Before Artificial General Intelligence

January 05, 2018

When we thought about preparing for our future, we used to imagine going to a good college and moving for a good job, one that would put us on a relatively good career trajectory toward a stable life in which we would prosper in a free-market meritocracy, competing against fellow humans.

However, over the course of the next few decades, Homo sapiens, including Generation Z and Generation Alpha, may be among the last people to grow up in a pre-automation, pre-AGI world.

Considering the exponential levels of technological progress expected in the next 30 years, that’s hard to put into words or even historical context, namely because there’s no historical precedent and no vocabulary to describe what next-gen AI might become.

Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.
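
That 1,000× figure isn’t pulled from thin air; it follows from Kurzweil’s own assumption that the rate of progress doubles roughly every decade. The short sketch below simply integrates an exponentially growing rate over each century; the ten-year doubling period is his modeling assumption, not a measured constant.

```python
import math

def century_progress(start_year, end_year, baseline_year=2000, doubling_years=10):
    """Cumulative progress in 'year-2000-equivalent years', assuming the rate of
    progress doubles every `doubling_years` years (Kurzweil's accelerating-returns
    assumption, used here purely for illustration)."""
    k = math.log(2) / doubling_years  # continuous exponential growth constant
    return (math.exp(k * (end_year - baseline_year)) -
            math.exp(k * (start_year - baseline_year))) / k

c20 = century_progress(1900, 2000)  # about 14 "year-2000" years of progress
c21 = century_progress(2000, 2100)  # about 14,760 "year-2000" years of progress
print(f"ratio: {c21 / c20:.0f}x")   # roughly 1,000x
```

Under that assumption, the 21st century packs in roughly a thousand 20th centuries’ worth of change, which is the scale of shift the rest of this piece assumes.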

Pre-Singularity Years

In the years before wide-scale automation and sophisticated AI, we live believing that things are changing fast. Retail is shifting to e-commerce and new modes of buying and convenience, self-driving and electric cars are coming, tech firms in specific verticals still rule the planet, and countries still vie for dominance with outdated military traditions, their own political bubbles, and entrenched modes of hierarchy, authority, and economic privilege.

We live in a world where AI is gaining momentum in popular thought but in practice is still at the level of ANI: Artificial Narrow Intelligence. Rudimentary NLP, computer vision, robotic movement, and so on. We’re beginning to interact with personal assistants via smart speakers, but not in any fluid way. The interactions are repetitive, like Googling the same thing on different days.

In this reality, we think about AI in terms useful to us, such as trying to teach machines to learn so that they can do the things humans do and, in turn, help humans. A kind of machine learning that’s more about coding and algorithms than any actual artificial intelligence. Our world here is starting to shift into something else: the internet is maturing, software is getting smarter on the cloud, data is being collected, but no explosion takes place, even as more people on the planet get access to the Web.

When Everything Changes

Between 2014 and 2021, an entire 20th century’s worth of progress will have occurred, and then something strange happens: progress begins to accelerate until more of it is being made in shorter and shorter periods of time. We have to remember that the fruits of this transformation won’t belong just to Facebook, Google, China, or the U.S.; they will simply be the new normal for everyone.

Many believe that sometime between 2025 and 2050, AI will become natively self-learning, adopting an artificial general intelligence that completely changes the game.

After that point, AI not only outperforms human beings at tasks, problem-solving, and even human constructs such as creativity, emotional intelligence, manipulating complex environments, and predicting the future; it also reaches artificial superintelligence relatively quickly thereafter.

We Live in Anticipation of the Singularity

As such, in 2017–18, we might be living in the last “human” era. Here we think of AI as “augmenting” our world; we think of smartphones as miniaturized supercomputers and the cloud as an extension of our neocortex, in a self-serving existence where concepts such as wealth, consumption, and human quality of life trump all other considerations.

Here we view computers as man-made tools, robots as slaves, and AI as a kind of “software magic” obliged to do our bidding.

Whatever the bottlenecks of carbon-based life forms might be, silicon-based AGI may have many advantages. Machines that can self-learn, self-replicate, and program themselves might come into being partly by copying how the human brain works, but as the difference between AlphaGo and AlphaGo Zero suggests, the real breakthrough might be made from a blank slate.

While humans appear destined to create AGI, it doesn’t follow that AGI will think, behave, or have motivations like people, cultures, or even our models of what superintelligence might be like.

Artificial Intelligence with Creative Agency

For human beings, the Automation Economy only arrives after AGI has come into being. Such an AGI would be able to program robots, facilitate smart cities, and help humans govern themselves in ways that are impossible today.

AGI could also manipulate and advance STEM fields such as green tech, biotech, 3D printing, nanotech, predictive algorithms, and quantum physics, likely in ways that humans up to that point could only achieve relatively slowly.

Everything pre-singularity would feel like ancient history, a past far more remote than the time before the invention of computers or the internet. AGI could impact literally everything, as we are already seeing with primitive machine intelligence systems.

In such a world, AGI would not only be able to self-learn and surpass all of the human knowledge and data collected up to that point, but also create its own fields, set its own goals, and have its own interests (beyond what humans would likely be able to recognize). We might term this Artificially Intelligent Creative Agency (AICA).

AI Not as a Slave, but as a Legacy

Such a being would indeed feel like a God to us. Not a God that created man, but an entity that humanity made, just a few thousand years after we were storytellers and explorers, and then builders and traders.

A human brain consists of 86 billion neurons linked by trillions of synapses, but it’s not networked well to other nodes and external reality. It has to “experience” them in systems of relatedness and remain in relative isolation from them. AICA would not have this constraint. It would be networked to all IoT devices and able to hack into any human system, network, or quantum computer. AICA would not be led by instincts of possession, mating, aggression, or the other emotive agencies of the mammalian brain. Whatever ethics, values, and philosophical constraints it might have could be refined over centuries, not the mere months and years of an ordinary human lifetime.

AGI might not be humanity’s last invention, but symbolically, it would usher in the fourth industrial revolution and then some. There would be many grades and instances of limited self-learning in deep learning algorithms, but AGI would represent a different quality. Likely it would instigate a self-aware separation between humanity and the descendant order of AI, whatever that might be.

High-Speed Quantum Evolution to AGI

The Years Before the Singularity

The road from ANI to AGI to ASI to some speculative AICA is not just a journey from narrow to general to super intelligence, but an evolutionary corridor for humanity across a distance of progress that could also be symbiotic. It’s not clear how this might work, but some human beings, to protect their species, might undertake “alterations.” Whatever these cybernetic or genetic changes might be, and however invasive, AI is surely going to be there every step of the way.

In the corporate race to AI, governments such as China’s and the U.S.’s also want to “own” and monetize the technology for their own purposes. Fleets of cars and semi-intelligent robots will make certain individuals and companies very rich. There might be no human revolution over wealth inequality until AGI, because, comparatively speaking, the conditions under which AGI arises may be closer than we assume.

We Were Here

If the calculations per second (cps) of the human brain are static, at around 10¹⁶, or 10 quadrillion cps, how much does it take for AI to replicate some kind of AGI? Certainly it’s not just processing power, exponentially faster supercomputers, quantum computing, or improved deep learning algorithms, but a combination of all of these and perhaps many other factors as well. In late 2017, AlphaGo Zero “taught itself” Go without using human data, instead generating its own data by playing games against itself.
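
To give the 10¹⁶ figure some scale, here is a back-of-the-envelope sketch of when that much computing power becomes cheap, assuming price-performance keeps doubling every couple of years. Both the doubling period and the starting cost figure are illustrative assumptions, not measurements.

```python
import math

BRAIN_CPS = 1e16  # the article's estimate of the brain's calculations per second

def years_until_affordable(cps_per_dollar_now, budget_dollars=1000, doubling_years=2.0):
    """Years until `budget_dollars` buys BRAIN_CPS of compute, assuming
    price-performance doubles every `doubling_years` years (a Moore's-law-style
    assumption used only for illustration)."""
    target_cps_per_dollar = BRAIN_CPS / budget_dollars
    if cps_per_dollar_now >= target_cps_per_dollar:
        return 0.0
    doublings = math.log2(target_cps_per_dollar / cps_per_dollar_now)
    return doublings * doubling_years

# Assumed starting point: roughly 1e10 calculations per second per dollar.
print(years_until_affordable(1e10))  # ~10 doublings, or about 20 years at this pace
```

As the paragraph says, raw processing power is only one ingredient; the sketch just shows how quickly the hardware side of the question moves under exponential assumptions.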

Living in a world that can better imagine AGI will mean planning ahead, not just coping with changes to human systems. In a world where democracy can be hacked, and where one-party socialism is likely the heir apparent to future iterations of artificial intelligence in places where concepts like freedom of speech, human rights, or openness to a diversity of ideas are not practiced in the same way, it’s interesting to imagine the kinds of human-controlled AI systems that might arise before AGI arrives (if it ever arrives at all).

The Human Hybrid Dilemma

Considering our own violent history of annihilating biodiversity, modeling AI by plagiarizing the brain through some kind of whole-brain emulation might not be ethical. While it might mimic and lead to self-awareness, such an AGI might be dangerous, in the same sense that we are a danger to ourselves and to other life forms in the galaxy.

Moore’s Law might have sounded like an impressive analogy for the Singularity in the 1990s, but not today. More people working in the AI field are rightfully skeptical of AGI, yet it’s plausible that even most of them suffer from a linear-versus-exponential bias in their thinking. On the path toward the Singularity, we are still living in slow motion.

We Aren’t Ready for What’s Inevitable

We’re living in the last era before Artificial General Intelligence, and as usual, human civilization appears quite stupid. We don’t even actively know what’s coming.

While our simulations are improving and we’re discovering exoplanets that seem most likely to support life, our ability to predict the future speed of technology is mortifyingly bad. Our understanding of the implications of AGI, and even of machine intelligence, for the planet is poor. Is it because this has never happened in recorded history and represents such a paradigm shift, or could there be another reason?

Amazon can create and monetize patents in a hyper-aggressive business model, and Google, Facebook, Alibaba, and Tencent can fight over AI talent, luring academics into corporate workaholic lifestyles with the ability to name their own salaries, but in 2017, humanity’s vision of the future is still myopic.

We can barely imagine that our prime directive in the universe might not be simply to grow, explore, make babies, and exploit everything in our path. And we certainly can’t imagine a world where intelligent machines aren’t simply our slaves, tools, and algorithms designed to make our lives more pleasurable and convenient.

What If One Country Achieves the Singularity First?

April 27, 2015

Zoltan Istvan is a futurist, author of The Transhumanist Wager, and founder of and presidential candidate for the Transhumanist Party. He writes an occasional column for Motherboard in which he ruminates on the future beyond natural human ability.

The concept of a technological singularity is tough to wrap your mind around. Even experts have differing definitions. Vernor Vinge, responsible for spreading the idea in the 1990s, believes it’s a moment when growing superintelligence renders our human models of understanding obsolete. Google’s Ray Kurzweil says it’s “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Kevin Kelly, founding editor of Wired, says, “Singularity is the point at which all the change in the last million years will be superseded by the change in the next five minutes.” Even Christian theologians have chimed in, sometimes referring to it as “the rapture of the nerds.”

My own definition of the singularity is: the point where a fully functioning human mind radically and exponentially increases its intelligence and possibilities via physically merging with technology.

All these definitions share one basic premise—that technology will speed up the acceleration of intelligence to a point when biological human understanding simply isn’t enough to comprehend what’s happening anymore.

That also makes a technological singularity something quasi-spiritual, since anything beyond understanding evokes mystery. It’s worth noting that even most naysayers and luddites who disdain the singularity concept don’t doubt that the human race is heading towards it.

In March 2015, I published a Motherboard article titled A Global Arms Race to Create a Superintelligent AI is Looming. The article argued a concept I call the AI Imperative, which says that nations should do all they can to develop artificial intelligence, because whichever country produces an AI first will likely end up ruling the world indefinitely, since that AI will be able to control all other technologies and their development on the planet.

The article generated many thoughtful comments on Reddit Futurology, LessWrong, and elsewhere. I tend not to comment on my own articles in an effort to stay out of the way, but I do always carefully read comment sections. One thing the message boards on this story made me think about was the possibility of a “nationalistic” singularity—what might also be called an exclusive, or private singularity.

If you’re a technophile like me, you probably believe the key to reaching the singularity is two-fold: the creation of a superintelligence, and the ability to merge humans with that intelligence. Without both, it’s probably impossible for people to reach it. With both, it’s probably inevitable.

Currently, the technology to merge the human brain with a machine is already in development. In fact, hundreds of thousands of people around the world already have brain implants of some sort, and last year telepathy was performed between researchers in different countries: thoughts were passed from one mind to another using a machine interface, without a word being spoken.

Fast forward 25 years into the future, and some experts like Kurzweil believe we might already be able to upload our entire consciousness into a machine. I tend to agree with him, and I even think it could occur sooner, perhaps in 15 to 20 years’ time.

Here’s the crux: If an AI exclusively belonged to one nation (which is likely to happen), and the technology of merging human brains and machines grows sufficiently (which is also likely to happen), then you could possibly end up with one nation controlling the pathways into the singularity.

As insane as this sounds, it’s possible that the controlling nation could start offering its citizens the opportunity to be uploaded fully into machines, in preparation to enter the singularity. Whether there would then be two distinct entities—one biological and one uploaded—for every human who chooses to do this is a natural question, and one that could only be decided at the time, probably by governments and law. Furthermore, once uploaded, would your digital self be able to interact with your biological self? Would one self be able to help the other? Or would laws force an either-or situation, where uploaded people’s biological selves must remain in cryogenically frozen states or even be eliminated altogether?

No matter how you look at this, it’s bizarre futurist stuff. And it presents a broad array of challenging ethical issues, since some technologists see the singularity as something akin to a totally new reality or even a so-called digital heaven. And to have one nation or government controlling it, or even attempting to limit it exclusively to its populace, seems potentially morally dubious.

For example, what if America created the AI first, then used its superintelligence to pursue a singularity exclusively for Americans?

(Historically, this wouldn’t be that far off from what many Abrahamic world religions advocate for, such as Christianity or Islam. In both religions, only certain types of people get to go to heaven. Those left behind get tortured for eternity. This concept of exclusivity is the single largest reason I became an atheist at 18.)

Worse, what if a government chose only to allow the super wealthy to pursue its doorway to the singularity—to plug directly into its superintelligent AI? Or what if the government only gave access to high-ranked party officials? For example, how would Russia’s Vladimir Putin deal with this type of power? And it is a tremendous power. After all, you’d be connected to a superintelligence and would likely be able to rewrite all the nuclear arms codes in the world, stop dams and power plants from operating, and create a virus to shut down Wi-Fi worldwide, if you wanted.

Of course, given the option, many people would probably choose not to undergo the singularity at all. I suspect many would choose to remain as they are on Earth. However, some of those people might be keen on acquiring the technology for getting to the singularity. They might want to sell that tech and offer paid one-way trips to people who want to experience the singularity. For that matter, individuals or corporations might try to patent it. What you’d be selling is the path to vast amounts of power and immortality.

Such moral leanings, and the idea that some person or group could control, patent, or steal the singularity, ultimately lead us to another imperative: the Singularity Disparity.

The first person or group to experience the singularity will protect and preserve the power and intelligence they’ve acquired in the singularity process—which ultimately means they will do whatever is necessary to lessen the power and intelligence accumulation of the singularity experience for others. That way the original Singularitarians can guarantee their power and existence indefinitely.

In my philosophical novel The Transhumanist Wager, this type of thinking belongs to the Omnipotender, someone who is actively seeking and contending for as much power as possible, and bases their actions on such endeavors.

I’m not trying to argue that any of this is good or bad, moral or immoral. I’m just explaining how this phenomenon of the singularity could likely unfold. Assuming I’m correct, and technology continues to grow rapidly, the person who will become the leading omnipotender on Earth has already been born.

Of course, religions will appreciate that fact, because such a person will fulfill elements of either the Antichrist or the Second Coming of Jesus, which are important to the apocalyptic beliefs of both Christianity and Islam. At least the “End Times” are really here, faith-touters will finally be able to say.

The good news, though, is that maybe a singularity is not an exclusive event. Maybe there can be many singularities.

A singularity is likely to be mostly a consciousness phenomenon. We will be nearly all digital and interconnected with machines, but we will still be able to recognize ourselves, our values, our memories, and our purposes—otherwise I don’t think we’d go through with it. On the cusp of the singularity, our intelligence will begin to grow tremendously. I expect the software of our minds will be able to be rewritten and upgraded almost instantaneously in real time. I also think the hardware we exist through—whatever form of computing it’ll be—will also be able to be reshaped and remade in real time. We’ll learn how to reassemble processors and their particles in the moment, on demand, probably with the same agility and speed we have when thinking about something, such as figuring out a math problem. We’ll understand the rules and think about what we want, and the best answer, strategy, and path will occur to us. We’ll get exceedingly efficient at such things, too. And at some point, we won’t see a difference between matter, energy, judgment, and ourselves.

What’s important here is the likely fact that we won’t care much about what’s left on Earth. In just days or even hours, the singularity will probably render us into some form of energy that can organize and advance itself superintelligently, perhaps into a trillion minds on a million Earths.

If the singularity occurs like this, then, on the surface, there’s little ethically wrong with a national or private singularity, because other nations or groups could implement their own in time. However, the larger issue is: How would people on Earth protect themselves from someone or some group in the singularity who decides the Earth and its inhabitants aren’t worth keeping around, or worse, wants to enslave everyone on Earth? There’s no easy answer to this, but the question itself makes me frown upon the singularity idea, in exactly the same way I frown upon an omnipotent God and heaven. I don’t like any other single entity or group having that much possible power over another.