Canadian province trials basic income for thousands of residents

January 05, 2018

Canada is testing a basic income to discover what impact the policy has on unemployed people and those on low incomes.

The province of Ontario is giving 4,000 residents monthly payments worth up to C$17,000 a year and assessing how the money affects their health, wellbeing, earnings and productivity.

It is among a number of regions and countries across the globe that are now piloting the scheme, which sees residents given a certain amount of money each month regardless of whether or not they are in work.

Although it is too early for the Ontario pilot to deliver clear results, some of those involved have already reported a significant change.

One recipient, Tim Button, said the monthly payments were making a “huge difference” to his life. He worked as a security guard until a fall from a roof left him unable to continue in the job.

“It takes me out of depression,” he told the Associated Press. “I feel more sociable.”

The basic income payments have boosted his income by almost 60 per cent and have allowed him to make plans to visit his family for Christmas for the first time in years. He has also been able to buy healthier food, see a dentist and look into taking an educational course to help him find work.

Under the Ontario experiment, unemployed people and those on low incomes can receive up to C$17,000 (£9,900) a year, with the payment reduced by 50 cents for every dollar earned at work, so there is still an incentive to take a job. Couples are entitled to up to C$24,000 (£13,400).
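Under this design, each dollar earned reduces the benefit by 50 cents, so working always raises total income. A minimal sketch of that arithmetic, using the annual figures quoted in this article; the function name and the zero floor are this sketch's assumptions, not the pilot's published rules:

```python
def ontario_benefit(earned_income: float, couple: bool = False) -> float:
    """Annual payment under a 50-cent-per-dollar claw-back.

    Assumes the benefit equals the maximum payment quoted in the
    article minus half of earned income, floored at zero.
    """
    max_payment = 24_000 if couple else 17_000
    return max(0.0, max_payment - 0.5 * earned_income)

# A single recipient earning C$10,000 at work keeps all wages plus a
# reduced benefit of 17,000 - 5,000 = C$12,000, for C$22,000 in total.
total = 10_000 + ontario_benefit(10_000)
```

Because total income always rises with earnings, the design avoids the "welfare cliff" in which taking a job leaves a recipient worse off.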

If the trial proves successful, the scheme could be expanded to more of the province’s 14.2 million residents and may inspire more regions of Canada and other nations to adopt the policy.

Support for a basic income has grown in recent years, fuelled in part by fears about the impact that new technology will have on jobs. As machines and robots are able to complete an increasing number of tasks, attention has turned to how people will live when there are not enough jobs to go round.

Ontario’s Premier, Kathleen Wynne, said this was a major factor in the decision to trial a basic income in the province.

She said: “I see it on a daily basis. I go into a factory and the floor plant manager can tell me where there were 20 people and there is one machine. We need to understand what it might look like if there is, in fact, the labour disruption that some economists are predicting.”

Ontario officials have found that many people are reluctant to sign up to the scheme, fearing there is a catch or that they will be left without money once the pilot finishes.

Many of those who are receiving payments, however, say their lives have already been changed for the better.

Dave Cherkewski, 46, said the extra C$750 (£436) a month he receives has helped him to cope with the mental illness that has kept him out of work since 2002.

“I’ve never been better after 14 years of living in poverty,” he said.

He hopes to soon find work helping other people with mental health challenges.

He said: “With basic income I will be able to clarify my dream and actually make it a reality, because I can focus all my effort on that and not worry about, ‘Well, I need to pay my $520 rent, I need to pay my $50 cellphone, I need to eat and do other things’.”

Finland is also trialling a basic income, as are the state of Hawaii, Oakland in California and the Dutch city of Utrecht.

Will the coming robot nanny era turn us into technophiles?

November 14, 2016


Robots intrigue us. We all like them. But most of us don’t love them. That may dramatically change over the next 10 years as the “robot nanny” makes its way into our households.

In as little as a decade, affordable robots that can bottle-feed babies, change diapers and put a child to sleep might be here. The human-machine bond that a new generation of kids grows up with may be unbreakable. We may end up literally loving our machines almost like we do our mothers and fathers.

I’ve already seen some of this bonding in action. I have a four-foot interactive Meccanoid robot aboard my Immortality Bus, which I’ve occasionally used for my presidential campaign. The robot can do about 1,000 functions, including basic interaction with people, like talking, answering questions and making wisecracks. When my five-year-old rides with me on the bus, she adores it. After being introduced to it, she obsessively wanted to watch Inspector Gadget videos and read books on robots.

My two daughters (the other one is two years old) have always been around technology, and both could navigate YouTube videos on an iPhone by the time they were 12 months old. Yet, while my kids love the iPhone and want to use it regularly, it doesn’t bond them to technology in a maternal sense the way the Meccanoid robot does. More importantly, the smartphone doesn’t bond them to technology in an anthropomorphic sense — where one gives technology human attributes, like personalities.

My kids instinctively know the iPhone is a tool. But Meccanoid is a friend. If you kick the robot, leave it in the rain or lock it away in the closet, my kids will freak out. To them, the robot is personal — and the love is real.

If some of this reminds you of Rosie the Robot — the cleaning, cooking nanny robot from The Jetsons — you’re not alone. Humans will soon regularly engage with machines as fellow companions in life, giving psychologists, anthropologists and Congress new ideas to consider. There is already chatter across the internet in the transhumanist community about humans wanting the right to marry machines — and all that goes with that. In fact, in the Transhumanist Bill of Rights I delivered to Washington, DC, we explicitly aim to give future conscious beings personhood — as well as the other rights covered by the United Nations Universal Declaration of Human Rights, adopted in 1948.

Despite the thorniness of some of the issues between humans and robots, we are entering this robot age for one simple reason: functionality. Robots will make our lives far easier. The robot nanny is a prime example: it will be adored by parents — likely much more so than the human nannies who are known to call in sick, show up to work late and, on occasion, sue their employers when they hurt themselves on the job. Robot nannies will replace human nannies much as the automobile replaced the horse and cart — allowing parents new free time and opportunity to pursue careers.

One major factor in favor of robot nannies is their cost effectiveness. I’ve been either watching my kids or hiring nannies for the last five years. About $200,000 later (which is what eight-hour weekday childcare costs in San Francisco over five years), it’s safe to say a robot nanny is not going to cost as much as I’ve spent. And once my kids are old enough that they no longer need immediate supervision, I’ll be left with the robot to sell or give to a family in need.

But essential questions remain: Will some robots be allowed to watch kids when parents go out for the night or off to work — and other robots not? Who will make that determination? The parent? The manufacturer? The government?

Will robots that can perform CPR, put out fires, squish poisonous spiders and perform the Heimlich maneuver on a choking child be authorized while others are not? Will robots that can detect smoke and carbon monoxide, where others can’t, make the “nanny-worthy” grade?

And then come the questions ethicists and programmers are already facing with driverless cars. If an autonomous vehicle is forced to choose between hitting a young family of five and hitting an old man, which does it choose? Nanny robots may one day be programmed with similar instructions and values.

But what if a robot nanny is watching twins, and both start choking at the same time? Which child will it choose to help first? Will programmers allow parents to program which child should be helped first?

The questions are endless. I suspect that, much as the U.S. Department of Transportation’s National Highway Traffic Safety Administration maintains the Federal Motor Vehicle Safety Standards and Regulations, a robot equivalent will have to be established.

It’s been years since the American household gained a new must-have fixture. One of the last major ones was the computer — and now nearly 85 percent of American households have one. I suspect nanny robots will be one of the next commonplace items in our homes. And our love for them will grow as they influence and play an integral part in the next generation’s upbringing.


Bill Gates talks about why artificial intelligence is nearly here and how to solve two big problems it creates

July 10, 2016


Bill Gates is excited about the rise of artificial intelligence but acknowledges that the arrival of machines with greater-than-human capabilities will create some unique challenges.

After years of working on the building blocks of speech recognition and computer vision, Gates said enough progress has been made to ensure that in the next 10 years there will be robots to do tasks like driving and warehouse work as well as machines that can outpace humans in certain areas of knowledge.

“The dream is finally arriving,” Gates said, speaking with wife Melinda Gates on Wednesday at the Code Conference. “This is what it was all leading up to.”

However, as he said in an interview with Recode last year, such machine capabilities will pose two big problems.

The first is that it will eliminate a lot of existing types of jobs. Gates said that creates a need for a lot of retraining, but noted that until schools have class sizes under 10 and people can retire at a reasonable age and take ample vacation, he isn’t worried about a lack of need for human labor.

The second issue is, of course, making sure humans remain in control of the machines. Gates has talked about that in the past, saying that he plans to spend time with people who have ideas on how to address that issue, noting work being done at Stanford, among other places.

And, in Gatesian fashion, he suggested a pair of books that people should read: Nick Bostrom’s “Superintelligence” and Pedro Domingos’ “The Master Algorithm.”

Melinda Gates noted that you can tell a lot about where her husband’s interest is by the books he has been reading. “There have been a lot of AI books,” she said.

Why “utility fogs” could be the technology that changes the world

March 13, 2016


Arthur C. Clarke is famous for suggesting that any sufficiently advanced technology would be indistinguishable from magic. There’s no better example of this than the ultra-speculative prospect of “utility fogs” — swarms of networked microscopic robots that could assume the shape and texture of virtually anything.

We may be decades away from this sort of technological wizardry, but futurists are already thinking about how we could use it.

We spoke to J. Storrs Hall, the independent researcher who came up with the concept of utility fogs back in 1993. He believes that utility fogs will irrevocably alter our physical landscape — and quite possibly our bodies as well.

Indeed, Hall’s idea has inspired both scientists and science fiction writers. The potential for utility fogs has been seriously considered by futurists like Ray Kurzweil and Robert Freitas. And we’ve seen scifi visions of the technology with Warren Ellis’s foglet beings in Transmetropolitan, Neal Stephenson’s personal nanodefense systems in The Diamond Age, and many others.

Here’s how utility fogs are going to work.

Active Polymorphic Materials

Hall came up with the idea for utility fogs while imagining what an advanced form of seat belt might look like.


“I came up with this vision of form fitting foam — one that could take on the shape of anything inside it and on the fly,” he told io9, “which got me to wondering if we could ever possibly build something like that.” The answer, says Hall, came to him by considering the nascent field of molecular nanotechnology. By designing and creating objects at the molecular scale, Hall envisioned a fog that could quickly morph along with the movements of anything around it — including the passengers of cars.

In essence, the utility fog would be a polymorphic material made up of trillions of interlinked microscopic ‘foglets’, each equipped with a tiny computer. These nanobots would be capable of exerting force in all three dimensions, enabling the larger emergent object to take on various shapes and textures. So, instead of building an object atom by atom, the tiny robots would link their contractible arms together to form objects with varying properties, such as a fluid or solid mass.
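The control idea behind this — each tiny robot steering toward its own assigned point in a target shape, so the swarm as a whole appears to morph — can be sketched in a few lines. Everything here, from the class name to the update rule, is an invented illustration, not Hall's design:

```python
from dataclasses import dataclass, field

@dataclass
class Foglet:
    """Toy model of one 'foglet': a tiny robot with linkable arms.
    All names and fields are illustrative."""
    x: float
    y: float
    z: float
    linked: set = field(default_factory=set)  # ids of neighbours it grips

    def step_toward(self, tx, ty, tz, rate=0.5):
        # Each foglet nudges itself toward its assigned point in the
        # target shape; collectively the swarm "morphs" into the object.
        self.x += (tx - self.x) * rate
        self.y += (ty - self.y) * rate
        self.z += (tz - self.z) * rate

# A three-foglet swarm converging from a diagonal onto a line segment:
swarm = [Foglet(0, 0, 0), Foglet(1, 1, 1), Foglet(2, 2, 2)]
targets = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
for _ in range(20):
    for f, t in zip(swarm, targets):
        f.step_toward(*t)
```

A real utility fog would need each foglet to compute its target from only local information passed along its linked arms; the centralized target list here is purely a simplification.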


The Robots Are Coming for Wall Street

March 03, 2016


Hundreds of financial analysts are being replaced with software. What office jobs are next?

When Daniel Nadler woke on Nov. 6, he had just enough time to pour himself a glass of orange juice and open his laptop before the Bureau of Labor Statistics released its monthly employment report at 8:30 a.m. He sat at the kitchen table in his one-bedroom apartment in Chelsea, nervously refreshing his web browser — Command-R, Command-R, Command-R — as the software of his company, Kensho, scraped the data from the bureau’s website. Within two minutes, an automated Kensho analysis popped up on his screen: a brief overview, followed by 13 exhibits predicting the performance of investments based on their past response to similar employment reports.

Nadler couldn’t have double-checked all this analysis if he wanted to. It was based on thousands of numbers drawn from dozens of databases. He just wanted to make sure that Kensho had pulled the right number — the overall growth in American payrolls — from the employment report. It was the least he could do, given that within minutes, at 8:35 a.m., Kensho’s analysis would be made available to employees at Goldman Sachs.

In addition to being a customer, Goldman is also Kensho’s largest investor. Nadler, who is 32, spent the rest of the morning checking in with some of the bank’s most regular Kensho users — a top executive on the options-and-derivatives-trading desks, a fund manager — then took an Uber down for a lunch meeting at Goldman’s glass tower just off the West Side Highway in Manhattan. While almost everyone in the building dresses in neatly pressed work attire, Nadler rarely deviates from his standard outfit: Louis Vuitton leather sandals and a casual but well-cut T-shirt and pants, both by the designer Alexander Wang. Nadler owns 10 sets of these. His austere aesthetic is informed by the summer vacations he spent in Japan while pursuing a doctoral degree in economics from Harvard, mostly visiting temples and meditating. (‘‘Kensho’’ is the Japanese term for one of the first states of awareness in the Zen Buddhist progression.) He also wrote a volume of poetry — imagined ancient love poems — that Farrar Straus & Giroux will publish later this year.


Watch a Boston Dynamics humanoid robot wander around outside

August 17, 2015


Boston Dynamics, which Google bought in 2013, has begun testing one of its humanoid robots — those that are designed to function like humans — out in the wild.

Marc Raibert, the founder of Boston Dynamics, talked about the research and showed footage of the project during a talk on Aug. 3 at the 11th Fab Lab Conference and Symposium in Cambridge, Mass.

“Out in the world is just a totally different challenge than in the lab,” Raibert said at the conference, which was organized by the Fab Foundation, a division of the Massachusetts Institute of Technology’s Center for Bits and Atoms. “You can’t predict what it’s going to be like.”

Boston Dynamics has tested its LS3 quadruped (four-legged) robot out in natural settings in the past. But humanoid robots are different — they can be much taller and have a higher center of gravity. Keeping them moving on paved asphalt is one thing, but maneuvering them through rugged terrain, which is what Boston Dynamics’ Atlas robots dealt with recently during the DARPA Robotics Challenge, can be trickier.

See for yourself how this humanoid robot performs in the woods.

Google patents robots with personalities in first step towards the singularity

April 27, 2015


Google has been awarded a patent for the ‘methods and systems for robot personality development’, a glimpse at a future where robots react based on data they mine from us and hopefully don’t unite and march on city hall.

The company outlines a process by which personalities could be downloaded from the cloud to “provide states or moods representing transitory conditions of happiness, fear, surprise, perplexion, thoughtfulness, derision and so forth.”

Its futuristic vision seems to be not of a personalised robot for each human but of a set of personality traits that can be transferred between different robots.

“The personality and state may be shared with other robots so as to clone this robot within another device or devices,” it said in the patent.

“In this manner, a user may travel to another city, and download within a robot in that city (another “skin”) the personality and state matching the user’s “home location” robot. The robot personality thereby becomes transportable or transferable.”
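Stripped of the patent language, the idea amounts to serialising a personality plus its transient "state" centrally and restoring it on any robot body. A toy sketch of that flow; the class, method names and data fields are invented for illustration and are not from the patent:

```python
import json

class RobotPersonalityCloud:
    """Illustrative sketch: personality and mood stored centrally,
    re-downloadable onto any robot 'skin'. Names are invented."""

    def __init__(self):
        self._store = {}

    def upload(self, user_id, personality, state):
        # Serialise both the stable traits and the transitory mood.
        self._store[user_id] = json.dumps(
            {"personality": personality, "state": state}
        )

    def download(self, user_id):
        # Any robot body can restore the same persona from the store.
        return json.loads(self._store[user_id])

cloud = RobotPersonalityCloud()
cloud.upload("alice", {"humor": 0.8, "caution": 0.3}, {"mood": "thoughtful"})

# In another city, a different robot "skin" restores the same persona:
profile = cloud.download("alice")
```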

It doesn’t sound dissimilar from the opening of a Will Smith sci-fi movie, with one robot’s evil data genes spreading via the cloud to all its other robot brethren.

While this sounds far-fetched, the technological singularity – the point at which artificial intelligence exceeds man’s intellectual capacity and produces a runaway effect – is something that Stephen Hawking, Bill Gates and Elon Musk have all expressed concern over.

Google is probably just safeguarding the technology for the future, however, and is unlikely to release any products that require the patent anytime soon. We have yet to create a robot that can convincingly walk up stairs, so an apocalyptic army is probably a long way off.


When Robots Take Over Most Jobs, What Will Be the Purpose of Humans?

September 6, 2014


In March of 2013, four economics researchers from the New York Federal Reserve published a report on job “polarization” — the phenomenon of routine task work disappearing and only the highest and lowest skilled work still available. The authors stated:

An occupation is routine if its main tasks require following explicit instructions and obeying well-defined rules. These tend to be middle-skilled jobs. If the job involves flexibility, problem solving or creativity, it’s considered nonroutine. Job polarization occurs when employment moves to nonroutine occupations, a category that contains the highest- and lowest-skilled jobs.

They based their analysis on data from the U.S. Census Bureau, which show that around 2005 the U.S. passed a threshold: more than 50 percent of all occupations had become nonroutine. In fact, extrapolating from the relatively straight line on the graph, the share should by now exceed 60 percent.

[Chart: nonroutine share of U.S. occupations over time, census data]

These researchers also broke out the four quadrants of the work sphere, with routine versus nonroutine work arrayed against cognitive versus manual work.

[Chart: the four quadrants of work, routine vs. nonroutine against cognitive vs. manual]

The central takeaway from this exposition is that routine jobs have been decreasing in both cognitive and manual forms, and nonroutine jobs have been increasing largely in cognitive form. Again, here’s the census data:

[Chart: employment trends in each of the four quadrants, census data]

The indications are fairly stark. The work in routine occupations is trending toward zero. This fall lines up fairly well with the rise of automation of various kinds. For example, computer programs are doing the work of paralegals and x-ray technicians, and factory robots are displacing large numbers of automobile assembly line workers. There are applications that can write sports newspaper articles, based simply on the scoring history in the game.
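The framework above can be written down as a small lookup plus the report's stated classification rule. The example occupations and trend labels are this sketch's illustrations drawn from the surrounding text, not the report's own tables:

```python
# The 2x2 framework as data: routine work (cognitive and manual) is
# shrinking, while nonroutine work grows largely in cognitive form.
quadrants = {
    ("routine", "cognitive"): {
        "examples": ["bookkeeping", "data entry"], "trend": "declining"},
    ("routine", "manual"): {
        "examples": ["assembly line work"], "trend": "declining"},
    ("nonroutine", "cognitive"): {
        "examples": ["engineering", "management"], "trend": "growing"},
    ("nonroutine", "manual"): {
        "examples": ["home health care"], "trend": "roughly flat"},
}

def classify(follows_explicit_rules: bool, cognitive: bool):
    """Apply the report's stated rule: a job is routine if its main
    tasks follow explicit instructions and well-defined rules."""
    routine = "routine" if follows_explicit_rules else "nonroutine"
    return (routine, "cognitive" if cognitive else "manual")
```

By this rule, a paralegal's document review (explicit procedures, cognitive work) lands in the shrinking routine-cognitive quadrant, which is exactly where the software substitution described above is happening.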

Of course, for those who treat science fiction as the best oracle for an unknowable future, consider this shot in the dark from Isaac Asimov, writing in 1964 about a visit to the World’s Fair of 2014:

The world of A.D. 2014 will have few routine jobs that cannot be done better by some machine than by any human being. Mankind will therefore have become largely a race of machine tenders.

Soon, all that will be left for human beings will be the non-routine, creative work. How many of our occupations will our software overlords steal away from us? Many more than today, according to Carl Benedikt Frey and Michael A. Osborne, two researchers at Oxford who looked at 702 current occupations.


The researchers found that approximately half of current occupations (47 percent) are at risk of going the way of the telephone operator within just a decade or two. These two researchers relied on the same matrix of work as the Federal Reserve team, and examined how quickly robotic dexterity and A.I. cognition would hollow out jobs that seem to be the preserve of humans today:

Our findings could be interpreted as two waves of computerisation, separated by a “technological plateau”. In the first wave, we find that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labour in production occupations, are likely to be substituted by computer capital.

Note that the “transportation and logistics” sector includes many occupations that will be slammed by autonomous vehicles, like truckers (the number one occupation for men in the U.S. currently), taxi drivers and warehouse workers. Administrative support is the number one job for women in the US, so our robot overlords are equal opportunity, at least.

Frey and Osborne suggest that the second future wave of displacement will come at some later date, when A.I. gains the secrets of creativity and social intelligence. That may take a longer time, but at some future date, lawyers, engineers, brain surgeons and even actors might be displaced by ‘bots. In fact, one venture capital firm, Deep Knowledge Ventures, has already appointed an algorithm to its board of directors.


So, we are confronted with the critical question of 2025, as I stated in the recent Pew Internet report, AI, Robotics, and the Future of Jobs:

What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?

While it is likely that for the next few decades the educated, creative and inventive will find avenues to gainful employment, that will not be the case for all. How will we organize our world if machines can provide goods and services at lower and lower costs while fewer and fewer have income enough to buy anything?

Can we educate our way out of this mess, or will people be forced into a return to the land, tending 40 acres with the help of several mechanical mules? Can we legislate a Luddite future, in which the new levels of automation are made illegal? Or will the techno-utopians be vindicated as new sorts of work, as yet unseen, emerge to engage the surplus workers now being displaced?

The end state is uncertain, but we are headed toward a disruption of our society on the same order of magnitude as the rise of agriculture and industrialism, compressed into decades rather than generations or centuries. And that question — what are people for? — will taunt us, because it is unclear whether there is an answer, or whether it is simply an irresolvable dilemma.