Scientists at Large Hadron Collider hope to make contact with parallel universe in days

November 8, 2015

Atom art: an image of two protons smashed together at the LHC

Next week’s experiment is considered to be a game changer.

The staggeringly complex LHC ‘atom smasher’ at the CERN centre in Geneva, Switzerland, will be fired up to its highest energy levels ever in a bid to detect – or even create – miniature black holes.

If the experiment succeeds, a completely new universe will be revealed – rewriting not only the physics books but the philosophy books too.

It is even possible that gravity from our own universe may ‘leak’ into this parallel universe, scientists at the LHC say.

The experiment is sure to inflame alarmist critics of the LHC, many of whom initially warned that the high-energy particle collider would spell the end of our universe by creating a black hole of its own.

But so far Geneva remains intact and comfortably outside the event horizon.

Indeed, the LHC has been spectacularly successful. First, scientists proved the existence of the elusive Higgs boson ‘God particle’ – a key building block of the universe – and it is seemingly well on the way to nailing ‘dark matter’ – a previously undetectable theoretical possibility that is now thought to make up the majority of matter in the universe.

Mir Faizal, one of the three-strong team of physicists behind the experiment, said: “Just as many parallel sheets of paper, which are two dimensional objects [breadth and length] can exist in a third dimension [height], parallel universes can also exist in higher dimensions.

“We predict that gravity can leak into extra dimensions, and if it does, then miniature black holes can be produced at the LHC.

“Normally, when people think of the multiverse, they think of the many-worlds interpretation of quantum mechanics, where every possibility is actualised.

“This cannot be tested and so it is philosophy and not science.

“This is not what we mean by parallel universes. What we mean is real universes in extra dimensions.

“As gravity can flow out of our universe into the extra dimensions, such a model can be tested by the detection of mini black holes at the LHC.

“We have calculated the energy at which we expect to detect these mini black holes in ‘gravity’s rainbow’ [a new scientific theory].

“If we do detect mini black holes at this energy, then we will know that both gravity’s rainbow and extra dimensions are correct.”

When the LHC is fired up, the energy is measured in teraelectronvolts – one TeV is 1,000,000,000,000 (one trillion) electronvolts.

So far, the LHC has searched for mini black holes at energy levels below 5.3 TeV.

But the latest study says this is too low.

Instead, the model predicts that black holes may form at energy levels of at least 9.5 TeV in six dimensions and 11.9 TeV in 10 dimensions.
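To put those thresholds in everyday units, a short sketch below converts them from TeV to joules, using the SI definition of the electronvolt (1 eV = 1.602176634 × 10⁻¹⁹ J). The threshold figures are the ones quoted in the article; everything else is just unit arithmetic.

```python
# Convert the quoted LHC search thresholds from TeV to joules.
# 1 eV = 1.602176634e-19 J (exact, by SI definition); 1 TeV = 1e12 eV.
EV_IN_JOULES = 1.602176634e-19

def tev_to_joules(tev: float) -> float:
    """Convert an energy in teraelectronvolts to joules."""
    return tev * 1e12 * EV_IN_JOULES

for label, tev in [("searched so far", 5.3),
                   ("6 dimensions", 9.5),
                   ("10 dimensions", 11.9)]:
    print(f"{label}: {tev} TeV = {tev_to_joules(tev):.3e} J")
```

Even the highest threshold is less than a microjoule – tiny on human scales, but enormous when concentrated in a single proton–proton collision.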


Russian scientists successfully implant the first 3D-printed thyroid gland

November 8, 2015


A Russian company announced a successful experiment implanting 3D-printed thyroid glands into mice, and the results will be published next week, said Dmitri Fadin, development director at 3D Bioprinting Solutions.

“We had some difficulties during the study, but in the end the thyroid gland turned out to be functional,” Mr. Fadin told RBTH.

3D Bioprinting Solutions printed the thyroid gland – or to be exact, the gland’s organ construct – in March of this year. At the time, the laboratory said it would start printing human thyroid glands if the experiment proved successful.

3D Bioprinting Solutions uses existing 3D print technology that makes items from plastic, ceramic and metals, but it had to adapt the process for biological material, that is, for cells. Before transplanting the artificial gland, scientists destroyed the mice’s own thyroid glands using radioactive iodine.

Vladimir Mironov founded 3D Bioprinting Solutions in 2013. He is a tissue engineer and co-founder of two U.S. startups, Cardiovascular Tissue Technology and Cuspis.

How Facebook will use artificial intelligence to organize insane amounts of data into the perfect News Feed and a personal assistant with superpowers

November 8, 2015


Facebook CTO Mike Schroepfer

Using some quick and dirty math, Facebook CTO Mike Schroepfer estimates that the amount of content that Facebook considers putting on your News Feed grows 40% to 50% year-over-year.

But because people aren’t gaining more time in the day, the company’s algorithms have to be much more selective about what they actually show you.
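Growth of 40% to 50% a year compounds quickly. The sketch below, using only the figures quoted above (illustrative, not Facebook data), shows how many times larger the pool of candidate content would be after five years at those rates.

```python
# Compound the quoted 40%-50% year-over-year growth in candidate
# News Feed content over several years (illustrative figures only).
def growth_factor(annual_rate: float, years: int) -> float:
    """Multiple by which a quantity grows at a fixed annual rate."""
    return (1 + annual_rate) ** years

for rate in (0.40, 0.50):
    print(f"{rate:.0%}/yr over 5 years -> x{growth_factor(rate, 5):.1f}")
```

At 40% a year the candidate pool grows more than fivefold in five years, and at 50% more than sevenfold – which is why ever more aggressive filtering is needed.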

“We need systems that can help us understand the world and help us filter it better,” Schroepfer said at a press event prior to his appearance at the Dublin Web Summit Tuesday morning.

That’s why the company’s artificial intelligence team (called FAIR) has been hard at work training Facebook’s systems to make them understand the world more like humans, through language, images, planning, and prediction.

It has already trained its computer vision system to segment individual objects out of photos and then label them. The company plans to present a paper next month showing how it can segment images 30 percent faster, using much less training data, than it previously could.

Ultimately, Schroepfer explains, this could have practical applications like helping you search through all your photos to surface any that contain ocean scenes or dogs. Or, you could tell your News Feed that you like seeing pictures with babies, but hate seeing photos of latte art.

It could also come in handy for photo editing. For example, you could tell the system to turn everything in a photo black-and-white, except one object.

These improving visual skills pair well with Facebook’s language recognition.

Schroepfer says the company is in the early stages of building a product for the 285 million people worldwide with low vision and the 40 million who are blind, letting them communicate with an artificial intelligence system to find out what is in any photo on their feed.

“We’re getting closer to that magical experience that we’re all hoping for,” he says.

The team is also tackling predictive, unsupervised learning and planning.

Making M into a superpower

Both of these research areas will be important to powering M, the virtual personal assistant that Facebook launched earlier this summer in its chat app, Messenger. Right now it’s in limited beta in the Bay Area, but the goal, Schroepfer says, is to make it feel like M is a superpower bestowed upon every Messenger user on earth.

Right now, everything M can do is supervised by real human beings. However, those people are backed up by artificial intelligence. Facebook has hooked up its memory networks to M’s console to train on the data that it’s gotten from its beta testers.

It might sound obvious, but the memory networks have helped M realize what questions to ask first if someone tells M they want to order flowers: “What’s your budget?” and “Where do you want them sent?”

The AI system discovered this by watching a handful of interactions between users and the people currently powering M.

“There is already some percentage of responses that are coming straight from the AI, and we’re going to increase that percentage over time, so that it allows us to train up these systems,” Schroepfer says.

“The reason this is exciting is that it’s scalable. We cannot afford to hire operators for the entire world, to be their virtual assistant, but with the right AI technology, we could deploy that for the entire planet, so that everyone in the world would have an automated assistant that helps them manage their own online world. And that ends up being a kind of superpower deployed to the whole world.”

Schroepfer says that the team has made a lot of progress over the last year, and plans to accelerate that progress over time.

“The promise I made to all the AI folks that joined us, is that we’re going to be the best place to get your work to a billion people as fast as possible.”

Artificial intelligence: ‘Homo sapiens will be split into a handful of gods and the rest of us’

November 8, 2015


If you wanted relief from stories about tyre factories and steel plants closing, you could try relaxing with a new 300-page report from Bank of America Merrill Lynch which looks at the likely effects of a robot revolution.

But you might not end up reassured. Though it promises robot carers for an ageing population, it also forecasts huge numbers of jobs being wiped out: up to 35% of all workers in the UK and 47% of those in the US – white-collar jobs included – could see their livelihoods taken away by machines.

Haven’t we heard all this before, though? From the Luddites of the 19th century to print unions protesting in the 1980s about computers, there have always been people fearful about the march of mechanisation. And yet we keep on creating new job categories.

However, there are still concerns that the combination of artificial intelligence (AI) – which is able to make logical inferences about its surroundings and experience – married to ever-improving robotics, will wipe away entire swaths of work and radically reshape society.

“The poster child for automation is agriculture,” says Calum Chace, author of Surviving AI and the novel Pandora’s Brain. “In 1900, 40% of the US labour force worked in agriculture. By 1960, the figure was a few per cent. And yet people had jobs; the nature of the jobs had changed.

“But then again, there were 21 million horses in the US in 1900. By 1960, there were just three million. The difference was that humans have cognitive skills – we could learn to do new things. But that might not always be the case as machines get smarter and smarter.”

What if we’re the horses to AI’s humans? To those who don’t watch the industry closely, it’s hard to see how quickly the combination of robotics and artificial intelligence is advancing. Last week a team from the Massachusetts Institute of Technology released a video showing a tiny drone flying through a lightly forested area at 30mph, avoiding the trees – all without a pilot, using only its onboard processors. Of course it can outrun a human-piloted one.

MIT has also built a “robot cheetah” which can jump over obstacles of up to 40cm without help. Add to that the standard progress of computing, where processing power doubles roughly every 18 months (or, equally, prices for capability halve), and you can see why people like Chace are getting worried.
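Doubling every 18 months compounds dramatically. The snippet below (a back-of-the-envelope sketch, not a claim about any specific chip) turns that rule of thumb into a growth multiple over an arbitrary span.

```python
# Rule of thumb: capability doubles every 18 months, so after m months
# the multiple is 2 ** (m / 18). Illustrative arithmetic only.
def capability_multiple(months: float, doubling_months: float = 18.0) -> float:
    """Growth multiple implied by a fixed doubling period."""
    return 2 ** (months / doubling_months)

# Over a decade (120 months) this gives roughly a 100-fold increase.
print(f"10 years -> x{capability_multiple(120):.0f}")
```

A hundredfold gain in processing power per decade is a large part of why robotics and AI advances can look sudden to outside observers.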

Drone flies autonomously through a forested area


But the incursion of AI into our daily life won’t begin with robot cheetahs. In fact, it began long ago; the edge is thin, but the wedge is long. Cooking systems with vision processors can decide whether burgers are properly cooked. Restaurants can give customers access to tablets with the menu and let people choose without needing service staff.

Lawyers who used to slog through giant files for the “discovery” phase of a trial can turn it over to a computer. An “intelligent assistant” called Amy will, via email, set up meetings autonomously. Google announced last week that you can get Gmail to write appropriate responses to incoming emails. (You still have to act on your responses, of course.)

Further afield, Foxconn, the Taiwanese company which assembles devices for Apple and others, aims to replace much of its workforce with automated systems. The AP news agency gets news stories written automatically about sports and business by a system developed by Automated Insights. The longer you look, the more you find computers displacing simple work. And the harder it becomes to find jobs for everyone.

So how much impact will robotics and AI have on jobs, and on society? Carl Benedikt Frey, who with Michael Osborne in 2013 published the seminal paper The Future of Employment: How Susceptible Are Jobs to Computerisation? – on which the BoA report draws heavily – says that he doesn’t like to be labelled a “doomsday predictor”.

He points out that even while some jobs are replaced, new ones spring up that focus more on services and interaction with and between people. “The fastest-growing occupations in the past five years are all related to services,” he tells the Observer. “The two biggest are Zumba instructor and personal trainer.”

Frey observes that technology is leading to a rarefaction of leading-edge employment, where fewer and fewer people have the necessary skills to work in the frontline of its advances. “In the 1980s, 8.2% of the US workforce were employed in new technologies introduced in that decade,” he notes. “By the 1990s, it was 4.2%. For the 2000s, our estimate is that it’s just 0.5%. That tells me that, on the one hand, the potential for automation is expanding – but also that technology doesn’t create that many new jobs now compared to the past.”

This worries Chace. “There will be people who own the AI, and therefore own everything else,” he says. “Which means homo sapiens will be split into a handful of ‘gods’, and then the rest of us.

“I think our best hope going forward is figuring out how to live in an economy of radical abundance, where machines do all the work, and we basically play.”

Arguably, we might be part of the way there already; is a dance fitness programme like Zumba anything more than adult play? But, as Chace says, a workless lifestyle also means “you have to think about a universal income” – a basic, unconditional level of state support.

Perhaps the biggest problem is that there has been so little examination of the social effects of AI. Frey and Osborne are contributing to Oxford University’s programme on the future impacts of technology; at Cambridge, Observer columnist John Naughton and David Runciman are leading a project to map the social impacts of such change. But technology moves fast; it’s hard enough figuring out what happened in the past, let alone what the future will bring.

But some jobs probably won’t be vulnerable. Does Frey, now 31, think that he will still have a job in 20 years’ time? There’s a brief laugh. “Yes.” Academia, at least, looks safe for now – at least in the view of the academics.

Foxconn sign
Smartphone manufacturer Foxconn is aiming to automate much of its production facility. Photograph: Pichi Chuang/Reuters

The danger of change is not destitution, but inequality

Productivity is the secret ingredient in economic growth. In the late 18th century, the cleric and scholar Thomas Malthus notoriously predicted that a rapidly rising human population would result in misery and starvation.

But Malthus failed to anticipate the drastic technological changes – from the steam-powered loom to the combine harvester – that would allow the production of food and the other necessities of life to expand even more rapidly than the number of hungry mouths. The key to economic progress is this ability to do more with the same investment of capital and labour.

The latest round of rapid innovation, driven by the advance of robots and AI, is likely to power continued improvements.

Recent research led by Guy Michaels at the London School of Economics looked at detailed data across 14 industries and 17 countries over more than a decade, and found that the adoption of robots boosted productivity and wages without significantly undermining jobs.

Robotisation has reduced the number of working hours needed to make things; but at the same time as workers have been laid off from production lines, new jobs have been created elsewhere, many of them more creative and less dirty. So far, fears of mass layoffs as the machines take over have proven almost as unfounded as those that have always accompanied other great technological leaps forward.

There is an important caveat to this reassuring picture, however. The relatively low-skilled factory workers who have been displaced by robots are rarely the same people who land up as app developers or analysts, and technological progress is already being blamed for exacerbating inequality, a trend Bank of America Merrill Lynch believes may continue in future.

So the rise of the machines may generate huge economic benefits; but unless it is carefully managed, those gains may be captured by shareholders and highly educated knowledge workers, exacerbating inequality and leaving some groups out in the cold. Heather Stewart

Beyond ‘Back to the Future’: Experts Serve Up Tech Predictions for 2045

November 8, 2015


In “Back to the Future Part II,” Marty McFly and Doc Brown travel from 1985 to October 21, 2015, to find a world filled with flying cars, hoverboards and self-drying jackets.

Those predictions didn’t exactly pan out, although people are working on each of those concepts. (Screenwriter Bob Gale did get a lot of things — from drones to fingerprint scanners — right, as he told TODAY earlier this year.)

The future is now, and it’s pretty cool. But what will the world be like in another 30 years? Three futurists shared their predictions with NBC News.

Katie Aquino, a.k.a. ‘Miss Metaverse’: Super-fast travel, nanomedicine and virtual immortality

“No longer will expensive and lengthy flights be the norm for world travel,” said the futurist and filmmaker known as Miss Metaverse. Instead, frictionless maglev trains will allow “us to travel at speeds in excess of 6,000 miles per hour while only feeling a G1 gravitational force, the same we feel when riding in a car.”

At those speeds, going from New York to Beijing will only take two hours. And if you get sick on your trip?
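The two-hour figure is easy to sanity-check. The sketch below computes the cruise time alone, assuming a great-circle distance of roughly 6,800 miles between New York and Beijing (an approximate figure, not from the article); the quoted two hours would then leave room for acceleration, deceleration and routing.

```python
# Sanity-check the travel claim: cruise time at a constant speed,
# ignoring acceleration and deceleration. The 6,800-mile distance
# is an approximate great-circle figure assumed for illustration.
def cruise_hours(distance_miles: float, speed_mph: float) -> float:
    """Time in hours to cover a distance at a constant speed."""
    return distance_miles / speed_mph

print(f"{cruise_hours(6800, 6000):.2f} hours at cruise")
```

At a constant 6,000 mph the cruise leg alone takes a little over an hour, so the prediction is at least internally consistent.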

“Nanotechnology, although not a hot topic today, will likely unlock the keys to destroying cancer cells and ‘programming’ stem cells for a myriad of health benefits in the future,” she said.

Customized drugs will solve a lot of ailments. Death isn’t one of them, but we could still find a way to become immortal … kind of.

“Our future lives may truly be limitless thanks to an organization known as the 2045 Initiative that’s working towards a goal of uploading human consciousness into synthetic avatars,” she said. “Much like in the movie ‘Avatar,’ humans may evolve into what are theoretically known as post-humans, human consciousness in upgraded or synthetic bodies.”

Jamais Cascio: Jurassic pets and augmented reality clothing

Love the movie “Jurassic Park” but don’t like being eaten by terrifying symbols of man’s hubris?

Synthetic biology could let people create “miniature versions of various dinosaurs or other prehistorical creatures,” said writer and futurist Jamais Cascio. That might include a “mini-Velociraptor on a leash, with the right behavior modification to make sure it’s safe to be around,” or a “micro-Brontosaurus that’s perfect for a kid to ride.”

Of course, you want to look when riding around on your designer dinosaur. Hence the augmented reality clothing, which, Cascio said, will be visible to people wearing the “ubiquitous smart glasses, digital contact lenses, and eye upgrades.”

“Imagine a dress that looks like (and acts like) it’s made of water,” he said. “Or a Halloween costume that appears to be entirely made of living spiders.”

That would look great with a pair of sneakers with power laces.

James Canton: Digital memories and robot soldiers

Sharing a Facebook photo won’t seem very impressive in 2045, according to James Canton, a futurist, writer and business consultant.

That’s because people will share “entertainment memories,” which are “like real-time videos,” he said, “except others can experience the emotion, physical sensation and actual experience as if they were there.”

Intense and kind of creepy! Gene-editing will eliminate genetic diseases, he predicted, and replica organs will be printable on demand.

And yes, “Terminator” fans, there will be machine combat as “robots fight our wars, no more human soldiers.”

There is no guarantee that any of these technologies will arrive, of course. Hopefully by 2045, we will see at least some of the predictions from “Back to the Future Part II” come true, although there is a decent chance we still won’t have flying cars and the Chicago Cubs (currently losing in the playoffs to the Mets) will still be waiting for their first World Series win since 1908.