IBM scientists achieve storage memory breakthrough

June 04, 2016


For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell using a relatively new memory technology known as phase-change memory (PCM).

The current landscape spans from venerable DRAM to hard disk drives to ubiquitous flash. But in the last several years PCM has attracted the industry’s attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility and density. For example, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles.

This research breakthrough points the way to fast, high-density storage capable of keeping up with the exponential growth of data from mobile devices and the Internet of Things.

Applications

IBM scientists envision both standalone PCM and hybrid applications that combine PCM and flash storage, with PCM acting as an extremely fast cache. For example, a mobile phone’s operating system could be stored in PCM, enabling the phone to launch in a few seconds. In the enterprise space, entire databases could be stored in PCM for blazingly fast query processing in time-critical online applications, such as financial transactions.

Machine-learning algorithms that work over large datasets would also see a speed boost, since the latency overhead of reading the data between iterations is reduced.

How PCM Works

PCM materials exhibit two stable states, the amorphous (without a clearly defined structure) and crystalline (with structure) phases, of low and high electrical conductivity, respectively.

To store a ‘0’ or a ‘1’ (a bit) in a PCM cell, a high or medium electrical current is applied to the material: a high-current pulse leaves it amorphous, while a medium-current pulse crystallizes it. Either phase can be assigned to ‘0’ and the other to ‘1’. To read the bit back, a low voltage is applied and the cell’s conductivity is sensed. Rewritable Blu-ray Discs store video using the same phase-change principle.
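As a mental model only, here is a minimal Python sketch of a single-bit cell. Real cells are analog devices programmed with current pulses; the resistance values and read threshold below are illustrative assumptions, not device data.

```python
# Toy model of a single-bit PCM cell, for intuition only.

class PCMCell:
    AMORPHOUS_OHMS = 1_000_000    # assumed high-resistance (amorphous) state
    CRYSTALLINE_OHMS = 10_000     # assumed low-resistance (crystalline) state
    READ_THRESHOLD_OHMS = 100_000 # assumed sense threshold

    def __init__(self):
        self.phase = "amorphous"

    def write(self, bit):
        # A high-current "reset" pulse melts and quenches the material into
        # the amorphous phase; a medium-current "set" pulse crystallizes it.
        self.phase = "crystalline" if bit == 1 else "amorphous"

    def read(self):
        # A low read voltage senses conductivity without disturbing the phase.
        resistance = (self.CRYSTALLINE_OHMS if self.phase == "crystalline"
                      else self.AMORPHOUS_OHMS)
        return 1 if resistance < self.READ_THRESHOLD_OHMS else 0

cell = PCMCell()
cell.write(1)
assert cell.read() == 1   # crystalline phase reads back as '1'
cell.write(0)
assert cell.read() == 0   # amorphous phase reads back as '0'
```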

Previously, scientists at IBM and other institutions successfully demonstrated the ability to store 1 bit per cell in PCM. Today at the IEEE International Memory Workshop in Paris, IBM scientists are presenting, for the first time, reliable storage of 3 bits per cell in a 64k-cell array, at elevated temperatures and after 1 million endurance cycles.

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” said Dr. Haris Pozidis, an author of the paper and the manager of non-volatile memory research at IBM Research – Zurich. “Reaching three bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

To achieve multi-bit storage, IBM scientists have developed two enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes.

More specifically, the new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, the slow change in the cell’s electrical conductivity after programming. To make the stored data robust against ambient-temperature fluctuations as well, a novel coding and detection scheme is employed. This scheme adaptively adjusts the level thresholds used to detect the cell’s stored data, so that they track variations caused by temperature changes. As a result, the cell state can be read reliably long after the memory is programmed, providing non-volatility.
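The paper’s exact metrics and codes aren’t described here, so the following Python sketch only illustrates the general principle of adaptive, reference-based thresholds versus fixed ones for 8-level (3 bits per cell) storage. The reference-cell approach and all numbers are assumptions for the demonstration, not IBM’s actual scheme.

```python
import numpy as np

# Loose sketch of drift-tolerant level detection for 3-bit (8-level) cells.
# Assumption: a few reference cells programmed alongside the data let the
# reader re-estimate where each level currently sits, instead of relying on
# fixed, factory-set thresholds.

rng = np.random.default_rng(0)
LEVELS = 8                                # 3 bits per cell -> 2**3 levels
IDEAL = np.linspace(0.0, 7.0, LEVELS)     # ideal programmed metric values

def drifted(values, shift, noise=0.05):
    """Simulated readout: programmed value plus drift and read noise."""
    return values + shift + rng.normal(0.0, noise, size=np.shape(values))

# Program 1,000 random data cells and one reference cell per level,
# then let everything drift by the same amount.
data_levels = rng.integers(0, LEVELS, size=1000)
data_read = drifted(IDEAL[data_levels], shift=0.8)
ref_read = drifted(IDEAL, shift=0.8)

# Fixed thresholds (midpoints between ideal levels) misread drifted cells...
fixed_thresholds = (IDEAL[:-1] + IDEAL[1:]) / 2
fixed_detected = np.searchsorted(fixed_thresholds, data_read)

# ...while thresholds re-derived from the drifted references track the shift.
adaptive_thresholds = (ref_read[:-1] + ref_read[1:]) / 2
adaptive_detected = np.searchsorted(adaptive_thresholds, data_read)

print("fixed-threshold accuracy:   ", np.mean(fixed_detected == data_levels))
print("adaptive-threshold accuracy:", np.mean(adaptive_detected == data_levels))
```

Running the sketch shows the fixed thresholds misclassifying most drifted cells while the adaptive thresholds stay near-perfect, which is the essence of reading reliably long after programming.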

“Combined, these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity and endurance cycling,” said Dr. Evangelos Eleftheriou, IBM Fellow.

The experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board. The chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture, and the memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on a doped chalcogenide alloy and were integrated into the prototype chip, which serves as a characterization vehicle, in 90 nm CMOS baseline technology.

More information: Aravinthan Athmanathan et al., “Multilevel-Cell Phase-Change Memory: A Viable Technology,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2016). DOI: 10.1109/JETCAS.2016.2528598

M. Stanisavljevic, H. Pozidis, A. Athmanathan, N. Papandreou, T. Mittelholzer, and E. Eleftheriou, “Demonstration of Reliable Triple-Level-Cell (TLC) Phase-Change Memory,” in Proc. IEEE International Memory Workshop, Paris, France, May 16–18, 2016.

Read more at: http://phys.org/news/2016-05-ibm-scientists-storage-memory-breakthrough.html#jCp

Video

Jeremy Howard: The wonderful and terrifying implications of computers that can learn

February 03, 2016


What happens when we teach a computer how to learn? Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis. (One deep learning tool, after watching hours of YouTube, taught itself the concept of “cats.”) Get caught up on a field that will change the way the computers around you behave … sooner than you probably think.

Video

Ray Kurzweil: Get ready for hybrid thinking

March 2014


Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.

http://www.ted.com/talks/ray_kurzweil_get_ready_for_hybrid_thinking

Nanoelectronic circuits that operate more than 10,000 times faster than current microprocessors

April 21, 2014


Circuits that can operate at frequencies up to 245 terahertz — tens of thousands of times faster than today’s state-of-the-art microprocessors — have been designed and fabricated by researchers at the National University of Singapore and the Agency for Science, Technology and Research (A*STAR).
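As a rough sanity check on that multiplier (assuming a clock of about 3.5 GHz for a current high-end microprocessor, an assumption for the arithmetic):

$$\frac{245\ \text{THz}}{3.5\ \text{GHz}} = \frac{245\times10^{12}\ \text{Hz}}{3.5\times10^{9}\ \text{Hz}} = 70{,}000$$

which is consistent with the headline’s “more than 10,000 times faster.”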

The new circuits could potentially be used to build ultra-fast computers or single-molecule detectors, and they open up new possibilities for nanoelectronic devices. For example, by changing the molecules in the molecular electronic device, the operating frequency of the circuits can be tuned across hundreds of terahertz.

The invention uses a new physical process called “quantum plasmonic tunneling.” Plasmons are collective, ultra-fast oscillations of electrons that can be manipulated by light at the nanoscale.

The researchers next plan to explore integration of these devices into real electronic circuits.

Results of the research were published in the journal Science on March 28, 2014. The study is funded by the Singapore National Research Foundation (NRF) and A*STAR.


Abstract of Science paper

Quantum tunneling between two plasmonic resonators links nonlinear quantum optics with terahertz nanoelectronics. We describe the direct observation of and control over quantum plasmon resonances at length scales in the range 0.4 to 1.3 nanometers across molecular tunnel junctions made of two plasmonic resonators bridged by self-assembled monolayers (SAMs). The tunnel barrier width and height are controlled by the properties of the molecules. Using electron energy-loss spectroscopy, we directly observe a plasmon mode, the tunneling charge transfer plasmon, whose frequency (ranging from 140 to 245 terahertz) is dependent on the molecules bridging the gaps.

Researchers use DNA strands to create nanobot computer inside living animal

Apr 10, 2014


(Phys.org) — A team of researchers at Bar-Ilan University in Israel has successfully demonstrated the ability to use strands of DNA to create a nanobot computer inside of a living creature — a cockroach. In their paper published in Nature Nanotechnology, the researchers describe how they created several nanobot structures using strands of DNA, injected them into a living cockroach, then watched as the structures worked together as a computer to target one of the insect’s cells.

Prior research has shown that DNA strands can be programmable, mimicking circuits and even solving simple math problems. The team in Israel has now extended that work to show that such programmability can be used inside of a living organism to perform work, such as destroying cancer cells.

DNA strands can be programmed because of their natural tendency to react to different proteins. In this new effort, the team unwound DNA strands and then tied them together into an origami-style box structure. The box was then “filled” with a single chemical molecule. Next, other such objects were created to interact with both the box structure and certain proteins found inside the cockroach. The whole point was to create multiple scenarios in which the box would open automatically upon encountering certain proteins, and adding multiple nanostructures increases the number of possibilities. For example, the box structure might be set to open only if it encounters three kinds of proteins: one made naturally by the cockroach, and two others carried by two different DNA origami structures. By mixing the combinations, it’s possible to make the box open according to logic operations such as AND, OR, and NOT (where the box will not open if a certain protein is present), which of course means that computational operations can be carried out — all inside of a living organism.
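To make that logic concrete, here is a toy Python sketch of protein-keyed gating. The protein names and the set-based model are illustrative assumptions, not the researchers’ actual chemistry.

```python
# Toy model of the protein-keyed gating described above.

def and_box_opens(proteins_present):
    """AND gate: the box opens only if all three key proteins are present."""
    required = {"cockroach_protein", "carrier_A_protein", "carrier_B_protein"}
    return required <= proteins_present   # subset test

def not_box_opens(proteins_present):
    """NOT gate: the box opens only if the inhibiting protein is absent."""
    return "inhibitor_protein" not in proteins_present

# The AND box stays shut until every key protein is encountered...
assert not and_box_opens({"cockroach_protein"})
assert and_box_opens({"cockroach_protein", "carrier_A_protein",
                      "carrier_B_protein"})

# ...and the NOT box locks in the presence of its inhibitor.
assert not_box_opens(set())
assert not not_box_opens({"inhibitor_protein"})
```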

In their study, the researchers filled the origami box with a chemical that binds to hemolymph molecules, which circulate in a cockroach’s version of a bloodstream. All of the injected nanobots carried a fluorescent marker so that the researchers could follow their progress inside the insect. They report that their experiments worked as envisioned: on multiple occasions and under a variety of scenarios, they were able to make the box open, or not, depending on the programming of the entire fleet of nanobots sent into the insect. Clearly impressed with their own results, the team suggests that similar nanobot computers could be constructed and ready for trials in humans in as little as five years.

by Bob Yirka
http://phys.org/news/2014-04-dna-strands-nanobot-animal.html

This Could Be the First Animal to Live Entirely Inside a Computer

Apr 10, 2014


Animals are exceptionally complicated things. So complicated, in fact, that we’ve never actually built one ourselves. But the day is fast approaching when we’ll be able to create digital versions of organisms on a computer — from the way they move right through to their behaviors. Here’s how we’ll do it.

I spoke to neuroscientist Stephen Larson, a co-founder and project coordinator for the OpenWorm project. His team is busy at work trying to create a digital version of an actual nematode worm in a computer.

But before we get to our conversation, let’s do a quick review.

The Path To Virtual Organisms

To be fair, scientists have already created a computational model of an actual organism, namely the exceptionally small bacterium Mycoplasma genitalium. It’s an amazing accomplishment, but the pathogen — with its 525 genes — is one of the world’s simplest organisms. Contrast that with E. coli, which has 4,288 genes, and humans, who have anywhere from 35,000 to 57,000 genes.

Scientists have also created synthetic DNA that can self-replicate, as well as an artificial chromosome from scratch. Breakthroughs like these suggest it won’t be much longer before we start creating synthetic animals for the real world. Such endeavors could result in designer organisms that help manufacture vaccines, medicines, and sustainable fuels, or assist with toxic clean-ups.

There’s a very good chance that many of these organisms (and many drugs) will be designed and tested in computers first. Eventually, our machines will be powerful enough, and our understanding of biology deep enough, to allow us to start simulating some of the most complex biological functions — from entire microbes right through to the human mind itself (what will be known as whole brain emulations).

Needless to say we’re not going to get there in one day. We’ll have to start small and work our way up. Which is why Larson and his team have started to work on their simulated nematode worm.

Analog and Digital Worlds Converge

To kick off our conversation, I asked Larson to clarify what he means by “simulation.” How is it, exactly, that biological attributes can be translated to the digital realm?

“At the end of the day, biology must obey the laws of physics,” he responded. “Our project is to simulate as much of the important physics — or biophysics — of the C. elegans as we can, and then compare against measurements from real worms. When we say simulation, we are specifically referring to writing computer programs that use equations from physics that are applied to what we know about the worm.”

This, he says, is what’s allowing them to predict what its cells are doing and how they add up to the overall physiology and behavior of a worm.

But why C. elegans?

“This tiny worm is by far the most understood and studied animal with a brain in all of biology,” he says. “All of the ~1,000 cells of this organism have been mapped, including a tiny brain composed of 302 neurons and a network of give-or-take 5,500 connections.”
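To picture what “mapped” means in software terms, here is a minimal Python sketch of a connectome as a weighted directed graph. The three sample connections are hypothetical stand-ins, not rows from the published wiring diagram, which is available through resources such as OpenWorm and WormAtlas.

```python
from collections import defaultdict

# Minimal sketch of a connectome as a weighted directed graph. The real
# C. elegans wiring diagram has 302 neurons and roughly 5,500 connections.

connectome = defaultdict(dict)

def add_connection(pre, post, synapses):
    """Record `synapses` chemical synapses from neuron `pre` to `post`."""
    connectome[pre][post] = synapses

add_connection("AVAL", "VA08", 5)   # hypothetical sample entry
add_connection("AVAL", "VA10", 3)   # hypothetical sample entry
add_connection("PLML", "PVCL", 2)   # hypothetical sample entry

# Total outgoing synapses from one neuron:
print(sum(connectome["AVAL"].values()))   # -> 8
```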

Additionally, Larson says that three different Nobel prizes have been awarded for work on this worm, and it is increasingly being used as a model to gain an enhanced understanding of disease and health relevant to all organisms, including humans.

“When making a complex computer model, it is important to start where the data are the most complete,” he says.

Simulation Versus Emulation

We also talked about the various attributes of the worm they’re trying to digitize. Given that they’re also trying to simulate its brain, I wondered if their project is aimed more at emulation than simulation.

“We are currently addressing the challenge of closing the ‘brain-behavior loop’ in C. elegans,” he says. “In other words, through this simulation we want to understand how its proto-brain controls its muscles to move its body around an environment, and then how the environment is interpreted by the proto-brain. That means leaving aside reproduction, digestion, and other internal functions for now, until that first part is complete. Once we get there, we will move on to these other aspects.”

As for the emulation versus simulation distinction, Larson says that, when it comes to brains, he’s seen the two terms used interchangeably: “I’m not sure there is a meaningful difference.”

On this point I actually disagree. A simulation seeks to recreate an approximation or the appearance of something, whereas an emulation seeks to recreate de facto functionality. So, if the OpenWorm project is successful, and the brain of a nematode worm is perfectly recreated in the digital realm, we’d be talking about an emulation and not a simulation. This is an important distinction from an ethical perspective, because an emulation carries the potential for harm, and consequently for moral consideration.

An Incomplete Map

Larson, who has a bachelor of science and a master of engineering in computer science from MIT, along with a Ph.D. in neuroscience from the University of California, San Diego, also told me about some of the challenges they’re facing.

“Despite being the best understood animal, there are still aspects of this worm on the frontier of our understanding of biology as a whole that biologists in this field do not have complete data for, and this obviously limits us,” he told io9.

For example, he described to me how neuroscientists have made progress by poking a sharp glass electrode into a neuron from a mouse or rat to analyze neuronal electrical behavior.

“However, this is much more difficult to do in worms, so it hasn’t been done as much, and as a consequence there is not as much data present,” he says. “Recently, however, scientists have been using breakthroughs in optical imaging of neuronal behavior and laser control of neurons to catch up on the last 50 years of understanding neurons in rodents.”

Larson says there’s an explosion of data on its way, and they’re doing their best to collect as much insight from this work as possible so that they can build these neural behaviors into their model.

“We can also use some clever tricks from computer science to help us fill in some of the gaps,” he adds. “The good news is that this will only get easier as the tools and techniques get better over time.”

Speaking of tools, the OpenWorm team is utilizing modern programming languages like Java, Python, and C++, along with related technologies. They’re also using a lot of cutting-edge open-source libraries across all of these languages. And for organizing themselves online, they’ve been using GitHub, Google Drive, and Google+ Hangouts.

Progress to Date

The first major goal of Open Worm is to connect a simulation that deals with the body, muscles, and environment of a worm to a simulation that deals with the neurons and neuronal activity of the worm.

“We’ve spent the last three years making these two elements as accurate as possible,” he told me. “Late last year we were pleased that we got the part dealing with the body, muscles, and environment to do a simple ‘wiggle.’ Even this extremely simple behavior was exciting, because it showed proof of concept: we could create a sophisticated simulated C. elegans body that we could eventually do sophisticated brain-behavior in silico experiments with.”
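OpenWorm’s actual body model is a far richer physics simulation than anything that fits here, so the Python sketch below is only a back-of-the-envelope illustration of the idea behind a “wiggle”: a traveling sine wave of lateral offsets along a segmented body, with every parameter value assumed for illustration.

```python
import math

# Back-of-the-envelope "wiggle": a chain of body segments whose lateral
# offsets follow a traveling sine wave, a classic simplification of
# undulatory locomotion. All parameters below are assumptions.

SEGMENTS = 24        # body discretized head-to-tail
WAVELENGTHS = 1.5    # body wavelengths along the worm
FREQ_HZ = 0.5        # undulation frequency
AMPLITUDE = 0.1      # lateral amplitude, in body lengths

def lateral_offsets(t):
    """Lateral displacement of each segment at time t (seconds)."""
    return [
        AMPLITUDE * math.sin(2 * math.pi
                             * (WAVELENGTHS * s / SEGMENTS - FREQ_HZ * t))
        for s in range(SEGMENTS)
    ]

# Trace the head segment's offset over four seconds.
for step in range(9):
    t = step * 0.5
    print(f"t={t:3.1f}s  head offset {lateral_offsets(t)[0]:+.3f}")
```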

Looking ahead, the team is working on a few different areas, based on the interests of their contributors.

“We are refining a published data set of real worm behaviors into a form where we can automatically compare our model to real data,” he says. “We are connecting the body model to the nervous system model.”

They’re also working to make all of it more accessible via the web.

Open and Crowdfunded

One of the more exciting aspects of this project is the open source nature of it all. Larson says that every line of code produced by the project is shared on GitHub as it is written, meaning that anyone in the world can watch as they assemble the simulation.

“Our roadmap is open too, so anyone can see where we are going and participate,” Larson told io9. “We also hold scientific discussions online that you can see on our YouTube channel. Essentially we try to invert the weakness of not being able to meet in person very often into a strength of transparency in our communications over the internet.”

The OpenWorm team is also launching a Kickstarter campaign on April 19.

“We’re raising money to enable us to put the simulation — which we’re calling a WormSim for simplicity — up online and accessible through your web browser,” he says. “This will make the experience of seeing the results of OpenWorm much more tangible for folks, because they’ll be able to explore the activity of the model in a 3D virtual world for themselves. Today we have more static representations of the worm already online, and these are already being used by scientists around the world.”

A Template For the Future

Encouragingly, the Open Worm approach to simulating an organism could easily translate to similar projects. Indeed, they’ve already received inquiries from groups who are doing related projects on fruit flies and ants to explore collaborations.

“We hope that what we do here in C. elegans will create a template that can be used for other organisms, but for the moment we’re sticking with showing that this way works first.”

http://io9.com/this-could-be-the-first-animal-to-live-entirely-inside-1561810195