IBM is one step closer to mimicking the human brain

September 24, 2016

Scientists at IBM have claimed a computational breakthrough after imitating large populations of neurons for the first time.

Neurons are electrically excitable cells that process and transmit information in our brains through electrical and chemical signals. These signals are passed over synapses, specialised connections with other cells.

It’s this set-up that inspired scientists at IBM to try to mirror the way the biological brain functions, using phase-change materials for memory applications.

Using computers to mimic the human brain has been theorised for decades, but recreating the brain’s density and power efficiency has remained a challenge. Now, for the first time, scientists have created their own “randomly spiking” artificial neurons that can store and process data.

“The breakthrough marks a significant step forward in the development of energy-efficient, ultra-dense integrated neuromorphic technologies for applications in cognitive computing,” the scientists said.

The artificial neurons consist of phase-change materials, including germanium antimony telluride, which exhibit two stable states: an amorphous one (without a clearly defined structure) and a crystalline one (with structure). These materials are also the basis of rewritable Blu-ray Discs, but in this system the artificial neurons do not store digital information; they are analogue, just like the synapses and neurons in a biological brain.

The beauty of these powerful phase-change-based artificial neurons, which can perform various computational primitives such as data-correlation detection and unsupervised learning at high speeds, is that they use very little energy – just like the human brain.

In a demonstration published in the journal Nature Nanotechnology, the team applied a series of electrical pulses to the artificial neurons, which resulted in the progressive crystallisation of the phase-change material, ultimately causing the neuron to fire.

In neuroscience, this function is known as the integrate-and-fire property of biological neurons. This is the foundation for event-based computation and, in principle, is quite similar to how a biological brain triggers a response when an animal touches something hot, for instance.
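To make the integrate-and-fire idea concrete, here is a toy Python sketch of the behaviour described above, with the cell’s crystalline fraction standing in for a membrane potential. The class name, parameters and noise model are illustrative assumptions, not IBM’s implementation.

```python
import random

class PhaseChangeNeuron:
    """Toy integrate-and-fire neuron: each electrical pulse partially
    crystallises the phase-change cell; crossing a threshold makes the
    neuron 'fire', and a reset pulse re-amorphises the material."""

    def __init__(self, threshold=1.0, noise=0.05):
        self.crystal_fraction = 0.0   # 0 = fully amorphous, 1 = crystalline
        self.threshold = threshold
        self.noise = noise            # stands in for the cells' stochasticity

    def apply_pulse(self, strength):
        # Each pulse advances crystallisation a little, with random
        # variation mimicking the "randomly spiking" behaviour reported.
        self.crystal_fraction += strength + random.gauss(0, self.noise)
        if self.crystal_fraction >= self.threshold:
            self.crystal_fraction = 0.0   # melt-quench reset to amorphous
            return True                   # the neuron fires
        return False

neuron = PhaseChangeNeuron()
spikes = [t for t in range(100) if neuron.apply_pulse(0.12)]
print("fired at steps:", spikes)
```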

Tomas Tuma, co-author of the paper, said the breakthrough could help create a new generation of extremely dense neuromorphic computing systems

As part of the study, the researchers organised hundreds of artificial neurons into populations and used them to represent fast and complex signals. When tested, the artificial neurons were able to sustain billions of switching cycles, which would correspond to multiple years of operation at an update frequency of 100Hz.

The energy required for each neuron update was less than five picojoules and the average power less than 120 microwatts — for comparison, a 60-watt light bulb draws 60 million microwatts, IBM’s research paper said.
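Those figures are easy to sanity-check. A back-of-the-envelope calculation (assuming ten billion cycles as a stand-in for “billions”) confirms that billions of switching cycles at 100Hz do indeed amount to multiple years:

```python
# Rough check of the endurance claim: how long do 10 billion switching
# cycles last at a 100 Hz update rate? (1e10 is an assumed stand-in for
# "billions of switching cycles".)
cycles = 10e9
update_hz = 100
seconds_per_year = 365 * 24 * 3600
print(f"{cycles / update_hz / seconds_per_year:.1f} years")  # ~3.2 years
```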

When exploiting this integrate-and-fire property, even a single neuron can be used to detect patterns and discover correlations in real-time streams of event-based data. “This will significantly reduce the area and power consumption as it will be using tiny nanoscale devices that act as neurons,” IBM scientist and author, Dr. Abu Sebastian told WIRED.

This, IBM believes, could be helpful in the further development of internet of things technologies, especially when developing tiny sensors.

“Populations of stochastic phase-change neurons, combined with other nanoscale computational elements such as artificial synapses, could be a key enabler for the creation of a new generation of extremely dense neuromorphic computing systems,” said Tomas Tuma, co-author of the paper.

This could be useful in sensors that collect and analyse large volumes of weather data at the edge, in remote locations, for faster and more accurate weather forecasts, Sebastian said.

The artificial neurons could also detect patterns in financial transactions to find discrepancies, or use data from social media to discover new cultural trends in real time. Large populations of these high-speed, low-energy nanoscale neurons could also be used in neuromorphic co-processors with co-located memory and processing units.

http://www.wired.co.uk/article/scientists-mimicking-human-brain-computation

IBM scientists achieve storage memory breakthrough

June 04, 2016

For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell using a relatively new memory technology known as phase-change memory (PCM).

The current landscape spans from venerable DRAM to hard disk drives to ubiquitous flash. But in the last several years PCM has attracted the industry’s attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility and density. For example, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles.

This research breakthrough provides fast and easy storage to capture the exponential growth of data from mobile devices and the Internet of Things.

Applications

IBM scientists envision standalone PCM as well as hybrid applications, which combine PCM and flash storage together, with PCM as an extremely fast cache. For example, a mobile phone’s operating system could be stored in PCM, enabling the phone to launch in a few seconds. In the enterprise space, entire databases could be stored in PCM for blazing fast query processing for time-critical online applications, such as financial transactions.
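The cache arrangement IBM describes is the familiar memory-hierarchy pattern, with new technology at the fast tier. Here is a minimal sketch of how PCM could front slower flash; the latencies are made-up placeholders and the eviction policy is deliberately simplistic:

```python
# Toy model of the hybrid idea above: a small, fast PCM cache in front
# of larger, slower flash. Latencies are illustrative, not measured.
PCM_LATENCY_US, FLASH_LATENCY_US = 1, 100

class HybridStore:
    def __init__(self, cache_size=4):
        self.flash = {}               # large backing store
        self.pcm = {}                 # fast cache (FIFO eviction, for brevity)
        self.cache_size = cache_size

    def write(self, key, value):
        self.flash[key] = value

    def read(self, key):
        if key in self.pcm:
            return self.pcm[key], PCM_LATENCY_US       # cache hit
        value = self.flash[key]                        # miss: fetch from flash
        if len(self.pcm) >= self.cache_size:
            self.pcm.pop(next(iter(self.pcm)))         # evict the oldest entry
        self.pcm[key] = value
        return value, FLASH_LATENCY_US

store = HybridStore()
store.write("row42", "data")
print(store.read("row42"))   # ('data', 100): first read goes to flash
print(store.read("row42"))   # ('data', 1): now served from the PCM cache
```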

Machine learning algorithms that use large datasets will also see a speed boost from the reduced latency overhead when reading data between iterations.

How PCM Works

PCM materials exhibit two stable states, the amorphous (without a clearly defined structure) and crystalline (with structure) phases, of low and high electrical conductivity, respectively.

To store a bit, a ‘0’ or a ‘1’, on a PCM cell, a high or medium electrical current is applied to the material. A ‘0’ can be programmed to be written in the amorphous phase or a ‘1’ in the crystalline phase, or vice versa. Then, to read the bit back, a low voltage is applied. This is how rewritable Blu-ray Discs store videos.
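Storing 3 bits per cell extends this idea from two conductance states to eight, read back by comparing the measured conductance against a ladder of thresholds. A simplified sketch, assuming normalised, noise-free levels (real cells drift, which is exactly the problem the drift-tolerant schemes below address):

```python
# Illustrative multi-level PCM readout: 3 bits per cell means eight
# distinguishable conductance levels; reading quantises the measured
# value back to the nearest level. All values are made up.
BITS_PER_CELL = 3
LEVELS = 2 ** BITS_PER_CELL            # 8 programmable levels

def program(bits):
    """Map a 3-bit value to a normalised target conductance (0.0-1.0)."""
    return bits / (LEVELS - 1)

def read(conductance):
    """Quantise a measured conductance back to the nearest stored value."""
    return min(range(LEVELS), key=lambda lvl: abs(conductance - lvl / (LEVELS - 1)))

stored = program(0b101)                # write the value 5
print(read(stored + 0.02))             # a little read noise still decodes to 5
```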

Previously, scientists at IBM and other institutes successfully demonstrated the ability to store 1 bit per cell in PCM, but today at the IEEE International Memory Workshop in Paris, IBM scientists are presenting, for the first time, the successful storage of 3 bits per cell in a 64k-cell array at elevated temperatures and after 1 million endurance cycles.

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” said Dr. Haris Pozidis, an author of the paper and the manager of non-volatile memory research at IBM Research – Zurich. “Reaching three bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

To achieve multi-bit storage IBM scientists have developed two innovative enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes.

More specifically, the new cell-state metrics measure a physical property of the PCM cell that remains stable over time, making them insensitive to drift, which degrades the stability of the cell’s electrical conductivity. To provide additional robustness of the stored data against ambient temperature fluctuations, a novel coding and detection scheme is employed. This scheme adaptively modifies the level thresholds that are used to detect the cell’s stored data so that they follow variations due to temperature change. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.
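The sketch below illustrates the principle of adapting detection thresholds, using conductance drift as the disturbance (the paper’s scheme tracks temperature-induced shifts in the same spirit). The power-law drift model and all constants are assumptions for illustration:

```python
# Fixed thresholds eventually misread a drifting cell; rescaling them by
# a factor measured from reference cells recovers the stored level.
def drifted(conductance, t, nu=0.05):
    return conductance * (t + 1) ** -nu      # power-law drift (assumed model)

levels = [0.2, 0.5, 0.8]                     # programmed conductance levels
thresholds = [0.35, 0.65]                    # midpoints between the levels

def detect(value, thresholds):
    return sum(value > th for th in thresholds)   # index of the detected level

t = 1e6                                      # long after programming
scale = drifted(levels[1], t) / levels[1]    # drift factor from a reference cell
adapted = [th * scale for th in thresholds]

for i, g in enumerate(levels):
    print(i, detect(drifted(g, t), thresholds), detect(drifted(g, t), adapted))
    # fixed thresholds misread levels 1 and 2; adapted ones read all three
```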

“Combined, these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity and endurance cycling,” said Dr. Evangelos Eleftheriou, IBM Fellow.

The experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board. The chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on a doped chalcogenide alloy and were integrated into the prototype chip, which serves as a characterization vehicle, in 90 nm CMOS baseline technology.

More information: A. Athmanathan et al., “Multilevel-Cell Phase-Change Memory: A Viable Technology,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2016). DOI: 10.1109/JETCAS.2016.2528598

M. Stanisavljevic, H. Pozidis, A. Athmanathan, N. Papandreou, T. Mittelholzer, and E. Eleftheriou, “Demonstration of Reliable Triple-Level-Cell (TLC) Phase-Change Memory,” in Proc. International Memory Workshop, Paris, France, May 16–18, 2016.

Read more at: http://phys.org/news/2016-05-ibm-scientists-storage-memory-breakthrough.html

IBM’s resistive computing could massively accelerate AI — and get us closer to Asimov’s Positronic Brain

April 23, 2016

With the recent rapid advances in machine learning has come a renaissance for neural networks — computer software that solves problems a little bit like a human brain, by employing a complex process of pattern-matching distributed across many virtual nodes, or “neurons.” Modern compute power has enabled neural networks to recognize images, speech, and faces, as well as to pilot self-driving cars, and win at Go and Jeopardy. Most computer scientists think that is only the beginning of what will ultimately be possible. Unfortunately, the hardware we use to train and run neural networks looks almost nothing like their architecture. That means it can take days or even weeks to train a neural network to solve a problem — even on a compute cluster — and trained networks then require a large amount of power to run.

Neuromorphic computing may be key to advancing AI

Researchers at IBM aim to change all that by perfecting another technology that, like neural networks, first appeared decades ago. Loosely called resistive computing, the concept is to have compute units that are analog in nature, small in size, and can retain their history so they can learn during the training process. Accelerating neural networks with hardware isn’t new to IBM. It recently announced the sale of some of its TrueNorth chips to Lawrence Livermore National Laboratory for AI research. TrueNorth’s design is neuromorphic, meaning that the chips roughly approximate the brain’s architecture of neurons and synapses. Despite its slow clock rate of 1 kHz, TrueNorth can run neural networks very efficiently because of its million tiny processing units that each emulate a neuron.

Until now, though, neural network accelerators like TrueNorth have been limited to the problem-solving portion of deploying a neural network. Training — the painstaking process of letting the system grade itself on a test data set, and then tweaking parameters (called weights) until it achieves success — still needs to be done on traditional computers. Moving from CPUs to GPUs and custom silicon has increased performance and reduced the power consumption required, but the process is still expensive and time-consuming. That is where new work by IBM researchers Tayfun Gokmen and Yuri Vlasov comes in. They propose a new chip architecture, using resistive computing to create tiles of millions of Resistive Processing Units (RPUs), which can be used for both training and running neural networks.
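Stripped of all detail, the training loop the researchers want to move onto RPUs looks like this: score the network against data, nudge the weights downhill, repeat. A minimal single-weight sketch of that process (ordinary gradient descent, not IBM’s algorithm):

```python
# The grade-and-tweak cycle described above, reduced to one weight.
def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # inputs paired with targets
w, lr = 0.0, 0.01                             # initial weight, learning rate

for step in range(500):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                            # the "tweaking parameters" step

print(f"learned weight: {w:.2f}, loss: {loss(w, data):.4f}")  # ~2.04
```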

Using Resistive Computing to break the neural network training bottleneck

Deep neural networks have at least one hidden layer, and often hundreds. That makes them expensive to emulate on traditional hardware. Resistive computing is a large topic, but roughly speaking, in the IBM design each small processing unit (RPU) mimics a synapse in the brain. It receives a variety of analog inputs — in the form of voltages — and, based on its past “experience,” uses a weighted function of them to decide what result to pass along to the next set of compute elements. Synapses have a bewildering, and not yet fully understood, layout in the brain, but chips with resistive elements tend to have them neatly organized in two-dimensional arrays. For example, IBM’s recent work shows how it is possible to organize them in 4,096-by-4,096 arrays.

Because resistive compute units are specialized (compared with a CPU or GPU core), and don’t need to convert analog information to digital or access memory other than their own, they can be fast and consume little power. So, in theory, a complex neural network — like the ones used to recognize road signs in a self-driving car, for example — can be directly modeled by dedicating a resistive compute element to each of the software-described nodes. However, because RPUs are imprecise — due to their analog nature and a certain amount of noise in their circuitry — any algorithm run on them needs to be made resistant to the imprecision inherent in resistive computing elements.
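In software terms, what each row of an RPU tile computes is a weighted sum of its analog inputs, with noise riding along. A toy sketch (the sizes and noise level are assumptions):

```python
import random

# Sketch of the analog multiply-accumulate an RPU tile performs: every
# output line forms a weighted sum of the input voltages, and Gaussian
# noise stands in for the circuit's analog imprecision.
def rpu_tile(inputs, weights, noise=0.02):
    outputs = []
    for row in weights:                        # one row of RPUs per output
        acc = sum(w * v for w, v in zip(row, inputs))
        outputs.append(acc + random.gauss(0, noise))
    return outputs

inputs = [0.5, -0.2, 0.8]                      # input "voltages"
weights = [[0.1, 0.4, -0.3], [0.7, 0.0, 0.2]]  # stored conductances
print(rpu_tile(inputs, weights))               # slightly noisy weighted sums
```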

Traditional neural network algorithms — both for execution and training — have been written assuming high-precision digital processing units that could easily call on any needed memory values. Rewriting them so that each local node can execute largely on its own, and be imprecise, but produce a result that is still sufficiently accurate, required a lot of software innovation.

For these new software algorithms to work at scale, advances were also needed in hardware. Existing technologies weren’t adequate to create “synapses” that could be packed together closely enough, and operate with low power in a noisy environment, to make resistive processing a practical alternative to existing approaches. Runtime execution happened first, with the logic for training a neural net on a hybrid resistive computer not developed until 2014. At the time, researchers at the University of Pittsburgh and Tsinghua University claimed that such a solution could result in a 3-to-4-order-of-magnitude gain in power efficiency at the cost of only about 5% in accuracy.

IBM researchers claim an RPU-based design will be massively more efficient for neural network applications

Moving from execution to training

This new work from IBM pushes the use of resistive computing even further, postulating a system where almost all computation is done on RPUs, with traditional circuitry only needed for support functions and input and output. This innovation relies on combining a version of a neural network training algorithm that can run on an RPU-based architecture with a hardware specification for an RPU that could run it.

As for putting the ideas into practice, resistive computing has so far been mostly a theoretical construct. The first resistive memory (RRAM) became available for prototyping in 2012, and isn’t expected to be a mainstream product for several more years. And those chips, while they will help scale memory systems and show the viability of using resistive technology in computing, don’t address the issue of synapse-like processing.

If RPUs can be built, the sky is the limit

The proposed RPU design is expected to accommodate a variety of deep neural network (DNN) architectures, including fully connected and convolutional networks, which makes it potentially useful across nearly the entire spectrum of neural network applications. Using existing CMOS technology, and assuming RPUs in 4,096-by-4,096-element tiles with an 80-nanosecond cycle time, one of these tiles would be able to execute about 51 GigaOps per second, using a minuscule amount of power. A chip with 100 tiles and a single complementary CPU core could handle a network with up to 16 billion weights while consuming only 22 watts (only two of which are actually from the RPUs — the rest is from the CPU core needed to help get data in and out of the chip and provide overall control).

That is a staggering number compared to what is possible when chugging data through the far smaller number of cores in even a GPU (think about 16 million compute elements, compared with a few thousand). Using chips densely packed with these RPU tiles, the researchers claim that, once built, a resistive-computing-based AI system could achieve performance improvements of up to 30,000 times compared with current architectures, all with a power efficiency of 84,000 GigaOps per second per watt. If this becomes a reality, we could be on our way to realizing Isaac Asimov’s fantasy vision of the robotic positronic brain.

Chip with Brain-inspired Non-Von Neumann Architecture has 1M Neurons, 256M Synapses

August 28, 2014

San Jose, CA: Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW — orders of magnitude less power than a modern microprocessor. In effect a neurosynaptic supercomputer the size of a postage stamp that runs on the energy equivalent of a hearing-aid battery, this technology could transform science, technology, business, government and society by enabling vision, audition and multi-sensory applications.

The breakthrough, published in Science in collaboration with Cornell Tech, is a significant step toward bringing cognitive computers to society.

There is a huge disparity between the human brain’s cognitive capability and ultra-low power consumption when compared to today’s computers. To bridge the divide, IBM scientists created something that didn’t previously exist — an entirely new neuroscience-inspired, scalable and efficient computer architecture that breaks with the prevailing von Neumann architecture used almost universally since 1946.

This second-generation chip is the culmination of almost a decade of research and development, including the initial single core hardware prototype in 2011 and software ecosystem with a new programming language and chip simulator in 2013.

The new cognitive chip architecture has an on-chip two-dimensional mesh network of 4096 digital, distributed neurosynaptic cores, where each core module integrates memory, computation and communication, and operates in an event-driven, parallel and fault-tolerant fashion. To enable system scaling beyond single-chip boundaries, adjacent chips, when tiled, can seamlessly connect to each other — building a foundation for future neurosynaptic supercomputers. To demonstrate scalability, IBM also revealed a 16-chip system with sixteen million programmable neurons and four billion programmable synapses.
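The headline numbers follow directly from that core layout. A quick sanity check, assuming the per-core figures from IBM’s TrueNorth publications (256 neurons per core, each with 256 synaptic inputs; those per-core figures are not stated above):

```python
# Sanity check of the chip's headline figures from its core layout.
cores = 4096
neurons_per_core = 256                 # per IBM's TrueNorth papers
neurons = cores * neurons_per_core
synapses = neurons * neurons_per_core  # each neuron has 256 synaptic inputs
print(f"{neurons:,} neurons, {synapses:,} synapses")
# 1,048,576 neurons, 268,435,456 synapses: the "1M / 256M" in the headline
```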

“IBM has broken new ground in the field of brain-inspired computers, in terms of a radically new architecture, unprecedented scale, unparalleled power/area/speed efficiency, boundless scalability, and innovative design techniques. We foresee new generations of information technology systems — that complement today’s von Neumann machines — powered by an evolving ecosystem of systems, software and services,” said Dr. Dharmendra S. Modha, IBM Fellow and IBM Chief Scientist, Brain-Inspired Computing, IBM Research. “These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi. This achievement underscores IBM’s leadership role at pivotal transformational moments in the history of computing via long-term investment in organic innovation.”

The Defense Advanced Research Projects Agency (DARPA) has funded the project since 2008 with approximately $53M via Phase 0, Phase 1, Phase 2, and Phase 3 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program. Current collaborators include Cornell Tech and iniLabs.

Building the Chip

The chip was fabricated using Samsung’s 28nm process technology, which features dense on-chip memory and low-leakage transistors.

“It is an astonishing achievement to leverage a process traditionally used for commercially available, low-power mobile devices to deliver a chip that emulates the human brain by processing extreme amounts of sensory information with very little power,” said Shawn Han, vice president of Foundry Marketing, Samsung Electronics. “This is a huge architectural breakthrough that is essential as the industry moves toward the next-generation cloud and big-data processing. It’s a pleasure to be part of technical progress for next-generation through Samsung’s 28nm technology.”

The event-driven circuit elements of the chip used the asynchronous design methodology developed at Cornell Tech and refined with IBM since 2008.

“After years of collaboration with IBM, we are now a step closer to building a computer similar to our brain,” said Professor Rajit Manohar, Cornell Tech.

The combination of cutting-edge process technology, hybrid asynchronous-synchronous design methodology, and new architecture has led to a power density of 20 mW/cm², nearly four orders of magnitude lower than that of today’s microprocessors.

Advancing the SyNAPSE Ecosystem

The new chip is a component of a complete end-to-end vertically integrated ecosystem spanning a chip simulator, neuroscience data, supercomputing, neuron specification, programming paradigm, algorithms and applications, and prototype design models. The ecosystem supports all aspects of the programming cycle from design through development, debugging, and deployment.

To bring forth this fundamentally different technological capability to society, IBM has designed a novel teaching curriculum for universities, customers, partners and IBM employees.

Applications and Vision

This ecosystem signals a shift in moving computation closer to the data, taking in vastly varied kinds of sensory data, analyzing and integrating real-time information in a context-dependent way, and dealing with the ambiguity found in complex, real-world environments.

Looking to the future, IBM is working on integrating multi-sensory neurosynaptic processing into mobile devices constrained by power, volume and speed; integrating novel event-driven sensors with the chip; accelerating real-time multimedia cloud services with neurosynaptic systems; and building neurosynaptic supercomputers by tiling multiple chips on a board, creating systems that would eventually scale to one hundred trillion synapses and beyond.

Building on previously demonstrated neurosynaptic cores with on-chip, online learning, IBM envisions building learning systems that adapt in real world settings. While today’s hardware is fabricated using a modern CMOS process, the underlying architecture is poised to exploit advances in future memory, 3-D integration, logic, and sensor technologies to deliver even lower power, denser package, and faster speed.

http://www.scientificcomputing.com/news/2014/08/chip-brain-inspired-non-von-neumann-architecture-has-1m-neurons-256m-synapses

Apple and IBM team up to conquer the enterprise market, and crush Microsoft, Blackberry, and Android

July 19, 2014

Every now and then, the universe likes to throw a curve ball, just to see if you’re actually paying attention. Today’s surprise announcement of a wide-ranging Apple-IBM alliance is likely to put a frown on the faces of many tech execs. As of today, IBM and Apple have signed a broad agreement to put iPads and iPhones in the hands of many of IBM’s clients and customers, while Apple has pledged support for the enterprise firm’s software tools and products.

IBM is promising a whole suite of business intelligence applications, cloud services, security and analytics, and device management tools, all to be written for iOS from the ground up and with enterprise customers firmly in mind. Apple, in turn, will offer AppleCare to enterprise customers, including what looks like an enterprise-style agreement to provide on-site repair and replacement services.

This deal, assuming both sides deliver on their respective software solutions, is potentially huge. It expands IBM’s business into touchscreens and tablets, it gives businesses a guaranteed and respected solution for software and hardware, and it gives Apple enormous amounts of enterprise street cred.

Fighting the bottom-up BYOD trend

For years, pundits have predicted that the Bring Your Own Device (BYOD) trend would wreck Blackberry’s market domination (it did) and then allow Android to seize market share from iOS (the evidence is mixed). Certainly, manufacturers like Samsung have endeavored to beef up their own security ratings and status to steal market share in the lucrative business segment.

This announcement is a clear threat to the few markets where Blackberry still plays, to Microsoft’s corporate cash cows, and to the BYOD Android trend that Samsung and other vendors have been pushing. Unfortunately, identifying it as a threat is all we can do for now — we need to see more details before we can say more. If Apple and IBM cast their nets narrowly and mostly appeal to IBM’s existing high-end customers, then the impact might not be substantial.

If, on the other hand, the two companies use this as an excuse to try to reach new customer bases, Android, Blackberry, and Windows could all be in a world of hurt. It’s the latest in a series of moves IBM has made to expand its customer base outwards, from aligning itself with Nvidia on HPC computing initiatives to opening up the Power8 architecture.

Of all these moves, however, this IBM-Apple alliance seems the most likely to change the nature of the enterprise computing game — and to rock pretty much everyone back on their heels in the process.

http://www.extremetech.com/extreme/186372-apple-and-ibm-team-up-to-conquer-the-enterprise-market-and-crush-microsoft-blackberry-and-android

IBM discovers new class of ultra-tough, self-healing, recyclable plastics that could redefine almost every industry

May 18, 2014

Stop the press! IBM Research announced this morning that it has discovered a whole new class of… plastics. This might not sound quite as sexy as, say, MIT discovering a whole new state of matter — but wait until you hear what these new plastics can do. This new class of plastics — or more accurately, polymers — is stronger than bone, has the ability to self-heal, is lightweight, and is 100% recyclable. The number of potential uses, spanning industries as disparate as aerospace and semiconductors, is dizzying. A new class of polymers hasn’t been discovered in over 20 years — and, in a rather novel twist, they weren’t discovered by chemists: they were discovered by IBM’s supercomputers.

One of the key components of modern industry and consumerism is the humble thermosetting plastic. Thermosetting plastics — which are just big lumps of gooey polymer that are shaped and then cured (baked) — are light and easy to work with, but incredibly hard and heat resistant. The problem is, once a thermoset has been cured, there’s no turning back — you can’t return it to its gooey state. This means that if you (the engineer, the designer) make a mistake, you have to start again. It also means that thermoset plastics cannot be recycled. Once you’re done with that Galaxy S5, the thermoset chassis can’t be melted down and reused; it goes straight to the dump. IBM’s new polymer retains all of a thermosetting plastic’s useful properties — but it can also be recycled.

IBM’s new class of polymers began life, as discoveries often do in chemistry circles, as an accident. Jeannette Garcia had been working on another type of polymer when she noticed that the solution in her flask had unexpectedly hardened. “We couldn’t get it out,” Garcia told Popular Mechanics. “We had to smash the flask with a hammer, and, even then, we couldn’t smash the material itself. It’s one of these serendipitous discoveries.” She didn’t know how she’d created this new polymer, so she joined forces with IBM’s computational chemistry team to work backwards from the final polymer. Using IBM’s supercomputing might, the chemists and the techies were able to work back to the mechanism that caused the surprise reaction.

The new class of polymer is called polyhexahydrotriazine, or PHT [DOI: 10.1126/science.1251484 – “Recyclable, Strong Thermosets and Organogels via Paraformaldehyde Condensation with Diamines”]. It’s formed from a reaction between paraformaldehyde and 4,4ʹ-oxydianiline (ODA), both of which are already commonly used in polymer production (this is very important if the new polymer is to be adopted by industry). The end result shows very high strength and toughness, like other thermosets, but its heat resistance is a little lower (it decomposes at around 350°C, rather than 425°C).

Rather uniquely, though, IBM’s new polymer is both recyclable and self-healing. In IBM’s demonstration, chunks of the polymer readily rejoin to create a whole — and when stretched later, they break randomly, not along the joins, proving a very high level of self-healing. Unlike traditional thermosets, which produce tons of non-recyclable waste every year, IBM’s PHT can be fully reverted back to its base state with sulfuric acid — which, as Garcia points out, is “essentially free.”

In short, then, IBM has created a new plastic that could impact a number of industries in a very big way. The advantages of self-healing, tough plastics are highly evident in the aerospace, transportation, and architecture/construction industries. Thermosets also play a big part in the electronics industry, from the low-level packaging of computer chips through to the chassis of your smartphone. In all of these areas, recyclability and self-healing could be a huge boon. As Garcia says, “If IBM had this 15 years ago, it would have saved unbelievable amounts of money.” Not to worry, Jeannette — there’s still plenty of time for IBM to save (and make) billions of dollars with this new plastic.

http://www.extremetech.com/extreme/182583-ibm-discovers-new-class-of-ultra-tough-self-healing-recyclable-plastics-that-could-redefine-almost-every-industry

 

IBM invents ‘3D nanoprinter’ for microscopic objects

April 26, 2014

IBM scientists have invented a tiny “chisel” with a nano-sized heatable silicon tip that creates patterns and structures on a microscopic scale.

The tip, similar to the kind used in atomic force microscopes, is attached to a bendable cantilever that scans the surface of the substrate material with an accuracy of one nanometer.

Unlike a conventional 3D printer, which adds material, the nano-sized tip applies heat and force to remove material according to predefined patterns, operating like a “nanomilling” machine with ultra-high precision.
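A toy sketch of that subtractive process: a height map from which the heated tip evaporates material wherever a predefined pattern calls for it. The grid and depth are illustrative only:

```python
# "Nanomilling" in miniature: start from a flat surface and remove
# material at every point the pattern marks, one ~1 nm layer deep.
pattern = [
    "XXX..",
    "..X..",
    "..X..",
]
depth_nm = 1                                      # ~1 nm vertical accuracy

surface = [[0] * len(row) for row in pattern]     # height map, in nm
for y, row in enumerate(pattern):
    for x, cell in enumerate(row):
        if cell == "X":
            surface[y][x] -= depth_nm             # the tip removes material

for row in surface:
    print(row)   # carved relief: -1 wherever the pattern was etched
```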

By the end of 2014, IBM hopes to begin exploring the use of this technology for its research with graphene.

“To create more energy-efficient clouds and crunch Big Data faster we need a new generation of technologies including new transistors, but before going into mass production, new techniques are needed for prototyping below 30 nanometers,” said Dr. Armin Knoll, a physicist at IBM Research – Zurich.

“With our new technique, we achieve very high resolution at 10 nanometers at greatly reduced cost and complexity. In particular, by controlling the amount of material evaporated, 3D relief patterns can also be produced at the unprecedented accuracy of merely one nanometer in a vertical direction. Now it’s up to the imagination of scientists and engineers.”

Other applications include nano-sized security tags to prevent the forgery of documents like currency, passports and priceless works of art, as well as quantum computing and communications, where the nano-sized tip could be used to create high-quality patterns to control and manipulate light with unprecedented precision.

The NanoFrazor

IBM has licensed this technology to a startup based in Switzerland called SwissLitho, which is bringing the technology to market under the name NanoFrazor.

Several weeks ago the firm shipped its first NanoFrazor to McGill University’s Nanotools Microfab, where scientists and students will use the tool’s unique fabrication capabilities to experiment with ideas for designing novel nano-devices.

To promote the new technology, scientists etched a microscopic National Geographic Kids magazine cover onto a polymer in 10 minutes. The resulting magazine cover is so small, at 11 × 14 micrometers, that 2,000 of them could fit on a grain of salt.

Today (April 25), IBM claimed its ninth Guinness World Records title, for the Smallest Magazine Cover, at the USA Science & Engineering Festival in Washington, D.C. Visible through a Zeiss microscope, the cover will be on display there on April 26 and 27.