IBM News room – 2012-10-28 Made in IBM Labs: Researchers Demonstrate Initial Steps toward Commercial Fabrication of Carbon Nanotubes as a Successor to Silicon – United States

YORKTOWN HEIGHTS, N.Y. – 28 Oct 2012: IBM (NYSE: IBM) scientists have demonstrated a new approach to carbon nanotechnology that opens a path to commercial fabrication of dramatically smaller, faster and more powerful computer chips. For the first time, more than ten thousand working transistors made of nano-sized tubes of carbon have been precisely placed and tested on a single chip using standard semiconductor processes. These carbon devices are poised to replace and outperform silicon technology, allowing further miniaturization of computing components and paving the way for future microelectronics.

Aided by rapid innovation over four decades, silicon microprocessor technology has continually shrunk in size and improved in performance, thereby driving the information technology revolution. Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to. 

Carbon nanotubes represent a new class of semiconductor materials whose electrical properties are more attractive than silicon's, particularly for building nanoscale transistor devices that are a few tens of atoms across. Electrons in carbon transistors move more easily than in silicon-based devices, allowing for quicker transport of data. The nanotubes are also ideally shaped for transistors at the atomic scale, an advantage over silicon. These qualities are among the reasons to replace the traditional silicon transistor with carbon; coupled with new chip design architectures, carbon transistors will allow computing innovation on a miniature scale for the future.

The approach developed at IBM labs paves the way for circuit fabrication with large numbers of carbon nanotube transistors at predetermined substrate positions. The ability to isolate semiconducting nanotubes and place a high density of carbon devices on a wafer is crucial to assess their suitability for a technology – eventually more than one billion transistors will be needed for future integration into commercial chips. Until now, scientists have been able to place at most a few hundred carbon nanotube devices at a time, not nearly enough to address key issues for commercial applications. 

“Carbon nanotubes, borne out of chemistry, have largely been laboratory curiosities as far as microelectronic applications are concerned. We are attempting the first steps towards a technology by fabricating carbon nanotube transistors within a conventional wafer fabrication infrastructure,” said Supratik Guha, Director of Physical Sciences at IBM Research. “The motivation to work on carbon nanotube transistors is that at extremely small nanoscale dimensions, they outperform transistors made from any other material. However, there are challenges to address such as ultra high purity of the carbon nanotubes and deliberate placement at the nanoscale. We have been making significant strides in both.” 

Originally studied for the physics that arises from their atomic dimensions and shapes, carbon nanotubes are being explored by scientists worldwide in applications that span integrated circuits, energy storage and conversion, biomedical sensing and DNA sequencing. 

This achievement was published today in the peer-reviewed journal Nature Nanotechnology. 

The Road to Carbon 

Carbon, a readily available basic element from which crystals as hard as diamonds and as soft as the “lead” in a pencil are made, has wide-ranging IT applications. 

Carbon nanotubes are single atomic sheets of carbon rolled up into tubes. The carbon nanotube forms the core of a transistor device that works in a fashion similar to the current silicon transistor, but with better performance. They could be used to replace the transistors in the chips that power our data-crunching servers, high-performance computers and ultra-fast smartphones.

Earlier this year, IBM researchers demonstrated that carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers – 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that a five- to ten-fold improvement in performance over silicon circuits is possible.

There are practical challenges for carbon nanotubes to become a commercial technology, notably, as mentioned earlier, the purity and placement of the devices. Carbon nanotubes naturally come as a mix of metallic and semiconducting species, yet they need to be placed perfectly on the wafer surface to make electronic circuits. Only the semiconducting tubes are useful for device operation, which requires essentially complete removal of the metallic ones to prevent errors in circuits. Also, for large-scale integration to happen, it is critical to be able to control the alignment and location of carbon nanotube devices on a substrate.

To overcome these barriers, IBM researchers developed a novel method based on ion-exchange chemistry that allows precise and controlled placement of aligned carbon nanotubes on a substrate at high density – two orders of magnitude greater than previous experiments – with individual nanotubes placed at a density of about one billion per square centimeter.
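As a quick back-of-the-envelope check (my own arithmetic, not from the release), a placement density of about one billion devices per square centimeter implies an average device-to-device pitch on the order of 300 nanometers:

```python
import math

# Back-of-the-envelope only: interpreting "about a billion per square
# centimeter" as 1e9 devices/cm^2 (an assumption about the exact figure).
density_per_cm2 = 1e9

um2_per_cm2 = 1e8                                     # 1 cm^2 = 1e8 um^2
area_per_device_um2 = um2_per_cm2 / density_per_cm2   # 0.1 um^2 per device
pitch_nm = math.sqrt(area_per_device_um2) * 1000      # ~316 nm average pitch

print(f"area per device: {area_per_device_um2:.2f} um^2")
print(f"average pitch:   {pitch_nm:.0f} nm")
```

For comparison, a density two orders of magnitude lower, the earlier state of the art implied by the release, works out by the same arithmetic to a pitch of roughly 3 micrometers.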

The process starts with carbon nanotubes mixed with a surfactant, a kind of soap that makes them soluble in water. A substrate comprising two oxides is prepared, with trenches of chemically modified hafnium oxide (HfO2) and the remaining surface of silicon oxide (SiO2). The substrate is immersed in the carbon nanotube solution, and the nanotubes attach via a chemical bond to the HfO2 regions while the rest of the surface remains clean.

By combining chemistry, processing and engineering expertise, IBM researchers are able to fabricate more than ten thousand transistors on a single chip.

Furthermore, rapid testing of thousands of devices is possible using high-volume characterization tools, thanks to compatibility with standard commercial processes.

As this new placement technique can be readily implemented, involving common chemicals and existing semiconductor fabrication, it will allow the industry to work with carbon nanotubes at a greater scale and deliver further innovation for carbon electronics.                                                                                         

For more information, please visit

Contact(s) information

Christine Vu
IBM Media Relations
1 (914) 945-2755

Related resources

Site links

Flickr Set: IBM Carbon Nanotubes


IBM carbon nanotubes in solution

IBM researcher Hongsik Park observes different solutions of carbon nanotubes. Carbon nanotubes, borne out of chemistry, have largely been laboratory curiosities as far as microelectronic applications are concerned. Carbon nanotubes naturally come as a mix of metallic and semiconducting species and need to be placed perfectly on the wafer surface to make electronic circuits. For device operation, only the semiconducting kind of tubes is useful which requires essentially complete removal of the metallic ones to prevent errors in circuits. (Credit: IBM)

IBM SEM of carbon nanotube substrate

IBM SEM image of carbon nanotubes deposited on a trench coated in hafnium oxide (HfO2) showing extremely high density and excellent selectivity (scale bar: 2 μm). Credit: IBM



Student Engineers Design, Build, Fly ‘Printed’ Airplane | UVA Today

When University of Virginia engineering students posted a YouTube video last spring of a plastic turbofan engine they had designed and built using 3-D printing technology, they didn’t expect it to lead to anything except some page views.


But executives at The MITRE Corporation, a McLean-based federally funded research and development center with an office in Charlottesville, saw the video and sent an announcement to the School of Engineering and Applied Science that they were looking for two summer interns to work on a new project involving 3-D printing. They just didn’t say what the project was.

Only one student responded to the job announcement: Steven Easter, then a third-year mechanical engineering major.

“I was curious about what they had to offer, but I didn’t call them until the day of the application deadline,” Easter said.

He got a last-minute interview and brought with him his brother and lab partner, Jonathan Turman, also a third-year mechanical engineering major.

They got the job: to build over the summer an unmanned aerial vehicle, using 3-D printing technology. In other words, a plastic plane, to be designed, fabricated, built and test-flown between May and August. A real-world engineering challenge, and part of a Department of the Army project to study the feasibility of using such planes.

Three-dimensional printing is, as the name implies, the production or “printing” of actual objects, such as parts for a small airplane. A machine traces out layers of melted plastic in specific shapes, building up a piece exactly to the size and dimensions specified in a computer-aided drawing produced by an engineer.

In this case, the engineers were Easter and Turman, working with insight from their adviser, mechanical and aerospace engineering professor David Sheffler, a U.Va. Engineering School alumnus and 20-year veteran of the aerospace industry.

It was a daunting project – producing a plane with a 6.5-foot wingspan, made from assembled “printed” parts. The students sometimes put in 80-hour workweeks, with many long nights in the lab.

“It was sort of a seat-of-the-pants thing at first – wham, bang,” Easter said. “But we kept banging away and became more confident as we kept designing and printing out new parts.”

Sheffler said he had confidence in them “the entire way.”

The way eventually led to assembly of the plane and four test flights in August and early September at Milton Airfield near Keswick. It achieved a cruising speed of 45 mph and is only the third 3-D printed plane known to have been built and flown.

During the first test, the plane’s nosepiece was damaged while the plane taxied around the field.

“We dogged it,” Easter said. “But we printed a new nose.”

That ability to make and modify new parts is the beauty of 3-D printing, said Sheffler, who works with students in the Engineering School’s Rapid Prototyping Lab. The lab includes seven 3-D printers used as real-world teaching tools.

“Rapid prototyping means rapid in small quantities,” Sheffler said. “It’s fluid, in that it allows students to evolve their parts and make changes as they go – design a piece, print it, make needed modifications to the design, and print a new piece. They can do this until they have exactly what they want.”

The technology also allows students to take on complex design projects that previously were impractical.

“To make a plastic turbofan engine to scale five years ago would have taken two years, at a cost of about $250,000,” Sheffler said. “But with 3-D printing we designed and built it in four months for about $2,000. This opens up an arena of teaching that was not available before. It allows us to train engineers for the real challenges they will face in industry.”

MITRE Corp. representatives and Army officials observed the fourth flight of Easter and Turman’s plane. They were impressed and asked the students to stay on through this academic year as part-time interns. Their task now is to build an improved plane – lighter, stronger, faster and more easily assembled. The project also is their fourth-year thesis.

“This has been a great opportunity for us,” Easter said, “to showcase engineering at U.Va. and the capabilities of the Rapid Prototyping Lab.”

A Bandwidth Breakthrough – Technology Review

A Bandwidth Breakthrough

A dash of algebra on wireless networks promises to boost bandwidth tenfold, without new infrastructure.



Scott Balmer

Academic researchers have improved wireless bandwidth by an order of magnitude—not by adding base stations, tapping more spectrum, or cranking up transmitter wattage, but by using algebra to banish the network-clogging task of resending dropped packets.

By providing new ways for mobile devices to solve for missing data, the technology not only eliminates this wasteful process but also can seamlessly weave data streams from Wi-Fi and LTE—a leap forward from other approaches that toggle back and forth. “Any IP network will benefit from this technology,” says Sheau Ng, vice president for research and development at NBC Universal.

Several companies have licensed the underlying technology in recent months, but the details are subject to nondisclosure agreements, says Muriel Medard, a professor at MIT’s Research Laboratory of Electronics and a leader in the effort. Elements of the technology were developed by researchers at MIT, the University of Porto in Portugal, Harvard University, Caltech, and Technical University of Munich. The licensing is being done through an MIT/Caltech startup called Code-On Technologies.

The underlying problem is huge and growing: on a typical day in Boston, for example, 3 percent of packets are dropped due to interference or congestion. Dropped packets cause delays in themselves, and then generate new back-and-forth network traffic to replace those packets, compounding the original problem.
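The article doesn't quantify how costly that loss is, but a standard rule of thumb, the Mathis throughput model (an outside reference, not part of this research), captures why a few percent packet loss is so damaging: loss-limited TCP throughput scales as 1/sqrt(p), so shaving the effective loss rate by two orders of magnitude buys roughly a ten-fold bandwidth gain. A minimal sketch, with segment size and round-trip time chosen for illustration:

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate, c=math.sqrt(3 / 2)):
    """Loss-limited steady-state TCP throughput bound (Mathis model):
    rate <= (MSS / RTT) * C / sqrt(p)."""
    return (mss_bytes * 8 / 1e6) * c / (rtt_s * math.sqrt(loss_rate))

# Illustrative parameters (assumptions): 1460-byte segments, 50 ms RTT.
for p in (0.0001, 0.01, 0.02, 0.05):
    print(f"loss {p:>7.2%}: ~{mathis_throughput_mbps(1460, 0.05, p):5.1f} Mbit/s")
```

Note that halving the loss rate only buys a factor of sqrt(2); making losses invisible to TCP altogether, as the coded approach aims to, is what unlocks an order-of-magnitude jump.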

The practical benefits of the technology, known as coded TCP, were seen on a recent test run on a New York-to-Boston Acela train, notorious for poor connectivity. Medard and students were able to watch blip-free YouTube videos while some other passengers struggled to get online. “They were asking us ‘How did you do that?’ and we said ‘We’re engineers!’ ” she jokes.

More rigorous lab studies have shown large benefits. Testing the system on Wi-Fi networks at MIT, where 2 percent of packets are typically lost, Medard’s group found that a normal bandwidth of one megabit per second was boosted to 16 megabits per second. In a circumstance where losses were 5 percent—common on a fast-moving train—the method boosted bandwidth from 0.5 megabits per second to 13.5 megabits per second. In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare.

Medard’s work “is an important breakthrough that promises to significantly improve bandwidth and quality-of-experience for cellular data users experiencing poor signal coverage,” says Dipankar “Ray” Raychaudhuri, director of the Winlab at Rutgers University (see “Pervasive Wireless“). He expects the technology to be widely deployed within two to three years.

To test the technology in the meantime, Medard’s group set up proxy servers in the Amazon cloud. IP traffic was sent to Amazon, encoded, and then decoded as an application on phones. The benefit might be even better if the technology were built directly into transmitters and routers, she says. It also could be used to merge traffic coming over Wi-Fi and cell phone networks rather than forcing devices to switch between the two frequencies.

The technology transforms the way packets of data are sent. Instead of sending packets, it sends algebraic equations that describe series of packets. So if a packet goes missing, instead of asking the network to resend it, the receiving device can solve for the missing one itself. Since the equations involved are simple and linear, the processing load on a phone, router, or base station is negligible, Medard says.
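The scheme described here is a form of linear network coding. As a minimal toy sketch (my own illustration, not the researchers' code), the version below uses deterministic Vandermonde coefficients, Reed-Solomon style, over a small prime field so that any k received combinations are guaranteed solvable; coded TCP actually draws its coefficients at random, which works with high probability:

```python
# Toy linear-coding sketch (illustrative; names and field size are my own).
P = 257  # small prime field

def encode(packets, n):
    """Send n linear combinations of k packets; combination j carries its
    coefficient vector [j^0, j^1, ..., j^(k-1)] mod P (Vandermonde rows)."""
    k, length = len(packets), len(packets[0])
    out = []
    for j in range(1, n + 1):
        coeffs = [pow(j, i, P) for i in range(k)]
        combo = [sum(c * pkt[b] for c, pkt in zip(coeffs, packets)) % P
                 for b in range(length)]
        out.append((coeffs, combo))
    return out

def decode(received, k):
    """Recover the k originals from any k received combinations by
    Gaussian elimination mod P on the augmented matrix [coeffs | payload]."""
    rows = [list(coeffs) + list(combo) for coeffs, combo in received]
    for col in range(k):
        piv = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)            # modular inverse
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[k:] for row in rows[:k]]

packets = [[72, 105], [33, 0], [42, 99]]   # three toy 2-byte packets
coded = encode(packets, 5)                 # send 5 combinations to protect k=3
survivors = [coded[0], coded[2], coded[4]] # two combinations "lost" in transit
assert decode(survivors, k=3) == packets   # receiver solves for all originals
```

Because the receiver can solve for any missing packet itself, a lost combination never triggers a retransmission round-trip, which is exactly the waste the coded approach eliminates.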

Whether gains seen in the lab can be achieved in a full-scale deployment remains to be seen, but the fact that the improvements were so large suggests a breakthrough, says Ng, the NBC executive, who was not involved in the research. “In the lab, if you only find a small margin of improvement, the engineers will be skeptical. Looking at what they have done in the lab, it certainly is order-of-magnitude improvement—and that certainly is very encouraging,” Ng says.

If the technology works in large-scale deployments as expected, it could help forestall a spectrum crunch. Cisco Systems says that by 2016, mobile data traffic will grow 18-fold—and Bell Labs goes farther, predicting growth by a factor of 25. The U.S. Federal Communications Commission has said spectrum could run out within a couple of years.

Medard stops short of saying the technology will prevent a spectrum crunch, but she notes that the current system is grossly inefficient. “Certainly there are very severe inefficiencies that should be remedied before you consider acquiring more resources,” she says.

She says that when her group got online on the Acela, the YouTube video they watched was of college students playing a real-world version of the Angry Birds video game. “The quality of the video was good. The quality of the content—we haven’t solved,” Medard says.

Rapture of the nerds: will the Singularity turn us into gods or end the human race? | The Verge

This seems very fitting. A week ago I was thinking about evolution since the big bang, how the very nature of the fundamental forces gave us chemistry, which begat DNA. DNA then gave us life as we know it.
If you continue down this path of logic, I reached a similar conclusion about the technological singularity. Our brains gave humanity an edge in survival over other creatures. But so too did other aspects about humanity. Our ability to record our thoughts and ideas and to pass that information down to other generations; our ability to communicate.
Lost in this, we can only express ideas in the languages we know and can understand. The expressiveness of our language produces subtle variations in how others interpret what we are thinking, and this too leads to a greater level of understanding.
The next great evolution I see won’t be in our species, but it will instead be in our electronic brethren. We don’t have to create a super intelligent android to make this happen, but on three different fronts we need to make improvements.
First we need to make improvements in comprehension. We need machines to actually understand. Programming languages are designed to eliminate all ambiguity about what we want the computer to do. This creates a very primitive brain that can only respond to the instructions that humans have predefined for it. Comprehension will mark that moment when computers will be able to respond in surprising ways.
Secondly, and related to the first point, computers will need all manner of input upon which to base their decisions. Some of this input will come from things like the Internet, where vast troves of information can be consumed, understood, and contributed to. Contradictions in information are related to comprehension. The subtlety is that the Singularity AI mustn’t always reach the same conclusions. In the same way that humans can reach different conclusions based on the same information, computers will need to reach different conclusions. It is the ability to discover something different that is the true hallmark of intelligence. Augmented with sensors that go beyond sight, smell, taste, touch, and hearing, these computers will become more intelligent than the people that created them in the first place.
The final step, and the one that I think is one of the greatest barriers, is that these machines need to be able to self-replicate. They need to discover energy sources that are self-sustaining, harvest the raw materials to create new machines, and retool to create newer and perhaps improved machines. There are other organisms that perhaps have the potential for intellectual parity with humans, but it is our ability to observe and manipulate the world around us that gave us the final edge – machines too will need that ability to continue to evolve long after us.
Once these final steps have been reached, we will have become the gods of these new intelligent beings. Life will continue to evolve, but it won’t be carbon life based on DNA. Singularity will have been reached and surpassed.

Craig Venter Imagines a World with Printable Life Forms | Wired Science

Photo: Christopher Farber

NEW YORK CITY — Craig Venter imagines a future where you can download software, print a vaccine, inject it, and presto! Contagion averted.

“It’s a 3-D printer for DNA, a 3-D printer for life,” Venter said here today at the inaugural Wired Health Conference in New York City.

The geneticist and his team of scientists are already testing out a version of his digital biological converter, or “teleporter.”

Why should you care? Well, because the machine has “really good anti-viral software,” he quipped.

His team is working through scenarios where they have less than 24 hours to make a new vaccine with this gadget.

He recalled working with Mexico City Mayor Marcelo Ebrard during the H1N1 outbreak in 2009. They couldn’t get the virus out of the metropolis because authorities wouldn’t allow it, he said. That delayed efforts to stem the spread of the virus, and thousands of people died.

Had they been able to digitize it, they could have e-mailed it, and “it could have gone around the world digitally,” allowing researchers to study it and to build a vaccine more quickly, Venter said.

Venter is not the first to try to print biological ware. Scientists have tried to print blood vessels, organs and even burgers.

But whether regulators will allow this futuristic approach to public health is another story. “Regulation will be an interesting aspect of this,” Venter conceded. “We get a lot of spam e-mail. People making fake drugs and selling them for profit. It’s a nasty world out there,” he said.

Mistaking an American Express bill for a scam and deleting it might decrease your credit rating, but downloading, printing and injecting a dangerous retrovirus masquerading as a vaccine is potentially life-threatening. Perhaps printable life technologies might spur the development of better spam filters or e-mail validation software as well.

If Venter’s printer becomes widely available, scientists and engineers would also have to ensure that molecules are printed accurately. Small changes could tweak the structure and make a printed protein work in a way they didn’t intend.

Venter is also experimenting with synthetic life, taking DNA from one type of cell, injecting it into another, and letting that “genetic software” reprogram its host. What that means in the context of DNA desktop manufacturing isn’t clear either, especially when it comes to questions of privacy.

Venter isn’t concerned. “Privacy with medical information is a fallacy,” Venter said. “If everyone’s information is out there, it’s part of the collective.”

He joked that he’s been beaming his genome into space for years, and perhaps the real fear is that an army of genetically engineered Craig Venters would come back to take over the planet.

But the reality is that the debate over whether a consumer has the right to know and own their genetic data is a very real one. Many in the scientific establishment, including the government, want to keep genetic data in the hands of experts, said Dr. Eric Topol in a following session at the conference.

“Many doctors … don’t like the idea of Aunt Betty mucking around with her macular degeneration alleles,” said geneticist Misha Angrist of the Institute for Genome Science and Policy at Duke University in an interview with Wired before the conference. “Of course, if we continue to extol the virtues of willful ignorance, then we will never stop thinking of our own genomes as the bogeyman.”