Nanoclast

Carbon Nanotube Memory Thrown a Lifeline from Nanoelectronics Powerhouse

Massachusetts-based Nantero has faced some challenges over the last decade in getting its carbon nanotube-based non-volatile memory to market. My colleague Philip Ross did a good job characterizing perhaps the biggest obstacle here in the pages of Spectrum over four years ago: “That instant-on computer that Nantero sketched out more than six years ago? You can buy one right now for just $400; it's called the iPhone.”

My contribution to that Spectrum article was to ask whether the tiny start-up really expected the big multinationals that manufacture today's flash memory to simply step aside and let Nantero eliminate their market.

Despite these challenges, the company has remained steadfast in its conviction that its carbon nanotube memory would change computing, even after it had to sell off part of the business to Lockheed Martin four years ago. At the time, observers were beginning to question the wisdom of Nantero taking on the role of David to the flash memory producers' Goliath, especially since Nantero was playing the role without a slingshot.

All of this may have changed significantly yesterday with the announcement of a joint development agreement with Belgium-based nanoelectronics powerhouse Imec to develop Nantero’s carbon-nanotube-based memory. Imec may very well be the needed slingshot.

“After review of the progress to date by Nantero and its manufacturing partners, we decided that this CNT-based non-volatile memory has multiple very attractive characteristics for next-generation highly scaled memory,” said Luc Van den hove, CEO of Imec in a press release. “By taking a leadership position in this area of development, in partnership with Nantero, we will be able to bring substantial benefit to our member companies.”

On its own, Nantero had managed to bring its NRAM technology to the point where it was producing high-yielding 4-megabit arrays within CMOS production environments. Now, with the NRAM arrays being manufactured, tested, and characterized in Imec’s facilities, the aim will be to use the memory for applications below 20 nm, such as terabit-scale memory arrays and ultra-fast gigabit-scale nonvolatile cache memories, according to Jo de Boeck, CTO of Imec.

This is the kind of development that would have been welcomed by Nantero supporters many years ago. Even at this late date, it is still hopeful news for the fortunes of the company. But while flash memory has been the company’s great rival until now, new competitors, such as graphene-based flash memory, may form the competition of the future.

High Density Carbon Nanotubes Show Way Forward for Smaller and Faster Transistors

Researchers at IBM’s T. J. Watson Research Center in Yorktown Heights, New York, are reporting success in precisely placing carbon nanotubes at high density on a substrate, an advance that should lead to high-speed, power-efficient chips and could show a way forward after silicon.

The IBM team successfully placed over 10,000 working transistors on the surface of a silicon wafer. Some anticipate that this research will not only allow the building of smaller transistors but also improve their clock speeds.

The research, which was published in Nature Nanotechnology (“High-density integration of carbon nanotubes via chemical self-assembly”), was hinted at last year during the IEEE International Electron Devices Meeting (IEDM).

At last year's IEDM meeting there was a lot of noise about IBM demonstrating the first transistor with sub-10-nm channel lengths. The tie between that line of research and this latest work is that the IBM team built those sub-10-nm nanostructures out of carbon nanotubes and grew them through self-assembly on standard 200-millimeter-diameter wafers.

The latest research used ion-exchange chemistry to trigger a chemical self-assembly process for the nanostructures. The researchers place the carbon nanotubes in a solution that makes them water-soluble; the nanotubes then chemically self-assemble onto the substrate in patterned arrays.

The process made it possible to place the carbon-nanotube transistors at a density high enough that the resulting material outperformed switches made from any other material, according to Supratik Guha, director of physical sciences at IBM’s Yorktown Heights research center, in a New York Times article covering the research. “We had suspected this all along, and our device physicists had simulated this, and they showed that we would see a factor of five or more performance improvement over conventional silicon devices,” says Guha in the article.

What might be the most impressive aspect about the results is that the researchers were able to electrically test the 10,000 transistors. Being able to characterize this large a number of nanotube devices is critical for analyzing transistor performance and yield.

This step will also prove crucial for what remains the biggest obstacle to the technology replacing silicon: achieving carbon nanotube purity. At the moment, the carbon nanotubes the researchers have access to contain enough metal that they don’t make ideal semiconductors. The IBM team is confident that it can reach a 99.99 percent pure form of carbon nanotubes that will make the devices operate even better than the current prototypes.

Image courtesy of Nature Publishing

Nanostructured Silicon Li-ion Batteries’ Capacity Figures Are In

Seven months ago I covered a small start-up called California Lithium Battery Inc. (CalBattery) that had entered into a Work for Others (WFO) agreement with Argonne National Laboratory (ANL) to develop and commercialize what they dubbed the “GEN3” lithium-ion battery.

The GEN3 battery is largely based on ANL’s silicon-graphene battery anode process. Basically the ANL approach is to sandwich silicon between graphene sheets in the anode of the battery to allow more lithium atoms in the electrode.

This line of research was motivated by the hope of improving the charge capacity of Li-ion batteries. First, researchers showed that if you replaced the graphite of the anodes with silicon, the charge capacity could be increased by a factor of ten. There was one big drawback, though: after a few charge-discharge cycles, the expansion and contraction of the silicon would crack the material and render it inoperable. The solution seemed to be nanostructured silicon anodes, which could last longer than the pure silicon variety, but just barely.

The ANL silicon-graphene anode is supposed to overcome this problem and achieve charge-discharge cycle counts comparable to graphite's, but with the significantly increased charge capacity you would get from pure silicon in the anode.

So, what’s been happening in the last seven months? Well, CalBattery has released a press announcement revealing the results of its last eight months of testing. According to the press release, the Li-ion batteries it has been testing have an energy density of 525 Wh/kg and a specific anode capacity of 1,250 mAh/g. To offer a comparison, the company press release explains that Li-ion batteries currently on the market have an energy density of between 100 and 180 Wh/kg and a specific anode capacity of 325 mAh/g.

“This equates to more than a 300% improvement in LIB (Li-ion battery) capacity and an estimated 70% reduction in lifetime cost for batteries used in consumer electronics, EVs, and grid-scale energy storage,” says CalBattery CEO Phil Roberts in the company press release.
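A quick back-of-the-envelope check on those figures may be useful. The sketch below uses only the numbers quoted above, taken from the press release; nothing else is assumed.

```python
# Improvement factors implied by CalBattery's reported figures versus
# the incumbent Li-ion figures cited in the same press release.

gen3_energy_density = 525.0    # Wh/kg, reported for the GEN3 cells
gen3_anode_capacity = 1250.0   # mAh/g, reported specific anode capacity

incumbent_anode_capacity = 325.0  # mAh/g, current graphite-anode cells

# Specific anode capacity: 1,250 / 325 is roughly a 3.8x improvement,
# broadly in line with the "more than 300%" claimed in the release.
anode_factor = gen3_anode_capacity / incumbent_anode_capacity
print(f"Anode capacity: {anode_factor:.1f}x")  # 3.8x

# Energy density against the low and high ends of the 100-180 Wh/kg range.
for baseline in (100.0, 180.0):
    factor = gen3_energy_density / baseline
    print(f"Energy density vs {baseline:.0f} Wh/kg: {factor:.1f}x")
```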

Curiously, I didn’t see anything in the press release about what numbers they were able to achieve in charge/discharge cycles with the material. And that really is the crux of the matter. Everyone has understood for the last few years that nanostructured silicon anodes have a high capacity. The problem is that they have been only slightly better than regular silicon when it comes to charge/discharge cycles.

Let’s look at the Energy Secretary’s threshold numbers for making Li-ion battery-powered vehicles competitive with petrol-powered ones:

  • A rechargeable battery that can last for 5000 deep discharges
  • 6–7x higher storage capacity (3.6 MJ/kg = 1,000 Wh/kg) at [a] 3x lower price

Well, we don’t know what the deep-discharge figures are for this GEN3 battery. And improving the capacity 300 percent seems to fall a little short of a factor of 6 or 7. But as was pointed out to me in the comments, a 70 percent reduction in lifetime cost does seem to meet the criterion of a 3x lower price.
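For reference, the unit conversion behind the Secretary's capacity target is straightforward. A hedged sketch; the 525 Wh/kg figure is CalBattery's own, from the press release discussed above.

```python
# Chu's target: 3.6 MJ/kg. Since 1 Wh = 3,600 J, that works out to 1,000 Wh/kg.
target_mj_per_kg = 3.6
joules_per_wh = 3600.0

target_wh_per_kg = target_mj_per_kg * 1e6 / joules_per_wh
print(target_wh_per_kg)  # 1000.0

# CalBattery's reported 525 Wh/kg sits at roughly half that target.
gen3 = 525.0
print(f"{gen3 / target_wh_per_kg:.1%} of the target")  # 52.5% of the target
```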

Maybe EVs don’t really need to be competitive with petrol-powered vehicles, and Secretary Chu’s figures are not pertinent, but if the dwindling sales of EVs are any indication, maybe those figures are relevant and EVs actually do need to be competitive with petrol-powered vehicles…for now.

Nanotechnology Won't Be Delivering a Utopia Anytime Soon

Dr. Michio Kaku, a theoretical physicist at the City University of New York and a regular contributor to numerous TV documentaries on subjects within physics, has offered up an intriguingly titled video on the website Big Think that’s spreading like wildfire through social media: Can Nanotechnology Create Utopia? You can watch the video (5+ minutes long) below.

Kaku starts off by discussing the quest for utopias that fueled the early settlers of the New World. Oddly, his main example is the Shakers. A little background is in order here: The Shakers took a monastic pledge of celibacy, and that sort of lifestyle inevitably causes your numbers to dwindle. In fact, the sect probably would have died out in the 19th century but for the Civil War, during which they took in large numbers of orphans. Kaku, however, cites scarce resources as one of the main causes of the Shakers' disappearance (they haven't disappeared, by the way; there are still a few Shakers). You see, in the cold winters they couldn’t get enough seeds, and this created scarcity, which in turn led to conflict, or so Kaku seems to believe or wants us to believe. In point of fact, the Shakers were among the first organizations to build a business through mail order, and one of the main things they sold was seeds.

But Kaku’s expertise is not the history of religions in the colonial United States; rather, it's theoretical physics. Based on his next example of utopias, I think we can safely say that the emphasis should be on “theoretical,” because what we get is a discussion of a Star Trek replicator.

In the video Kaku draws a comparison between a Star Trek replicator and the molecular manufacturing of table-top factories and universal assemblers. He explains that with these universal assemblers we become “like gods,” able to turn wood into glass. If I had such god-like powers, I doubt my first feat would be turning wood into glass, but I guess he was just illustrating a point.

And what is his point? Well, when we get these universal assemblers and can make anything we want just by pressing the button “Ferrari,” we will be in a world of such abundance that it will seem like a kind of utopia.

However, Kaku cautions that this utopia has interesting philosophical repercussions, which he illustrates by describing nearly an entire episode of Star Trek. I won’t bother you with the plot here, but the questions it raises are along the lines of: Will we lose the will to work amid all this abundance?

This kind of handwringing over a nanotechnology future is really the bailiwick of the Center for Responsible Nanotechnology. They could devote great tomes to concerns over a possible outcome that is so far away that not only can nobody predict when it might come (with the exception of an exceptional futurist like Ray Kurzweil), but some believe it may never come at all.

These naysayers who question the physics of universal assemblers do not deter Kaku, because we already have a replicator as proof: ribosomes. Kaku explains that ribosomes turn your cheeseburger into the DNA that makes the next generation of humans.

Of course, it took nature a few billion years to come up with this feat of molecular manufacturing. With that time frame in mind, should we really be worrying ourselves about the ironic hardships delivered upon us by a utopia created by a replicating technology that shows little indication of arriving anytime soon?

NASA’s Decline as Leader in Nanotechnology Research

A recent study out of Rice University ("NASA's Relationship with Nanotechnology: Past, Present and Future Challenges"; pdf) points out that NASA is “the only U.S. federal agency to scale back investment in this area [nanotechnology]”.

The numbers in the report, produced by the James A. Baker III Institute for Public Policy at Rice, are alarming: “NASA reduced annual nanotechnology R&D expenditures from $47 million to $20 million.” And the Ames Research Center in California, which had set up its Center for Nanotechnology five years before the National Nanotechnology Initiative (NNI) was even established, had its staff reduced from 60 researchers with an $18 million budget down to 10 researchers and a $2 million budget.

The Rice report points to two events leading to this decline in NASA’s nanotechnology funding. In 2003, the space shuttle Columbia accident put NASA’s program under scrutiny, leading to a National Academies review. Then in 2004, President George W. Bush presented his “Vision for Space Exploration,” which, while including lofty goals such as a manned mission to Mars, actually cut NASA’s technology R&D budgets.

Not all the news about NASA’s nanotechnology budget is quite as dire. According to the report, the “NNI reports a 29-percent increase in NASA nanotechnology-specific R&D in 2012—from $17 million in 2011 to $23 million in 2012.”

This latest upswing is good news, but have the previous eight years of cuts to nanotechnology research really been that detrimental to NASA’s space exploration? It’s not really clear whether there has been a negative impact on NASA.

NASA’s total research appropriations in the years between 2003 and 2010 decreased more than 75 percent, from $6.62 billion to $1.55 billion. So if there’s been a perceived—or real—decline in NASA’s space exploration it may have just as easily come from the cuts throughout its entire technology R&D budget.
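The overall decline is easy to verify from the two appropriations figures above; a quick arithmetic check, nothing more:

```python
# NASA research appropriations fell from $6.62 billion (2003)
# to $1.55 billion (2010); confirm the "more than 75 percent" figure.
before, after = 6.62, 1.55
decline = 1 - after / before
print(f"{decline:.1%}")  # 76.6%
```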

Also, even as NASA's funding declined in those eight years, the U.S. government’s overall funding of nanotechnology research nearly doubled. NASA’s interests in nanotechnology are somewhat removed from the areas of energy, medicine and materials that have been the focus of the government's nanotechnology funding strategies.

And although NASA has not been high on the U.S. government’s list of recipients for nanotechnology funding, nanotechnology has continued to find its way into NASA programs. Nanomaterials developed by Lockheed Martin and Nanocomp Inc. were integrated into the Juno spacecraft destined for Jupiter. Does NASA really need to develop nanotechnology itself for that technology to improve its spacecraft?

While the numbers may be somewhat alarming, NASA’s decline as a leader in U.S. nanotechnology research has really just been a reallocation of funding to different agencies and a move toward outsourcing some of the nanomaterial development previously done in NASA labs. That might even be a good thing, not only for other technologies such as solar energy and drug delivery, but also for NASA itself, which can focus its resources on other areas to advance its space program.

Nanotechnology As Socio-Political Project

Nanotechnology has always been burdened with a bit of an identity crisis. Scientists, pundits, and everyone in between are constantly offering up definitions for the term. This blur of definitions often leads to confusion and, worse, to inflated expectations of what nanotechnology can deliver.

One possible example of this disconnect between nanotechnology and its expectations is the recent bankruptcy of A123 Systems.  One can’t help but think that the stalwart support the company received over the years from investors—raising more than $1 billion from private investors, despite never turning a profit—was in part due to a blind trust that the magic of nanotechnology would somehow save the day.

How is it that nanotechnology has been transformed into this seemingly magic vehicle for technological innovation for everything from curing cancer to enabling electric vehicles? To understand it, we need to take a step back and move beyond mere definitions of nanotechnology and instead reach some deeper understanding of how we’ve become so flummoxed in defining it.

Photo: University of Nottingham
Richard Jones of Sheffield University (center) with Professor Chris Rudd of the University of Nottingham (left) and guests

To our rescue is Professor Richard Jones, who in addition to being a noted scientist is an eloquent commentator on nanotechnology, addressing here in the pages of Spectrum the role of nanotechnology in achieving the Singularity.

In Jones’s contribution to a new book, “Quantum Engagements: Social Reflections of Nanoscience and Emerging Technologies,” in a chapter entitled “What has nanotechnology taught us about contemporary technoscience?”, he suggests that nanotechnology has come to its peculiar status through a combination of political and cultural forces, along with only a smattering of science.

Jones examines the etymology of the term "nanotechnology" and shows how it came to prominence outside of the scientific community. And when he turns his lens on the science of nanotechnology, he finds such a smorgasbord of different scientific disciplines that it’s hard to see how any of it can really be related, never mind form the foundation of a scientific field. Here are some, if not all, of the disciplines Jones explains fit under the nanotechnology umbrella:

  • Precision engineering—microfabrication techniques
  • Meso-scale physics and low-dimensional semiconductors
  • Molecular electronics
  • Plastic electronics
  • Cluster chemistry
  • Colloid science
  • Powder technology
  • Microscopy
  • Materials science
  • Supramolecular chemistry
  • Life sciences

Jones argues that nanotechnology has not done anything to bring these fields together, nor is there any indication that they are about to merge into one, broad field known as “nanotechnology.” However, the wide disparity between the many disciplines could explain “why tensions appear between the visions of nanotechnology proposed by different high status academic champions, and disparities are apparent between these visions and the nature of actual products which are claimed to use nanotechnology.”

The innovation infrastructure that has been built up around nanotechnology also has fueled some of nanotechnology’s unusual characteristics. Jones carefully goes through how the funding mechanisms have changed over the last 30 years and how corporate structures—through the breakup of monopolies (like AT&T)—have resulted in the great corporate laboratories of the post-WWII era being diminished to mere shadows of their former selves.

What has sprung up in their place is a new innovation model in which intellectual property developed at a university is “spun out” and commercialized through venture capital funding. The business details of the commercialization, like “the identification of market need, manufacturing, the incorporation of the manufactured nanomaterials into a finished product, and the marketing of that product,” are all outsourced.

This could explain how some scientists who developed a better Li-ion battery and originally targeted their battery for power tools found themselves in a struggle for survival that was tied to the fortunes of the electrical vehicle industry.

The Cautionary Tale of A123 Systems

If the recent bankruptcy of A123 Systems holds any lesson for us, it could be that any nanotech-enabled technology that places itself squarely in competition with an established technology can't merely be incrementally better than its nearest competitors; it has to be better than the incumbent it's trying to displace.

Some have predicted that the A123 bankruptcy will be painted as another Solyndra, the California maker of photovoltaics that failed despite heavy backing from the Obama administration. Of course, when Konarka, a solar cell company from Massachusetts, the state Republican presidential candidate Mitt Romney was once governor of, went belly up earlier this year, some suggested that it could be spun into “Romney’s Solyndra.”

The underlying problems of A123 Systems, Solyndra, and Konarka are not political ones of governmental policies; they're illustrations of the futility of ignoring good old-fashioned supply-and-demand economics. (Solyndra, besides never being competitive with traditional energy sources, was also forced to compete with heavily subsidized solar alternatives.)

There is little question that A123 Systems made a better Li-ion battery than its competitors. The problem was that the nano-enabled battery it came up with for powering electric vehicles (EVs) was not in competition with other Li-ion batteries, but with the internal combustion engine.

This is not a political issue or an ideological issue, it’s a numbers issue. Back in 2010, Energy Secretary Steven Chu made clear the conditions of success for EVs when he said, "A rechargeable battery that can last for 5000 deep discharges, [have] 6–7x higher storage capacity (3.6 MJ/kg = 1,000 Wh/kg) at [a] 3x lower price will be competitive with internal combustion engines (400–500 mile range)." Full stop.

The pity of the story is that A123’s bankruptcy, coming after such auspicious beginnings, might pour cold water on the entrepreneurial ardor of other ambitious scientists. It shouldn’t. But businesses need to realistically assess their competitive landscape, even when they are built on a gee-whiz technology.

A123 Systems unfortunately appears to have fallen victim to the belief that if it could simply come up with a better Li-ion battery than its competitors, it would be powering EVs of the future. What it failed to recognize was that first EVs had to win the market from the petrol-based cars of today.

Nanostructured Silicon Solar Cells Achieve High Conversion Efficiency Without Antireflective Coatings

The economics of solar cells always involves striking a balance between conversion efficiency and manufacturing costs.

Researchers at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) believe that they have struck a balance between both of these factors by developing a nanotechnology-enabled silicon solar cell that boasts 18.2 percent conversion efficiency and that should be cheaper to produce.

The research, which was published in the journal Nature Nanotechnology (“An 18.2%-efficient black-silicon solar cell achieved through control of carrier recombination in nanostructures”), describes a solar cell made without the antireflective coatings that are typically required to reach that conversion efficiency.

The NREL team achieved this high efficiency without antireflection coatings by making billions of nano-sized holes in the silicon. Since the holes are smaller than the wavelengths of the incident light, there is no abrupt optical transition at the surface, and the light doesn’t reflect back off the silicon.
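To get a feel for why suppressing reflection matters, consider the normal-incidence Fresnel reflectance of a flat air-silicon interface. This is a textbook estimate, not a figure from the NREL paper; the refractive index used is a typical value for silicon at visible wavelengths.

```python
# Normal-incidence Fresnel reflectance: R = ((n1 - n2) / (n1 + n2))**2
def fresnel_reflectance(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air = 1.0
n_silicon = 3.9  # typical value for silicon in the visible range

r = fresnel_reflectance(n_air, n_silicon)
print(f"Flat, uncoated silicon reflects about {r:.0%} of incident light")  # ~35%
```

Sub-wavelength holes effectively grade the optical density from air to silicon, so there is no single abrupt interface for this formula to apply to, which is how a black-silicon cell avoids that roughly 35 percent loss without a coating.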

While the NREL researchers had previously demonstrated that these nanostructures in silicon reflected less light off the surface, this latest research was the first time they were able to use the technique to achieve high conversion efficiency with the nanostructured silicon cells.

“This work can have a big impact on both conventional and emerging solar cells based on nanowires and nanospheres. For the first time it shows that really great solar cells can be made from nanostructured semiconductors,” Howard Branz, the principal investigator of the research, says in an NREL press release.

While attaining high energy-conversion efficiencies for an individual cell is an achievement, especially with a cheaper manufacturing process that eliminates the need for an antireflective coating, a big obstacle remains: getting those individual cells into a module without significant losses in efficiency.

This may be an area that will be addressed in further research. Branz adds: “The next challenges are to translate these results to common industrial practice and then get the efficiency over 20 percent. After that, I hope to see these kinds of nanostructuring techniques used on far thinner cells to use less semiconductor material.”

European Commission Pulls Back on New Nano Regulations

Last year the European Commission (EC) was eager to show its proactive approach to regulating nanomaterials when, after a protracted process, the Commission arrived at a definition for nanomaterials.

While the EC achieved its goal of a definition, the definition itself came under some pretty pointed criticism for being so broad as to include the incidental nanoparticles produced when a car’s tires roll on pavement.

“We’ve met people recently who work on the legal side within large chemical organizations, who up until October last year didn’t know anything about nanotechnology and suddenly they’re going to be caught within legislation, which is quite a shock to them,” said Jeremy Warren, CEO of Nanosight, in a webinar he produced this year to explain the impact of the new definition.

When any company, European or otherwise, believes it has been swept up into a regulatory framework for a material that it had no intention of making or using, or perhaps didn't even know existed, government bureaucrats are certainly going to hear about it. It didn’t take long for the industry ministers of European countries to take heed.

Last week we began to see how the EC was trying to reel in its proactive approach when it released its “Communication on the Second Regulatory Review on Nanomaterials.” One of the points of the position paper was “that current legislation covers to a large extent risks in relation to nanomaterials and that risks can be dealt with under the current legislative framework.”

All of that work to develop a definition of nanomaterials, creating a class of materials that are not currently known to be hazardous (but might be someday), seemed to be for naught. Instead, the EC seems to have taken the position that the current laws governing run-of-the-mill materials pretty much cover the large majority of nanomaterials out there.

The reaction of NGOs like Greenpeace and Friends of the Earth was swift and angry. The NGOs trotted out the term “precautionary principle,” which seems to have come to mean an absolute moratorium on all nanomaterials rather than producers taking on the burden of proof regarding the level of risk of their products.

Another pervasive sentiment among the NGOs is that the EC is stalling. If the EC were indeed stalling, one possible explanation would be that it wants to keep promoting new companies and new products by delaying regulation until scientific data can be gathered proving nanomaterials safe. I suppose that’s what the NGOs believe is happening in this case.

To me, it’s a bit too conspiratorial an explanation. I am more likely to believe this long process stems from the way bureaucracies operate, especially the European variety. They love to commission reports and studies and then hold meetings on the results. The European Union’s approach to the specific issue of nanosilver’s risk has driven some scientists to such levels of frustration they felt compelled to write an article for Nature Nanotechnology, decrying the situation.

Bureaucratic dithering aside, the real obstacle to a swift resolution on the risk of nanomaterials is that the science takes a long time. As I’ve commented before, determining the risk of nanomaterials amounts to asking for an almost complete overhaul of the periodic table in terms of toxicity. Let’s keep the time it will take to resolve these issues in that context.

Plasmonics Used to Dope Graphene

The big push in graphene research for electronics has been overcoming its lack of an inherent band gap. But silicon has another leg up on graphene when it comes to electronics applications: it can be comparatively easily p- and n-doped (positive and negative).

While there have been a number of approaches taken for doping graphene, researchers at Rice University believe that the idea of plasmon-induced doping of graphene could be ideal for this purpose.

The research (“Plasmon-Induced Doping of Graphene”), which was published in the journal ACS Nano, looks to use plasmonics, which exploits the fact that “photons striking small, metallic structures can create plasmons, which are oscillations of electron density in the metal.”

The Rice team placed nanoscale plasmonic antennas, dubbed nonamers, on the graphene to manipulate light in such a way that they inject electrons into the graphene, changing its conductivity. Each nonamer took the form of eight nanoscale gold discs encircling one larger gold disc, and the nonamers were placed on the graphene with electron-beam lithography.

When the graphene and nonamers are exposed to light, the incident light is converted into hot electrons that transform those portions of the graphene where the nonamers are located from a conductor to an n-doped semiconductor.

“Quantum dot and plasmonic nanoparticle antennas can be tuned to respond to pretty much any color in the visible spectrum,” says Rice professor Peter Nordlander, one of the authors of the paper, in the university's press release about the research. “We can even tune them to different polarization states, or the shape of a wavefront."

Nordlander adds: “That’s the magic of plasmonics. We can tune the plasmon resonance any way we want. In this case, we decided to do it at 825 nanometers because that is in the middle of the spectral range of our available light sources. We wanted to know that we could send light at different colors and see no effect, and at that particular color see a big effect.”
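For scale, the 825-nanometer resonance Nordlander mentions corresponds to a photon energy of about 1.5 eV, in the near infrared just beyond the visible range. The conversion is a standard one:

```python
# Photon energy E = h * c / wavelength, expressed in electronvolts.
PLANCK = 6.62607015e-34        # Planck constant, J*s
LIGHT_SPEED = 2.99792458e8     # speed of light, m/s
EV_IN_JOULES = 1.602176634e-19 # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    return PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9) / EV_IN_JOULES

print(f"{photon_energy_ev(825.0):.2f} eV")  # 1.50 eV
```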

While the possibility of a process that simply uses light to dope graphene seems pretty amazing, the researchers are looking ahead to a day when light flashed in a particular pattern would replace a key for unlocking a door, by triggering the circuitry of the lock to open it. “Opening a lock becomes a direct event because we are sending the right lights toward the substrate and creating the integrated circuits. It will only answer to my call,” Nordlander suggests in the release.



IEEE Spectrum’s nanotechnology blog, featuring news and analysis about the development, applications, and future of science and technology at the nanoscale.

Dexter Johnson
Madrid, Spain
Rachel Courtland
Associate Editor, IEEE Spectrum
New York, NY