
Water-Splitting Catalyst Revealed

If you wanted to do an imitation—at least an accurate one—you would be well served to carefully observe the original. With artificial photosynthesis being hotly pursued by some of the most renowned nanotechnology researchers, a team at Umeå University in Sweden reasoned that an important step toward improving the imitation would be to peer deeply into the real thing and reveal the factors that make it work.

The Swedish team examined a manganese complex with spectroscopy at the Linac Coherent Light Source (LCLS), the free-electron x-ray laser facility at the SLAC National Accelerator Laboratory, operated by Stanford University. Manganese is a transition metal that, when combined with calcium and oxygen, forms the same water-splitting catalyst found in photosynthesis.

While this may sound like an approximation of photosynthesis, the research group had already used LCLS to perform structural analyses of isolated photosynthesis complexes in plants. The new wrinkle this time was to bring spectroscopy into the imaging process.

To accomplish this imaging, the LCLS emits x-ray pulses with wavelengths roughly the breadth of an atom and durations of just 50 femtoseconds. With the spectrometer added to the setup, the x-rays emitted by the manganese complex after it was struck by the laser pulses were diffracted by the spectrometer and picked up by a detector array.
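
For a sense of scale, a quick back-of-the-envelope calculation (a sketch using standard physical constants, not figures from the study) shows why such short pulses can probe a sample before it blows apart:

```python
# Back-of-the-envelope check of the LCLS figures quoted above, using
# standard physical constants (values are mine, not from the study).
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

wavelength = 1e-10       # ~1 angstrom, roughly the "breadth of an atom"
pulse_duration = 50e-15  # 50 femtoseconds

photon_energy_keV = h * c / wavelength / eV / 1e3
pulse_length_um = c * pulse_duration * 1e6

print(f"Photon energy at 1 angstrom: {photon_energy_keV:.1f} keV")  # ~12.4 keV, hard x-rays
print(f"Spatial length of one pulse: {pulse_length_um:.0f} um")     # ~15 um: probe, then destroy
```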

With this setup, the researchers captured detailed information about the compound’s electronic structure before the laser beam destroyed it.

"Having both structural information and spectroscopic information means that we can much better understand how the structural changes of the whole complex and the chemical changes on the active surface of the catalysts work together to enable the enzymes to perform complex chemical reactions at room temperature,” says Johannes Messinger, professor at the Department of Chemistry at Umeå University, in a university press release.

This detailed imaging of how photosynthesis splits water into its constituent parts has been held out as a way to help engineers more cheaply synthesize hydrogen gas to power hydrogen fuel cells—and possibly the automobiles powered by them. Research efforts to split water molecules into hydrogen gas have been taken on by both commercial entities and academics. Perhaps this new information on the electronic structure of the water-splitting process of photosynthesis can further inform both lines of research.

How Far Can IT Take Materials Science?

Last year the White House announced an oddly titled plan called the “Materials Genome Initiative.” The aim of the initiative—and thus the title—was to apply the same kind of data-crunching firepower that was used to map DNA in the Human Genome Project to the field of materials science.

While one could argue that the White House dubbing this project the Materials Genome Initiative was more a metaphorical flourish than a scientific aim, it does raise the question of whether we can map all of materials science in a way that will improve manufacturing, as the plan sets out to do.

To answer that question, Richard Jones has penned a piece on his blog Soft Machines, which starts by posing the rhetorical question, "Do materials even have genomes?"

Jones raises many of the questions and problems that come with depending on—or expecting—computer simulation to help us design materials to perform the tasks we set for them. Some of the points he makes remind me of a piece I wrote five years ago: Materials By Design: Future Science or Science Fiction?

At the time, I noted that “Any useful software modeling would need to be able to reveal how an alteration in a material’s structure—for example, a change in a crystal’s lattice structure—affects its properties and functions. Such a program would also need to be able to do that in a range of scales, because we also don’t know whether we must look at the atomic or particle level to find out where effects are taking place.”

This concern about problems of scale is reflected in Jones’ piece, but he also raises the problem of the vast spread of timescales such simulations would have to cover:

"Even with the fastest computers, you can’t simulate the behavior of a piece of metal by modelling what the atoms are doing in it—there’s just too big a spread of relevant length and timescales. If you wanted to study the way different atoms cluster together as you cast an alloy, you need to be concerned with picosecond times and nanometer lengths, but then if you want to see what happens to a turbine blade made of it in an operating jet engine, you’re interested in meter lengths and timescales of days and years (it is the slow changes in dimension and shape of materials in use—their creep—that often limits their lifetime in high temperature situations)."

Jones points out that multi-scale modeling of this kind is nothing new; he refers to Masao Doi’s Octa project as an example. But such projects remain problematic. There are so many variables involved in materials science, he says, that it is not clear how generic such modeling processes can be made. He further argues that researchers would quickly turn to physical experiments outside the computer models.

He notes that he is skeptical anyone could work out “how to shape and weld big structures out of an oxide dispersion strengthened steel (these steels, reinforced with 2 nm nanoparticles of yttrium oxide, are candidate materials for fusion and fourth-generation fission reactors, due to their creep resistance and resistance to radiation damage) without getting someone to make a big enough batch to try it out.”

There is no doubt that computer modeling is a fantastic tool (a view Jones himself seems to support in the piece), but we should not expect materials science to reveal itself the way the human genome did. Whether the Materials Genome Initiative will prove beneficial to US manufacturing is something only time, on its own scale, will tell.

Plasmonic Nanolasers Shrink Down to Size of a Virus

When lasers start getting down to the nanoscale, they run up against the diffraction limit: the optical cavity cannot be much smaller than roughly half the wavelength of the light it emits in the cavity material. But researchers have shown that nanoscale plasmonic lasers can squeeze an optical mode well below this limit by confining light at subwavelength scales through the use of surface plasmons—oscillations of electrons that occur at the interface between a metal and a dielectric. This has revitalized the hope that chips populated with these plasmonic nanolasers could make possible computer processors run by light rather than electrons.
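
For a rough sense of the limit being beaten here, consider a minimal sketch with assumed, illustrative values (these numbers are not from the Nano Letters paper):

```python
# A minimal sketch of the diffraction-limit comparison, with assumed,
# illustrative values (not figures from the Nano Letters paper).
wavelength_nm = 800  # a typical near-infrared lasing wavelength
n_medium = 3.5       # refractive index of a typical semiconductor gain medium

# Smallest conventional cavity dimension: roughly half a wavelength in the medium.
diffraction_limit_nm = wavelength_nm / (2 * n_medium)
virus_scale_nm = 60  # rough size of a small virus particle

print(f"Diffraction-limited cavity:  ~{diffraction_limit_nm:.0f} nm")  # ~114 nm
print(f"Virus-scale plasmonic laser: ~{virus_scale_nm} nm")
# Plasmonic confinement squeezes the optical mode below the diffraction-
# limited figure, which is what makes a virus-sized laser noteworthy.
```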

Now researchers at Northwestern University have developed a new design for plasmonic nanolasers that are the size of a virus particle and capable of operating at room temperature. They describe the discovery in the journal Nano Letters (“Plasmonic Bowtie Nanolaser Arrays”).

"The reason we can fabricate nano-lasers with sizes smaller than that allowed by diffraction is because we made the lasing cavity out of metal nanoparticle dimers -- structures with a 3-D 'bowtie' shape," says Teri Odom, the leader of the research and professor of Chemistry at Northwestern, in a press release.

The bowtie geometry gives the nanoparticles an antenna effect while suffering only minimal metal “losses.” Plasmonic nanolaser cavities have typically suffered from both metal and radiation losses severe enough that the devices had to be operated at cryogenic temperatures.

Odom also explains that the antenna effect allows for lasing to occur from an "electromagnetic hot spot"—a capability not demonstrated previously. "Surprisingly, we also found that when arranged in an array, the 3-D bowtie resonators could emit light at specific angles according to the lattice parameters," Odom adds in the release.

Of course, nanolasers capable of operating at room temperature are not unique. Researchers at the University of California, San Diego reported earlier this year on a room-temperature nanolaser design that requires less power to generate a coherent beam than other designs. The key difference between the two plasmonic nanolasers seems to be the bowtie geometry the Northwestern team developed.

At least one aim of both lines of research seems to be integrating these nanolasers with CMOS electronics. Whether that lofty goal can be reached remains to be seen, but these nanolasers are a key step toward it.

Carbon Nanotube Memory Thrown a Lifeline from Nanoelectronics Powerhouse

Massachusetts-based Nantero has faced some challenges over the last decade in getting its carbon nanotube-based non-volatile memory to market. My colleague Philip Ross did a good job characterizing perhaps the biggest obstacle here in the pages of Spectrum over four years ago: “That instant-on computer that Nantero sketched out more than six years ago? You can buy one right now for just $400; it's called the iPhone.”

My contribution to that Spectrum article was to ask whether the tiny start-up really expected the big multinationals that today manufacture flash memory to simply step aside and let Nantero essentially eliminate their market.

Despite these challenges, the company has remained steadfast in its conviction that its carbon nanotube memory would change computing, even after having to sell off part of the company to Lockheed Martin four years ago. At the time, observers were beginning to question the wisdom of Nantero taking on the role of David to the flash memory producers' Goliath, especially since Nantero was playing the role without a slingshot.

All of this may have changed significantly yesterday with the announcement of a joint development agreement with Belgium-based nanoelectronics powerhouse Imec to develop Nantero’s carbon-nanotube-based memory. Imec may very well be the needed slingshot.

“After review of the progress to date by Nantero and its manufacturing partners, we decided that this CNT-based non-volatile memory has multiple very attractive characteristics for next-generation highly scaled memory,” said Luc Van den hove, CEO of Imec, in a press release. “By taking a leadership position in this area of development, in partnership with Nantero, we will be able to bring substantial benefit to our member companies.”

On its own, Nantero had managed to bring its NRAM technology to the point where it was producing high-yielding 4-megabit arrays within CMOS production environments. Now, with the NRAM arrays being manufactured, tested, and characterized in Imec’s facilities, the aim will be to use the memory for applications below 20 nm, such as terabit-scale memory arrays and ultra-fast gigabit-scale nonvolatile cache memories, according to Jo de Boeck, CTO of Imec.

This is the kind of development that Nantero supporters would have welcomed many years ago. Even at this late date, it is still hopeful news for the fortunes of the company. But while flash memory has been the company’s great rival up to now, perhaps new rivals in the shape of graphene-based flash memory will form the competition in the future.

High-Density Carbon Nanotubes Show Way Forward for Smaller and Faster Transistors

Researchers at IBM’s T. J. Watson Research Center in Yorktown Heights, New York, are reporting success in precisely placing carbon nanotubes on a substrate at high density, an advance that should lead to high-speed, power-efficient chips and could show a way forward beyond silicon.

The IBM team successfully placed over 10,000 working transistors on the surface of a silicon wafer. Some anticipate that this research will not only allow the building of smaller transistors but also improve their clock speeds.

The research, which was published in Nature Nanotechnology (“High-density integration of carbon nanotubes via chemical self-assembly”), was hinted at last year during the IEEE International Electron Devices Meeting (IEDM).

At last year's IEDM there was a lot of noise about IBM demonstrating the first transistor with a sub-10-nm channel length. The tie between that line of research and this latest work is that the IBM team built those sub-10-nm nanostructures out of carbon nanotubes and grew them through self-assembly on standard 200-millimeter-diameter wafers.

The latest research used ion-exchange chemistry to trigger a chemical self-assembly process for the nanostructures. The researchers placed the carbon nanotubes in a solution that makes them water-soluble; the nanotubes then chemically self-assembled onto the substrate in patterned arrays.

The process made it possible to place the carbon-nanotube transistors at a density high enough that the resulting material outperformed switches made from any other material, according to Supratik Guha, director of physical sciences at IBM’s Yorktown Heights research center, in a New York Times article covering the research. “We had suspected this all along, and our device physicists had simulated this, and they showed that we would see a factor of five or more performance improvement over conventional silicon devices,” says Guha in the article.

What might be the most impressive aspect of the results is that the researchers were able to electrically test all 10,000 transistors. Being able to characterize such a large number of nanotube devices is critical for analyzing transistor performance and yield.

This step will also prove crucial in overcoming what remains the biggest obstacle to the technology replacing silicon: carbon nanotube purity. At the moment, the batches of carbon nanotubes the researchers have access to contain enough metallic nanotubes that they don’t make ideal semiconductors. The IBM team is confident that it can reach a 99.99 percent pure form of carbon nanotubes that will make the devices operate even better than the current prototypes.


Nanostructured Silicon Li-ion Batteries’ Capacity Figures Are In

Seven months ago I covered a small start-up called California Lithium Battery Inc. (CalBattery) that had entered into a Work for Others (WFO) agreement with Argonne National Laboratory (ANL) to develop and commercialize what it dubbed the “GEN3” lithium-ion battery.

The GEN3 battery is largely based on ANL’s silicon-graphene battery anode process. Basically, the ANL approach is to sandwich silicon between graphene sheets in the battery’s anode to allow more lithium atoms into the electrode.

This line of research was motivated by the hope of improving the charge capacity of Li-ion batteries. First, researchers showed that if you replaced the graphite in the anodes with silicon, the capacity could be increased by a factor of ten, as the sketch below illustrates. There was one big drawback, though: after a few charge-discharge cycles, the expansion and contraction of the silicon would crack it and render it inoperable. Nanostructured silicon anodes seemed to be the solution, lasting longer than the pure silicon variety, but just barely.
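
Here is where that factor of ten comes from, a quick sketch using standard textbook capacity values (these figures are mine, not from the article):

```python
# Where the "factor of ten" comes from: standard textbook anode capacities
# (these values are mine, not from the CalBattery release).
graphite_mAh_g = 372  # theoretical capacity of graphite (LiC6)
silicon_mAh_g = 4200  # theoretical capacity of fully lithiated silicon

print(f"Silicon vs. graphite: {silicon_mAh_g / graphite_mAh_g:.1f}x")  # ~11x
# The catch: silicon swells by roughly 300 percent as it takes up lithium,
# which is what cracks the anode after a few charge-discharge cycles.
```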

The ANL silicon-graphene anode is supposed to overcome this problem, achieving a charge-discharge cycle life comparable to graphite’s but with the significantly increased capacity you would get with pure silicon in the anode.

So, what’s been happening in the last seven months? Well, CalBattery has issued a press release revealing the results of its last eight months of testing. According to the release, the Li-ion batteries the company has been testing have an energy density of 525 Wh/kg and a specific anode capacity of 1,250 mAh/g. For comparison, the release explains that Li-ion batteries currently on the market have energy densities between 100 and 180 Wh/kg and a specific anode capacity of 325 mAh/g.

“This equates to more than a 300% improvement in LIB (Li-ion battery) capacity and an estimated 70% reduction in lifetime cost for batteries used in consumer electronics, EVs, and grid-scale energy storage,” says CalBattery CEO Phil Roberts in the company press release.

Curiously, I didn’t see anything in the press release about the charge/discharge cycle numbers they were able to achieve with the material. And that really is the crux of the matter. Everyone has understood for the last few years that nanostructured silicon anodes have a high capacity. The problem is that their charge/discharge cycle life has been only slightly better than that of regular silicon.

Let’s look at Energy Secretary Steven Chu’s threshold numbers for making Li-ion-battery-powered vehicles competitive with petrol-powered ones:

  • A rechargeable battery that can last for 5000 deep discharges
  • 6–7x higher storage capacity (3.6 MJ/kg = 1,000 Wh/kg) at [a] 3x lower price

Well, we don’t know the deep-discharge figures for this GEN3 battery. And improving the capacity by 300 percent falls a little short of a factor of 6 or 7. But, as was pointed out to me in the comments, a 70 percent reduction in lifetime cost does seem to meet the criterion of a 3x lower price.
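
For those keeping score, here is the arithmetic behind that judgment, a sketch using the figures quoted above (taking the 180 Wh/kg top of the current range as the baseline is my own assumption):

```python
# Checking the claims against Secretary Chu's thresholds. Figures come
# from the press release and Chu's remarks quoted in this post; taking
# 180 Wh/kg (the top of the current range) as the baseline is my choice.
WH_PER_MJ = 1e6 / 3600  # 1 MJ = ~277.8 Wh, so 3.6 MJ/kg = 1,000 Wh/kg

chu_target_wh_kg = 3.6 * WH_PER_MJ  # Chu's storage target
calbattery_wh_kg = 525              # claimed energy density
baseline_wh_kg = 180                # best Li-ion cells on the market today

print(f"Chu target: {chu_target_wh_kg:.0f} Wh/kg")
print(f"Achieved:   {calbattery_wh_kg / baseline_wh_kg:.1f}x over the baseline")
print(f"Needed:     {chu_target_wh_kg / baseline_wh_kg:.1f}x")
# ~2.9x achieved vs. ~5.6x needed from this baseline: impressive, but
# still short of Chu's 6-7x target.
```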

Maybe EVs don’t really need to be competitive with petrol-powered vehicles, and Secretary Chu’s figures are beside the point. But if the dwindling sales of EVs are any indication, those figures are relevant and EVs really do need to compete with petrol-powered vehicles…for now.

Nanotechnology Won't Be Delivering a Utopia Anytime Soon

Dr. Michio Kaku, a theoretical physicist at the City University of New York and a regular contributor to numerous TV documentaries on physics, has offered up an intriguingly titled video on the website Big Think that’s spreading like wildfire through social media: Can Nanotechnology Create Utopia? You can watch the video (5+ minutes long) below.

Kaku starts off by discussing the quest for utopias that fueled the early settlers of the New World. Oddly, his main example is the Shakers. A little background is in order here: The Shakers took a monastic pledge of celibacy, a lifestyle that inevitably thins your numbers. In fact, the sect probably would have died out in the 19th century but for the Civil War, during which they took in large numbers of orphans. Kaku, however, cites scarce resources as one of the main causes of the Shakers’ disappearance (they haven’t disappeared, by the way; a few Shakers remain). You see, in the cold winters they couldn’t get enough seeds, and this scarcity led to conflict—or so Kaku believes, or wants us to believe. In point of fact, the Shakers were among the first organizations to build a business through mail order, and one of the main things they sold was seeds.

But Kaku’s expertise is not the history of religions in the colonial United States; it's theoretical physics. Based on his next example of utopia, I think we can safely say the emphasis should be on “theoretical,” because what we get is a discussion of the Star Trek replicator.

In the video, Kaku draws a comparison between a Star Trek replicator and the molecular manufacturing of table-top factories and universal assemblers. He explains that with these universal assemblers we become “like gods,” able to turn wood into glass. If I had such god-like powers, I doubt my first feat would be turning wood into glass, but I suppose he was just illustrating a point.

And what is his point? Well, when we get these universal assemblers and can make anything we want just by pressing the button “Ferrari,” we will be in a world of such abundance that it will seem like a kind of utopia.

However, Kaku cautions that this utopia has interesting philosophical repercussions, which he illustrates by recounting nearly an entire episode of Star Trek. I won’t bother you with the plot here, but the questions it raises are along the lines of: Will we lose the will to work amid all this abundance?

This kind of handwringing over a nanotechnology future is really the bailiwick of the Center for Responsible Nanotechnology. They could devote great tomes to concerns over a possible outcome that is so far away that not only can nobody predict when it might come (with the exception of an exceptional futurist like Ray Kurzweil), but that some believe may never come.

These naysayers who question the physics of universal assemblers do not deter Kaku, because we already have a replicator as proof: the ribosome. Kaku explains that ribosomes turn your cheeseburger into the DNA that makes the next generation of humans.

Of course, it took nature a few billion years to come up with this feat of molecular manufacturing. With that time frame in mind, should we really be worrying about the ironic hardships of a utopia created by a replicator technology that shows little sign of arriving anytime soon?

NASA’s Decline as Leader in Nanotechnology Research

A recent study out of Rice University ("NASA's Relationship with Nanotechnology: Past, Present and Future Challenges"; pdf) points out that NASA is “the only U.S. federal agency to scale back investment in this area [nanotechnology]”.

The numbers in the report, produced by the James A. Baker III Institute for Public Policy at Rice, are alarming: “NASA reduced annual nanotechnology R&D expenditures from $47 million to $20 million.” And the Ames Research Center in California, which had set up its Center for Nanotechnology five years before the National Nanotechnology Initiative (NNI) was even established, had its staff reduced from 60 researchers with an $18 million budget down to 10 researchers and a $2 million budget.

The Rice report points to two events that led to the decline in NASA’s nanotechnology funding. In 2003, the space shuttle Columbia accident put NASA’s program under scrutiny, leading to a National Academies review of its program. Then in 2004, President George W. Bush presented his “Vision for Space Exploration,” which, despite lofty goals such as a manned mission to Mars, actually cut NASA’s technology R&D budgets.

Not all the news about NASA’s nanotechnology budget is quite as dire. According to the report, the “NNI reports a 29-percent increase in NASA nanotechnology-specific R&D in 2012—from $17 million in 2011 to $23 million in 2012.”

This latest upswing is good news, but have the previous eight years of cuts to nanotechnology research really been that detrimental to NASA’s space exploration? It’s not clear that there has been a negative impact on NASA at all.

NASA’s total research appropriations in the years between 2003 and 2010 decreased more than 75 percent, from $6.62 billion to $1.55 billion. So if there’s been a perceived—or real—decline in NASA’s space exploration it may have just as easily come from the cuts throughout its entire technology R&D budget.

Also, even as NASA's funding declined in those eight years, the U.S. government’s overall funding of nanotechnology research nearly doubled. NASA’s interests in nanotechnology are somewhat removed from the areas of energy, medicine and materials that have been the focus of the government's nanotechnology funding strategies.

And although NASA has not been high on the U.S. government’s list of recipients for nanotechnology funding, nanotechnology has continued to find its way into NASA programs. Nanomaterials developed by Lockheed Martin and Nanocomp Inc. were integrated into the Juno spacecraft destined for Jupiter. Must NASA develop the nanotechnology itself for it to improve NASA spacecraft?

While the numbers may be somewhat alarming, NASA’s decline as a leader in U.S. nanotechnology research has really just been a reallocation of funding to different agencies and a move toward outsourcing some of the nanomaterial development previously done at NASA labs. That might even be a good thing, not only for other technologies such as solar energy and drug delivery, but also for NASA itself, by freeing resources to advance its space program in other areas.

Nanotechnology As Socio-Political Project

Nanotechnology has always been burdened with a bit of an identity crisis. Scientists, pundits, and everyone in between are constantly offering up definitions for the term. This blur of definitions often leads to confusion and, worse, inflates people’s expectations of what nanotechnology can deliver.

One possible example of this disconnect between nanotechnology and its expectations is the recent bankruptcy of A123 Systems. One can’t help but think that the stalwart support the company received over the years (more than $1 billion raised from private investors, despite never turning a profit) was in part due to a blind trust that the magic of nanotechnology would somehow save the day.

How is it that nanotechnology has been transformed into this seemingly magic vehicle for technological innovation in everything from curing cancer to enabling electric vehicles? To understand, we need to take a step back, move beyond mere definitions of nanotechnology, and instead reach some deeper understanding of how we’ve become so flummoxed in defining it.

Photo: University of Nottingham
Richard Jones of the University of Sheffield (center) with Professor Chris Rudd of the University of Nottingham (left) and guests

To our rescue comes Professor Richard Jones, who in addition to being a noted scientist is an eloquent commentator on nanotechnology; he has addressed here in the pages of Spectrum the role of nanotechnology in achieving the Singularity.

In his contribution to a new book, Quantum Engagements: Social Reflections of Nanoscience and Emerging Technologies, a chapter entitled “What has nanotechnology taught us about contemporary technoscience?”, Jones suggests that nanotechnology has come to its peculiar status through a combination of political and cultural forces along with only a smattering of science.

Jones examines the etymology of the term "nanotechnology," and shows how it came to prominence outside of the scientific community. And when he turns his lens on the science of nanotechnology, he finds it such a smorgasbord of different scientific disciplines that it’s hard to see how any of them can really be related, never mind form the foundation of a single scientific field. Here are some, if not all, of the disciplines Jones says fit under the nanotechnology umbrella:

  • Precision engineering—microfabrication techniques
  • Meso-scale physics and low-dimensional semiconductors
  • Molecular electronics
  • Plastic electronics
  • Cluster chemistry
  • Colloid science
  • Powder technology
  • Microscopy
  • Materials science
  • Supramolecular chemistry
  • Life sciences

Jones argues that nanotechnology has not done anything to bring these fields together, nor is there any indication that they are about to merge into one, broad field known as “nanotechnology.” However, the wide disparity between the many disciplines could explain “why tensions appear between the visions of nanotechnology proposed by different high status academic champions, and disparities are apparent between these visions and the nature of actual products which are claimed to use nanotechnology.”

The innovation infrastructure built up around nanotechnology has also fueled some of its unusual characteristics. Jones carefully goes through how funding mechanisms have changed over the last 30 years and how corporate restructuring—through the breakup of monopolies like AT&T—has reduced the great corporate laboratories of the post-WWII era to mere shadows of their former selves.

In their place a new innovation model has come to prominence, in which intellectual property developed at a university is “spun out” and commercialized through venture capital funding. The business details of commercialization, like “the identification of market need, manufacturing, the incorporation of the manufactured nanomaterials into a finished product, and the marketing of that product,” are all outsourced.

This could explain how some scientists who developed a better Li-ion battery, and originally targeted it at power tools, found themselves in a struggle for survival tied to the fortunes of the electric vehicle industry.

The Cautionary Tale of A123 Systems

If the recent bankruptcy of A123 Systems holds any lesson for us, it is that a nanotech-enabled technology that places itself squarely in competition with an established one cannot settle for being incrementally better than its nearest competitors; it has to be better than the technology it’s trying to displace.

Some have predicted that the A123 bankruptcy will be painted as another Solyndra, the California maker of photovoltaics that failed despite heavy backing from the Obama administration. Of course, when Konarka, a solar-cell company from Massachusetts, the state where Republican presidential candidate Mitt Romney was once governor, went belly up earlier this year, some suggested it could be spun into “Romney’s Solyndra.”

The underlying problems of A123 Systems, Solyndra, and Konarka are not political ones of governmental policies—they're illustrations of the futility of ignoring good old-fashioned supply-and-demand economics. (Solyndra, besides never being competitive with traditional energy sources, was also forced to compete with heavily subsidized solar alternatives.)

There is little question that A123 Systems made a better Li-ion battery than its competitors. The problem was that the nano-enabled battery it came up with for powering electric vehicles (EVs) was not competing with other Li-ion batteries, but with the internal combustion engine.

This is not a political issue or an ideological issue; it’s a numbers issue. Back in 2010, Energy Secretary Steven Chu made clear the conditions of success for EVs when he said, "A rechargeable battery that can last for 5000 deep discharges, [have] 6–7x higher storage capacity (3.6 MJ/kg = 1,000 Wh/kg) at [a] 3x lower price will be competitive with internal combustion engines (400–500 mile range)." Full stop.

The pity of the story is that A123’s bankruptcy, after such auspicious beginnings, might pour cold water on the entrepreneurial ardor of other ambitious scientists. It shouldn’t. But businesses need to assess their competitive landscape realistically, even when they’re built on a gee-whiz technology.

A123 Systems unfortunately appears to have fallen victim to the belief that if it could simply come up with a better Li-ion battery than its competitors, it would be powering the EVs of the future. What it failed to recognize was that EVs first had to win the market from today’s petrol-powered cars.
