Nanoclast

Nanotechnology Won't Be Delivering a Utopia Anytime Soon

Dr. Michio Kaku, a theoretical physicist at the City University of New York and a regular contributor to numerous TV documentaries on physics topics, has offered up an intriguingly titled video on the website Big Think that’s spreading like wildfire through social media: Can Nanotechnology Create Utopia? You can watch the video (5+ minutes long) below.

Kaku starts off by discussing the quest for utopias that fueled the early settlers of the New World. Oddly, his main example is the Shakers. A little background is in order here: The Shakers took a monastic pledge of celibacy, a lifestyle that inevitably shrinks your numbers. In fact, the sect probably would have died out in the 19th century but for the Civil War, during which they took in large numbers of orphans. However, Kaku cites scarce resources as one of the main causes of the Shakers’ disappearance (they haven’t disappeared entirely, by the way; a few Shakers remain). You see, in the cold winters they couldn’t get enough seeds, and this scarcity led to conflict, or so Kaku seems to believe or wants us to believe. In point of fact, the Shakers were among the first organizations to build a business through mail order, and one of the main things they sold was seeds.

But Kaku’s expertise is not the history of religions in the colonial United States, rather, it's theoretical physics. I think based on his next example of utopias we can safely say that the emphasis should be on “theoretical” because what we get is a discussion of a Star Trek replicator.

In the video, Kaku draws a comparison between a Star Trek replicator and the molecular manufacturing of table-top factories and universal assemblers. He explains that with these universal assemblers we become “like gods,” able to turn wood into glass. If I had these god-like powers, I doubt my first feat would be to turn wood into glass, but I guess he was just illustrating a point.

And what is his point? Well, when we get these universal assemblers and can make anything we want just by pressing the button “Ferrari,” we will be in a world of such abundance that it will seem like a kind of utopia.

However, Kaku cautions that this utopia has interesting philosophical repercussions, which he illustrates by recounting nearly an entire episode of Star Trek. I won’t bother you with the plot here, but the questions it raises are along the lines of: Will we lose the will to work amid all this abundance?

This kind of handwringing over a nanotechnology future is really the bailiwick of the Center for Responsible Nanotechnology. They could devote great tomes to concerns over a possible outcome that is so far away that not only can nobody predict when it might come (with the exception of an exceptional futurist like Ray Kurzweil), but some believe it may never come.

These naysayers who question the physics of universal assemblers do not deter Kaku because we already have a replicator as proof: Ribosomes. Kaku explains that ribosomes turn your cheeseburger into the DNA that makes for the next generations of humans.

Of course, it took nature a few billion years to come up with this feat of molecular manufacturing. With that time frame in mind, should we really be worrying ourselves about the ironic hardships delivered upon us by a utopia built on a replication technology that shows little sign of arriving anytime soon?

NASA’s Decline as Leader in Nanotechnology Research

A recent study out of Rice University ("NASA's Relationship with Nanotechnology: Past, Present and Future Challenges"; pdf) points out that NASA is “the only U.S. federal agency to scale back investment in this area [nanotechnology]”.

The numbers in the report, produced by the James A. Baker III Institute for Public Policy at Rice, are alarming: “NASA reduced annual nanotechnology R&D expenditures from $47 million to $20 million.” And the Ames Research Center in California, which had set up its Center for Nanotechnology five years before the National Nanotechnology Initiative (NNI) was even established, had its staff reduced from 60 researchers with an $18 million budget down to 10 researchers and a $2 million budget.

The Rice report points to two events leading to this decline in NASA’s nanotechnology funding. In 2003, the space shuttle Columbia accident put NASA’s program under scrutiny, leading to a National Academies review. Then in 2004, President George W. Bush presented his “Vision for Space Exploration,” which, while including some lofty goals such as a manned mission to Mars, actually cut NASA’s technology R&D budgets.

Not all the news about NASA’s nanotechnology budget is quite as dire. According to the report, the “NNI reports a 29-percent increase in NASA nanotechnology-specific R&D in 2012—from $17 million in 2011 to $23 million in 2012.”

This latest upswing is good news, but have the previous eight years of cuts to nanotechnology research really been that detrimental to NASA’s space exploration? It’s not really clear whether there has been a negative impact on NASA.

NASA’s total research appropriations in the years between 2003 and 2010 decreased more than 75 percent, from $6.62 billion to $1.55 billion. So if there’s been a perceived—or real—decline in NASA’s space exploration it may have just as easily come from the cuts throughout its entire technology R&D budget.

Also, even as NASA's funding declined in those eight years, the U.S. government’s overall funding of nanotechnology research nearly doubled. NASA’s interests in nanotechnology are somewhat removed from the areas of energy, medicine and materials that have been the focus of the government's nanotechnology funding strategies.

And although NASA has not been high on the U.S. government’s list of recipients for nanotechnology funding, nanotechnology has continued to find its way into NASA programs. Nanomaterials developed by Lockheed Martin and Nanocomp Inc. were integrated into the Juno spacecraft destined for Jupiter. Is it necessary for NASA to develop the nanotechnology in order for it to improve NASA spacecraft?

While the numbers may be somewhat alarming, NASA’s decline as a leader in U.S. nanotechnology research has really just been a reallocation of funding to different agencies and a move toward outsourcing some of the nanomaterial development that had previously been done at NASA labs. That might even be a good thing, not only for other technologies such as solar energy and drug delivery, but also for NASA itself, which can focus its resources on advancing its space program.

Nanotechnology As Socio-Political Project

Nanotechnology has always been burdened with a bit of an identity crisis. Scientists, pundits and everyone in between are constantly offering up definitions for the term. This blur of definitions often leads to confusion and, worse, inflates people’s expectations of what nanotechnology can deliver.

One possible example of this disconnect between nanotechnology and its expectations is the recent bankruptcy of A123 Systems. One can’t help but think that the stalwart support the company received over the years from investors—more than $1 billion raised from private investors, despite never turning a profit—was in part due to a blind trust that the magic of nanotechnology would somehow save the day.

How is it that nanotechnology has been transformed into this seemingly magic vehicle for technological innovation for everything from curing cancer to enabling electric vehicles? To understand it, we need to take a step back and move beyond mere definitions of nanotechnology and instead reach some deeper understanding of how we’ve become so flummoxed in defining it.

Photo: University of Nottingham
Richard Jones of Sheffield University (center) with Professor Chris Rudd of the University of Nottingham (left) and guests

To our rescue is Professor Richard Jones, who in addition to being a noted scientist is an eloquent commentator on nanotechnology, addressing here in the pages of Spectrum the role of nanotechnology in achieving the Singularity.

In a chapter entitled “What has nanotechnology taught us about contemporary technoscience?” in the new book “Quantum Engagements: Social Reflections of Nanoscience and Emerging Technologies,” Jones suggests that nanotechnology has come to its peculiar status through a combination of political and cultural forces, along with only a smattering of science.

Jones examines the etymology of the term “nanotechnology” and shows how it came to prominence outside the scientific community. And when he turns his lens on the science of nanotechnology, he finds it is such a smorgasbord of different scientific disciplines that it’s hard to see how any of them can really be related, never mind form the foundation of a single scientific field. Here are some, if not all, of the disciplines Jones explains fit under the nanotechnology umbrella:

  • Precision engineering—microfabrication techniques
  • Meso-scale physics and low-dimensional semiconductors
  • Molecular electronics
  • Plastic electronics
  • Cluster chemistry
  • Colloid science
  • Powder technology
  • Microscopy
  • Materials science
  • Supramolecular chemistry
  • Life sciences

Jones argues that nanotechnology has not done anything to bring these fields together, nor is there any indication that they are about to merge into one, broad field known as “nanotechnology.” However, the wide disparity between the many disciplines could explain “why tensions appear between the visions of nanotechnology proposed by different high status academic champions, and disparities are apparent between these visions and the nature of actual products which are claimed to use nanotechnology.”

The innovation infrastructure that has been built up around nanotechnology also has fueled some of nanotechnology’s unusual characteristics. Jones carefully goes through how the funding mechanisms have changed over the last 30 years and how corporate structures—through the breakup of monopolies (like AT&T)—have resulted in the great corporate laboratories of the post-WWII era being diminished to mere shadows of their former selves.

What has sprung up in their place is a new innovation model in which intellectual property developed at a university is “spun out” and commercialized through venture capital funding. The business details of commercialization, like “the identification of market need, manufacturing, the incorporation of the manufactured nanomaterials into a finished product, and the marketing of that product,” are all outsourced.

This could explain how some scientists who developed a better Li-ion battery and originally targeted it at power tools found themselves in a struggle for survival tied to the fortunes of the electric vehicle industry.

The Cautionary Tale of A123 Systems

If the recent bankruptcy of A123 Systems holds any lesson for us, it could be that any nanotech-enabled technology that places itself squarely in competition with an established technology has to be more than incrementally better than its nearest competitors; it has to be better than the incumbent it’s trying to displace.

Some have predicted that the A123 bankruptcy will be painted as another Solyndra, the California maker of photovoltaics that failed despite heavy backing from the Obama administration. Of course, when Konarka, a solar cell company from Massachusetts, the state where Republican presidential candidate Mitt Romney was once governor, went belly up earlier this year, some suggested that it could be spun into “Romney’s Solyndra.”

The underlying problems of A123 Systems, Solyndra, and Konarka are not political ones of governmental policies—they're illustrations of the futility of ignoring good old-fashioned supply-and-demand economics. (Solyndra, besides never being competitive with traditional energy sources, was also forced to compete with heavily subsidized solar alternatives.)

There is little question that A123 Systems made a better Li-ion battery than its competitors. The problem was that the nano-enabled battery it came up with for powering electric vehicles (EVs) was competing not with other Li-ion batteries but with the internal combustion engine.

This is not a political issue or an ideological issue, it’s a numbers issue. Back in 2010, Energy Secretary Steven Chu made clear the conditions of success for EVs when he said, "A rechargeable battery that can last for 5,000 deep discharges, [have] 6-7x higher storage capacity (3.6 MJ/kg = 1,000 Wh) at [a] 3x lower price will be competitive with internal combustion engines (400-500 mile range)." Full stop.
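Chu’s target can be sanity-checked with simple unit arithmetic. A back-of-the-envelope sketch; the ~150 Wh/kg figure for contemporary Li-ion cells is an assumed ballpark, not a number from the article:

```python
# Check the units in Chu's EV battery target: 1 Wh = 3,600 J,
# so 3.6 MJ/kg works out to exactly 1,000 Wh/kg.
J_PER_WH = 3600.0

target_mj_per_kg = 3.6
target_wh_per_kg = target_mj_per_kg * 1e6 / J_PER_WH
print(target_wh_per_kg)  # 1000.0

# Against an assumed ~150 Wh/kg for contemporary Li-ion cells, the
# target implies roughly a 6-7x improvement in storage capacity:
current_wh_per_kg = 150.0
print(target_wh_per_kg / current_wh_per_kg)  # ~6.7
```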

The pity of the story is that A123’s bankruptcy, after its auspicious beginnings, might pour cold water on the entrepreneurial ardor of other ambitious scientists. It shouldn’t. But businesses need to realistically assess their competitive landscape, even when they are built on a gee-whiz technology.

A123 Systems unfortunately appears to have fallen victim to the belief that if it could simply come up with a better Li-ion battery than its competitors, it would be powering EVs of the future. What it failed to recognize was that first EVs had to win the market from the petrol-based cars of today.

Nanostructured Silicon Solar Cells Achieve High Conversion Efficiency Without Antireflective Coatings

The economics of solar cells always involves striking a balance between conversion efficiency and manufacturing costs.

Researchers at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) believe that they have struck a balance between both of these factors by developing a nanotechnology-enabled silicon solar cell that boasts 18.2 percent conversion efficiency and that should be cheaper to produce.

The research, which was published in the journal Nature Nanotechnology (“An 18.2%-efficient black-silicon solar cell achieved through control of carrier recombination in nanostructures”), produced a solar cell without the anti-reflective coatings that are typically required to reach that conversion efficiency.

The NREL team was able to achieve this high efficiency without anti-reflection coatings by making billions of nano-sized holes in the silicon. Since the holes are smaller than the wavelengths of the light striking the silicon, there is no sudden change in optical density at the surface, and the light doesn’t reflect back off the silicon.
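The effect of an abrupt optical transition can be sketched with the normal-incidence Fresnel equation. The refractive indices and the crude two-step “graded” surface below are illustrative assumptions, not values from the NREL paper:

```python
# Normal-incidence Fresnel reflectance: R = ((n1 - n2) / (n1 + n2))**2.
def reflectance(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

# Illustrative indices: air ~1.0, crystalline silicon ~3.9 in the visible.
n_air, n_si = 1.0, 3.9
r_bare = reflectance(n_air, n_si)
print(r_bare)  # ~0.35: a bare silicon surface reflects about a third of the light

# A crude two-layer stand-in for the graded density created by
# sub-wavelength pores: insert an intermediate geometric-mean index.
n_mid = (n_air * n_si) ** 0.5
r_graded = 1 - (1 - reflectance(n_air, n_mid)) * (1 - reflectance(n_mid, n_si))
print(r_graded)  # ~0.20: noticeably less than the bare interface
```

A real nanostructured surface grades the index continuously over many effective layers, driving the reflectance far lower still; the two-step version only illustrates why softening the transition helps.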

While the NREL researchers had previously demonstrated that these nanostructures reflected less light off the silicon’s surface, this latest research marks the first time they were able to use the technique to achieve high conversion efficiency with the nanostructured silicon cells.

“This work can have a big impact on both conventional and emerging solar cell based on nanowires and nanospheres. For the first time it shows that really great solar cells can be made from nanostructured semiconductors,” says Howard Branz, the principal investigator of the research, in an NREL press release.

While attaining high energy conversion efficiencies for an individual cell—and even doing it with a cheaper manufacturing process that eliminates the need for antireflective coating—is an achievement, a big obstacle remains: getting those individual cells into a module without significant losses in efficiency.

This may be an area that will be addressed in further research. Branz adds: “The next challenges are to translate these results to common industrial practice and then get the efficiency over 20 percent. After that, I hope to see these kinds of nanostructuring techniques used on far thinner cells to use less semiconductor material.”

European Commission Pulls Back on New Nano Regulations

Last year the European Commission (EC) was eager to show its proactive approach to regulating nanomaterials when, after a protracted process, the Commission arrived at a definition for nanomaterials.

While the EC achieved its goal of a definition, the definition itself came under some pretty pointed criticisms for being so broad as to include the incidental nanoparticles that are produced when a car’s tires roll on pavement.

“We’ve met people recently who work on the legal side within large chemical organizations, who up until October last year didn’t know anything about nanotechnology and suddenly they’re going to be caught within legislation, which is quite a shock to them,” said Jeremy Warren, CEO of Nanosight, in a webinar he produced this year to explain the impact of the new definition.

When any company—European or otherwise—believes that it has been swept up into a regulatory framework for a material it had no intention of making or using, or perhaps didn’t even know existed, government bureaucrats are certainly going to hear about it. It didn’t take long for the industry ministers of European countries to take heed.

Last week we began to see how the EC was trying to rein in its proactive approach when it released its “Communication on the Second Regulatory Review on Nanomaterials.” One of the points of the position paper was “that current legislation covers to a large extent risks in relation to nanomaterials and that risks can be dealt with under the current legislative framework.”

All of that work to develop a definition of nanomaterials, so as to create a class of materials that are not currently known to be hazardous (but might be someday), seemed to be for naught. Instead, the EC seems to have taken the position that current laws governing run-of-the-mill materials pretty much cover the large majority of nanomaterials out there.

The reaction of NGOs like Greenpeace and Friends of the Earth was swift and angry. The NGOs trotted out the term “precautionary principle,” which seems to have come to mean an absolute moratorium on all nanomaterials rather than producers taking on the burden of proof regarding the level of risk of their products.

Another pervasive sentiment among the NGOs is that the EC is stalling. If the EC were indeed stalling, one possible explanation would be that it wants to wait for scientific data proving nanomaterials safe while continuing to promote new companies and new products by delaying the imposition of regulations. I suppose that’s what the NGOs believe is happening in this case.

To me, it’s a bit too conspiratorial an explanation. I am more likely to believe this long process stems from the way bureaucracies operate, especially the European variety. They love to commission reports and studies and then hold meetings on the results. The European Union’s approach to the specific issue of nanosilver’s risk has driven some scientists to such levels of frustration they felt compelled to write an article for Nature Nanotechnology, decrying the situation.

Bureaucratic dithering aside, the real obstacle to arriving at a swift resolution on the risk of nanomaterials is that the science takes a long time. As I’ve commented before, determining the risk of nanomaterials amounts to asking for an almost complete overhaul of the periodic table in terms of toxicity. Let’s keep the time needed to resolve these issues in that context.

Plasmonics Used to Dope Graphene

The big push in graphene research for electronics has been overcoming its lack of an inherent band gap.  But silicon has another leg up on graphene when it comes to electronics applications: it can comparatively easily be p- and n-doped (positive and negative).

While there have been a number of approaches taken for doping graphene, researchers at Rice University believe that the idea of plasmon-induced doping of graphene could be ideal for this purpose.

The research (“Plasmon-Induced Doping of Graphene”), which was published in the journal ACS Nano, looks to use plasmonics, which exploits the fact that “photons striking small, metallic structures can create plasmons, which are oscillations of electron density in the metal.”

The Rice team placed nanoscale plasmonic antennas, dubbed nonamers, on the graphene to manipulate light in such a way that it injects electrons into the graphene, changing its conductivity. The nonamers took the form of eight nanoscale gold discs encircling one larger gold disc, and were placed on the graphene with electron beam lithography.

When the graphene and nonamers are exposed to light, the incident light is converted into hot electrons that transform those portions of the graphene where the nonamers are located from a conductor to an n-doped semiconductor.

“Quantum dot and plasmonic nanoparticle antennas can be tuned to respond to pretty much any color in the visible spectrum,” says Rice professor Peter Nordlander, one of the authors of the paper, in the university's press release about the research. “We can even tune them to different polarization states, or the shape of a wavefront."

Nordlander adds: “That’s the magic of plasmonics. We can tune the plasmon resonance any way we want. In this case, we decided to do it at 825 nanometers because that is in the middle of the spectral range of our available light sources. We wanted to know that we could send light at different colors and see no effect, and at that particular color see a big effect.”

While the possibility of a process that simply uses light to dope graphene seems pretty amazing, the researchers are looking ahead to a day when light flashed in a particular pattern could replace a key, unlocking a door by triggering the circuitry of the lock. “Opening a lock becomes a direct event because we are sending the right lights toward the substrate and creating the integrated circuits. It will only answer to my call,” Nordlander suggests in the release.

Graphene-based Gas Membranes Promise Reduced Carbon Dioxide Emissions

Researchers at the University of Colorado Boulder have achieved the first experimental results of using graphene as a membrane to separate gases. While still a long way off from industrial use, the membranes do possess mechanical properties that should prove beneficial in natural gas production. The ultra-thin, graphene-based membranes’ highly selective pores increase flux through the membrane, making the process more energy efficient and thereby reducing the plant’s production of carbon dioxide.

The University of Colorado research builds on work completed last year at Boulder showing that graphene possesses extraordinary adhesion capabilities. That work demonstrated that if graphene were used in a multi-layer membrane, the adhesion between the layers of the membrane would be extremely strong.

This most recent work also seems to go one step beyond research at MIT from earlier this year. In that work, scientists used computer simulation to show that nanoporous graphene could replace the membrane materials currently used in reverse osmosis water desalination. At the time, the MIT researchers said they expected that turning their computer models into real-world membranes would be daunting; according to them, getting the pore size precisely right would be difficult to do on a large scale.

While the Boulder researchers have not attempted to scale up their experimental results yet, they have managed to get the pore size correct on the graphene so it can separate a variety of gases.

The research, which was published in the journal Nature Nanotechnology (“Selective molecular sieving through porous graphene”), was able to achieve its precisely sized pores by etching them into graphene sheets with a process involving ultraviolet light-induced oxidation. The resulting porous graphene was then tested on a range of gases including “hydrogen, carbon dioxide, argon, nitrogen, methane and sulphur hexafluoride—which range in size from 0.29 to 0.49 nanometers.”
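Molecular sieving of this sort comes down to comparing pore size with each gas’s kinetic diameter. A minimal sketch, assuming commonly cited textbook diameters spanning the article’s 0.29-0.49 nm range; both the individual values and the 0.35 nm pore are illustrative, not figures from the paper:

```python
# Approximate kinetic diameters (nm) of the gases tested.
kinetic_diameter_nm = {
    "H2": 0.289, "CO2": 0.33, "Ar": 0.34,
    "N2": 0.364, "CH4": 0.38, "SF6": 0.49,
}

def permeating(pore_nm: float) -> list[str]:
    """Gases small enough to pass through a pore of the given diameter."""
    return sorted(gas for gas, d in kinetic_diameter_nm.items() if d < pore_nm)

# A hypothetical ~0.35 nm pore passes H2, CO2 and Ar but blocks the rest:
print(permeating(0.35))  # ['Ar', 'CO2', 'H2']
```

In a real membrane the cutoff is not perfectly sharp, but a pore tuned between two kinetic diameters is what makes the sieve selective.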

“These atomically thin, porous graphene membranes represent a new class of ideal molecular sieves, where gas transport occurs through pores which have a thickness and diameter on the atomic scale,” says Colorado mechanical engineering professor Scott Bunch in a university press release.

The main technical challenge, according to the researchers, will be bringing these results up to an industrial scale. In particular, they will need to find a process by which they can create large enough sheets of graphene. The researchers even concede that getting the pores precisely defined still needs further development.

Nanowires Show the Strain Limit of Silicon

Most nanotechnology developments targeted at electronics look ahead to a post-silicon world. But silicon is still firmly with us, and every attempt is being made to wring the last drop of capability out of the material, sometimes with the help of nanotechnology.

For the last decade, researchers have been pushing silicon’s limits by straining it. Whether in the more recent organic semiconductor variety or the run-of-the-mill inorganic variety, strained silicon has been the mainstay of pushing silicon to the very edge of its capabilities. The question is how far strained-silicon electronics can take us.

Swiss researchers at the Paul Scherrer Institute and ETH Zurich may have an answer to that question. In their most recent research, they strained silicon nanowires right up to their breaking point and still managed to integrate them into an electronic component.

Renato Minamisawa from the Paul Scherrer Institute describes the research in a press release as “the strongest tension ever generated in silicon; probably even the strongest obtainable before the material breaks.”

To accomplish this, the Swiss researchers turned to the tried-and-true method of top-down manufacturing: etching a substrate with a silicon layer that is already under some strain. In the research, which was published in the journal Nature Communications (“Top-down fabricated silicon nanowires under tensile elastic strain up to 4.5%”), the Swiss team etched dumbbell-shaped bridges into the strained silicon to exploit strain accumulation.

Basically, the silicon is initially strained in all directions. As you etch the material away into narrow bridges, the thin material left is pulled in just two directions. “Since all the force which was distributed over a larger area before the etching now has to concentrate in the wire, a high tension is created within it,” says Minamisawa.
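That force-concentration argument can be sketched with a toy calculation; the film stress and the geometry below are hypothetical numbers, not figures from the paper:

```python
# Force carried by a wide, pre-strained pad must pass through the narrow
# bridge etched into it, so for equal thickness the stress in the bridge
# scales with the ratio of widths (stress = force / cross-sectional area).
def bridge_stress(pad_stress_gpa: float,
                  pad_width_nm: float,
                  bridge_width_nm: float) -> float:
    return pad_stress_gpa * pad_width_nm / bridge_width_nm

# A modestly stressed 0.2 GPa film feeding a bridge 10x narrower:
print(bridge_stress(0.2, 1000.0, 100.0))  # 2.0 GPa concentrated in the wire
```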

But just straining silicon to its maximum before it breaks is a fairly straightforward process and not that noteworthy on its own.

"There is actually no magic behind building up tension in a wire - you just have to pull strongly on both ends," explains Hans Sigg of the Laboratory for Micro- and Nanotechnology at the Paul Scherrer Institute in the same press release. "The challenge is to implement such a wire in a stressed state into an electronic component."

In the method the researchers developed, the thin wires that remain after etching the silicon layer are attached to the rest of the material only at their endpoints and, most importantly, are perfectly uniform in dimension and strain. It would be possible to produce thousands of such wires, all in a very precisely strained state. "And it is even scalable, meaning that the wires can be fabricated as small as you want," Sigg points out.

Despite making every effort to ensure that the process is perfectly compatible with current fabrication methods and materials, the researchers seem curiously unconcerned whether the process ever makes it into industry. As Minamisawa notes about the silicon nanowires: "But even if they do not end up in microelectronic applications, our research could show what the limits of silicon electronics really are."

Silicon Oxide as Resistive Memory Goes Transparent

Since 2008, James Tour’s research group at Rice University has been championing its discovery that silicon oxide can form the basis of resistive memory. In the past four years, a fair number of doubters just couldn’t believe that silicon oxide could switch its resistance, but a growing body of research from other labs has corroborated that four-year-old work.

Now Tour and his team—with Jun Yao at the fore—have pushed their research one step further with their latest paper in the journal Nature Communications (“Highly transparent nonvolatile resistive memory devices from silicon oxide and graphene”), in which they demonstrate a non-volatile, transparent memory chip resistant to both heat and radiation.

The core of the new memory chip is silicon oxide sandwiched between layers of graphene and placed on flexible plastic sheets. While the Rice researchers have lately been looking at the use of graphene in flexible displays, that work sandwiched nanowires between the layers of graphene. In this latest research, it is silicon oxide that sits between the graphene layers.

In fact, it has been the Rice team’s nanowire-and-graphene research—and its dogged belief in silicon oxide’s ability to switch its resistance and serve as a resistive memory—that made this latest development possible. In the 2010 paper that followed their 2008 discovery of silicon oxide’s capabilities, Yao observed that running a voltage through silicon oxide stripped oxygen atoms from the material, creating a 5-nanometer-wide channel of pure silicon filaments.

When Yao reversed the voltage, he discovered that the silicon filaments could be broken and re-formed, creating a “0” for a broken circuit and a “1” for the healed circuit—the basis of computer memory. Since then, Yao has been building up evidence that the switching effect he witnessed was caused not by the breaking graphite but by the underlying crystalline silicon.
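The set/reset cycle described here can be sketched as a toy state machine. This is a deliberate simplification for illustration, not the Rice team’s device physics, and the 1-volt threshold is an invented parameter:

```python
# Toy model of filamentary resistive memory: a strong positive voltage
# re-forms the conductive silicon filament (bit = 1), a strong negative
# voltage breaks it (bit = 0), and the state persists with no power
# applied, which is what makes the memory non-volatile.
class ResistiveCell:
    def __init__(self) -> None:
        self.filament_intact = False  # starts as a broken circuit: "0"

    def apply(self, voltage: float, threshold: float = 1.0) -> None:
        if voltage >= threshold:
            self.filament_intact = True   # set: filament heals
        elif voltage <= -threshold:
            self.filament_intact = False  # reset: filament breaks
        # sub-threshold voltages (reads) leave the state untouched

    def read(self) -> int:
        return 1 if self.filament_intact else 0

cell = ResistiveCell()
cell.apply(+2.0)
print(cell.read())  # 1: healed circuit
cell.apply(-2.0)
print(cell.read())  # 0: broken circuit
```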

“Jun was the first to recognize what he was seeing. Nobody believed him,” says Tour in the press release. “Jun quietly continued his work and stacked up evidence, eventually building a working device with no graphite.”

To quiet the doubters, who believed that the switching had to be due to some carbon in the system, Yao created a device that had no exposure to the carbon at all. The devices that the Rice team is producing now contain no metals except for the leads attached to the graphene electrodes.

Tour further notes: “Now we’re making these memories with about an 80 percent yield of working devices, which is pretty good for a non-industrial lab. When you get these ideas into industry’s hands, they really sharpen it up.”

Past challenges to flash memory from nano-enabled, non-volatile memories have proven less than successful, but this could be the architecture that changes that trend.



IEEE Spectrum’s nanotechnology blog, featuring news and analysis about the development, applications, and future of science and technology at the nanoscale.

Dexter Johnson
Madrid, Spain
Rachel Courtland
Associate Editor, IEEE Spectrum
New York, NY