Nanoclast

Nanotech's Role in Clean Drinking Water Creeping Forward

While attending the EuroNanoForum 2011 conference this week in Budapest, Hungary, I was confronted at least once with the question of how nanotechnology could be used in water purification and desalination. You really can’t get through one of these events without hearing how nanotechnology could save the world.

But the water issue is one I have taken some interest in over the years, putting together a conference on the subject seven years ago and even taking up the issue on Spectrum’s pages here, here, and here.

A couple of years back, the Meridian Institute published a good white paper entitled “Conventional and Nano-Based Water Technologies,” which did a nice job of cataloguing all the nanotech-based solutions for water desalination.

We may have another one to add to that list, thanks to a recent article published in Physics World and written by Jason Reese, Weir Professor of Thermodynamics and Fluid Mechanics at the University of Strathclyde.

The article relates how carbon nanotubes (CNTs) are enabling a technique pursued by Reese that moves away from the high energy cost of conventional reverse osmosis. Reese has shown that CNTs can improve water permeability to 20 times that of modern commercial reverse-osmosis membranes. A factor-of-20 improvement in permeability should have a pretty significant impact on the energy requirements.
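For a rough sense of why permeability matters for energy use, here is a minimal back-of-envelope sketch in Python, using the standard solution-diffusion picture in which water flux is proportional to the applied pressure minus the osmotic pressure. Every number in it (target flux, osmotic pressure, permeability values) is an assumption for illustration, not a figure from Reese's article.

```python
# Back-of-envelope sketch: how membrane permeability affects the pressure
# (and hence idealized pumping energy) needed in reverse osmosis.
# All numbers below are illustrative assumptions, not values from the article.

TARGET_FLUX = 15.0       # desired water flux, L/(m^2*h)
OSMOTIC_PRESSURE = 35.0  # effective osmotic pressure of the seawater feed, bar

def required_pressure(permeability_lmh_bar):
    """Applied pressure (bar) needed to reach TARGET_FLUX,
    assuming flux = permeability * (applied pressure - osmotic pressure)."""
    return OSMOTIC_PRESSURE + TARGET_FLUX / permeability_lmh_bar

def ideal_energy_kwh_per_m3(pressure_bar):
    """Idealized pumping energy per cubic meter of permeate (perfect pumps,
    perfect brine energy recovery): pressure times volume, converted to kWh."""
    return pressure_bar * 1e5 / 3.6e6  # 1 bar = 1e5 Pa, 1 kWh = 3.6e6 J

for label, permeability in [("commercial RO membrane", 1.0),
                            ("20x more permeable (CNT)", 20.0)]:
    p = required_pressure(permeability)
    print(f"{label}: {p:.1f} bar, ~{ideal_energy_kwh_per_m3(p):.2f} kWh/m^3")
```

Under these assumptions, the 20-fold permeability gain lets the process run just above the osmotic-pressure floor, trimming the idealized pumping energy by roughly a quarter rather than by a factor of 20; real-world savings depend on recovery ratio, pump efficiency, energy-recovery devices, and fouling.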

This is certainly a move in the right direction. However, I have to confess that when I put the NanoWater conference together seven years ago, I had somewhat greater expectations that we would be further along by this point. I am not entirely convinced it’s a lack of technological solutions, nanotech-related or otherwise, that is the cause of the delay.

Examining the New Dawn of Dye-Sensitized Solar Cells with their Discoverer

Within the last month, I have had the rare opportunity, on two separate occasions, of sitting down with two world-renowned Swiss scientists.

First, I got to meet Nobel Prize winner Heinrich Rohrer, and then on Monday of this week, at EuroNanoForum 2011, I got to chat with last year’s winner of the Millennium Technology Prize, Michael Grätzel, who is currently a professor at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and who in 1991 discovered dye-sensitized solar cells (DSSCs), sometimes called Grätzel cells in his honor.

While chatting over some lunch, I put three questions to him that were more or less the following: 

--Is the future of dye-sensitized solar cells primarily in the area of powering electronic devices, e.g. laptops, or could they have a place in the power grid?

--Was he aware of the work of Angela Belcher in using viruses to manipulate carbon nanotubes for use in dye-sensitized solar cells, and were there any other innovations that he saw as key to the further development of DSSCs?

--Is improving the conversion efficiency of DSSCs the most critical technological hurdle for the cells?

While I did record Dr. Grätzel’s responses, the audio quality was fairly poor because we were sitting in a busy lunch area of the conference. So I will quote some of his responses here.

To the first question above, Dr. Grätzel started by saying, “It’s certainly a disruptive technology, which is presently being commercialized mainly through niche applications such as providing electric power for portable electronic devices.

“It is also a very strong contender for building integrated photovoltaics (BIPV). The DSSC is the only solar cell that can be used to realize transparent glass facades, skylights and windows that produce electric power from light," he added.

“Another potentially huge market presently targeted by industry is printing the DSSC on coil-coated steel for roofing and cladding. Commercial production of flexible and lightweight devices already started in 2009. The DSSC is something that will add new markets to the present applications of silicon cells, but it will not confront conventional PV cells at this stage.”

However, he was quick to point out that the DSSC does have distinct advantages over silicon cells.

“We [DSSC] have no competition, for example, here in these light conditions [low interior lighting]. Here we are the best. For indoor applications and for outdoor applications in ambient light, that is where money is being made now by the companies that have invested in DSSCs,” he said. “But that’s not the only goal. The final target is to mass-produce modules, which have presently reached 10% conversion efficiency, for large-scale solar electricity production, including applications for rooftops and solar farms.”

In 2010, Sony did in fact demonstrate a prototype module based on DSSC technology with a 10% conversion efficiency.

When it came to the question of conversion efficiency, Dr. Grätzel seemed resigned to the percentage game that exists, but he believed that price per kilowatt-hour (kWh) was a more significant metric.

“We have to play the game. We have to go and have our efficiencies validated by an accredited PV calibration laboratory. We cannot create a different world where we just say we are the best,” he said. “We are living exactly with the standards that silicon has set in terms of efficiency and stability.

“But, on the other hand, it is true that when it comes to the advantages, we should play those up as well,” he said. He added that under certain outdoor exposures the DSSC will already outperform silicon in the key metric of kWh price.

“In the end, what we would really like to see is kWh price used as a metric in addition to peak watt price. The peak watt price is a good standard but when it comes to outdoor applications it often does not reflect reality such as the performance under cloudy conditions and the drop of conversion efficiency with temperature encountered by silicon solar cells,” he said.
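To make that distinction concrete, here is a minimal sketch, with numbers that are my own illustrative assumptions rather than anything Grätzel cited, of how two modules with the same peak-watt price can end up with different kWh prices once real-world annual yield is factored in.

```python
# Peak-watt price vs. kWh price: a rough illustration with assumed numbers.
# (Ignores degradation, O&M, financing, and discounting for simplicity.)

def cost_per_kwh(price_per_wp, annual_kwh_per_kwp, lifetime_years):
    """Lifetime cost of energy: module price per kWp divided by the
    energy one kWp delivers over its lifetime."""
    lifetime_kwh_per_kwp = annual_kwh_per_kwp * lifetime_years
    return price_per_wp * 1000.0 / lifetime_kwh_per_kwp

PRICE_PER_WP = 1.50   # assumed module price, $/Wp (same for both technologies)
LIFETIME = 20         # assumed years of service

# Assumed annual yields (kWh per kWp) in a hot, partly cloudy climate:
# silicon loses output at high temperature, while the DSSC is claimed
# to hold up better in diffuse, low-light conditions.
yields = {"silicon module": 1100, "DSSC module (claimed advantage)": 1250}

for name, annual_yield in yields.items():
    print(f"{name}: ${cost_per_kwh(PRICE_PER_WP, annual_yield, LIFETIME):.3f}/kWh")
```

Same peak-watt price, different kWh price: that is the gap Grätzel would like the industry’s standard metrics to capture.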

When I asked about the work of Angela Belcher based on the DSSC, his response was clear: “That’s a real breakthrough; we can learn a lot from her fascinating experiment.”

But perhaps more intriguing is that he and his team are submitting their latest research this week on new dyes that break some previous conversion-efficiency records for DSSCs.

It seems as though DSSC technology is really taking hold, with developments both commercially and in the laboratory accelerating.

Definitions for Nanotechnology Inform EU Citizens as much as Regulatory Framework

Yesterday I attended EuroNanoForum 2011 in Budapest, Hungary, which marks the fifth running of this biennial event dating back to 2003.

The Forum serves as a kind of platform from which the European Commission can assess and trumpet its nanotechnology capabilities.

As anyone who has read this blog and my contributions to the TechTalk blog over the years knows, this regional mentality toward the development of nanotechnology strikes me as kind of missing the point of how nanoscience, and later nanotechnology, come to be developed. But it seems that governments forking out funds for these kinds of shindigs is what really keeps them going, so I suppose they can do whatever they want. It’s their party after all.

The conference had organized a special day for journalists that included a press conference with the plenary speakers, among them Michael Grätzel, the discoverer of dye-sensitized mesoscopic oxide particles for use in solar cells (I interviewed him and will blog about that tomorrow), and Rudolf Strohmeier, Deputy Director General of the Directorate General for Research & Innovation at the European Commission.

Since Mr. Strohmeier is a self-described regulator, I thought it might be worth asking him about the wisdom of embarking on a quest to define what nanotechnology is before establishing a regulatory framework. I also thought I might check in to see where they’re at now in the rather lengthy process.

For those who might like a small primer on the topic: the EU believed it necessary to define what nanotechnology is before developing regulations for it, and it all seemed to make sense until the process got stuck in the mud on the issue of “how much” or “how many” nanoparticles.

At the time I first came across the imbroglio, it all seemed a bit silly. Even if they could determine whether the risk of nanoparticles came from the number of nanoparticles or from the weight of the nanoparticles in a material, it wouldn't really sort out whether said material posed any risk.

But it took an article from Andrew Maynard over at the Risk Science Blog for me to see how wrong-headed the EU's approach really is. By shoehorning in a definition that will work for regulators, we may be squeezing science out of the process.

As Maynard concludes:

“Five years ago, I was a strong proponent of developing a regulatory definition of nanomaterials.  Today, with the knowledge we now have, I think we need to start thinking more innovatively about how we identify new materials that slip through the regulatory net – whatever we decide to call them.  Only then will we have a hope of developing science-grounded regulation that protects people while supporting sustainable development.”

Below is an audio recording I made of my exchange with Mr. Strohmeier. Interestingly, according to him, the definition was necessary for educating EU citizens as much as for developing regulations. Patrick Vittet-Philippe, the Press and Information Officer for DG Research and Innovation of the European Commission, makes an additional comment at the end of the recording.

[Audio: my exchange with Rudolf Strohmeier, with a closing comment from Patrick Vittet-Philippe]

In fairness, I didn't really get a chance to follow up with Mr. Strohmeier to ask whether he recognizes the problems that arise when you arbitrarily arrive at a definition that may not always reflect the latest science on the topic. Nonetheless, I can't help but think that a definition that is as much about mollifying the public as it is about good science carries inherent risks of its own.

Mapping of Memristor Could Speed Its Commercialization

It’s been some time since I last checked in on Hewlett-Packard’s drive to develop the memristor. At that time, HP had joined forces with the Korea-based memory chipmaker Hynix Semiconductor Inc. to make memristor chips.

So, while I was waiting to see what would come from the collaboration between HP and Hynix, it seemed that the memristor was being touted not only as changing memory but also as replacing the transistor altogether, as evidenced by the comments on this recent blog post.

Although the latest news is not the announcement of a commercially available product (which looks as though it will be called resistive random-access memory, or ReRAM), the research HP has conducted recently has been successful in mapping out what happens inside the 100-nm channels of the memristor.

The research was conducted by researchers at HP Labs and the University of California, Santa Barbara, and was initially published in the UK-based Institute of Physics journal Nanotechnology.

Basically, the researchers were able to use X-rays to precisely target the channels within memristors in which resistance switching occurs, and then to sort out the chemistry and structure of those channels. If nanotechnology is anything, it is having the tools necessary to see how things operate on the nanoscale and then exploiting that knowledge to get things to do what you want.

And what HP no doubt wants is to get the memristor to market. For the first time, I am seeing a timeline offered up in which 2014 is the expected date to see it incorporated into electronic devices, such as mobile phones and tablets, with 10 times greater embedded memory than is currently available.

IBM's Millipede Project, Social Networking and How Semiconductor Technology Can Save the World

Last year, thanks to Twitter, I came upon a blog penned by Ira Feldman, who was providing coverage of the IEEE San Francisco Bay Area Nanotechnology Council Sixth Annual Symposium.

If there are positives to social networking, this is certainly one of them: knowledge that would otherwise sit in a silo for just those who attended the conference can actually be shared with a larger community. I hope more conference attendees start to make this a practice.

Mr. Feldman has provided coverage once again of this year’s IEEE San Francisco Bay Area Nanotechnology Council annual symposium. 

In particular, Feldman has given us an analysis of the keynote address by Dr. Spike Narayan, a functional manager at IBM: “Nanotechnology: Leveraging Semiconductor Technologies to Address Global Challenges.”

According to Feldman, the presentation asked the question “can we leverage semiconductor technology to address global challenges of environment, energy, healthcare, and water?”

If the recent collaborative work between IBM and the Institute of Bioengineering and Nanotechnology in Singapore, which applied the body of knowledge accumulated in polymer building blocks for creating nanoparticles to the creation of a drug that fights drug-resistant bacteria, is any indication, then the answer is a resounding ‘yes’.

One of the examples Narayan apparently provided for “where semiconductor knowledge is indeed transferable to these other domains” is in the area of disk drives, with Feldman offering the IBM Millipede project as the most advanced example.

The story of the IBM Millipede project is a curious one. The Millipede essentially used an array of thousands of miniaturized atomic force microscopes (AFMs) as a memory device. Since it was based on the AFM that Gerd Binnig had invented, he was sometimes made a spokesman for its capabilities, as he was in his 2004 interview with Spectrum.

Although touted as the next step in mobile memory devices, it was soon rarely mentioned, and most everyone suspected that it had fallen victim to the cheap and increasingly capable qualities of flash memory.

So, last week during the press conference with Gerd Binnig and Heinrich Rohrer, an intrepid journalist (not me) dared to ask about the fate of the IBM Millipede project. I took a gulp and waited for the reply.

Binnig, who had been a champion of the technology, remained unapologetically supportive of it, but he did hand off the particulars of Millipede’s fate to Dr. Paul Seidler, Coordinator of the new Nanotechnology Center at IBM Research in Zurich, to explain more thoroughly.

And just as many suspected, the IBM Millipede project in its original form, creating a mobile storage device, is no longer, but it instead lives on in various other research projects within IBM. At least on the nanotechnology side of things, the Millipede has found its niche in probes for lithography.

Below is Dr. Seidler’s full response.

[Video: Dr. Paul Seidler on the fate of the IBM Millipede project]


Getting back to Ira Feldman, he has wonderfully led us to an archive of all the presentations from the IEEE San Francisco symposium, which can be found here.

An Audience with Nanotechnology Nobel Prize Laureates

As promised last week, I would like to share some audio recordings I made of Gerd Binnig and Heinrich Rohrer taking questions from the press during the opening of the new IBM and ETH Zurich nanotechnology laboratory named in their honor.

This first audio file features both Binnig’s and Rohrer’s responses to my question of why they were interested in looking at inhomogeneities on surfaces in the first place, which eventually led them to create an instrument for doing so. A more complete history of the STM’s genesis can be found in their joint Nobel lecture here.

[Audio: Binnig and Rohrer on what first drew them to surface inhomogeneities]


I was always curious why Gerd Binnig conveyed, in his interview with Harry Goldstein here in the pages of Spectrum, the sense that his designs for the STM would work when nearly every indication he had seemed to point to the conclusion that they simply wouldn't.

The answer is interesting because not only do we see how large a role intuition plays in scientific inquiry, but we also get a sort of engineering/science hybrid approach in which it is perhaps more important to show why something won't work than why it should. It also simply reveals Binnig's determination not to give up.

[Audio: Binnig on why he believed the STM would work]


The final question here comes from another member of the press, who asks both Binnig and Rohrer how it feels to have, in a sense, crystallized the development of the field of nanotechnology. Binnig observes that what the STM created grew beyond what he could have imagined, and Rohrer points to all the contributions from other scientists that made the breakthrough possible.

[Audio: Binnig and Rohrer on crystallizing the field of nanotechnology]


With the STM standing as such a cornerstone of the development of nanotechnology over the last 25 years, one can imagine that both of these scientists have become accustomed to fielding all sorts of questions about what their contribution has meant. Even still, they remain patient with questions from people like me that they have answered many times before, and they still manage to make you feel as though it is the first time they are considering the idea. It was a great privilege.

Intuition Leads to the Tool that Opened Up the Nanoscale Universe and a New Nanotechnology Lab

I was a guest yesterday of IBM, along with a group of some 600 assorted dignitaries, politicians, and other journalists, at the opening of a new $90 million nanotechnology research laboratory at IBM’s research facilities in Zurich, Switzerland.

Along with some other journalists, I had received a preview of the facility back in November, and even then, with concrete still being poured and a jumble of wires seemingly sprouting up from everywhere, the facility impressed with its unique “noise-free labs.” (I should note that the final cost is now being reported as $90 million rather than the $60 million figure I reported back in November. I have been told since posting this that the additional $30 million constitutes the cost of equipment, which was not included in my original figure.)

But yesterday’s event was truly a spectacle, with a big-band orchestra and a performance by a group of yodelers that harkened back to Arthur K. Watson, the son of the founder of IBM, offering a yodel for a Swiss audience 50 years ago; a recording of that yodel preceded yesterday’s live performance. The festivities were not even dampened by the high level of security on hand, apparently in response to some type of terrorist threat(s) targeting the new facility.

While a great deal of attention was paid to the collaborative partnership that will exist at the new facility between IBM and ETH Zurich, it was perhaps the more sentimental aspect of the day that provided a climax to the opening and that held my personal interest in the story.

The new facility has been named the Binnig and Rohrer Nanotechnology Center in honor of the two Nobel Laureates, Gerd Binnig and Heinrich Rohrer, who received the 1986 Nobel Prize in Physics, shared with Ernst Ruska for his earlier work on the design of the electron microscope, for their invention of the scanning tunneling microscope at IBM in Zurich.

Both Binnig and Rohrer were on hand not only to unveil the naming plaque for the new lab but also to conduct a discussion for the full 600 guests, do a Q&A session with journalists, and later give one-on-one interviews. The duo frequently brought the assembled audiences to laughter with their oddly juxtaposed personalities: Rohrer describing himself as a down-to-earth pragmatist, and Binnig possessing, from where I sat, a touch of the poet. They seemed like Nobel Laureates who could become a comedy duo.

Anyone who is involved in the field of nanoscience and nanotechnology owes a debt of gratitude to both of these scientists for deciding that model systems for approximating surfaces, and clumsily dealing with inhomogeneities on them, weren’t sufficient, and that a device should be developed so that we could actually see…and touch…the surface of things on the atomic scale.

In IEEE Spectrum’s oft-quoted 2004 interview with Binnig, A Beautiful Noise, he describes the utter lack of success they had in trying to get their prototype device to do what they expected it to do.

“In a way, this process is just like Columbus going from Europe to America: on the way there, he has no clue that he is coming closer,” relates Binnig in the interview. “We were in exactly the same situation because the instrument never worked. You have no clue what to do, what knobs to turn to make it work better, because it simply does not work at all. You can't be sure whether you are close to a solution or not.”

Remembering this part of the interview, I was struck by his comment in front of the audience that he had always believed the instrument he sketched out three days after starting at IBM would work. So I asked him why, under those circumstances, he continued to believe it would work.

“It’s hard to explain,” he began. “Somehow you just have intuition that it will work.”

[A number of recordings both video and audio were made of the event, and I expect that I will be able to share these on the blog in the coming days.]

Microscopy Reveals Source of Extraordinary Nanomaterial's Capabilities

Research has been coming fast and furious recently in exploiting the capabilities of graphene for supercapacitors.

One research team, led by Rod Ruoff at the University of Texas at Austin, has been working extensively with graphene to see what they may unlock from this material.

It turns out that one capability of graphene is to make supercapacitors that possess both the energy density of lead-acid batteries and the high power density (rapid energy release) expected of supercapacitors.
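For a sense of the scale involved, here is a minimal sketch of the textbook supercapacitor energy relation, E = ½CV², using assumed values for specific capacitance and voltage window rather than figures from Ruoff's paper; it shows how a high-surface-area carbon electrode can approach the neighborhood of lead-acid batteries (roughly 30 to 40 Wh/kg) at the electrode-material level.

```python
# E = 1/2 * C * V^2 for a symmetric two-electrode supercapacitor.
# The capacitance and voltage below are assumed, illustrative values,
# not figures from the published work.

SPECIFIC_CAPACITANCE = 160.0  # F per gram of a single electrode (assumed)
VOLTAGE_WINDOW = 3.5          # volts, typical of an ionic-liquid electrolyte (assumed)

# Two electrodes in series: cell capacitance per gram of total electrode
# mass is one quarter of the single-electrode specific capacitance.
cell_capacitance_per_gram = SPECIFIC_CAPACITANCE / 4.0

energy_j_per_gram = 0.5 * cell_capacitance_per_gram * VOLTAGE_WINDOW**2
energy_wh_per_kg = energy_j_per_gram * 1000.0 / 3600.0  # convert J/g to Wh/kg

print(f"~{energy_wh_per_kg:.0f} Wh/kg of electrode material")
# Packaged cells come in well below this once electrolyte, current
# collectors, and packaging mass are included.
```

The gap between electrode-level and packaged-cell numbers is exactly why claims of battery-like energy density in a supercapacitor get scrutinized so closely.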

“This new material combines the attributes of both electrical storage systems,” said Ruoff in a Brookhaven National Laboratory press release.  “We were rather stunned by its exceptional performance.”

But Ruoff only had a theory as to why the material had such remarkable performance characteristics. His hypothesis was that the material consisted of “a continuous three-dimensional porous network with single-atom-thick walls, with a significant fraction being ‘negative curvature carbon,’ similar to inside-out buckyballs.”

The hypothesis, however, needed some observational experiments, and the microscopy team at Brookhaven National Lab, led by Dong Su and Eric Stach, had the tools necessary to put it to the test. They published their findings in the May 12 edition of Science.

It turns out Ruoff got it right. “Our studies revealed that Ruoff’s hypothesis was in fact correct,” says Stach. “The material’s three-dimensional nanoscale structure consists of a network of highly curved, single-atom-thick walls forming tiny pores with widths ranging from 1 to 5 nanometers, or billionths of a meter.”

While Stach’s conclusion, that because the graphene material is easily manufactured and comes from an abundant resource (carbon) it should readily find its way into products, is a logical one, I believe he will discover that the world of business and industry is not quite so clear-headed. Maybe Ruoff's start-up company, Graphene Energy, can get it to market.

Will the US Congress Reauthorize the National Nanotechnology Initiative?

While the National Nanotechnology Initiative (NNI) is now 10 years old, it wasn’t until 2003, when President George W. Bush signed into law the 21st Century Nanotechnology Research and Development Act, that a statutory framework was established for the NNI and appropriations for it were authorized through fiscal year 2008.

Since 2008, the US House of Representatives has passed two bills that essentially amend the 2003 act and reauthorize the NNI; however, the US Senate has not acted on either. This all brings us to where we are today, with the NNI financed by the annual appropriations bills that have funded it since 2008.

Last month, the US Congress’s Subcommittee on Research and Science Education held a hearing on nanotechnology in which a number of witnesses urged that the NNI be reauthorized to ensure that the nanotechnology initiative in the US doesn’t falter.

One of the witnesses was Dr. Clayton Teague, who has served as Director of the National Nanotechnology Coordination Office (NNCO) since 2003 and recently announced his retirement. It is my personal belief that individuals like Dr. Teague have made it possible for the US to establish a strong foundation in developing nanotechnology by providing consistent leadership over an extended period of time, something that is actually quite rare in other countries’ attempts to mimic the US nanotechnology strategy.

While it’s not clear that the failure of the US Senate to act on Congressional bills will adversely affect NNI funding, it is troubling to think that in the deficit-cutting mania inside the Beltway the NNI might fall victim.

President Obama has made a budget request of $2.1 billion for the NNI, which is $200 million more than was enacted in the FY 2010 budget, but worryingly, FY 2011 did see a drop in funding from 2010, the first time in the NNI’s history that funding has actually gone down from the previous year.

I am not much of a believer in the “nanotechnology race,” or, more specifically, in the idea that one government spending more than another will necessarily translate into a successful “nano-economy,” if you will. But the lack of reauthorization of the NNI does present some troubling long-term concerns for the future of nanoscience research in the US. Oddly enough, the UK-based Nanotechnology Industries Association has offered an outline of what the troubling outcomes might be here.

But if my guess is right, the NNI was established and funded over the last 10 years not so much to ensure good science as to establish a so-called “nano-economy” in the US, the next “Silicon Valley.” If that is indeed the case, maybe the free-market types will step in, actually invest in something other than oil commodities, and establish that long-talked-about economic boom brought to us by nanotechnology.

Adoption of Graphene-Based Optical Modulator Seems Stymied by Business Not Technology

IEEE Spectrum has coverage this week of recent research conducted at the Nanoscale Science and Engineering Center at the University of California, Berkeley, which demonstrated that a device made of graphene can modulate light and potentially operate at speeds of 500 gigahertz.

The work was initially published in the journal Nature and demonstrates how, with the application of a voltage, the energy state of electrons in a monolayer of graphene can be manipulated to block or allow the passage of photons, effectively modulating light.
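The mechanism being described is Pauli blocking: a photon can only be absorbed if the electron state it would excite into is empty, so once the gate voltage pushes graphene's Fermi level past half the photon energy, interband absorption switches off and the layer becomes transparent. Here is a minimal sketch of that condition using textbook constants and a telecom wavelength; the carrier density it yields is a rough, assumed estimate, not a figure from the Nature paper.

```python
# Pauli-blocking condition for interband absorption in graphene:
# absorption at photon energy E_ph is blocked when 2*|E_F| > E_ph,
# where E_F = hbar * v_F * sqrt(pi * n) for carrier density n.
# Values below are textbook assumptions for a rough estimate.

import math

HBAR = 1.0546e-34       # reduced Planck constant, J*s
EV = 1.602e-19          # joules per electronvolt
V_F = 1.0e6             # graphene Fermi velocity, m/s (assumed)
WAVELENGTH = 1.55e-6    # telecom wavelength, m
H = 6.626e-34           # Planck constant, J*s
C = 3.0e8               # speed of light, m/s

photon_energy = H * C / WAVELENGTH   # about 0.8 eV at 1.55 micrometers
ef_needed = photon_energy / 2.0      # Fermi shift needed to block absorption

# Carrier density required to push E_F that far: n = (E_F / (hbar*v_F))^2 / pi
n_needed_m2 = (ef_needed / (HBAR * V_F)) ** 2 / math.pi
n_needed_cm2 = n_needed_m2 * 1e-4

print(f"photon energy ~{photon_energy / EV:.2f} eV")
print(f"Fermi shift needed ~{ef_needed / EV:.2f} eV")
print(f"gate-induced carrier density ~{n_needed_cm2:.1e} per cm^2")
```

A sheet density on the order of 10^13 per square centimeter is within reach of a gate, which is part of what makes graphene attractive for this kind of modulator; again, these are ballpark figures, not numbers from the Berkeley device.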

Perhaps the most intriguing aspect of this research is how commercially attractive this material is for integrated optical modulators compared with the other materials being considered.

The idea is that someday we will be replacing all those copper interconnects in chips with optical interconnects. A material needed to be found that was easily compatible with complementary metal-oxide semiconductor (CMOS) processing. Silicon modulators were too big for on-chip optical interconnects, and the germanium and compound semiconductors being experimented with were not so easy to integrate into CMOS.

So, here we are. We’ve got a material that one of the researchers, Ming Liu, says should fit in easily with CMOS manufacturing.

But wait. The distressing bit of the story is the comment provided by Frank Schwierz, head of the RF & Nanoelectronics Research Group at the Technical University of Ilmenau, in Germany, who, on the one hand, is encouraged by the research but, on the other, laments that it may take some time before we would ever see this in a device.

"This is not related to the modulator itself but rather to the fact that the semiconductor industry itself is very conservative," he is quoted as saying in the Spectrum article. "History tells us that chipmakers introduce new materials when, and only when, it is unavoidable."

Indeed. Optical interconnects on chips would be wonderful, no doubt. However, chip manufacturers may have more pressing concerns. I have mentioned before this kind of wrinkle in capitalism and how it impacts technological advances; it is hardly anything new, but it is a bit demoralizing nonetheless.


Nanoclast

IEEE Spectrum’s nanotechnology blog, featuring news and analysis about the development, applications, and future of science and technology at the nanoscale.

Editor

Dexter Johnson
Madrid, Spain