Nanoclast

EU Embarks on Yet Another Public Outreach Project

Europe is once again trying to appease…um…I mean inform the public on the subject of nanotechnology with another public outreach program, this one entitled Nanochannels.

I say ‘again’ because there have been similar efforts in the past. There was Nanologue, a completed EU project that few outside of those directly engaged in it have ever heard of.

Over at the blog Frogheart, the reasonable question has been asked: why did no one mention the work previously done by Nanologue in this field when announcing Nanochannels? Since I was in the meeting at EuroNanoForum in which the outreach program was discussed, I can confirm that a dreary recap of the Nanologue project was provided. The extent to which that data informed the new project was hard to discern from the near-comatose expressions on the faces of the audience.

But moving on: in addition to Nanologue, there was ObservatoryNANO, which was supposed to inform government, industry, and finance decision makers about issues pertaining to nanotechnology, and in doing so often provided unintended guffaws.

One of the groups behind ObservatoryNANO, and now Nanochannels, is the Institute of Nanotechnology, which certainly has a background in this sort of thing: it also provided some of the background material for last year’s nearly universally lambasted UK nanotechnology strategy report.

That report, too, managed to forget all the other reports that had come before it. While starting from scratch is sometimes necessary and even a good thing, disposing of an entire body of knowledge in order to offer up odd suggestions seems to miss the spirit of starting fresh.

It would appear that Nanochannels (a somewhat unfortunate name, in that “nanochannels” is a term that actually refers to a type of structure in nanotechnology) is somewhat similar to the UK project Nano&Me, except that Nano&Me was set up for £77k while Nanochannels has a budget of €894,940.

You would think that with that kind of money they would have a website, but no, unless of course you count a Facebook page.

You know, the EU might take a page from the US government’s nanotechnology outreach strategy and just start getting serious about the subject. Just because some groups have decided that juvenile discourse is the way to address the topic of nanotechnology doesn’t mean that governments should embark on a children’s-book approach to the subject to get it all sorted.

IBM Takes Graphene One Step Further with the First IC Built from It

In poker parlance, it would seem that IBM has gone “all in” on graphene replacing silicon in the chips of the future.

The latest news in the graphene story is that IBM has built an integrated circuit out of the wonder material. The research, which was published in the journal Science last week, describes IBM’s success at building a “wafer-scale graphene circuit…in which all circuit components, including graphene field-effect transistor and inductors, were monolithically integrated on a single silicon carbide wafer.”

According to the Spectrum article cited above, it took researchers a year of engineering work to sort out how to connect the graphene to the other metallic elements of the circuit and how to perform lithography on it without damaging it.

To overcome the latter challenge, the Spectrum article intriguingly notes: “One way the team addressed the damage problem was to grow the graphene on a silicon-carbide wafer, then coat it with a common polymer, PMMA, and a resist that was sensitive to jets of electrons used in electron beam lithography.”

I am assuming, then, that other ways were tested but this one turned out to be the best. (I don’t have a Science subscription, so I don’t know whether other methods were in fact tried.)

Like anyone who follows materials science developments around chips to any extent, I have become somewhat mesmerized by the advances that have been coming fast and furious around graphene.

But meanwhile some interesting developments are occurring with other materials, such as molybdenite, that come in two dimensions like graphene but have a natural band gap. While molybdenite is not being positioned as a direct competitor to graphene on the post-silicon battlefield, one has to wonder whether there are other minerals out there, in addition to molybdenite, that could fit the bill and push graphene to the side.

Not that there is such a competitor out there, mind you. But once upon a time, not too long ago, carbon nanotubes were the new wonder material that would someday replace silicon. If I were a betting man, I would be looking to hedge my wager somewhat.

Nanoscale Conductors Enable New Battery Architecture: The Semi-Solid Flow Cell

Researchers at MIT, led by W. Craig Carter and Yet-Ming Chiang, have developed a new architecture for batteries that combines the design of liquid-flow batteries with that of conventional lithium-ion batteries. The result is a 10-fold improvement in the energy density of liquid-flow batteries, which could cut battery systems, such as those now found in electric cars, to about half their current size.
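
To put that 10-fold figure in rough perspective, here is a back-of-the-envelope sketch. The 24-kWh pack size is roughly that of a current electric car; the baseline energy density is my own illustrative assumption, not a number from the paper, while the 10x multiplier is the improvement the team claims:

```python
# Back-of-the-envelope: how a 10x gain in energy density shrinks the volume of
# active material in a flow-battery pack. The 40 Wh/L baseline is an illustrative
# assumption, not a figure from the paper; the 10x factor is the claimed improvement.

PACK_ENERGY_KWH = 24                      # roughly a current EV battery pack
BASELINE_WH_PER_L = 40                    # assumed conventional flow-battery density
SEMISOLID_WH_PER_L = 10 * BASELINE_WH_PER_L

for label, density in [("conventional flow", BASELINE_WH_PER_L),
                       ("semi-solid flow", SEMISOLID_WH_PER_L)]:
    liters = PACK_ENERGY_KWH * 1000 / density
    print(f"{label:>17}: {liters:5.0f} L of active suspension")
```

With these assumed numbers, the same pack energy drops from roughly 600 liters of suspension to about 60, which is the kind of shrinkage that makes the halved system size plausible.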

The research, which was originally published in the Wiley journal Advanced Energy Materials, was able to overcome the low energy density of liquid-flow batteries by creating a semi-solid material that “kind of oozes,” according to Chiang. The new material is able to store energy in “suspensions of solid storage compounds” and the “charge transfer is accomplished via dilute yet percolating networks of nanoscale conductors.”

The result is that the cathodes and anodes of the battery are particles suspended in the liquid electrolyte. The two different suspensions are pumped through systems separated by a thin porous membrane.

The design also separates the storing and discharging of the battery into two different physical structures. According to Chiang, this separated architecture will enable batteries to be designed more efficiently.

Since the design is expected to reduce the size (and cost) of a battery system by as much as half, it is being touted as a way to make electric vehicles more competitive with internal combustion engines.

Chiang and his fellow researchers have even gone so far as to dub the semi-solid liquid “Cambridge Crude,” no doubt a reference to their Cambridge, MA, location. The researchers also posit that the “Cambridge Crude” could be pumped like gasoline to recharge a car. However, I would raise one small caveat about that notion: what are you supposed to do with the spent semi-solid liquid you’re disposing of? I expect one would run into all sorts of environmental concerns.

Nonetheless, the researchers have developed what appears to be a new design architecture for batteries and have demonstrated that slurry-type active materials can be used for storing electrical energy.

If the research does have commercial potential, it seems the right research team developed it: Chiang’s earlier work on lithium-ion batteries led to the MIT spinoff A123 Systems, and the new technology has already been licensed by Chiang and Carter’s new company, 24M Technologies (itself a spinoff from A123 Systems).

Nanotech's Role in Clean Drinking Water Creeping Forward

While attending the EuroNanoForum 2011 conference this week in Budapest, Hungary, I was confronted at least once with the question of how nanotechnology could be used in water purification and desalination. You really can’t get through one of these things without hearing how nanotechnology could save the world.

But the water issue is one in which I have taken some interest over the years, putting together a conference on the subject seven years ago and even taking up the issue on Spectrum’s pages here, here, and here.

A couple of years back, the Meridian Institute published a good white paper entitled “Conventional and Nano-Based Water Technologies,” which did a nice job of cataloguing all the nanotech-based solutions for water desalination.

We may have another one to add to that list, thanks to a recent article published in Physics World and written by Jason Reese, Weir Professor of Thermodynamics and Fluid Mechanics at the University of Strathclyde.

The article relates how CNTs are enabling a technique, pursued by Reese, that moves away from the high energy costs of conventional reverse osmosis. Reese has shown that CNT-based membranes can improve water permeability to 20 times that of modern commercial reverse-osmosis membranes. A factor-of-20 improvement in permeability should have a pretty significant impact on energy requirements.
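
To see why, consider the standard solution-diffusion picture of a reverse-osmosis membrane, where water flux is roughly J = A(ΔP − π): for a fixed target flux, a 20-fold larger permeability A lets the applied pressure ΔP sit much closer to the osmotic pressure π, and pumping energy scales with ΔP. Here is a minimal sketch of that reasoning; all numbers are illustrative round figures, not values from Reese’s article:

```python
# Illustrative only: how a 20x gain in membrane permeability cuts the applied
# pressure, and hence pumping energy, for seawater reverse osmosis.
# Flux model: J = A * (dP - pi), the standard solution-diffusion form.

PI_SEAWATER = 27.0            # bar, approximate osmotic pressure of seawater
TARGET_FLUX = 30.0            # L/(m^2 h), a typical design flux
A_COMMERCIAL = 1.0            # L/(m^2 h bar), rough order for today's membranes
A_CNT = 20.0 * A_COMMERCIAL   # the 20x permeability cited for CNT membranes

BAR_M3_TO_KWH = 1e5 / 3.6e6   # 1 bar applied over 1 m^3 = 100 kJ ~ 0.028 kWh
RECOVERY = 0.5                # fraction of pressurized feed recovered as fresh water

for label, A in [("commercial membrane", A_COMMERCIAL), ("CNT membrane", A_CNT)]:
    dP = PI_SEAWATER + TARGET_FLUX / A    # pressure needed to hit the target flux
    sec = dP * BAR_M3_TO_KWH / RECOVERY   # kWh per m^3 permeate, no energy recovery
    print(f"{label:>20}: dP = {dP:4.1f} bar, ~{sec:3.1f} kWh/m^3")
```

Note the thermodynamic floor: even an infinitely permeable membrane still has to overcome the osmotic pressure, which is why in this toy calculation a 20x permeability gain cuts pumping energy roughly in half rather than 20-fold. That is still significant for a process whose biggest operating cost is electricity.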

This is certainly a move in the right direction. However, I have to confess that when I put the NanoWater conference together seven years ago, I had somewhat greater expectations of where we would be at this point. I am not entirely convinced that it’s a lack of technological solutions, nanotech-related or otherwise, that is the cause of the delay.

Examining the New Dawn of Dye-Sensitized Solar Cells with their Discoverer

Recently I had the rare opportunity, on two separate occasions, of sitting down with two world-renowned Swiss scientists.

First, I got to meet Nobel Prize winner Heinrich Rohrer, and on Monday of this week I got to chat at EuroNanoForum 2011 with last year’s winner of the Millennium Prize, Michael Grätzel. Grätzel is currently a professor at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, and in 1991 he discovered dye-sensitized solar cells (DSSCs), sometimes called Grätzel cells in his honor.

While chatting over some lunch, I put three questions to him, which were more or less the following:

--Is the future of dye-sensitized solar cells primarily in powering electronic devices, i.e., laptops, or could they have a place in the power grid?

--Was he aware of the work of Angela Belcher in using viruses to manipulate carbon nanotubes for use in dye-sensitized solar cells, and were there any other innovations that he saw as key to the further development of DSSCs?

--Is improving the conversion efficiency of DSSCs the most critical technological hurdle for the cells?

While I did record Dr. Grätzel’s responses, the audio quality was fairly poor because we sat in a busy lunch area of the conference. So I will quote some of his responses here.

To the first question above, Dr. Grätzel started by saying, “It’s certainly a disruptive technology, which is presently being commercialized mainly through niche applications such as providing electric power for portable electronic devices.

“It is also a very strong contender for building integrated photovoltaics (BIPV). The DSSC is the only solar cell that can be used to realize transparent glass facades, skylights and windows that produce electric power from light," he added.

“Other potentially huge markets presently targeted by industry include printing the DSSC on coil-coated steel for roofing and cladding. The commercial production of flexible and lightweight devices already started in 2009. The DSSC is something that will add new markets to the present applications of silicon cells, but it will not confront conventional PV cells at this stage.”

However, he was quick to point out that DSSC does have distinct advantages over silicon cells.

“We [DSSC] have no competition, for example, here in these light conditions [low interior lighting]. Here we are the best. For indoor applications, and for outdoor applications in ambient light, that is where money is being made now by the companies that have invested in DSSC,” he said. “But that’s not the only goal. The final target is to mass produce modules, which have presently reached 10% conversion efficiency, for large-scale solar electricity production, including applications for rooftops and solar farms.”

In 2010, Sony did in fact demonstrate a prototype module based on DSSC technology with a 10% conversion efficiency.

When it came to the question of conversion efficiency, Dr. Grätzel seemed resigned to the percentage game, but he believed that price per kilowatt-hour (kWh) was a more significant metric.

“We have to play the game. We have to go and have our efficiencies validated by an accredited PV calibration laboratory. We cannot create a different world where we just say we are the best,” he said. “We are living exactly with the standards that silicon has set in terms of efficiency and stability.

“But, on the other hand, it is true that when it comes to the advantages we should also play those up as well,” he said. He added that under certain outdoor exposures the DSSC will already outperform silicon in the key metric of kWh price.

“In the end, what we would really like to see is kWh price used as a metric in addition to peak watt price. The peak watt price is a good standard but when it comes to outdoor applications it often does not reflect reality such as the performance under cloudy conditions and the drop of conversion efficiency with temperature encountered by silicon solar cells,” he said.
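
Grätzel’s distinction between the two metrics is easy to illustrate: two modules with identical peak-watt prices can deliver very different costs per kWh if one keeps harvesting under the cloudy, hot, or indoor conditions where the other sags. A toy comparison, with every number invented purely for illustration (these are not Grätzel’s figures):

```python
# Toy comparison: peak-watt price vs. delivered-kWh price.
# All numbers below are invented for illustration, not Grätzel's data.

def kwh_price(price_per_wp, capacity_factor, lifetime_years):
    """Lifetime cost per kWh of a 1-Wp module (ignores balance of system, discounting)."""
    lifetime_kwh = 1e-3 * capacity_factor * 8760 * lifetime_years
    return price_per_wp / lifetime_kwh

# Same peak-watt price for both; the DSSC is modeled as keeping more of its
# rating in diffuse light and heat, i.e., a higher effective capacity factor.
si   = kwh_price(price_per_wp=1.50, capacity_factor=0.12, lifetime_years=25)
dssc = kwh_price(price_per_wp=1.50, capacity_factor=0.15, lifetime_years=25)

print(f"silicon: ${si:.3f}/kWh   DSSC: ${dssc:.3f}/kWh")
```

Identical dollars per peak watt, yet roughly a 20% gap in delivered cost per kWh under these assumptions, which is exactly the gap the peak-watt metric hides.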

When I asked about the work of Angela Belcher based on the DSSC, his response was clear: “That’s a real breakthrough. We can learn a lot from her fascinating experiment.”

But perhaps more intriguing is that he and his team are submitting their latest research this week on new dyes that break some previous conversion efficiency records for DSSC.

It seems as though DSSC technology is really taking hold, with developments both commercial and in the laboratory accelerating.

Definitions for Nanotechnology Inform EU Citizens as much as Regulatory Framework

Yesterday I attended EuroNanoForum 2011 in Budapest, Hungary, which marks the fifth running of this biennial event dating back to 2003.

The Forum serves as a kind of platform from which the European Commission can assess and trumpet its nanotechnology capabilities.

As anyone who has read this blog and my contributions to the TechTalk blog over the years knows, this regional mentality toward the development of nanotechnology strikes me as kind of missing the point of how nanoscience, and later nanotechnology, comes to be developed. But it seems that governments forking out funds for these kinds of shindigs is what really keeps them going, so I suppose they can do whatever they want. It’s their party, after all.

The conference had organized a special day for journalists that included a press conference with the plenary speakers, among them Michael Grätzel, the discoverer of dye-sensitized mesoscopic oxide particles for use in solar cells (I interviewed him and will blog on that tomorrow), and Rudolf Strohmeier, Deputy Director General of the Directorate General for Research & Innovation of the European Commission.

Since Mr. Strohmeier is a self-described regulator, I thought it might be worth asking him about the wisdom of embarking on a quest to define what nanotechnology is before establishing a regulatory framework. I also thought I might check in to see where they are now in the rather lengthy process.

For those who might like a small primer on the topic: the EU believed it necessary to define what nanotechnology is before developing regulations for it. It all seemed to make sense until the process got stuck in the mud on the issue of “how much” or “how many” nanoparticles a material must contain to count as nano.

At the time I first came across the imbroglio, it all seemed a bit silly. Even if they could determine whether the risk from nanoparticles stems from the number of nanoparticles or from the weight of the nanoparticles in a material, that wouldn’t really seem to sort out whether said material poses any risk.
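
Part of what makes the “how many vs. how much” question so slippery is that particle mass scales with the cube of diameter, so a count-based threshold and a mass-based threshold can give wildly different answers for the same material. A quick sketch with a made-up particle mix:

```python
# Why number-based and mass-based nanoparticle thresholds disagree:
# mass scales as diameter cubed. The particle mix is invented for illustration.

particles = [(50, 1_000_000),    # (diameter in nm, count): many small particles
             (500, 10_000)]      # far fewer large ones

total_count = sum(n for _, n in particles)
total_mass = sum(n * d**3 for d, n in particles)    # relative mass; density cancels

nano_count = sum(n for d, n in particles if d < 100)
nano_mass = sum(n * d**3 for d, n in particles if d < 100)

print(f"by number: {100 * nano_count / total_count:4.1f}% nano")   # ~99% nano
print(f"by mass:   {100 * nano_mass / total_mass:4.1f}% nano")     # ~9% nano
```

The same powder is almost entirely “nano” by one yardstick and barely “nano” by the other, which is precisely the kind of ambiguity the regulators got stuck on.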

But it took an article from Andrew Maynard over at the Risk Science Blog for me to see how wrong-headed the EU’s approach really is. By shoehorning in a definition that will work for regulators, we may be squeezing the science out of the process.

As Maynard concludes:

“Five years ago, I was a strong proponent of developing a regulatory definition of nanomaterials.  Today, with the knowledge we now have, I think we need to start thinking more innovatively about how we identify new materials that slip through the regulatory net – whatever we decide to call them.  Only then will we have a hope of developing science-grounded regulation that protects people while supporting sustainable development.”

Below is an audio recording I made of my exchange with Mr. Strohmeier. Interestingly, according to him, the definition was necessary for educating EU citizens as much as for developing regulations. Patrick Vittet-Philippe, the Press and Information Officer for DG Research and Innovation of the European Commission, makes an additional comment at the end of the recording.

[Embedded audio recording]

In fairness, I didn’t really get a chance to follow up with Mr. Strohmeier to see whether he recognizes the problems that arise when you arbitrarily arrive at a definition that may not always reflect the latest science on the topic. Nonetheless, I can’t help but think that a definition that is as much about mollifying the public as it is about good science carries inherent risks of its own.

Mapping of Memristor Could Speed Its Commercialization

It’s been some time since I last checked in on Hewlett-Packard’s drive to develop the memristor. At that time, HP had joined forces with Korea-based memory chipmaker Hynix Semiconductor Inc. to make memristor chips.

So, while I was waiting to see what would come from the collaboration between HP and Hynix, the memristor was being touted not only as changing memory but also as replacing the transistor altogether, as evidenced by the comments on this recent blog post.

Although the latest news is not the announcement of a commercially available product (which, when it comes, looks as though it will be called resistive random access memory, or ReRAM), the research HP has conducted recently has succeeded in mapping out what happens inside the 100-nanometer channels of the memristor.

The research was conducted by researchers at HP Labs and the University of California, Santa Barbara, and was initially published in the UK-based Institute of Physics journal Nanotechnology.

Basically, the researchers were able to use X-rays to precisely target the channels within memristors in which resistance switching occurs, and then to sort out the chemistry and structure of those channels. If nanotechnology is anything, it is having the tools necessary to see how things operate on the nanoscale and then exploiting that knowledge to get things to do what you want.
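
For readers who want a feel for the device being probed: HP’s original 2008 description (Strukov et al., Nature) modeled the memristor as a doped region of width w drifting through a film of thickness D, mixing a low resistance R_on with a high resistance R_off. Below is a minimal simulation of that linear ion-drift model; the parameters are illustrative textbook values, not results from the new mapping work:

```python
# Minimal simulation of the linear ion-drift memristor model (Strukov et al., 2008).
# Parameters are illustrative textbook values, not from HP's channel-mapping study.
import math

R_ON, R_OFF = 100.0, 16e3    # ohms: resistance of fully doped vs. undoped channel
D = 10e-9                    # m: thickness of the titanium-dioxide film
MU_V = 1e-14                 # m^2/(V s): dopant (oxygen-vacancy) mobility
w = 0.1 * D                  # initial width of the doped region

dt, freq, v_amp = 1e-5, 1.0, 2.0
r_min, r_max = float("inf"), 0.0
for step in range(200_000):                          # two full drive periods
    v = v_amp * math.sin(2 * math.pi * freq * step * dt)
    R = R_ON * (w / D) + R_OFF * (1 - w / D)         # state-dependent resistance
    i = v / R
    w += MU_V * (R_ON / D) * i * dt                  # boundary drifts with the current
    w = min(max(w, 0.0), D)                          # boundary stays inside the film
    r_min, r_max = min(r_min, R), max(r_max, R)

print(f"resistance swings between {r_min:.0f} and {r_max:.0f} ohms")
```

The pinched hysteresis this produces is the memristor’s signature: its resistance at any moment depends on the history of charge that has flowed through it, which is what makes it a candidate for non-volatile ReRAM.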

And what HP no doubt wants is to get the memristor to market. For the first time, I am seeing a timeline offered up, with 2014 as the expected date to see it incorporated into electronic devices like mobile phones and tablets, with 10 times greater embedded memory than is currently available.

IBM's Millipede Project, Social Networking and How Semiconductor Technology Can Save the World

Last year, thanks to Twitter, I came upon a blog penned by Ira Feldman, who was providing coverage of the IEEE San Francisco Bay Area Nanotechnology Council Sixth Annual Symposium.

If there are positives to social networking, this is certainly one of them: knowledge that would otherwise be siloed among those who attended the conference can actually be shared with a larger community. I hope more conference attendees make this a practice.

Mr. Feldman has provided coverage once again of this year’s IEEE San Francisco Bay Area Nanotechnology Council annual symposium. 

In particular, Feldman has given us an analysis of the address by the keynote speaker, Dr. Spike Narayan, Functional Manager at IBM: “Nanotechnology: Leveraging Semiconductor Technologies to Address Global Challenges.”

According to Feldman, the presentation asked the question “can we leverage semiconductor technology to address global challenges of environment, energy, healthcare, and water?”

If the recent collaborative work between IBM and the Institute of Bioengineering and Nanotechnology in Singapore, which took the body of knowledge accumulated in polymer building blocks for creating nanoparticles and applied it to creating a drug that fights drug-resistant bacteria, is any indication, then the answer is a resounding “yes.”

One of the examples Narayan apparently provided for “where semiconductor knowledge is indeed transferable to these other domains” is in the area of disk drives, with Feldman offering the IBM Millipede project as the most advanced example.

The story of the IBM Millipede project is a curious one. Millipede essentially used an array of thousands of miniaturized atomic force microscopes (AFMs) as a memory device. Since it was based on the AFM that Gerd Binnig had invented, he was sometimes made a spokesman for its capabilities, as in his 2004 interview with Spectrum.

Although touted as the next step in mobile memory devices, it was soon rarely mentioned, and most everyone suspected that it had fallen victim to cheap and increasingly capable flash memory.

So, last week during the press conference with Gerd Binnig and Heinrich Rohrer, an intrepid journalist (not me) dared to ask about the fate of the IBM Millipede project. I took a gulp and waited for the reply.

Binnig, who had been a champion of the technology, remained unapologetically supportive of it but handed off the particulars of Millipede’s fate to Dr. Paul Seidler, coordinator of the new Nanotechnology Center at IBM Research in Zurich, to explain more thoroughly.

And just as many suspected, the IBM Millipede project in its original form, that of creating a mobile storage device, is no longer, but it instead lives on in various other research projects within IBM. On the nanotechnology side of things, IBM Millipede has found its niche in probes for lithography.

Below is Dr. Seidler’s full response.

[Embedded recording]


Getting back to Ira Feldman: he has wonderfully led us to an archive of all the presentations from the IEEE San Francisco symposium, which can be found here.

An Audience with Nanotechnology Nobel Prize Laureates

As promised last week, I would like to share some audio recordings I made of Gerd Binnig and Heinrich Rohrer taking questions from the press during the opening of the new IBM and ETH Zurich nanotechnology laboratory named in their honor.

This first audio file features both Binnig’s and Rohrer’s responses to my question of why they were interested in looking at inhomogeneities on surfaces in the first place, which eventually led them to create an instrument for doing so. A more complete history of the STM’s genesis can be found in their joint Nobel lecture here.

[Embedded audio recording]


I was always curious why Gerd Binnig conveyed, in his interview with Harry Goldstein here in the pages of Spectrum, the sense that his designs for the STM would work when nearly every indication he had seemed to point to the conclusion that it simply wouldn’t.

The answer is interesting because we see not only how large a factor intuition is in scientific inquiry but also an interesting sort of engineering/science hybrid approach, in which it is perhaps more important to show why something won’t work than why it should. It also simply reveals Binnig’s determination not to give up.

[Embedded audio recording]


The final question here comes from another member of the press, who asks both Binnig and Rohrer how it feels to have, in a sense, crystallized the development of the field of nanotechnology. Binnig observes that what the STM created grew beyond what he could have imagined, and Rohrer points to all the contributions from other scientists that made the breakthrough possible.

[Embedded audio recording]


With the STM standing as such a cornerstone of the development of nanotechnology over the last 25 years, one can imagine that both of these scientists have become accustomed to fielding all sorts of questions about what their contribution has meant. Even so, they remain patient with questions from people like me that they have answered many times before, and they still manage to make you feel as though it is the first time they are considering the idea. It was a great privilege.

Intuition Leads to the Tool that Opened Up the Nanoscale Universe and a New Nanotechnology Lab

I was a guest yesterday of IBM, along with a group of some 600 assorted dignitaries, politicians, and other journalists, at the opening of a new $90-million nanotechnology research laboratory at IBM’s research facilities in Zurich, Switzerland.

Along with some other journalists, I had received a preview of the facility back in November, and even then, with concrete still being poured and a jumble of wires seemingly sprouting up from everywhere, the facility impressed with its unique “noise-free labs.” (I should note that the final cost is now being reported as $90 million rather than the $60-million figure I reported back in November. I have been told since posting this that the additional $30 million constitutes the cost of equipment, which was not included in my original figure.)

But yesterday’s event was truly a spectacle, with a big band orchestra and a performance by a group of yodelers that harkened back to Arthur K. Watson, the son of the founder of IBM, offering a yodel for a Swiss audience 50 years ago; a recording of that moment preceded yesterday’s live performance. The festivities were not even dampened by the high level of security present, apparently in response to some type of terrorist threat targeting the new facility.

While a great deal of attention was paid to the collaborative partnership that will exist at the new facility between IBM and ETH Zurich, it was perhaps the more sentimental aspect of the day that provided the climax to the opening, and that was my personal interest in the story.

The new facility has been named the Binnig and Rohrer Nanotechnology Center in honor of the two Nobel Laureates, Gerd Binnig and Heinrich Rohrer, who received the 1986 Nobel Prize in Physics (shared with Ernst Ruska for his earlier work on the design of the electron microscope) for their invention of the scanning tunneling microscope at IBM in Zurich.

Both Binnig and Rohrer were on hand not only to unveil the naming plaque for the new lab but also to conduct a discussion for the full 600 guests, do a Q&A session with journalists, and later give one-on-one interviews. The duo frequently brought the assembled audiences to laughter with their oddly juxtaposed personalities: Rohrer described himself as a down-to-earth pragmatist, while Binnig, from where I sat, possessed a touch of the poet. They seemed like Nobel Laureates who could become a comedy duo.

Anyone who is involved in the field of nanoscience and nanotechnology owes a debt of gratitude to both of these scientists for deciding that model systems for approximating surfaces, and clumsily dealing with inhomogeneities on them, weren’t sufficient, and that a device should be developed so that we can actually see…and touch…the surface of things on the atomic scale.
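
What makes seeing at that scale possible is the brutal exponential sensitivity of the tunneling current to the tip-sample gap, roughly I ∝ exp(−2κd), with κ set by the work function of the surface. A quick illustration using textbook values (my numbers, not anything from the event):

```python
# Why the STM resolves atoms: tunneling current decays as exp(-2*kappa*d), so
# sub-angstrom changes in tip height produce large, easily measured current changes.
# Textbook values; not figures from Binnig and Rohrer's remarks.
import math

H_BAR = 1.0545718e-34   # J s
M_E = 9.10938e-31       # kg, electron mass
EV = 1.602177e-19       # J per eV

phi = 4.5 * EV                              # typical metal work function
kappa = math.sqrt(2 * M_E * phi) / H_BAR    # inverse decay length of the wavefunction

per_angstrom = math.exp(2 * kappa * 1e-10)  # current ratio per angstrom of gap change
print(f"kappa ~ {kappa * 1e-10:.2f} per angstrom")
print(f"pulling the tip back 1 angstrom cuts the current ~{per_angstrom:.0f}x")
```

That factor of roughly nine per ångström is why a feedback loop holding the current constant traces out the atomic corrugation of a surface.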

In IEEE Spectrum’s oft-quoted 2004 interview with Binnig, A Beautiful Noise, Binnig describes the utter lack of success they had in trying to get their prototype device to do what they expected it to do.

“In a way, this process is just like Columbus going from Europe to America: on the way there, he has no clue that he is coming closer,” relates Binnig in the interview. “We were in exactly the same situation because the instrument never worked. You have no clue what to do, what knobs to turn to make it work better, because it simply does not work at all. You can't be sure whether you are close to a solution or not.”

Remembering this part of the interview, I was struck by his comment in front of the audience that he always believed the instrument he had sketched out, three days after starting at IBM, would work. So I asked him why, under those circumstances, he continued to believe it would work.

“It’s hard to explain,” he began. “Somehow you just have intuition that it will work.”

[A number of recordings both video and audio were made of the event, and I expect that I will be able to share these on the blog in the coming days.]
