European Commission Pulls Back on New Nano Regulations

After creating a separate and broad regulatory class for nanomaterials, the EC says current legislation is enough

2 min read

Last year the European Commission (EC) was eager to show its proactive approach to regulating nanomaterials when, after a protracted process, it arrived at a definition for them.

While the EC achieved its goal of a definition, the definition itself came under some pointed criticism for being so broad that it included the incidental nanoparticles produced when a car’s tires roll on pavement.

“We’ve met people recently who work on the legal side within large chemical organizations, who up until October last year didn’t know anything about nanotechnology and suddenly they’re going to be caught within legislation, which is quite a shock to them,” said Jeremy Warren, CEO of Nanosight, in a webinar he produced this year to explain the impact of the new definition.

When any company—European or otherwise—believes that it has been swept up into a regulatory framework for a material it had no intention of making or using, or perhaps did not even know existed, government bureaucrats are certainly going to hear about it. It didn’t take long for the industry ministers of European countries to take heed.

Last week we began to see how the EC was trying to reel in its proactive approach when it released its “Communication on the Second Regulatory Review on Nanomaterials.” One of the points of the position paper was “that current legislation covers to a large extent risks in relation to nanomaterials and that risks can be dealt with under the current legislative framework.”

All of that work to develop a definition of nanomaterials—so as to create a class of materials that are not currently known to be hazardous but might someday prove to be—seemed to be for naught. Instead, the EC has taken the position that current laws governing run-of-the-mill materials adequately cover the large majority of nanomaterials out there.

The reaction of NGOs such as Greenpeace and Friends of the Earth was swift and angry. The NGOs trotted out the term “precautionary principle,” which seems to have come to mean an absolute moratorium on all nanomaterials rather than producers taking on the burden of proof regarding the level of risk of their products.

Another pervasive sentiment among the NGOs is that the EC is stalling. If the EC were indeed stalling, one possible explanation would be that it wants to delay imposing regulations until scientific data proves nanomaterials safe, promoting new companies and new products in the meantime. I suppose that’s what the NGOs believe is happening in this case.

To me, that’s a bit too conspiratorial an explanation. I am more inclined to believe this long process stems from the way bureaucracies operate, especially the European variety. They love to commission reports and studies and then hold meetings on the results. The European Union’s approach to the specific issue of nanosilver’s risk has driven some scientists to such levels of frustration that they felt compelled to write an article for Nature Nanotechnology decrying the situation.

Bureaucratic dithering aside, the real obstacle to arriving at a swift resolution on the risk of nanomaterials is that the science takes a long time. As I’ve commented before, determining the risk of nanomaterials amounts to asking for an almost complete overhaul of the periodic table in terms of toxicity. Let’s keep the time it takes to resolve these issues in that context.


3 Ways 3D Chip Tech Is Upending Computing

AMD, Graphcore, and Intel show why the industry’s leading edge is going vertical

8 min read
[Images: a single chip, a group of chips, and a stacked chip assembly. Credits: Intel; Graphcore; AMD]

A crop of high-performance processors is showing that the new direction for continuing Moore’s Law is all about up. Each generation of processor needs to perform better than the last, and, at its most basic, that means integrating more logic onto the silicon. But there are two problems: One is that our ability to shrink transistors and the logic and memory blocks they make up is slowing down. The other is that chips have reached their size limits. Photolithography tools can pattern only an area of about 850 square millimeters, which is about the size of a top-of-the-line Nvidia GPU.

For a few years now, developers of systems-on-chips have been breaking up their ever-larger designs into smaller chiplets and linking them together inside the same package to effectively increase the silicon area, among other advantages. In CPUs, these links have mostly been so-called 2.5D integration, where the chiplets are set beside each other and connected using short, dense interconnects. Momentum for this type of integration will likely only grow now that most of the major manufacturers have agreed on a 2.5D chiplet-to-chiplet communications standard.
