Energywise

Latest IPCC Climate Assessment Reaffirms Previous Findings

The Intergovernmental Panel on Climate Change (IPCC) released a new assessment of climate science this morning, its fifth comprehensive evaluation of the basic physics. In light of the controversies that have dogged its work and some of its major contributors (an error that found its way into the last assessment concerning the rate at which Himalayan glaciers are melting; the "climategate" kerfuffle over leaked e-mails), perhaps the most significant aspect of the new report is simply its reaffirmation of previous findings.

Of course, the report is being spun slightly differently depending on the ideological proclivities of publishers and, no doubt, the personal interests and obsessions of particular writers. The Wall Street Journal, in its treatment today, stresses a moderate lowering of the IPCC's projected warming range for this century relative to 1850-1900: to 1.5-4 °C (not 2 °C, as the Journal misleadingly put it), from 2-4.5 °C previously. The New York Times mentions the IPCC's lowering its estimate of the warming to be expected from a doubling of greenhouse gas concentrations, to 2.7-8.1 degrees Fahrenheit from 3.6-8.1 °F, but it emphasizes the limit put on the amount of carbon that can be spewed into the atmosphere if warming is to be held to 2 °C (3.6 °F): That limit is 1 trillion tons of carbon in toto, roughly half of which has already been emitted since the beginning of the industrial revolution in 1750, with the second half due to be emitted by 2040 if business continues as usual.
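A quick back-of-envelope check of those budget figures is straightforward. The emission rate computed below is an inference from the article's numbers, not a figure taken from the IPCC report itself:

```python
# Back-of-envelope check of the carbon budget cited above. The implied
# business-as-usual emission rate is inferred from the article's figures,
# not taken from the IPCC report itself.

TOTAL_BUDGET_GT = 1000.0       # 1 trillion tons of carbon, total since 1750
EMITTED_SO_FAR_GT = 500.0      # roughly half already emitted as of 2013
YEARS_REMAINING = 2040 - 2013  # second half due to be emitted by ~2040

remaining_gt = TOTAL_BUDGET_GT - EMITTED_SO_FAR_GT
implied_rate = remaining_gt / YEARS_REMAINING  # gigatons of carbon per year

print(f"remaining budget: {remaining_gt:.0f} Gt C")
print(f"implied business-as-usual rate: {implied_rate:.1f} Gt C/yr")
```

The implied average rate, roughly 18.5 gigatons of carbon per year, is consistent in order of magnitude with current global emissions, which is what makes the 2040 date plausible.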

The IPCC assessment, even in the 36-page summary for policymakers, does not make for easy reading and leaves a good deal open to interpretation. Almost every statement is so carefully hedged with probability estimates ("very likely," "likely," "virtually certain," "might," "cannot be made with confidence," "very unlikely," and so on) that one easily loses sight of the substantive statements actually being made.

Still, some of the most dramatic statements in the report are made flatly, without any qualification whatsoever. Atmospheric concentrations of carbon dioxide, methane, and nitrous oxide have all increased sharply since 1750 and now exceed pre-industrial levels by about 40 percent, 150 percent, and 20 percent, respectively. Indeed, their levels "now substantially exceed the highest concentrations recorded in ice cores during the past 800,000 years."

The IPCC estimates that the average rate of ice loss from glaciers around the world was 275 gigatons per year in 1993-2009, as compared to 226 Gt/yr over the longer 1971-2009 period; melting of the Greenland ice sheet has "very likely" increased "substantially," to 215 Gt/yr in 2002-2011 from 34 Gt/yr in 1992-2001. Antarctic ice loss "likely" went from 30 Gt/yr in 1992-2001 to 147 Gt/yr in 2002-2011.

The IPCC minimizes the significance of the slowed warming registered over the last ten years, saying it is well within the bounds of normal decadal variability. As for future warming (treated in Section E1 on p. 15 of the summary for policymakers), it is likely to exceed 1.5 or 2 °C in various modeled scenarios, is "unlikely" to exceed 4 °C in all but one of the models, but is "about as likely as not" to exceed 4 °C in the most pessimistic model.

Considering that estimated warming has been in the range of 1.5-4.5 °C since the dawn of computerized climate modeling, one might wonder why so many scientists have gone to the trouble, at considerable expense, of once again subjecting projections to every reasonable kind of probing they can think of only to conclude the original estimates were about right all along. And indeed some reputable authorities are wondering exactly that. In an editorial last week, Nature magazine suggested that it is time for the IPCC to stop doing its regular, comprehensive assessments and instead do focused reports on topics of urgent concern, like the report it issued last year on weather extremes.

I'm tempted to agree. The repetition of so many well-known findings, together with the endless qualification of each finding in terms of probability, tends to deaden rather than awaken interest. Then too there is the inherent caution of the IPCC, given that all reports must be produced by consensus, a conservatism that no doubt was heightened this time around by the climate mini-scandals of the past few years. But what if there were suddenly to be an abrupt change in mainstream assessment of climate prospects, either for better or worse? In the absence of the IPCC's regular assessments, there would be no authoritative and generally respected procedure for validating the change and alerting policymakers to its implications.

Photo: iStockPhoto

World’s Largest Solar Thermal Plant Syncs to the Grid

The Ivanpah Solar Electric Generating System delivered its first kilowatts of power to Pacific Gas and Electric (PG&E) on Tuesday.

The world’s largest solar thermal plant, located in the Mojave Desert, sent energy from its Unit 1 station to PG&E, which provides power to parts of Northern California. When the plant is fully operational later this year, it will produce 377 megawatts. Two of the plant's three units will supply energy to PG&E and the other will deliver power to Southern California Edison.

"Given the magnitude and complexity of Ivanpah, it was very important that we successfully complete this milestone showing all systems were on track," Tom Doyle, president of NRG Solar, one of the plant’s owners, said in a statement.

The massive project spans more than 1400 hectares of public land and will double the amount of commercial solar thermal energy available in the United States. There are other large concentrated solar power (CSP) projects in the Middle East and Spain, but most of the growth in solar in the United States has come from photovoltaic (PV) panel projects, which have come down considerably in price in recent years.

Even with the proliferation of cheap solar PV, there is other value in CSP projects, which use fields of mirrors to concentrate sunlight on central towers, producing steam that drives turbines. A study earlier this year from the National Renewable Energy Laboratory (NREL) found that a concentrated solar facility would be particularly useful for providing short-term capacity when other operators are offline, or as a peaker plant when demand is highest. And unlike intermittent wind and solar PV, steam turbines offer a steady power supply that operators can turn on or off or fine-tune on demand.

Google, which has invested heavily in renewable energy projects—including $168 million it put into Ivanpah—also sees value in CSP. “At Google we invest in renewable energy projects that have the potential to transform the energy landscape. Ivanpah is one of those projects,” Rick Needham, director of Energy and Sustainability at Google, said in a statement. In addition to generation, Google's investments in wind and solar include a solar financing company and the Atlantic Wind Connection project.

And it just wouldn't be an energy project without some criticism. Ivanpah's creators have been chided for the plant's potential to transform the physical landscape—especially its impact on the desert ecosystem and desert tortoises in particular. But some environmentalists see the risk as an acceptable one if utility-scale solar installations are replacing coal-fired power plants. California has a goal to get 33 percent of its electricity from renewables by 2020.

Photo Credit: Brightsource Energy

Counting the Sins of China's Synthetic Gas

Heavy water use, threats of tainted groundwater, and artificial earthquakes are but a sampling of the environmental side effects that have tarnished North America's recent boom in natural gas production via hydraulic fracturing, or fracking. No surprise, then, that in European countries such as the U.K. that are looking to frack for cheap domestic gas, the environmental protesters often arrive ahead of the drill rigs.

But countries seeking fresh gas supplies could do far worse than fracking. So say Duke University researchers who, in today's issue of the research journal Nature Climate Change, shine a jaundiced spotlight on China's plans to synthesize natural gas from coal. Nine synthetic gas plants recently approved by Beijing would increase the annual demand for water in the country's arid northern regions by over 180 million metric tons, the Duke team concluded, while emissions of carbon dioxide would entirely wipe out the climate-cooling impact of China's massive wind and solar power installations.

"At a minimum, Chinese policymakers should delay implementing their synthetic natural gas plan to avoid a potentially costly and environmentally damaging outcome," says Chi-Jen Yang, a research scientist at Duke's Center on Global Change and the study's lead author, in a statement issued yesterday.

Synthetic gas plants use extreme heat and pressure to gasify coal, producing a combination of carbon monoxide and hydrogen. Steam and catalysts are then added to convert those gases to methane to produce a pipeline-ready substitute for natural gas.

It takes a whole lot of steam: According to Duke's estimates, China's synthetic gas plants will consume up to 100 times as much water (per cubic meter of gas) as shale gas production through fracking.

Relative greenhouse impact is harder to pinpoint because fracking's climate footprint remains controversial. Recent U.S. Environmental Protection Agency and industry studies dispute earlier results suggesting that fracked wells leak more methane—a potent greenhouse gas—than conventional wells.

What is certain, say Yang and his colleagues, is that synthetic gas production will be carbon intensive relative to conventional gas. Burning conventional natural gas to produce power releases only one-third to one-half as much carbon as burning coal, but power from synthetic gas will be 36 to 82 percent dirtier than power from coal-fired plants.
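Putting those ratios side by side makes the gap stark. The sketch below normalizes coal-fired power to 1.0; the ratios come from the article, while the absolute scale is arbitrary:

```python
# Relative carbon intensities quoted above, with coal-fired power
# normalized to 1.0. Ratios are from the article; the scale is arbitrary.

coal = 1.0
gas_low, gas_high = coal / 3, coal / 2              # natural gas: 1/3 to 1/2 of coal
syngas_low, syngas_high = 1.36 * coal, 1.82 * coal  # syngas: 36 to 82 percent dirtier

print(f"natural gas:   {gas_low:.2f} to {gas_high:.2f} x coal")
print(f"synthetic gas: {syngas_low:.2f} to {syngas_high:.2f} x coal")
# Synthetic gas thus emits roughly 2.7 to 5.5 times as much carbon as
# conventional natural gas for the same power output:
print(f"syngas vs. natural gas: {syngas_low / gas_high:.1f}x to {syngas_high / gas_low:.1f}x")
```

In the worst case, a kilowatt-hour from synthetic gas carries more than five times the carbon of one from conventional natural gas.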

Capturing and storing CO2 emissions could slash the climate costs, and China may have the technology to do it. Last year, Chinese power firm Huaneng started up the world's most advanced coal gasification power plant, which sports equipment to efficiently extract carbon waste from gasified coal. Similar technology could potentially enable China's synthetic gas plants to capture and sequester their CO2 instead of sending it up the stack.

Of course adding such equipment adds to construction and operating costs. Duke's team clearly doubts that Beijing will make synthetic gas producers go there.

Photo: David Gray / Reuters

GE to Muscle into Fuel Cells with Hybrid System

General Electric is working on an efficient distributed power system that combines proprietary fuel cell technology with its existing gas engines [like the one in the photo].

The company's research organization is developing a novel fuel cell that operates on natural gas, according to Mark Little, the director of GE Global Research and chief technology officer. When combined with an engine generator, the system can convert 70 percent of the fuel to electricity, which is more efficient than the combined cycle natural gas power plants powering the grid.

The fuel cell will generate electricity from reformed natural gas, or gas that's treated with steam and heat to produce hydrogen and carbon monoxide, he says. Residual gases from the fuel cell process (a "synthesis gas" that contains carbon monoxide and hydrogen) would then be burned in a piston engine to generate more electricity. The waste gas that comes from the fuel cell needs to be specially treated but “we know we can burn these things. They’re well within the fuel specs of our current engine,” Little says.
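The arithmetic behind a 70 percent combined figure is easy to sketch. The stage efficiencies below are hypothetical illustrations, not GE's actual numbers, but they show how a fuel-cell topping stage plus an engine burning the residual gas can beat either device alone:

```python
# How a fuel-cell topping stage plus an engine bottoming stage could reach
# roughly 70 percent overall efficiency. The stage efficiencies here are
# hypothetical illustrations, not GE's actual figures.

fuel_cell_eff = 0.55  # fraction of fuel energy the fuel cell converts
engine_eff = 0.33     # engine efficiency on the residual synthesis gas

# Assume, for simplicity, that all unconverted fuel energy reaches the
# engine as burnable residual gas:
residual_fraction = 1.0 - fuel_cell_eff

overall_eff = fuel_cell_eff + residual_fraction * engine_eff
print(f"overall electrical efficiency: {overall_eff:.0%}")
```

The key design point is that the engine's modest efficiency applies only to energy the fuel cell has already failed to capture, so the two stages stack rather than multiply.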

This distributed power system could provide electricity to a small industrial site or a data center, for example. It would replace diesel generators that are often used to power remote locations or bring electricity to places without a central grid. 

GE sells engines from two companies it acquired, Austria-based Jenbacher and Wisconsin-based Waukesha. It has done its own research on solid oxide fuel cells, and in 2011, it invested in Plug Power, which makes fuel cells for homes and small businesses. But Little indicated that this distributed power system will use new fuel cell technology invented by GE and configured to work in tandem with GE's engines. “We have a real breakthrough in fuel cell technology that we think can enable the system to be distributed and yet work at a very high efficiency level,” he says.

Commercial customers are showing more interest in stationary fuel cells and natural gas generators because they can provide backup power and potentially lower energy costs. GE's system, which is still a few years away from commercial availability, will be aimed at customers outside the United States, Little says. Because the United States has relatively cheap natural gas, the combined power generation unit is unlikely to be cost competitive with grid power there. However, the price of natural gas in many other countries is more than double that in the United States, and there the hybrid power generation unit will “compete beautifully,” Little says.

GE's hybrid fuel system is just one of many research efforts the conglomerate has underway to take advantage of unconventional oil and natural gas drilling. Among the projects now being considered at a planned research center in Oklahoma is a way to use liquid carbon dioxide as the fluid to fracture, or frack, wells, rather than a mixture of water and chemicals. The company is developing a hybrid locomotive engine that can run on both diesel and natural gas. And it is working on small-scale liquid natural gas fueling stations that could be placed along railroad lines.

In another effort, GE is developing sensors and software to make oil and gas wells smarter. Researchers are working on different types of photonic sensors that are able to withstand very high heat and pressure. These would be better than electronic sensors for gathering flow and fluid composition data within wells, according to GE researchers.

Image credit: GE

Can Rotterdam's Port Become a Virtual Power Plant?

The energy sector is critical to Rotterdam’s economy, but the port city has aggressive plans to cut its carbon dioxide emissions in half by 2025.

The city takes in imports of oil, coal, biomass, and natural gas that are used across Northwest Europe. It is not just a stopover, but also a major refinery hub for the region. Even though Rotterdam relies heavily on the fossil fuel industry, it is increasingly focused on how to leverage renewables and existing assets to power its own port.

Rotterdam is partnering with General Electric [PDF] to develop a smart grid that can act as a virtual power plant (VPP), integrating thermal and renewable power production with flexible users in a centrally controlled system that behaves like a single power plant. The city has been working with GE over the past few years to reduce emissions, improve water management, and increase energy efficiency.

A virtual power plant takes energy efficiency and demand-side management to another level. It can be thought of as a sophisticated microgrid cluster, in which digital measurement and monitoring equipment on distributed resources can respond to the needs of the grid in real time. For example, many of the large industrial plants in the port produce their own electricity and heat, which can be sold into the grid when wind or solar production falls. There may also be more traditional generation, such as a coal-fired power plant or combined heat and power.

“Within a VPP, the electricity use of one part can be coordinated with the production of electricity in another part. A harbor, where many companies produce and consume electricity at a limited distance from each other, should be a suitable location to test and implement such a VPP,” Daan Six of the Belgian research organization VITO said in a report on the potential of a VPP in Rotterdam.

A virtual power plant usually responds in real time to changing electricity rates. Depending on the cost of electricity, a large industrial customer may sell some power back to the grid or provide grid-balancing services like frequency regulation, which is a larger problem with intermittent wind and solar than with steady thermal generators.

A dynamic microgrid with various ways to produce and curb kilowatts can lead to cleaner energy use, especially if fossil-fueled peaking power plants can be avoided by consumers curbing their energy use. But a virtual power plant is not necessarily a replacement for fossil fuel-fired plants. An industrial customer might turn to backup generators that run on diesel, for instance, when the price signal is too high to take power from the grid.
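The dispatch logic each VPP participant runs is, at its core, a price comparison. A minimal sketch, with hypothetical thresholds and names chosen only for illustration:

```python
# Minimal sketch of the price-responsive dispatch logic described above.
# Thresholds and function names are hypothetical, for illustration only.

def dispatch(price_per_mwh: float, onsite_cost_per_mwh: float) -> str:
    """Decide what an industrial site in the VPP should do at this price."""
    if price_per_mwh > onsite_cost_per_mwh * 1.5:
        # Grid power is expensive: run on-site generation and sell surplus.
        return "generate_and_sell"
    elif price_per_mwh < onsite_cost_per_mwh * 0.5:
        # Grid power is cheap: buy from the grid, idle on-site generation.
        return "buy_from_grid"
    # In between: self-supply, and offer balancing services if asked.
    return "self_supply"

print(dispatch(price_per_mwh=120.0, onsite_cost_per_mwh=60.0))  # expensive hour
print(dispatch(price_per_mwh=20.0, onsite_cost_per_mwh=60.0))   # cheap hour
```

The central VPP controller's job is then to coordinate hundreds of such decisions so that the port as a whole looks like one dispatchable plant to the grid operator.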

“Rotterdam is certainly one of those global conglomerates of industry in a very tight space and, because of the petrochemical and other activity there, with incredibly high energy demands,” GE’s Stephen Burdis told PortStrategy. “That is one of the drivers behind the project.”

The virtual power plant project is part of a larger energy restructuring in Rotterdam. E.ON and GDF Suez are constructing coal/biomass power stations that will decrease the carbon footprint compared to a coal-only power plant. Some refineries in the port are already capturing carbon dioxide and providing it to greenhouse growers. Steam waste heat is being captured for district heating and the port has plans to double its capacity for wind energy.

The efficiency efforts in Rotterdam are part of a broader effort in the North Sea region, E-harbors, which aims to maximize the use of renewable energy for transportation and electricity consumption.


Photo: Mercator Media

EPA Issues Regulations for New Coal-Fired Plants

As expected, at the end of last week the U.S. Environmental Protection Agency (EPA) released regulations for carbon emissions from new coal-fired and gas-fired power plants, under authority of the Clean Air Act and a key Supreme Court decision upholding the agency's authority to regulate greenhouse gas emissions. As expected, the EPA proposes to limit emissions from new coal plants to about 500 kilograms (1100 pounds) per megawatt-hour, and from gas plants, to 450 kg (1000 lbs) per MWh.

The EPA originally aimed to set a single standard for both coal and gas plants, at the lower limit, but under pressure from industry, it slightly eased the limit for coal. It is a minor concession. Average emissions today from a newly built, state-of-the-art coal plant are around 800 kg (1800 lbs) per MWh, according to GigaOm's Katie Fehrenbacher. The EPA's regulations are therefore being universally interpreted to imply that no new coal plant can be built in the United States unless it provides for carbon capture and storage (CCS).
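A quick unit check shows how far a modern coal plant overshoots the new limit, converting the EPA's pound-based figures to kilograms:

```python
# Unit check on the figures above: the EPA's pound-based limits converted
# to kilograms per megawatt-hour, compared against a typical new coal plant.

LB_PER_KG = 2.20462

coal_limit_kg = 1100 / LB_PER_KG    # limit for new coal plants, per MWh
gas_limit_kg = 1000 / LB_PER_KG     # limit for new gas plants, per MWh
typical_coal_kg = 1800 / LB_PER_KG  # state-of-the-art new coal plant

print(f"coal limit: {coal_limit_kg:.0f} kg/MWh")
print(f"gas limit: {gas_limit_kg:.0f} kg/MWh")
print(f"typical new coal plant: {typical_coal_kg:.0f} kg/MWh")
# A new coal plant without CCS overshoots its limit by roughly this factor:
print(f"overshoot: {typical_coal_kg / coal_limit_kg:.2f}x")
```

Even a best-in-class conventional coal plant emits roughly 1.6 times the allowed rate, which is why analysts read the rule as a de facto CCS mandate.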

The immediate impact of the regs, also by universal agreement among industry analysts and commentators, will be nil. That is because no new coal-fired power plants are being built anyway. The reason generally given is competition from much cheaper and much cleaner natural gas. But there is a second, just as important reason. Starting about a decade ago, a nationwide anti-coal campaign erupted at the environmentalist grassroots, which means that any company that proposes to build a new coal-fired plant has to go through the procedural trials previously reserved for those proposing a new nuclear plant.

That radical change in the country's political chemistry was a main factor, for example, in the dramatic decision by Texas's top utility in 2007 to ditch plans for a fleet of new coal plants and (this anyway was the initial plan) to go nuclear instead. Much sooner than most industry experts and political commentators had expected, the message had penetrated to the grassroots that coal plants are the country's single most concentrated source of greenhouse gas emissions (as the EPA put it in its press release announcing the new regulations last Friday), and a major cause of respiratory disease as well.

To say that the new regs were generally expected and will have little or no immediate impact is not to say, actually, that they necessarily will be upheld when challenged in court. The EPA is required to demonstrate that the new carbon limits are achievable by means of current demonstrated and economically viable technology, that is to say, CCS. Right now, only scattered pilot CCS plants exist, and industry shows no inclination whatsoever to build a large plant equipped with CCS on a commercial basis, either in the United States or abroad. Challenges to the regs will likely climb all the way through the legal appeals process, eventually reaching the Supreme Court. In the end, industry may well win.

In the meantime, regulatory uncertainty will be one more reason for energy companies not to build new coal plants on any basis. But that uncertainty is but one of three conditions, any one of which is sufficient to virtually guarantee there will be no new coal-based generation. For a real renaissance in coal-fired power to occur, not only would gas prices have to rise sharply, but there would have to be a fundamental change in environmentalist opinion.

Photo: Luke Sharrett/Bloomberg/Getty Images


Rooftop Solar Faces Growing Opposition from Utilities

Although solar energy is still a midget among U.S. energy sources, its rapid growth from a small base is beginning to make some of the big players nervous. Regulated utilities in a number of states (Arizona, California, Colorado, Idaho, and Louisiana) have started to complain about the various benefits for photovoltaics (PV), says Marc Gunther in an article appearing on Yale's environment360 website. Gunther, a contributing editor at Fortune, describes the position of PV in the U.S. energy mix as "puny" or "a mere blip," inasmuch as it accounted for barely one-tenth of 1 percent of U.S. electricity last year. (Coal delivered 37 percent and natural gas 30 percent.) Yet rooftop PV installations jumped nearly 50 percent last year, enough to make some incumbents seriously nervous.

Critics of solar incentives object to the whole panoply of state and federal subsidies favoring PV, but they particularly object to aspects of "net metering," the requirement that utilities allow distributed generators like owners of rooftop arrays to sell electricity back into the grid. The subject of net metering is a complicated one. In the United Kingdom, which to a great extent inspired the injection of free-market principles into electric power systems, net metering is not generally allowed or encouraged. There, the industry has persuaded regulators that with net metering, distributed generators become, in effect, free riders: they benefit from selling into the grid without bearing their fair share of paying for its maintenance. In the extreme case, a household that always produced excess energy and never bought power from the local utility might pay nothing to support the grid.

In the United States, net metering was required by 2005 Federal energy legislation, but details of implementation vary drastically from state to state. A key issue is whether utilities are required to pay customers selling solar electricity into the grid at wholesale or retail electricity rates. As Rick Tempchin, executive director for retail energy services at the Edison Electric Institute in Washington, D.C. has put it, “Paying credits at the full retail rate costs the utility money because that cost will be higher than the cost that the utility actually avoids by purchasing the distributed generation power. For example, in centralized markets, a utility can buy all of its power needs at the wholesale rate. This rate will always be less than the full retail rate it would have to pay to buy the same power from a customer.”
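A toy example makes the retail-versus-wholesale question concrete. All numbers below are hypothetical, chosen only to show the shape of the calculation:

```python
# Toy example of the retail-vs-wholesale crediting question. All numbers
# are hypothetical, chosen only to show the shape of the calculation.

retail_rate = 0.12     # $/kWh the customer pays for grid power
wholesale_rate = 0.05  # $/kWh the utility pays in the wholesale market

bought_kwh = 600       # power drawn from the grid this month
exported_kwh = 400     # excess solar sold back this month

bill_retail_credit = bought_kwh * retail_rate - exported_kwh * retail_rate
bill_wholesale_credit = bought_kwh * retail_rate - exported_kwh * wholesale_rate

print(f"bill with retail-rate credit:    ${bill_retail_credit:.2f}")
print(f"bill with wholesale-rate credit: ${bill_wholesale_credit:.2f}")
```

In this example the two crediting rules differ by $28 a month; that gap is the money utilities argue should be helping to pay for grid upkeep, and solar advocates argue reflects the real value of distributed generation.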

In some ways, the debate over net metering is rather closely analogous to arguments that have raged in U.S. education policy over school vouchers and, more recently, charter schools. There, too, critics complain that allowing parents to take their children out of public schools at public expense results in less revenue for maintenance of school infrastructure. As in the education debates, political libertarians, including some Tea Party members, tend to support net metering because it permits and encourages individuals to produce their own power. In Arizona, where the debate is unusually heated, no doubt because PV is closest to commercial competitiveness there, net metering advocates have hired Barry Goldwater Jr., the son of the late arch-conservative Arizona senator, to promote their cause. The pro-coal, anti-environmentalist Koch brothers, on the other hand, have put money into Arizona to support a website that opposes net metering.

Photo: Andy Cross/The Denver Post/Getty Images

Two-Way Wave Power Generator Wins UK Dyson Award

A new multi-axis wave power generator that can absorb forces no matter which way the water is churning has won the Dyson Award for the UK region, according to BBC News.

Renewable Wave Power is a semi-submersible, multi-axis wave energy converter designed specifically for the waters off the Orkney Islands in Scotland. The prize comes with US $3170, which could grow to as much as $47 550 if the technology wins the global James Dyson Award.

Sam Etherington, the project’s engineer, was inspired by the variability of the ocean while kite surfing and sailing off of Cumbria in Northwest England.

In wave tanks at Lancaster University, the chain of loosely coupled pistons was able to absorb forces from all directions. The conditions in the tank were modeled on data taken from buoys off the Orkney Islands.

But a successful trial in a university wave tank is not necessarily a breakthrough towards harnessing the energy of the ocean. "The real test for a device is its cost of energy," David Forehand from the Institute for Energy Systems at Edinburgh told the BBC.

Scotland is aiming to be the world’s leader in wave and tidal power, but the field is still largely in the development and commercialization stage. The Scottish government has established the Saltire Prize, which will award $15.8 million in 2017 to one of the wave and tidal energy companies competing for it. The winner will be the technology that achieves the greatest electrical output beyond a minimum of 100 gigawatt-hours over a continuous two-year period, using only the power of the sea.

Renewable Wave Power has many steps to take before it could compete for an award such as the Saltire Prize, however. Etherington’s submission to Dyson noted that he would require further tests to verify the initial results. If those tests were successful, Etherington would commission a scaled-up rig to be tested at the European Marine Energy Center in Orkney, which has a variety of test facilities for wave and tidal power devices at various stages of development.

Despite a decade of companies testing at the European Marine Energy Center, most commercial applications are still quite small. One installation at Bangor Hydro Electric Company in Maine, for example, powers about 25 to 30 homes.

There are various challenges with tidal and wave power generation, such as developing components that can withstand years of salty, turbulent waters and competing with other renewable energy sources that have seen significant price drops in recent years.

Although there are many challenges with the many different technologies that have been proposed to harness ocean energy, Scotland estimates that signed lease agreements could produce up to 16 gigawatts of marine energy from the Pentland Firth and Orkney waters by 2020.


Photo Credits: James Dyson Foundation, Mitch Payne/Getty Images

Completely Self-Controlled Power Systems Are Proposed

Though the first generation of smart grids incorporating digital communications and computing is taking longer than expected to materialize, it is not too soon to start thinking about what the second generation of power systems will look like.

In the issue of the IEEE Smart Grid e-Newsletter that went live this week, Qing-Chang Zhong, a professor in the Department of Automatic Control and Systems Engineering at the University of Sheffield in the U.K., proposes a novel scheme for how autonomous power system control could be achieved in electricity networks that could have millions of active players. Today, Zhong points out, the active participants in a large national grid like China's number just 1500 or so (mostly big central generators delivering 200 megawatts or more). With the introduction of many smaller wind and solar installations, electric vehicles, and energy storage facilities, the number of players is already exploding. The number of players will rise even more sharply as home-energy management and demand-response systems come into their own and start playing a growing role in voltage regulation.

How will it be possible to coordinate all those players and maintain system reliability?

Zhong proposes a model in which the synchroverter technology he co-invented a number of years ago would be widely deployed at all levels of the grid to take care of voltage and frequency regulation autonomously, assuring that the smart grid's communications functions aren't tied up by the blizzard of negotiations determining when and how much new distributed generation sources will contribute to the grid. Synchroverters would operate like the synchronous machines engineered to provide regulation in today's grids. As Zhong goes on to explain, wind turbines and solar arrays, EVs, and battery banks typically are connected to the grid by means of inverters (DC to AC converters), which can be engineered to have the properties of synchronous machines. On the demand side, Internet devices are powered by DC supplies and therefore connect to the grid by means of rectifiers, as do LEDs [see photo], which seem destined to be the dominant lighting technology of the future. Altogether, three quarters or more of generation and load could be connected to the grid through such power-electronic converters.

That implies, Zhong believes, that synchroverters could be deployed everywhere a rectifier is needed and all the devices could flock together and sing, without central coordination or control.
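The kind of autonomous regulation a synchroverter emulates can be illustrated with classic frequency droop, in which each unit adjusts its power output from a purely local frequency measurement, with no central commands. The droop gain below is a hypothetical value for a small distributed unit:

```python
# Illustration of the frequency-droop behavior that a synchroverter
# emulates: each unit adjusts its power from a local frequency measurement,
# with no central coordination. The droop gain is hypothetical.

NOMINAL_HZ = 50.0        # European nominal grid frequency
DROOP_KW_PER_HZ = 100.0  # hypothetical gain for a small distributed unit

def droop_power(setpoint_kw: float, measured_hz: float) -> float:
    """Power output after droop correction: inject more when frequency
    sags below nominal, less when it rises above."""
    return setpoint_kw + DROOP_KW_PER_HZ * (NOMINAL_HZ - measured_hz)

print(f"{droop_power(500.0, 49.9):.0f} kW")  # frequency low: output rises
print(f"{droop_power(500.0, 50.1):.0f} kW")  # frequency high: output falls
```

Because every unit responds to the same shared signal, the grid frequency itself, millions of converters can share regulation duty without any of them exchanging messages.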

A second article in the current issue of the IEEE newsletter also addresses a fundamental issue in the emergent smarter grid of the future. Amro M. Farid, an assistant professor of engineering systems and management at the Masdar Institute in Abu Dhabi, points out that in future grids, generation will be much less dispatchable on average (less capable of being ramped up quickly), while load-side assets will be more dispatchable. Farid, who leads the Laboratory for Intelligent Integrated Networks of Engineering Systems at Masdar, believes a solution is to be found in a model involving "holistic assessment for enterprise control." The concept, as Farid explains, originated in manufacturing, where it came to refer not only to management of dynamic production processes but also their integration with business considerations. Farid argues that this model is of obvious relevance to smarter grids that are much more dynamic in terms of inputs and outputs and, at the same time, more responsive to market forces.

Image: Yagi Studio/Getty Images

Compressed Air Energy Storage Makes a Comeback

Only a handful of compressed-air energy storage (CAES) plants have been installed since the 1970s. This week, SustainX is bringing the technology back to the U.S. electricity grid, albeit in a vastly different form.

The startup, based in Seabrook, New Hampshire, began operating a full-scale demonstration system that stores energy as compressed air in pipes and supplies 1.65 megawatts of power. The company will use the machine to gather data on performance and to show off the technology to potential investors and customers. The project was funded by $5.4 million from the Department of Energy and at least that much from SustainX, according to a representative.

Conventional compressed-air energy storage uses a compressor to pressurize air and pump it into underground geological formations. The only two plants of this type put into operation, one in Huntorf, Germany, in 1978 and the other in McIntosh, Alabama, in 1991, use salt caverns as storage tanks, pumping compressed air in at night, when energy demand is lowest. During the day, the air is released, heated with natural gas, and forced through a turbine to generate power. The appeal of this technology is that it's relatively low cost and can store many kilowatt-hours of energy.

SustainX takes a different tack: it uses compressed air as the energy storage medium, but holds the air in large pipes of the kind used in natural gas pipelines. That means utilities or even commercial customers could place a storage device in a range of industrial locations, rather than only where there's a suitable underground formation.

At the base of SustainX’s machine, called the S165, is the bottom half of a diesel engine normally used to propel ships. To store energy, a permanent magnet motor-generator turns the engine's crankshaft, driving six pistons located above it. The pistons, each of which is taller than a full-grown man, compress a combination of air and foamy water, which is then pumped into storage tanks. When power is needed, the air is released, driving the pistons and turning the generator to create a current.

A key difference between SustainX's technique and conventional CAES technology is that the compression and expansion of air are done at near-constant temperature, so the process doesn't require natural gas. And unlike conventional batteries, this system can vary the amount of energy it stores independently of its power output. In other words, you can expand its storage capacity simply by installing bigger pipes. That's different from a battery designed to deliver, say, 1 megawatt for 2 hours: if you wanted four hours of storage, you'd have to buy another battery, a more expensive approach, the company says.
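The claim that energy capacity scales with pipe volume, independent of power rating, follows directly from the physics of isothermal compression. A rough sketch, using the ideal-gas isothermal-work formula with round-number pressures and pipe dimensions that are assumptions rather than SustainX specifications:

```python
from math import log, pi

# Hedged back-of-the-envelope sketch: ideal-gas isothermal energy stored in
# pipe sections. Pressure and dimensions are assumed round numbers, not
# SustainX specifications.
P0 = 1e5        # ambient pressure (Pa)
P = 200e5       # assumed storage pressure (Pa), i.e. 200 bar

def pipe_volume(diameter_m, length_m):
    """Internal volume of a cylindrical pipe section (m^3)."""
    return pi * (diameter_m / 2) ** 2 * length_m

def stored_energy_kwh(volume_m3):
    """Isothermal expansion work W = P * V * ln(P / P0), converted to kWh."""
    joules = P * volume_m3 * log(P / P0)
    return joules / 3.6e6

# One 100-m run of 1.2-m-diameter pipe (assumed dimensions)
e1 = stored_energy_kwh(pipe_volume(1.2, 100))

# Doubling the pipe length doubles the stored energy; the power rating,
# set by the piston machinery, is unchanged.
e2 = stored_energy_kwh(pipe_volume(1.2, 200))
```

Under these assumptions a single 100-meter pipe run holds on the order of a few megawatt-hours, and capacity grows linearly with pipe volume, which is the decoupling of energy from power that the company emphasizes.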

The target market for these systems is renewable energy project developers looking to firm up the output of wind and solar farms or to store excess wind energy at night for sale during peak times. Utilities could elect to install air energy storage instead of making upgrades to transmission and distribution lines to meet growing demand. SustainX expects that a typical configuration will offer between 10 and 20 megawatts and store four to six hours of energy, according to a representative.

The first customers are likely to be in Asia, in places where the price of natural gas is higher than it is in the U.S., electricity demand is growing rapidly, and there are problems integrating renewable energy, says Richard Brody, the vice president of business development. He expects the first systems to be installed next year in China. At volume, he projects, the price can come down to $500 per kilowatt-hour of storage capacity, and the system can operate for 20 years.

There are at least two other compressed-air energy startups looking to get a toehold in the market for long-duration energy storage. Berkeley, California-based startup LightSail Energy is building a device designed to store compressed air in steel tanks, while General Compression of Newton, Mass., has a compressor operated by a wind turbine that stores air in underground caverns. Late last year, General Compression opened a two-megawatt CAES facility connected to a wind turbine in Gaines, Texas.

All of these companies hope to demonstrate that their technologies cost less and last longer than a host of battery alternatives. With few commercial systems for bulk storage currently online, it’s hard to evaluate which technologies will win out. But many companies are betting they can achieve a technical breakthrough that will make multi-hour energy storage cost-effective. 

Image: SustainX

Updated 17 September 2013: Changed to reflect the opening of a CAES plant last year. 

