Energywise

Gas Up, Coal and CO2 Down Sharply

The Worldwatch Institute, for decades a leading player in sustainability research, has issued a report with startling findings about recent changes in the U.S. fossil fuel mix and their implications for the country's greenhouse gas emissions. From 2007 to 2009, the share of natural gas in U.S. electricity generation increased from 20 to 23 percent, while the share of coal dropped from 52 to 45 percent. That may not look like much at first glance, but it may be the beginning of a sea change in U.S. energy. The decrease in coal generation accounted for half the decline in carbon emissions from 2007 to 2009, a period in which emissions fell by a jaw-dropping 10 percent.

"In just two years, we wiped out half the increase in U.S. greenhouse gas emissions that had taken place during the previous 15 years," observes Christopher Flavin, president of Worldwatch and principal author of the report, "The Role of Natural Gas in a Low-Carbon Energy Economy." Could that dramatic decrease in U.S. carbon suggest that the U.S. goal of cutting greenhouse gas emissions 17.5 percent by 2020 is unduly unambitious? "It could," says Flavin.

Easily the most important factor enabling U.S. coal generation and carbon emissions to fall so dramatically has been the fast-growing role of natural gas in electricity generation. That in turn is attributable to the revolution in "unconventional gas," that is, natural gas extracted from deep shale formations by means of horizontal drilling and hydraulic fracturing. But almost as important, arguably, is the bear hug that prominent organizations in the environmental community--Worldwatch and the Sierra Club among them--are giving gas.

Gas, after all, is a nonrenewable fossil fuel. So why do a lot of environmentalists like it so much?

If you go to a place like Dimock, in northeast Pennsylvania, where Marcellus Shale gas drilling is going great guns, you'll see huge trucks carrying drill pipe through the middle of what used to be a sleepy rural town. Methane has contaminated drinking water wells and must now be vented; everywhere around town there are huge unsightly tanks (see photo above), some holding water for injection into wells, some holding the flowback water, which contains all manner of chemicals--some added to the injection water to expedite gas extraction, some picked up from the ground on the way back to the surface.

So how can national environmental leaders be for gas? Actually, the advantages, as enumerated by Flavin and his coauthor Saya Kitasei, are overwhelming. First: "Burning natural gas produces virtually none of the sulfur, mercury, or particulates that are among the most health-threatening of pollutants that result from coal combustion," they explain, citing a study that found the environmental cost of gas-generated electricity was one-twentieth--yes, one-twentieth!--that of coal.

Second, electricity from natural gas results on average in about half the carbon emissions of coal-fired power. And that's just the average: the gap is even wider between the oldest, dirtiest coal plants and the best gas-fired ones. "New combined-cycle gas plants produce 55 percent less carbon dioxide than new coal plants do and 62 percent less than the average U.S. coal plant," say Flavin and Kitasei.
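To make those percentages concrete, here is a minimal back-of-envelope sketch of the per-megawatt-hour arithmetic. The 62 percent figure comes from the report as quoted above; the roughly one tonne of CO2 per megawatt-hour for an average U.S. coal plant is a commonly used round number that I am assuming purely for illustration.

```python
# Rough per-MWh comparison using the 62 percent figure quoted above.
# The ~1.0 tonne CO2/MWh for an average U.S. coal plant is an assumed round
# number for illustration, not a figure taken from the Worldwatch report.
AVG_COAL_T_PER_MWH = 1.0
NEW_CCGT_T_PER_MWH = AVG_COAL_T_PER_MWH * (1 - 0.62)  # 62 percent below average coal

avoided_per_mwh = AVG_COAL_T_PER_MWH - NEW_CCGT_T_PER_MWH
print(f"New combined-cycle gas: ~{NEW_CCGT_T_PER_MWH:.2f} t CO2/MWh")
print(f"Avoided by shifting 1 MWh from average coal to new gas: ~{avoided_per_mwh:.2f} t")

# A purely hypothetical 100 TWh shifted from coal to gas (roughly 2.5 percent of
# annual U.S. generation) would then avoid on the order of 60 million tonnes of CO2.
print(f"100 TWh shifted: ~{avoided_per_mwh * 100e6 / 1e6:.0f} million t CO2 avoided")
```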

Because of dispatch rules favoring coal and nuclear baseload plants ahead of gas, and because gas often is used in peaking plants that come onstream only intermittently, gas-fired plants represent a much larger share of U.S. generating capacity than they do of actual generation. This implies that changes in rules and policy could induce an even more rapid conversion from coal to gas, if prices stabilize. Flavin thinks it is now conceivable that in the next 10 or 12 years coal's share of U.S. electricity generation could be cut to about 25 percent, yielding roughly a 12.5 percent cut in U.S. carbon emissions.

But prices and policies are the wildcards. Traditionally gas prices have fluctuated drastically; their stabilization will depend on finding ways to develop shale gas consistent with meeting local concerns, which are serious. Needed policies include more uniform Federal and state regulation of gas development, revised electricity dispatch rules, tighter air quality regulation, and--above all--a strong cap-and-trade climate bill that discourages coal generation instead of  "locking it in," as Flavin puts it.

Smart Grid Feedback (1)

In a posting two weeks ago I raised the question of whether the much vaunted smart grid will produce actual energy and carbon savings . . . and save customers money. My prediction was that we will not know until next year at the earliest.

That posting elicited a lively response from readers. Though it isn't normally my practice to respond directly to comments or repeat them in follow-on posts, in this case such wide and intense interest attends smart grid prospects that it's worth culling some of the more telling reports.

To be clear: I'm going to ignore the more opinionated postings that complain, for example, about the smart grid's being just another excuse for the government to control our lives and violate our privacy; the "global warming hoax"; the undue influence of the U.S. military-industrial complex; "Stalinist regulations issued by Washington"; and the need for technical project leaders with "dictatorial powers."

The question of why so many engineers believe they want more liberty but at the same time betray a yearning for authoritarian leadership--that is a whole other topic, and not one that will ever be addressed in this space.

So let's turn instead to the more factual reports:

--Aggravated risk of network failure. AW worries about recent research suggesting that networked networks are "prone to epic failures."

--Higher electricity bills in Canada. Daniel Fingas reports that in Toronto, smart metering with time-of-day pricing seems to be yielding higher average electricity costs.  "If your water heater (retrofitted with the appropriate control system) or heating/AC is electric you might see some savings, but other than that your only reasonable adjustable consumers are your clothes dryer and dishwasher."

--Radio interference and erroneous data gathering in California. Tom Kirkpatrick, reporting from PG&E country, says that in his home, he could no longer get AM radio after his smart meter was installed, because of RF interference. Further, he says that there have been local news reports of supposedly smarter meters delivering wrong information about power usage and communications failures.

--Texas turmoil. Lyel Upshaw says that a smart grid installation program "has been temporarily suspended while investigations are ongoing regarding consumers' electric bills doubling, tripling, and even more on their first billing after installation." Problems with calibration are probably involved, and Staples is confident that they will be promptly solved, but "the new digital meters [have] not made a good impression on consumers." TMCC says that in Dallas, "tests have had to be run to prove the smart meters are actually set and running correctly in the field since the test program end users found much higher energy bills after installation."

Permit me, since readers so often complain about my alleged biases and obsessions, to express gratitude to IEEE Life Fellow  Jim Crescenzi, who says he considers my reports "especially informative and helpful" and me to be "unusually objective."

 

Cape Wind Coda

Last week my posting about Federal approval of the immensely controversial Cape Wind project expressed skepticism about its significance and the general potential of offshore wind in the United States.

Tom Vinson, director of Federal regulatory affairs at the American Wind Energy Association, begs to differ. He points out that this is the first such project to obtain Washington's approval, and that all offshore projects further than 3 miles from the coastline require Federal approval. He says Cape Wind now has all required state, local, and Federal permits, except possibly one from the FAA, which must approve any structure higher than 200 feet.

Regarding wind's general potential, Vinson recalls that in a major study that found the United States could generate 20 percent of its electricity from wind, an estimated 54 GW of the 300 GW total would come from offshore turbines. That works out to 3 or 4 percent of U.S. electricity coming from offshore wind.
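The arithmetic is simple enough to sketch, treating capacity share as a rough proxy for energy share (a simplification, since offshore and onshore capacity factors differ):

```python
# Offshore wind's slice of U.S. electricity in the 20-percent-wind scenario:
# 54 GW offshore out of 300 GW of wind, with wind overall supplying 20 percent
# of generation. Capacity share is used here as a rough proxy for energy share.
offshore_fraction_of_wind = 54 / 300            # 18 percent of the wind fleet
offshore_share_of_total = 0.20 * offshore_fraction_of_wind
print(f"Offshore wind share of U.S. electricity: ~{offshore_share_of_total:.1%}")  # ~3.6%
```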

That's not trivial. But I stand by my observation that potential is not the same as realistically developable. Every offshore project will be controversial, and I'll be very surprised if oceanic offshore wind is generating 3 percent of U.S. electricity by 2020 or 2030.

POSTSCRIPT (May 11, 2010):

AWEA, in its annual report, lists 12 offshore wind projects on the books. Four are in New Jersey, two in Massachusetts and two in Rhode Island, one each in Delaware, North Carolina and Texas, and a Great Lakes project near Cleveland.

 

Fixing the Gulf Oil Leak: Blowout Preventers and Robotic Submarines

Updated June 20

Update:

We are now 62 days into the Gulf of Mexico oil leak, and since this post was first published a number of different attempts to stem the flow have been made, with varying degrees of success. (They varied from "abject failure" to "capturing some of the oil.")

As estimates of the oil flow rate climbed steadily from 5,000 barrels per day up to a stunning 60,000 barrels per day, the strangely named efforts ramped up. For a few days the top kill approach--injecting large amounts of mud and drilling fluid into the well to stem the flow--seemed promising, but after several days BP admitted defeat.

Next, the company did manage to install a lower marine riser package cap over the top of the damaged blowout preventer. To do this, diamond wire saws were deployed to cut off the damaged pipe; with the new cap maneuvered into place, captured oil and gas are now brought up a pipe to a waiting ship, the Discoverer Enterprise. The gas is flared off and the oil stored. This, along with a secondary siphoning technique that uses the same equipment employed for the top kill injection, now has a maximum capacity of 28,000 barrels per day of captured oil. We're still a ways off from President Obama's promise of capturing 90 percent of the oil within a short timeframe.
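For a sense of how far off, a quick calculation against the high-end flow estimate cited above:

```python
# Upper bound on the fraction of the leak the current containment setup can capture,
# at its stated maximum of 28,000 barrels per day, against the high-end flow
# estimate of 60,000 barrels per day cited above.
capture_max_bpd = 28_000
flow_high_bpd = 60_000
print(f"Maximum capture fraction: {capture_max_bpd / flow_high_bpd:.0%}")  # ~47 percent
```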

Clearly, the technical difficulties involved with an enormous leak under 5,000 feet of water have been, to this point, nearly insurmountable. Still, the flow of ideas from everywhere, including extensive suggestions in the comments section below, is impressive. Among those ideas are the use of explosive charges (including nuclear, although that option has been largely rejected as "crazy" by officials), installing a heavier blowout preventer capable of withstanding the pressure of the escaping oil, and disconnecting the pipe's flanges and installing an open gate valve.

Though some of the oil is now being captured, such ideas could still play a role before the leak can be shut off completely. Recent reports indicate that drilling of the relief wells is ahead of schedule, but BP still estimates they won't be fully operational until August. By then, this spill will have become among the worst the world has ever seen.

***********

When the Deepwater Horizon offshore oil rig exploded on April 20, much of the oil that was actually on the rig ended up in the water. We have since learned that as far as oil spillage is concerned, this was not the problem. The problem actually lies 5,000 feet under water, where the well from which Deepwater Horizon was pulling oil has since been spewing about 5,000 barrels, or 210,000 gallons, of oil each day. The thing is, the rig actually had a piece of technology that should have prevented this. It didn't work.

Oil wells both on and offshore have contraptions called blowout preventers. Those iconic old images of oil well gushers? The blowout preventer stops that from happening, saving the cleanup and expense of all the oil that used to spew everywhere before a well could be brought under control. According to BP's CEO Tony Hayward, the blowout preventer should have kicked in the day the explosion occurred, but failed to do so. They don't seem to know why.

Because the 450-ton blowout preventer (a bigger, underwater version of the image to the right, below) still does sit atop the leaking well, option A for stemming the flow of oil has been to kick it into action. To do so, BP has sent in remote-controlled submarines with robotic arms. As cool as that sounds, it too has failed, and again we don't quite know why.

Another less catchy option for preventing further leakage into the Gulf is to create chambers to sit around the three leaks, and connect pipes from those chambers that will funnel the oil up to the surface. Makes sense, but it will take at least a few weeks to get it going. And according to CNN, that technique has never even been tested in water so deep.

Finally, BP has one more idea: by this weekend the Transocean Development Driller III will begin work on a "relief-well" nearby that could stem the flow through the leaking well. Good idea, except that one will take up to three months. Let's see, 210,000 gallons per day, up to 90 days... almost 19 million gallons, or more than 61,000 tonnes, not even including what has leaked so far. If we get to that point, the constant comparisons to Exxon Valdez won't be so hyperbolic; that disaster totaled about 35,000 tonnes of oil (although there have been many other larger spills around the world).
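For readers who want to check that arithmetic, here is the conversion spelled out; the tonnes-per-barrel figure is a standard rough conversion for crude oil that I am assuming for illustration, not something taken from BP or the Coast Guard:

```python
# Back-of-envelope total for a 90-day leak at 210,000 gallons per day, converted
# to barrels and tonnes. The tonnes-per-barrel figure (~0.136) is a standard rough
# conversion for crude oil, assumed here; the true value varies with the crude.
GALLONS_PER_BARREL = 42
TONNES_PER_BARREL = 0.136

gallons = 210_000 * 90                     # ~18.9 million gallons
barrels = gallons / GALLONS_PER_BARREL     # ~450,000 barrels
tonnes = barrels * TONNES_PER_BARREL       # ~61,000 tonnes

print(f"{gallons/1e6:.1f} million gallons ~= {barrels:,.0f} barrels ~= {tonnes:,.0f} tonnes")
# For comparison, the Exxon Valdez spill totaled roughly 35,000 tonnes.
```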

This is the second major offshore oil rig incident in less than a year. In 2009, an Australian rig in the Timor Sea spewed oil into the water for 10 weeks before a decidedly low-tech fix stopped the flow: they pumped huge amounts of mud into the well to plug it up.

Pulling oil up from deep underground that is in turn deep underwater is, clearly, a complicated business. Fixing a giant mistake in that process is proving even harder. A spokesman for BP aptly summed up the efforts to stem the river of oil: "We're not sure it's going to work, but it's certainly something worth attempting."

(Photos via US Coast Guard and Philbentley)

Federal Government Approves Cape Cod Wind

The much-trumpeted approval by Washington of a proposed wind farm in the waters off Cape Cod has elicited an unusual amount of comment in the blogosphere and seems to have aroused expectations that the coast is now clear--please pardon the inevitable pun--for further wind farms all up and down the U.S. East Coast. Immediately, according to Energy Central's Bill Opalka, the state of Delaware and the Long Island Power Authority praised the government's decision, obviously in hopes that their similar projects will soon be approved as well.

Evidently the thinking is that if a large wind farm can be built in the Kennedy family's front yard, then large wind farms can be built anywhere. I doubt that's so. In any intrinsically beautiful place where developers want to put such farms, there is bound to be immense controversy. And how much offshore potential for wind is there in the United States? Not as much as in Denmark or the UK, relatively small countries surrounded by water--that's for sure. New England, with a relatively well balanced electricity generation portfolio, certainly doesn't need a whole lot of wind. Texas, despite its long Gulf Coast, doesn't need much either, as it has a huge onshore wind resource. California won't want it, given its acute sensitivities about Pacific Coast energy development (despite its immense need for green electricity). That leaves the U.S. Southeast, admittedly one of the regions in which energy demand is growing the fastest.

Ironically, however, the Southeast seems to be the region least predisposed to renewable and green energy. It would just as soon go with nuclear, thank you. And is it wrong? Might it not be nicer, aesthetically, to tuck away some compact nuclear plants rather than build sprawling wind farms off Hilton Head, the Keys, or Sarasota?

I bow to nobody in my enthusiasm for wind, but in my opinion, the best potential for U.S. offshore wind is not in the oceans but in the Great Lakes. They have some of the nation's fastest winds, and they are immediately adjacent to the regions that burn coal most intensively--generation that urgently needs to be replaced to reduce greenhouse gas emissions, not to mention save lives and improve health.

New York Gas Drilling Decision Cuts Both Ways

Last week the New York State Department of Environmental Conservation decided to remove the question of whether hydraulic gas fracking should be permitted in the New York City watershed from its statewide environmental review. The city, with support from the U.S. Environmental Protection Agency and many influential environmental organizations, has opposed drilling in the watershed, on the grounds that the city's pristine, unfiltered water supplies could be contaminated. The DEC decision invites companies to apply for drilling permits on a well-by-well basis, but that process is generally thought to be so cumbersome and costly that, in effect, the decision ends any immediate prospect of developing Marcellus gas north and northwest of New York City.

Pro Publica's Abrahm Lustgarten, who has done by far the most complete and impressive reporting on Marcellus gas development, admirably summarizes the implications of the decision: "The decision appears to protect the unfiltered water supply for nine million residents [of NYC]--as well as another unfiltered watershed near Syracuse, N.Y.--because energy companies will be required to undergo a separate and exhaustive review for each well they propose to drill and hydraulically fracture inside the area, a hurdle that may amount to a de facto ban. But it also removes a significant political and scientific obstacle to completing the two-year statewide review process, paving the way for drilling to proceed across much of the rest of state as soon as next spring." That's why, as Lustgarten goes on to explain, both the gas industry and the environmental community are divided as to whether the DEC decision is a good or bad thing.

It's crucial to appreciate that the watershed area affected by the DEC ruling represents a small fraction of the Marcellus gas that could be exploited. Most gas fracturing right now is being done in the southwest corner of the state (not the southeast), where there are 15,000 oil and gas wells, according to Walter Hang of Toxics Targeting. Hang, who used to run the toxics program at the New York Public Interest Research Group (more widely known as NYPIRG), has challenged the statewide environmental review, arguing that its scope is too restricted in terms of possible water contaminants. Backed by a petition he persuaded 10,000 people to sign, Hang has called for the draft review to be withdrawn and the whole exercise to be reconceived and relaunched.

As long as watershed drilling was included in the environmental review process, which could drag on for many years, drilling in the watershed--drilling that New York State's embattled governor has advocated--would not be able to proceed. But now that the watershed has been removed from the review, if even one company were to get permission to drill one well, the door would be open to widespread watershed drilling.

Atomic Tinkering With Platinum: Toward Affordable Fuel Cells

Fuel cells are an old technology. They date back to the first half of the nineteenth century, so we don't lack for experience with, or research into, the idea of an anode and a cathode surrounding an electrolyte. What we have lacked during that entire period is a balance between the best energy output and economically viable materials. Various metals for the cathode have been tried, but platinum is among the most efficient of the possibilities. Platinum, of course, is expensive.

Researchers led by Anders Nilsson at Stanford are taking an interesting tack on that problem. Rather than searching out other materials for the cathode, they have figured out a process that can increase platinum's reactivity. So far, according to a paper in Nature Chemistry, they have changed the metal's properties to the extent that 80 percent less platinum would be needed for a fuel cell.

They achieved this feat by combining platinum with copper and then removing the copper from the surface of the alloy. This created lattice strain--the platinum atoms are compressed closer together than normal--which weakens the bonds they form with oxygen atoms during fuel cell reactions and thereby increases their reactivity.

"The distance between two neighboring atoms affects their electronic structure," said Peter Strasser, another author on the paper, in a press release. "By changing the interatomic distance, we can manipulate how strongly they form bonds." According to the release the investigators hope to eventually "create a potential replacement not only for gasoline engines but also for the batteries found in small electronic devices." Ambitious, to be sure.

Reducing the amount of platinum needed for a fuel cell would clearly bring down the cost, but there are plenty of other methods being sought out to move fuel cells toward widespread implementation. From gold particles to improve platinum's performance to the use of a far cheaper iron-sulfur complex, there is no shortage of ideas. Of course, we've had since 1838 to work on fuel cells, so one would hope we have made some progress.

Photo via NASA/Wikimedia Commons.

Smart Grid Jobs

Earlier this month Energy Secretary Chu announced that DOE was making $100 million in grants to support job training for work in the smart grid, with another $95 million to come from participating educational institutions, utilities, and manufacturers. The various programs supported by the grants are expected to produce 30,000 new trainees for smart grid work.

The ordinarily excellent Katie Fehrenbacher of Earth2Tech comments that "demand response devices are [already] shedding peak power in homes and businesses," as "millions of two-way communicating, digital smart meters have been deployed." If such load shedding in fact is already happening, I haven't heard about it.

Does somebody know something I don't know?

Separately, Secretary Chu announced a series of partnerships and initiatives with other countries in the Western Hemisphere under the Energy and Climate Partnership of the Americas. Projects include efforts to advance electricity interconnections in the Caribbean and the creation of an Energy Innovation Center at the Inter-American Development Bank (IDB).

Smart Grid Proof?

The vision of a smarter grid is of course a lovely thing to behold: an electric power system that's much more interactive, interoperable, reliable, and robust--"self-healing," even. That's why so much excitement attended the news this time last year that the U.S. stimulus bill would contain billions of dollars in new funding to support smart grid construction, and the news six months later that the National Institute of Standards and Technology was issuing draft standards and a roadmap for completing standardization of the smart grid (the Framework and Roadmap for Smart Grid Interoperability Standards, issued in final form in January). And it's the reason, too, why such high expectations ride on the avalanche of smart meter installation projects launched in the last year.

But that excitement will turn to disappointment if the smart grid does not deliver, in addition to efficiency and reliability, energy conservation, less dependence on oil and gas imports, and lower greenhouse gas emissions from the electric power sector. As NIST put it in a press release last September announcing the draft standards and roadmap: “When completed, the Smart Grid will employ real-time, two-way digital information and communication technologies in the operation of the nation’s electric grid. The system would allow consumers to better manage and control their energy use and costs, reduce America’s dependence on foreign oil and create clean-energy jobs.”

We'll know the smart grid can deliver when in at least one situation in the United States, however restricted geographically or functionally, interactive communications have helped consumers reduce their energy use and cut their electricity bills. When will that be?

Not this year. There have been some interesting attempts to demonstrate the smart grid, notably in Boulder and Tallahassee, but just because such projects are small does not necessarily mean they're well designed and executed. (Right around the time IEEE Spectrum published a report on the Boulder experiment, most of the executives managing it were replaced, prompting us to wonder whether it might turn out to be like Michael Armstrong's introduction of the telephony triple play at the old AT&T--brilliant in principle, but mistimed and mismanaged.) It's possible that the first proof of principle will come instead from one of the huge smart meter installation programs, say in California or Florida.

Of course there are smart grid projects directed strictly at efficiency and reliability, such as the Midwest ISO's installation of monitoring devices that measure the power system's state 30 times a second, so that the system can run with smaller reserve capacity and greater ability to absorb intermittent generation like wind. That's all well and good. But citizens and consumers are not going to feel, broadly speaking, that the smart grid is a success unless they can see it helping them use energy better, both for their personal benefit and for the common welfare.

The Intermittent Supply Argument Shouldn't Hold Up Renewable Energy

One of the common arguments against moving rapidly toward renewable sources of electricity like wind and solar is their intermittent nature. The sun doesn't shine at night, or when it's cloudy, or for as long in the winter; the wind doesn't blow all the time, no matter how well sited the turbines are. The answer, generally, is that we will soon improve storage options for that power to the point where that won't matter. But according to a paper in the journal Renewable Energy, renewable energy sources connected to a power grid where other sources are available would be the last to be stored anyway.

Donald Swift-Hook, of the United Kingdom-based World Renewable Energy Network (which publishes the journal), writes that many different power storage options have been considered since the fuel cell was first conceived in 1838. These have ranged from thermal storage to compressed air to electrochemical storage, and the recent focus on renewable energy--wind in particular, which recently jumped past 10,000 megawatts of installed capacity in the United States--has led to renewed interest in the ability to store intermittently generated power.

The misconception, he contends, lies in the fact that every power source has a certain degree of availability, which is never 100 percent, and wind power is no different. Far from deserving a "capacity credit" of zero--the capacity credit being the fraction of a plant's rated capacity that the grid can count on toward meeting demand--wind can actually provide a capacity credit of as much as 38 to 58 percent.
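To make the idea of capacity credit concrete, here is a toy Monte Carlo sketch: it asks how much extra constant load a hypothetical system can serve at the same reliability level once wind is added, expressed as a fraction of the wind's nameplate capacity. Every number in it is invented for illustration; it is not the calculation in Swift-Hook's paper, and the result depends entirely on the assumed wind statistics.

```python
# Toy "capacity credit" estimate via Monte Carlo. All numbers are invented for
# illustration; this is not the calculation from Swift-Hook's paper.
import random

random.seed(1)

N_PLANTS = 50            # conventional units of 100 MW each
PLANT_MW = 100
FORCED_OUTAGE = 0.08     # probability a unit is unavailable in any given hour
WIND_NAMEPLATE = 1000    # MW of wind added to the system
HOURS = 50_000           # Monte Carlo sample size

def loss_of_load_prob(load_mw, with_wind):
    """Fraction of sampled hours in which available supply falls short of the load."""
    shortfalls = 0
    for _ in range(HOURS):
        available = sum(PLANT_MW for _ in range(N_PLANTS)
                        if random.random() > FORCED_OUTAGE)
        if with_wind:
            # Crude stand-in for wind variability (mean output ~1/3 of nameplate).
            available += WIND_NAMEPLATE * random.betavariate(2, 4)
        if available < load_mw:
            shortfalls += 1
    return shortfalls / HOURS

BASE_LOAD = 4300                                   # MW, gives a small but nonzero risk
target = loss_of_load_prob(BASE_LOAD, with_wind=False)

# How much extra constant load can be carried at the same risk once wind is added?
extra = 0
while loss_of_load_prob(BASE_LOAD + extra + 50, with_wind=True) <= target:
    extra += 50

print(f"Capacity credit of the wind: roughly {extra / WIND_NAMEPLATE:.0%} of nameplate")
```

With the made-up statistics here, the toy system comes out well below Swift-Hook's 38 to 58 percent range, which is based on real wind and system data; the sketch is meant only to show what capacity credit measures.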

"The public are unduly concerned that wind and solar plant are intermittent but they fail to realise that, in fact, all plant has always been intermittent, including ‘base load’ nuclear," Swift-Hook wrote. "All their concerns are already well taken care of by the spare capacity that is needed for all plant, not just for wind."

And if one does start to bring stored energy into the equation, Swift-Hook has an answer for that too. Stored energy on a system that has multiple power sources, he claims, will come from the sources that are first to be shut down if the storage system is full or breaks down. Because wind and solar have no fuel costs - that is, once they are operating, the wind and sun provide free power - those are also the cheapest power sources on the system. Thus, they will be the last, not the first, to be shut down.

"This means that, even when storage is installed, grid-connected intermittent renewables like wind energy will never be stored (unless nothing else is available)," Swift-Hook wrote.

The end result of these jumps of logic is that when wind and solar power are merely part of a larger system of power generation, storage of the power they generate is largely unnecessary. So that battery bank in the Department of Energy image up top might not really be part of the picture, as long as the wind and sun aren't the only sources.

Even if that is true, though, others are still working on how best to handle the contributions of wind and solar to power mixes. In another paper in the same journal, researchers led by Dominik Heide of Johann Wolfgang Goethe Universität in Germany modeled the optimal mix of wind and solar power based on their seasonal contributions in Europe. In other words, the longer-term intermittent nature of renewables poses a challenge along with the day-by-day questions.

They found that because wind blows more in the winter and the sun shines more in the summer, there is a specific mix that minimizes the amount of energy that would have to be stored in a future utopia where all energy is provided by renewables. That mix is 55 percent wind power and 45 percent solar, and it reduces the need for storage by a factor of two.
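To see intuitively why a particular wind/solar split cuts the seasonal storage requirement, here is a toy sketch with idealized sinusoidal profiles--wind peaking in winter, solar in summer--and flat demand. It only illustrates the effect; it is not the model or data used by Heide and colleagues.

```python
# Toy seasonal-balancing illustration: wind output peaks in winter, solar in summer,
# demand is flat. The "seasonal deficit" is the total monthly shortfall that storage
# (or other backup) would have to cover. Profiles are idealized sinusoids, not the
# European weather data used by Heide et al.
import math

MONTHS = range(12)

def wind(month):    # normalized output, peaks in winter (month 0)
    return 1.0 + 0.5 * math.cos(2 * math.pi * month / 12)

def solar(month):   # normalized output, peaks in summer (month 6)
    return 1.0 - 0.5 * math.cos(2 * math.pi * month / 12)

def seasonal_deficit(wind_share):
    """Total monthly shortfall (relative units) against a flat demand of 1.0."""
    deficit = 0.0
    for m in MONTHS:
        supply = wind_share * wind(m) + (1 - wind_share) * solar(m)
        deficit += max(0.0, 1.0 - supply)
    return deficit

for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"wind share {share:.0%}: seasonal deficit {seasonal_deficit(share):.2f}")

# With these perfectly symmetric toy profiles the deficit vanishes at a 50/50 mix;
# with real European weather data, Heide et al. find the optimum nearer 55 percent
# wind and 45 percent solar, roughly halving the storage requirement.
```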

Of course, the authors of the latter paper also discuss the optimal breakdown during transitional periods when wind and solar account for larger percentages than now but not yet 100 percent of the total. But as we have already established that in a mixed power scenario storage is unnecessary anyway, we can skip that part.

Image via Wikimedia Commons and DOE.
