Energywise


How EPA Calculates the Cost of Environmental Compliance for Electricity Generators


People pay for electricity directly, out of pocket, when they pay their electric bill. But they may also pay in an indirect way, when they bear the environmental and health costs associated with pollution from electricity generation. With a new EPA administrator recently installed, how these costs are calculated is under new scrutiny. The University of Texas Energy Institute’s Full Cost of Electricity Study includes estimates of these environmental pollution costs as one part of the full system cost of electricity.

There is a well-established body of literature at the intersection of toxicology, epidemiology, and economics that governs how the Environmental Protection Agency estimates the benefits of regulations that reduce pollution from power plants. As part of the University of Texas Energy Institute’s Full Cost of Electricity (FCe-) Study, my colleagues and I took a deep dive into the cost of these environmental externalities. Our goal: describe in detail how the EPA estimates the dollar value of pollution reductions.

Whenever the EPA proposes a major new rule, it undertakes a rigorous analysis, comparing a benefit estimate with its estimate of the societal costs of complying with the proposed rule. Our analysis [PDF] illustrates how the EPA completed this kind of analysis for three recent and major rules targeting fossil-fueled power plants: the Cross-State Air Pollution Rule (regulating pollutant transport to downwind communities), the Mercury and Air Toxics Rule, and the Clean Power Plan (regulating greenhouse gas emissions).

In each of these three rulemakings, the EPA concluded that the health and environmental benefits greatly exceeded compliance costs, even though in some cases compliance costs were in the billions of dollars.

These analyses are not without controversy. Many dispute the dollar value that the EPA places on a premature death, and many others disagree with the value assigned to a ton of carbon emissions. For the mercury rule and the greenhouse gas rule, benefits dwarf costs only because of so-called co-benefits—reduction of pollution other than the pollutant targeted by the rule.
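The role of co-benefits in these benefit-cost tests can be made concrete with a toy tally. All dollar figures below are hypothetical, chosen only to illustrate how a rule whose direct benefits fall short of compliance costs can still pass once co-benefits are counted; they are not EPA estimates.

```python
# Toy benefit-cost tally illustrating how co-benefits can dominate.
# All dollar figures are hypothetical, not EPA estimates.

def net_benefit(direct_benefits, co_benefits, compliance_costs):
    """Return (total benefits, net benefit) in the same units as the inputs."""
    total = direct_benefits + co_benefits
    return total, total - compliance_costs

# Hypothetical rule: modest direct benefits, large co-benefits (for example,
# from reductions in fine particulates), multi-billion-dollar compliance cost.
total, net = net_benefit(direct_benefits=0.5e9,   # $0.5 billion
                         co_benefits=9.0e9,       # $9 billion
                         compliance_costs=3.0e9)  # $3 billion

print(f"Total benefits: ${total / 1e9:.1f}B, net benefit: ${net / 1e9:.1f}B")
# Without the co-benefits, this hypothetical rule would fail the test:
print(f"Direct benefits alone minus costs: ${(0.5e9 - 3.0e9) / 1e9:.1f}B")
```

In this sketch the rule clears the benefit-cost bar only because of the co-benefits term, which is exactly the structure critics point to in the mercury and greenhouse gas rulemakings.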

These and other measurement issues are laid out in our white paper, “EPA’s Valuation of Environmental Externalities from Electricity Production” [PDF].

David Spence is a professor at the McCombs School of Business and School of Law, part of the University of Texas at Austin.


The Kentucky Startup That Is Teaching Coal Miners to Code

Coal’s role in American electricity generation is fast diminishing. A few large coal-mining companies declared bankruptcy last year, and several coal power plants have been shuttered. The biggest loss in all this has been felt by the tens of thousands of coal miners who have been laid off. But despite the U.S. president’s campaign pledges, those jobs are going to be hard to bring back. Besides facing competition from natural gas and cheaper renewables, coal mining, like mining in general, is losing jobs to automation.

But now, a small startup in the middle of Appalachian coal country has a forward-looking plan to put miners back to work. Pikeville, Ky.-based Bit Source has trained displaced coal industry veterans in Web and software development.

The retrained workers now design and develop websites, tools, games, and apps. Bit Source is proving that coal miners can, indeed, learn how to code, and do it well.

“Appalachia has been exporting coal for a long time,” says Justin Hall, the company’s president. “Now we want to export code. We’ve got blue-collar coders. It’s the vision of the future of work: How do you train a workforce and adapt it for technology.”


What Role Do Household Incomes Play in the Full Cost of Electricity?


Electricity is something many of us take for granted. Flip a switch and lights come on, air-conditioning fires up, and computers hum. But how much will new energy cost relative to the income of the people who will consume it? It’s not something most of us in the United States think about as we flip that switch, but it is something we need to understand as we build next-generation power plants and the grid to move that electricity to demand centers.

As part of The Full Cost of Electricity project of the University of Texas at Austin Energy Institute, we wanted to ensure that public and policy discussions had baseline information. So we asked a few simple questions: “How much are households paying for household energy overall? How much of this cost is for electricity? How does this cost compare to incomes?” To answer these questions, we used the data acquired by the U.S. Energy Information Administration, via its Residential Energy Consumption Survey, for the state of Texas [PDF].

Besides being our home, Texas is a microcosm: We have rural and urban areas, flat and hilly country, desertlike areas and coastline.

Here's what we found:

  • Twenty-two percent of Texas households are energy burdened, spending more than 8 percent of income on household energy, and 16 percent of households spend more than 10 percent
  • The vast majority of the cost of household energy is for electricity
  • Fifteen percent of Texas households spend more than 8 percent of household income on electricity alone, and 11 percent spend more than 10 percent
  • Higher incomes translate to higher household electricity consumption, but there are important differences between urban and rural households
  • Other than income, there are several demographic variables that explain whether or not a household spends more than 8 percent on energy. For example, a household where someone is at home during the work day is more likely to be energy burdened
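The energy-burden classification used in the bullets above reduces to a simple ratio test: a household is energy burdened when its energy spending exceeds 8 percent of income. A minimal sketch, with hypothetical sample households (the thresholds are from the findings above; the incomes and bills are invented for illustration):

```python
# Classify households as "energy burdened" when household energy spending
# exceeds 8 percent of income, the threshold used in the study.
# The sample households below are hypothetical.

def energy_burden_share(spending, income):
    """Fraction of household income spent on household energy."""
    return spending / income

def is_energy_burdened(spending, income, threshold=0.08):
    return energy_burden_share(spending, income) > threshold

households = [
    {"income": 25_000, "energy_cost": 2_600},  # 10.4% of income
    {"income": 60_000, "energy_cost": 2_400},  # 4.0% of income
]

for h in households:
    burdened = is_energy_burdened(h["energy_cost"], h["income"])
    print(h, "-> energy burdened" if burdened else "-> not burdened")
```

The same function with `threshold=0.10` reproduces the stricter 10 percent cutoff cited in the findings.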

For the details, read the full white paper, "Household Energy Costs for Texans" [PDF] on the Energy Institute’s page for The Full Cost of Electricity.

Carey W. King is the assistant director and a research scientist with the University of Texas Energy Institute.


Nuclear Experts: High Radiation Estimates at Fukushima No Surprise to Us

With two robot-probe operations apparently encountering increasingly high radiation levels inside the crippled Fukushima Daiichi nuclear plant during the past three weeks, some media reports suggested the radiation count was climbing rapidly. It didn’t help temper that view when plant operator Tokyo Electric Power Company (TEPCO) had to prematurely halt the second operation last Thursday to yank out the robot probe. Radiation had begun to dim the view of the attached cameras, threatening to leave the robot blinded and therefore unable to retrace its steps and escape the rubble.

The first operation, conducted at the end of January, used a remote-controlled robot equipped with a camera attached to a 10.5-meter-long telescopic rod. Captured video and stills showed images of a dark mass of rubble inside the No. 2 reactor’s primary containment vessel near the pedestal that supports the reactor vessel.

Analysis of the images, meant to determine whether the rubble encountered is corium (a mix of melted fuel and other materials), is still ongoing.

A TEPCO official explained that nuclear engineers conducted radiation lab tests prior to the operations taking place. This enabled the engineers to study the images taken in the first probe and estimate the different radiation levels—the highest of which was estimated to be 530 sieverts an hour. An estimate based on images taken during the second probe put the level as high as 650 sieverts an hour. (To put those numbers in context, when you take an abdominal X-ray, you’re exposed to about 4 millisieverts of radiation.)
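The scale of those numbers, and the stated error margin, can be checked with simple arithmetic using only the figures quoted above (530 sieverts per hour, a ±30 percent margin, and roughly 4 millisieverts for an abdominal X-ray):

```python
# Put the estimated in-vessel dose rate in context, using the figures
# quoted in the text: 530 Sv/h with a +/-30 percent error margin,
# versus roughly 4 mSv for an abdominal X-ray.

dose_rate_sv_per_h = 530.0
margin = 0.30
low = dose_rate_sv_per_h * (1 - margin)
high = dose_rate_sv_per_h * (1 + margin)
print(f"Estimated range: {low:.0f} to {high:.0f} Sv/h")  # 371 to 689 Sv/h

xray_sv = 4e-3  # abdominal X-ray, ~4 millisieverts
seconds_for_xray_dose = xray_sv / dose_rate_sv_per_h * 3600
print(f"Seconds near the source to match one X-ray: {seconds_for_xray_dose:.3f}")
```

Even at the low end of the error band, a fraction of a second near the source delivers an abdominal X-ray’s worth of dose, which is why these readings matter for robots, not for the public.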

TEPCO says it is not particularly surprised at these numbers given that its probes were approaching the reactor vessel. “And these are not direct measurements, but are based on the amount of image noise produced,” a company official emphasized. “There is also a plus or minus error margin of 30 percent.” 

Will Davis, a former U.S. Navy reactor operator, and a communications consultant to the American Nuclear Society who has followed the Fukushima accident since it began, agrees with that conclusion. 

“I don’t think we can realistically make assumptions about rising and lowering radiation levels in these camera-based detection methods yet,” he told IEEE Spectrum. “Not only is the presence of localized [radiation] point-sources possible, but there is also the possibility that streaming of radiation is taking place. In other words, we cannot say that all of the radiation in the containment vessel is coming from one unified lump of damaged fuel in the reactor vessel, and perhaps from a second unified lump sitting under it.”

Davis added that it is only to be expected that the closer the robot probes get to the damaged reactor, the higher the dose rates will be. “This has been expected since the beginning. And the high recent readings—even with the chance of up to 30 percent error—only confirm what experts already knew.”

He pointed out that comparably high radiation levels had been recorded in the aftermath of the Three Mile Island and Chernobyl nuclear accidents. 

TEPCO sent in the two robot probes to pave the way for a third operation planned for later this month. This third probe will use a remotely controlled Scorpion robot equipped with a camera, a dosimeter, and a temperature gauge. 

By contrast, the main purpose of the second probe was to remove sediment. The robot was outfitted with a water cannon and a scraper tool, as well as three cameras. The hope was to blast a path for the Scorpion, which cannot easily maneuver over uneven surfaces.

Despite the operation being halted early due to the impact of radiation, the company official said no further preparatory probes were planned. 

The official added that the information gleaned so far was not regarded as a negative, but rather as an aid in helping the engineers who are conducting these operations. “They are combining and analyzing everything right now, and this will help them determine whether to use the Scorpion or not, and what the next best step is to be.”

The American Nuclear Society’s Davis noted that just getting through the approach and planning stages that will precede the removal of the damaged nuclear fuel inside the reactor vessels and the primary containment vessels “is going to take a very long time, probably many, many years.”

But he also pointed out that while the new estimated radiation levels gleaned from the probes may shock people not following the cleanup closely, “it is important to remember that they are extremely localized and have no impact whatsoever to anyone outside the nuclear plant.”


Congress to Curtail Methane Monitoring

Innovation in methane detection is booming amid tightened state and federal standards for oil and gas drillers and targeted research funding. Technology developers, however, may see their market diminished by a regulation-averse Republican Congress and president. Senate Republicans are expected to attempt to complete a first strike against federal methane detection and emissions rules as soon as this week.

Methane is a potent greenhouse gas responsible for an estimated one-fifth to one-quarter of the global warming caused by humans since the Industrial Revolution, and oil and gas production puts more methane in the atmosphere than any other activity in the United States. Global warming, however, is not a moving issue for Republican leaders or President Donald Trump, who reject the scientific consensus on anthropogenic climate change.

What moves them are complaints from industries that “burdensome” regulations unnecessarily hinder job growth and—in the case of methane rules—domestic oil and gas output. The House of Representatives got the methane deregulation ball rolling on 3 February, voting along party lines to quash U.S. Bureau of Land Management rules designed to prevent more than a third of methane releases from nearly 100,000 oil and gas wells and associated equipment operating on federal and tribal lands.

The House vote is one of the first applications of the hitherto obscure Congressional Review Act of 1996, which gives Congress 60 legislative days to overturn new regulations. If the Senate concurs and President Trump signs, the resulting act will scrap the bureau's ban on methane venting and flaring and its leak-monitoring requirements. It will also restrict the bureau from ever revisiting those mandates.


NuScale Reactor Nears One Milestone, With More to Follow

The U.S. Nuclear Regulatory Commission (NRC) is expected to decide by mid-March whether to accept an application with no fewer than 12,000 pages of technical details that support a small modular nuclear reactor design from NuScale Power.

As Winston Churchill might say, the milestone may not mark the beginning of the end but, just maybe, the end of the beginning.

That’s because the NRC’s act of accepting the application does nothing more than trigger a license certification review for the reactor. (The modular reactor might one day generate electric power for small cities, large hospitals, industrial facilities, and even remote water desalination plants.)

As part of its certification review, the NRC will follow a design-specific standard that lays out multiple requirements NuScale's design must meet. Completing that review and certification process could consume anywhere from 30 to 40 months.

It may be no surprise, then, that NuScale is the first small modular reactor (SMR) to have made it this far in the U.S. regulatory process. And it’s had some help.


Pressure on Japan’s Government to Revamp Country's Energy Mix

Following the 2011 Fukushima Daiichi nuclear plant accident, the Japanese government drafted a plan for a new energy mix starting in 2013. The aim was to: improve the country’s energy security by supporting renewable and nuclear energy; lower costs by utilizing cheaper coal-fired power generation; and reduce CO2 emissions by leveraging renewables and optimizing efficient coal-fired and LNG-powered generation.

At the same time, the government has also been pushing ahead with liberalizing the energy market. It deregulated the electricity retail market last April; the gas market will follow suit this year. Unbundling of electric generation, transmission, and distribution is due to take place in 2020.

Yet, three years after the new energy plan was published, little is working out as hoped, and the government is expected to produce a revamped plan this year.

Ten months after deregulation of the electricity market last April, prices have hardly changed, and only 3 percent of customers have switched suppliers. This despite the Big Ten power providers now having a free hand to compete in each other’s formerly protected regions, and despite hundreds of new competitors entering the market from sectors such as telecommunications and the oil and gas industries.

As detailed in the Japanese press, reasons for the paucity of interest in changing suppliers include little difference in pricing, poorly defined benefits, and switching procedures that have proven off-putting.

At the same time, most of the 3 percent that have changed providers reside in the Tokyo and Osaka regions, by far the country’s most populated areas. The new power providers have also targeted these areas, meaning that customers residing outside the two megacities have few or no opportunities to choose a new provider, even if they were so inclined.

When it comes to nuclear power, the government had aimed for it to provide between 20 and 22 percent of the country’s energy mix by 2030, while at the same time gradually easing back on this controversial energy source by decommissioning older reactors. Before the Daiichi accident, nuclear power accounted for about 29 percent of the country’s energy mix. But after the 2011 tsunami devastated Fukushima Daiichi and the surrounding countryside, all 48 of the country’s remaining reactors were shut down as a safety precaution.

Given the strong anti-nuclear attitude many Japanese now harbor following the Daiichi accident, 20-22 percent seemed overly optimistic in 2013. Today, it appears out of reach.

After Japan’s Nuclear Regulation Authority drew up a new set of safety standards following the accident—which it claims are the most stringent in the world—only five reactors have managed to obtain licenses to restart operations. Of these, just three are in operation, with the other two being stalled by court injunctions brought by local governments or citizens groups because of safety concerns. Similar injunctions are sure to follow as the power companies attempt to restart more reactors. 

“So, there are many difficulties in reaching the 20-22 percent figure,” says Professor Takeo Kikkawa of Tokyo University of Science Graduate School of Innovative Studies, who spoke to the press on 8 February. 

“On top of those issues, current Japanese law says that after 40 years of operation, reactors are to be decommissioned,” he said. This means 24 reactors, including the four at Fukushima Daiichi, would have to be decommissioned by 2030. So, even if all the remaining reactors seeking licenses to operate were successful in going back online, nuclear power would still only account for 15 percent of Japan’s energy mix, said Kikkawa. 

And should the government seek to extend the life of the reactors to 60 years, it would be breaking its public pledge to “decrease reliance on nuclear as much as possible,” he pointed out.

Because of all these issues, the government has no choice but to produce a new energy mix. 

Kikkawa, who was a member of the advisory committee the government relied on to come up with its 2013 energy plan, has put forward a new energy mix solution he believes can be achieved by 2030: nuclear 15 percent, renewables 30 percent, fossil fuels 40 percent, cogeneration 15 percent.
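One quick sanity check on any proposed mix is that the shares account for all generation. Kikkawa’s figures, as quoted above, do sum to 100 percent:

```python
# Check that the proposed 2030 energy mix accounts for all generation.
# Shares (in percent) as quoted in the text.
proposed_mix = {
    "nuclear": 15,
    "renewables": 30,
    "fossil fuels": 40,
    "cogeneration": 15,
}

total = sum(proposed_mix.values())
print(f"Proposed shares sum to {total} percent")
```

Note that fossil fuels still carry 40 percent of the proposed mix, which is why a revamped plan would not by itself resolve Japan’s CO2 commitments.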

Yet given the apparent lack of answers to the many issues that exist today, even such a revamped plan may have to be revised yet again three years from now.


How Do We Estimate Transmission Costs for New Generation?


Often when project developers talk about the cost of a new power plant, wind farm, or solar installation, they’re referring to the capital cost of building a facility. Certainly those development costs have dropped over time—especially for wind and solar. But developers overlook the cost to move power—at their peril.

You might think that transmission costs—at least the fraction of those that developers of new plants must bear—are a very small part of the total. This is true for some types of renewable generation. Rooftop solar, for example, does not usually require upgrades to the electric transmission systems that are designed to carry high-voltage electricity over long distances.

But what about for utility-scale renewable projects? Here, it’s less clear, so we examined the issue closely as part of our Full Cost of Electricity (FCe-) study in the white paper “Estimation of Transmission Costs for New Generation” [PDF].

In recent years there have been several major new transmission expansions to support those projects. One example is the Competitive Renewable Energy Zone (CREZ) in Texas, a recently completed transmission expansion project [PDF] designed to carry an additional 11,553 megawatts of wind power from West Texas. CREZ had a total cost of approximately US $6.9 billion—or about $600 per kilowatt.
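The $600-per-kilowatt figure follows directly from the two totals quoted above:

```python
# Back out the roughly $600-per-kilowatt figure for CREZ from the
# totals quoted in the text: $6.9 billion for 11,553 MW of wind capacity.
total_cost_usd = 6.9e9
capacity_mw = 11_553

cost_per_kw = total_cost_usd / (capacity_mw * 1_000)  # MW -> kW
print(f"CREZ transmission cost: ${cost_per_kw:.0f} per kW")
```

That works out to roughly $597 per kilowatt, consistent with the rounded figure in the text.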

A breakdown of who pays for which part of a transmission line makes it easier to grasp why this cost is not usually captured by metrics such as levelized cost of electricity and is instead typically allocated to end-use customers.

We found a correlation between the total annual installed generation and the total annual transmission investments for 69-, 138-, and 345-kilovolt infrastructure. Excluding the investments performed during the CREZ project, the data show that thermal and noncoastal wind projects are driving transmission investments in the ERCOT (Texas) grid.

Despite these conclusions, the methodology we used is necessarily approximate because of the many difficulties in assigning the cause of costs in particular projects. Nevertheless, our estimates capture the order-of-magnitude contributions of various components of transmission costs to overall generation project development. In some cases, this contribution is negligible, while in others, such as the connection of remote renewables, it can be significant. The focus of the data and calculations is on ERCOT, where transmission development costs in nonurban areas remain relatively cheap. In other parts of the United States, such as California, where transmission costs are typically much higher, our estimates should be taken as lower bounds.

Ross Baldick is a professor and Juan Andrade a student researcher in the Department of Electrical and Computer Engineering at the University of Texas at Austin.


Melted Fuel Debris Possibly Located at the Fukushima Daiichi Nuclear Plant

An ongoing operation to learn more about the melted nuclear fuel at the crippled Fukushima Daiichi nuclear plant in Japan may have helped the decommissioning project—estimated to take up to 40 years—reach an important milestone.

Tokyo Electric Power Company (TEPCO), the plant operator, said on Monday 30 January that a complicated maneuver employing a 10.5-meter-long telescopic rod with a pan-tilt camera attached has yielded images of a dark mass of rubble inside the containment vessel and under the reactor vessel that houses the nuclear fuel. The images are now being analyzed in an effort to ascertain what the material might be.

“If the mass captured on camera is melted nuclear fuel, it would be a big step in helping the decommissioning work,” Yoshiyuki Ishizaki, executive vice president of TEPCO, said on 30 January, following the discovery.

Should the presence of nuclear fuel be confirmed, nuclear engineers could then work up a strategy for removing the highly radioactive rubble. However, if the material proves to be part of the damaged pressure vessel, or remains of cables or pipes, then more robot-aided searches of the surrounding area—including the concrete base supporting the containment vessel—will be required.

The rubble was found at the end of an access rail and on the adjoining platform located under the pressure vessel that, during normal reactor operation, is used to inspect the Control Rod Drive (CRD) assembly. The assembly helps control the fission rate of the reactor’s fuel rods. TEPCO says the CRD assembly and cables remained in their original locations.

Close-up images of the rubble show black globs of material covering part of the grating making up the platform, with areas of the grating dissolved and warped. Monday’s inspection is the first to find signs of this kind of damage. Last April’s 360-degree inspection of the No. 1 containment vessel, using two snake robots, showed no signs of melted fuel.

TEPCO had planned to send a scorpion robot into the No. 2 containment vessel next month to conduct further inspections. Monday’s inspection by remote camera was a precursor to that operation. However, a TEPCO official told IEEE Spectrum that obstacles had been discovered that may prevent the use of the scorpion bot. The official added that more analysis of the images is necessary before engineers can decide what type of robot will be used.

Meltdowns similar to what occurred in the No. 2 Reactor happened in Reactors 1 and 3 after a tsunami resulting from a March 2011 earthquake knocked out the plant’s safety features. The No. 4 Reactor, which was off-line and undergoing an inspection at the time, did not suffer the same fate. Though the building housing the spent fuel rods was badly damaged by a hydrogen explosion, TEPCO has since managed to remove all the rods from the pool.


How to Predict a Utility’s Transmission and Distribution Costs: Count the Customers


The transmission and distribution system links power generators and electricity consumers, but it is an often neglected part of any discussion about the future of electricity. Yet paying for this key piece of infrastructure strongly influences how the electric grid can or should change as new technologies such as renewable generation go mainstream.

As part of The Full Cost of Electricity (FCe-) project at the University of Texas at Austin Energy Institute, my colleagues and I wanted to ensure that public and policy conversations started with real numbers. So we asked a simple question: “How much are we paying for transmission and distribution, and how does this quantity compare to the past?” Then we dug into the data submitted by investor-owned utilities to find the answers.

Turns out, the number of customers in a utility’s territory is the best predictor of annual transmission, distribution, and administrative (TD&A) costs.

Utilities recover their TD&A costs by a combination of charges for each kilowatt-hour of energy sold, charges for peak power demand, and a fixed connection charge. Since 1960, each customer in the utilities we studied has typically paid US $700 to $800 annually toward TD&A.

Administrative costs are not trivial. Transmission—high voltage and long-distance transport of electricity—accounts for less than 20 percent of the total TD&A cost. Distribution and administration costs are each approximately 40 percent of total, but administration is a bit larger and is the telltale factor: Administrative costs are associated with number of customer accounts, and customer services are related to the number of customers.

Further evidence comes from the fact that utilities have pushed more energy through per customer without increasing TD&A costs. As a function of the amount of energy delivered, costs decreased significantly from 1960 to 2000, while the average energy consumption per customer more than doubled, from 11,700 kilowatt-hours per customer-year in 1960 to 24,400 kilowatt-hours per customer-year in 1980, and then flattened through 2014.
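The contrast between per-customer and per-kilowatt-hour costs can be sketched with the figures quoted above. The $750 used below is an assumed midpoint of the $700-to-$800 annual range, for illustration only:

```python
# If TD&A cost per customer stays roughly flat while energy delivered per
# customer grows, the TD&A cost per kWh must fall. Consumption figures
# are from the text; $750/year is an assumed midpoint of the $700-800 range.
tda_per_customer = 750.0  # USD per customer-year, assumed midpoint

kwh_per_customer_year = {1960: 11_700, 1980: 24_400}
for year, kwh in sorted(kwh_per_customer_year.items()):
    cents_per_kwh = tda_per_customer / kwh * 100
    print(f"{year}: {cents_per_kwh:.1f} cents of TD&A cost per kWh delivered")
```

As consumption per customer roughly doubles, the TD&A cost per delivered kilowatt-hour roughly halves, which is the customer-count effect the author describes.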

Thus our result: It’s not how much energy is delivered, it’s how many customers the energy goes to.

More data and analysis is available in the full white paper, “Trends in Transmission, Distribution, and Administration Costs for U.S. Investor Owned Electric Utilities” [PDF], on the Energy Institute’s page for The Full Cost of Electricity. (IEEE Spectrum is posting blogs from the UT researchers and linking to the white papers as they are released.)

Carey W. King is the assistant director and a research scientist at the University of Texas at Austin Energy Institute.

