
Slow Nuclear Restart and Lukewarm Reception for Electricity Deregulation Is Forcing an Energy Mix Rethink

Pressure on Japan’s Government to Revamp Country's Energy Mix

Following the 2011 Fukushima Daiichi nuclear plant accident, the Japanese government drafted a plan for a new energy mix starting in 2013. The aim was to improve the country’s energy security by supporting renewable and nuclear energy; lower costs by utilizing cheaper coal-fired power generation; and reduce CO2 emissions by leveraging renewables and optimizing efficient coal-fired and LNG-powered generation.

At the same time, the government has also been pushing ahead with liberalizing the energy market. It deregulated the electricity retail market last April; the gas market will follow suit this year. Unbundling of electric generation, transmission, and distribution is due to take place in 2020.

Yet, three years after the new energy plan was published, little is working out as hoped, and the government is expected to produce a revamped plan this year.

Ten months after deregulation of the electricity market last April, prices have hardly changed, and only 3 percent of customers have switched suppliers. This is despite the Big Ten power providers now having a free hand to compete in each other’s formerly protected regions, and despite hundreds of new competitors entering the market from sectors such as telecommunications and the oil and gas industries.

As detailed in the Japanese press, reasons for the paucity of interest in changing suppliers include little difference in pricing, poorly defined benefits, and procedures that have proven off-putting.

At the same time, most of the 3 percent that have changed providers reside in the Tokyo and Osaka regions, by far the country’s most populated areas. The new power providers have also targeted these areas, meaning that customers residing outside the two megacities have few or no opportunities to choose a new provider, even if they were so inclined.

When it comes to nuclear power, the government had aimed for it to provide between 20 and 22 percent of the country’s energy mix by 2030. At the same time, the government hoped to gradually ease back on this controversial energy source by decommissioning older reactors. Before the Daiichi accident, nuclear power accounted for about 29 percent of the country’s energy mix. But after the 2011 tsunami devastated Fukushima Daiichi and the surrounding countryside, all 48 of the country’s remaining reactors were shut down as a safety precaution.

Given the strong anti-nuclear attitude many Japanese now harbor following the Daiichi accident, 20-22 percent seemed overly optimistic in 2013. Today, it appears out of reach.

After Japan’s Nuclear Regulation Authority drew up a new set of safety standards following the accident—which it claims are the most stringent in the world—only five reactors have managed to obtain licenses to restart operations. Of these, just three are in operation, with the other two being stalled by court injunctions brought by local governments or citizens groups because of safety concerns. Similar injunctions are sure to follow as the power companies attempt to restart more reactors. 

“So, there are many difficulties in reaching the 20-22 percent figure,” says Professor Takeo Kikkawa of Tokyo University of Science Graduate School of Innovative Studies, who spoke to the press on 8 February. 

“On top of those issues, current Japanese law says that after 40 years of operation, reactors are to be decommissioned,” he said. This means 24 reactors, including the four at Fukushima Daiichi, would have to be decommissioned by 2030. So, even if all the remaining reactors seeking licenses to operate were successful in going back online, nuclear power would still only account for 15 percent of Japan’s energy mix, said Kikkawa. 

And should the government seek to extend the life of the reactors to 60 years, it would be breaking its public pledge to “decrease reliance on nuclear as much as possible,” he pointed out.

Because of all these issues, the government has no choice but to produce a new energy mix. 

Kikkawa, who was a member of the advisory committee the government relied on to come up with its 2013 energy plan, has put forward a new energy mix solution he believes can be achieved by 2030: nuclear 15 percent, renewables 30 percent, fossil fuels 40 percent, cogeneration 15 percent.

Yet given the apparent lack of answers to the many issues that exist today, even such a revamped plan may have to be revised yet again three years from now.

Overlooking the cost of moving power can be detrimental to a renewable energy project

How Do We Estimate Transmission Costs for New Generation?


Often when project developers talk about the cost of a new power plant, wind farm, or solar installation, they’re referring to the capital cost of building a facility. Certainly those development costs have dropped over time—especially for wind and solar. But developers overlook the cost to move power—at their peril.

You might think that transmission costs—at least the fraction of those that developers of new plants must bear—are a very small part of the total. This is true for some types of renewable generation. Rooftop solar, for example, does not usually require upgrades to electric transmission systems that are designed to transmit high voltage electricity over long distances.

But what about for utility-scale renewable projects? Here, it’s less clear, so we examined the issue closely as part of our Full Cost of Electricity (FCe-) study in the white paper “Estimation of Transmission Costs for New Generation” [PDF].

In recent years there have been several major new transmission expansions to support those projects. One example is the Competitive Renewable Energy Zone (CREZ) in Texas, a recently completed transmission expansion project [PDF] designed to allow an additional 11,553 megawatts of wind power from West Texas. CREZ had a total cost of approximately US $6.9 billion—or about $600 per kilowatt.
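The per-kilowatt figure follows directly from the totals; a quick back-of-the-envelope check of the arithmetic, using the figures cited above:

```python
# Back-of-the-envelope check of the CREZ cost figures cited above.
total_cost_usd = 6.9e9       # approximate total CREZ cost
added_capacity_mw = 11_553   # additional West Texas wind capacity enabled

cost_per_kw = total_cost_usd / (added_capacity_mw * 1_000)  # convert MW to kW
print(f"CREZ transmission cost: about ${cost_per_kw:.0f} per kilowatt")
```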

A breakdown of who pays for which part of a transmission line makes it easier to grasp why this cost is not usually captured by metrics such as levelized cost of electricity and is instead typically allocated to end-use customers.

We found a correlation between the total annual installed generation and the total annual transmission investments for 69-, 138-, and 345-kilovolt infrastructure. Excluding the investments performed during the CREZ project, the data show that thermal and noncoastal wind projects are driving transmission investments in the ERCOT (Texas) grid.
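The kind of correlation analysis described above can be sketched as a simple linear regression of annual transmission investment on annual installed generation. The numbers below are placeholders for illustration, not the ERCOT data from the white paper:

```python
import numpy as np

# Hypothetical annual installed generation (MW) and annual transmission
# investment (millions of US$); placeholder values, not ERCOT data.
gen_added_mw = np.array([800, 1500, 2200, 3100, 4000])
transmission_musd = np.array([120, 260, 330, 480, 610])

# Least-squares fit: investment = slope * generation + intercept
slope, intercept = np.polyfit(gen_added_mw, transmission_musd, 1)
r = np.corrcoef(gen_added_mw, transmission_musd)[0, 1]

# slope is in M$/MW, i.e. $1,000 per kW per unit of slope
print(f"~${slope * 1_000:.0f} of transmission per kW of new generation (r = {r:.2f})")
```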

Despite these conclusions, the methodology we used is necessarily approximate because of the many difficulties in assigning the cause of costs in particular projects. Nevertheless, our estimates are useful for gauging the “order-of-magnitude” contributions of various components of transmission costs to overall generation project development. In some cases, this contribution is negligible, while in others, such as the connection of remote renewables, it can be significant. The focus of the data and calculations is on ERCOT, where transmission development costs in nonurban areas remain relatively cheap. In other parts of the United States, such as California, where transmission costs are typically much higher, our estimates should be taken as lower bounds.

 Ross Baldick is a professor and Juan Andrade a student researcher in the Department of Electrical and Computer Engineering at the University of Texas at Austin.

Images of a dark mass of rubble under the crippled Fukushima Daiichi nuclear power plant could offer clues to the whereabouts of melted nuclear fuel.

Melted Fuel Debris Possibly Located at the Fukushima Daiichi Nuclear Plant

An ongoing operation to learn more about the melted nuclear fuel at the crippled Fukushima Daiichi nuclear plant in Japan may have helped the decommissioning project—estimated to take up to 40 years—reach an important milestone.

Tokyo Electric Power Company (TEPCO), the plant operator, said on Monday 30 January that a complicated maneuver employing a 10.5-meter-long telescopic rod with a pan-tilt camera attached has yielded images of a dark mass of rubble inside the containment vessel and under the reactor vessel that houses the nuclear fuel. The images are now being analyzed in an effort to ascertain what the material might be.

“If the mass captured on camera is melted nuclear fuel, it would be a big step in helping the decommissioning work,” Yoshiyuki Ishizaki, executive vice president of TEPCO, said on 30 January, following the discovery.

Should the presence of nuclear fuel be confirmed, nuclear engineers could then work up a strategy for removing the highly radioactive rubble. However, if the material proves to be part of the damaged pressure vessel, or remains of cables or pipes, then more robot-aided searches of the surrounding area—including the concrete base supporting the containment vessel—will be required.

The rubble was found at the end of an access rail and on the adjoining platform located under the pressure vessel that, during normal reactor operation, is used to inspect the control rod drive (CRD) assembly. The assembly helps control the fission rate of the reactor’s fuel rods. TEPCO says the CRD assembly and cables remained in their original locations.

Close-up images of the rubble show black globs of material covering part of the grating making up the platform, with areas of the grating dissolved and warped. Monday’s inspection is the first to find signs of this kind of damage. Last April’s 360-degree inspection of the No. 1 containment vessel, using two snake robots, showed no signs of melted fuel.

TEPCO had planned to send a scorpion robot into the No. 2 containment vessel next month to conduct further inspections. Monday’s inspection by remote camera was a precursor to that operation. However, a TEPCO official told IEEE Spectrum that obstacles had been discovered that may prevent the use of the scorpion bot. The official added that more analysis of the images is necessary before engineers can decide what type of robot will be used.

Meltdowns similar to what occurred in the No. 2 Reactor happened in Reactors 1 and 3 after a tsunami resulting from a March 2011 earthquake knocked out the plant’s safety features. The No. 4 Reactor, which was off-line and undergoing an inspection at the time, did not suffer the same fate. Though the building housing the spent fuel rods was badly damaged by a hydrogen explosion, TEPCO has since managed to remove all the rods from the pool.


How to Predict a Utility’s Transmission and Distribution Costs: Count the Customers


The transmission and distribution system links power generators and electricity consumers, but it is an often-neglected part of any discussion about the future of electricity. Yet paying for this key piece of infrastructure exerts a major influence on how the electric grid can or should change as new technologies such as renewable generation go mainstream.

As part of The Full Cost of Electricity (FCe-) project at the University of Texas at Austin Energy Institute, my colleagues and I wanted to ensure that public and policy conversations started with real numbers. So we asked a simple question: “How much are we paying for transmission and distribution, and how does this quantity compare to the past?” Then we dug into the data submitted by investor-owned utilities to find the answers.

Turns out, the number of customers in a utility’s territory is the best predictor of annual transmission, distribution, and administrative (TD&A) costs.

Utilities recover their TD&A costs by a combination of charges for each kilowatt-hour of energy sold, charges for peak power demand, and a fixed connection charge. Since 1960, each customer in the utilities we studied has typically paid US $700 to $800 annually toward TD&A.

Administrative costs are not trivial. Transmission—the high-voltage, long-distance transport of electricity—accounts for less than 20 percent of the total TD&A cost. Distribution and administration each account for approximately 40 percent of the total, but administration is a bit larger and is the telltale factor: Administrative costs are tied to the number of customer accounts, and customer services scale with the number of customers.

Further evidence comes from the fact that utilities have pushed more energy through per customer without increasing TD&A costs. As a function of the amount of energy delivered, costs decreased significantly from 1960 to 2000, while average energy consumption per customer more than doubled, from 11,700 kilowatt-hours per customer-year in 1960 to 24,400 kilowatt-hours per customer-year in 1980, and then flattened through 2014.
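One rough implication of these figures—a steady $700 to $800 of TD&A per customer-year spread over rising per-customer consumption—is that the implied TD&A cost per kilowatt-hour roughly halved over that period. A sketch, taking the midpoint of the cited range as an assumption:

```python
# TD&A cost per kWh implied by the figures above: a roughly constant
# cost per customer spread over rising per-customer consumption.
annual_tda_per_customer = 750.0  # US$, assumed midpoint of the $700-800 range

usage_kwh = {1960: 11_700, 1980: 24_400}  # kWh per customer-year
for year, kwh in usage_kwh.items():
    cents_per_kwh = 100 * annual_tda_per_customer / kwh
    print(f"{year}: implied TD&A cost of roughly {cents_per_kwh:.1f} cents/kWh")
```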

Thus our result: It’s not how much energy is delivered, it’s how many customers the energy goes to.

More data and analysis is available in the full white paper, “Trends in Transmission, Distribution, and Administration Costs for U.S. Investor Owned Electric Utilities” [PDF], on the Energy Institute’s page for The Full Cost of Electricity. (IEEE Spectrum is posting blogs from the UT researchers and linking to the white papers as they are released.)

Carey W. King is the assistant director and a research scientist at the University of Texas at Austin Energy Institute.

Geologist John Hartley at the entrance to the Yucca Mountain Nuclear Waste Repository in Nevada, 14 May 2002.

Will Trump and Perry Revive Proposed Yucca Mountain Nuclear Waste Repository?

Will Yucca Mountain rise again?

The answer may be more political than technical. And the topic of long-term nuclear waste storage is just one of dozens facing Energy Secretary-designate Rick Perry, should he be confirmed by the Senate. 

Yucca Mountain in Nevada was legally designated decades ago as the site for long-term storage of used nuclear fuel from domestic U.S. reactors.

Despite its desert location some 160 kilometers northwest of Las Vegas, the 400-meter-high dormant caldera volcano ranks as one of the most studied pieces of geology on Earth. Technical and environmental studies basically conclude that the site is suitable to store used nuclear fuel for 1 million years.


For U.S. Jobs Creation, Renewables Are a Better Bet Than Coal

For jobs creation, the new Trump administration would do well to take a fresh look at clean energy rather than focusing only on fossil fuels. The solar power sector employed twice as many workers in 2016 as coal-, gas-, and oil-fired power generation combined, according to a U.S. Department of Energy report on employment in energy and energy efficiency.

The solar workforce was about 374,000-strong, making up around 43 percent of total employment in the power generation field. Wind power employed 101,738 workers, an increase of 25 percent. Coal-, gas-, and oil-fired generation together accounted for just over 187,000 jobs, or 22 percent of the workforce.

A majority of fossil fuel energy jobs are in mining and extraction rather than power generation, and these jobs are declining. Coal mining reached peak employment in 2012 and now employs around 53,000 people. Oil and gas extraction jobs peaked in 2014 at 541,000 and stood at 388,000 in mid-2016.

According to a report from the Environmental Defense Fund (EDF), solar jobs are growing at a rate 12 times faster than the rest of the U.S. economy. Plus, they are “generating more jobs per dollar invested—more than double the jobs created from investing in fossil fuels.” What’s more, many of these renewable energy and energy efficiency jobs are local, pay well, and can be found in any state, says Liz Delaney, a program director at EDF.

What can a roadmap of Boston tell you about its potential to produce solar power?

City Solar Power Potential and Road Network Size Linked

Scientists could estimate a city's solar power potential by analyzing the size of its road network, a new study finds.

Inspired by studies on the relationships between blood vessel networks and a body's size and metabolic rates, study author Sara Najem, a physicist at Lebanon's National Center for Remote Sensing in Beirut, investigated what connections might exist between different elements of a city's infrastructure, such as its road network and its solar potential. "I am always seeking to draw analogies between living systems and cities," Najem says. Najem will detail her findings in the journal Physical Review E. 


Will Rooftop Solar Really Add to Utility Costs?


Regulations in most states obligate utilities to derive some of their electricity generating capacity from renewable sources. Unsurprisingly, the most widely available options—wind and solar—dominate. The International Energy Agency (IEA) estimates that by 2050, solar photovoltaic (PV) power generation will contribute 16 percent of the world’s electricity, and 20 percent of that capacity will come from residential installations.

By offering local generation, residential or rooftop PV reduces the need for transmission facilities to move power from large generating stations to distribution substations. But the effect on the distribution grid is less straightforward. The conventional distribution grid is designed for neither two-way power flow nor large generation capacity. So the prevailing thought is that the grid will need a costly upgrade to accommodate the high PV penetration. Our study within the Full Cost of Electricity (FCe-) program aims to estimate the cost of maximizing residential PV capacity without any grid impacts. The bottom line? We found that even without hardware upgrades to the distribution circuits, such circuits can handle significant solar generation.

We looked at it three ways, allowing the largest PV generation:

  1. without making operational changes to the circuit or upgrading the infrastructure;
  2. with a few modest operational changes in the equipment already installed; and
  3. with additional infrastructure upgrades such as smart inverters and energy storage.

(Note that accommodating the first two capacities does not require any integration costs, beyond some minimal cost associated with the operational changes in the existing devices.)

Depending on a distribution circuit’s characteristics, the maximum PV capacities it can handle range from as low as 15.5 percent of the median value of the daytime peak load demand (2.6 megawatts in one particular circuit) to more than 100 percent (3.87 MW in another circuit). These results suggest that significant rooftop PV generation can be integrated in the grid with little or no additional cost to utilities and their customers and without causing any adverse grid impacts. In fact, our study shows that at such levels, impacts due to PV generation are either nonexistent or can be addressed by appropriate circuit operational changes.

In one example, an operational change was able to boost photovoltaic capacity from 15 percent to 47 percent. The PV hosting capacity of the circuit in that same example can be boosted from 47 percent to 80 percent if as many as one-third of the photovoltaic installations include smart inverter technologies.
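To put those percentages in concrete terms, here is what each tier would mean on a hypothetical circuit with a 10-megawatt median daytime peak load. Only the tier percentages come from the example above; the peak-load figure is an assumption for illustration:

```python
# Hosting-capacity tiers from the example above, applied to a hypothetical
# circuit. The percentages are from the study; the 10-MW median daytime
# peak load is an assumed figure for illustration only.
daytime_peak_mw = 10.0

tiers = {
    "no changes": 0.15,
    "operational changes": 0.47,
    "one-third smart inverters": 0.80,
}
for name, fraction in tiers.items():
    print(f"{name}: up to {fraction * daytime_peak_mw:.1f} MW of rooftop PV")
```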

Although adding energy storage would also increase hosting capacity, we find that the cost of energy storage systems would be significant, and so it is unjustifiable if the sole purpose is to increase PV penetration.

For details of which circuit characteristics affect photovoltaic capacity, as well as other calculations, read the complete white paper “Integrating Photovoltaic Generation” [PDF], part of the Full Cost of Electricity Study conducted by the University of Texas Austin Energy Institute. (IEEE Spectrum is posting blogs from the UT researchers and linking to the white papers as they are released.)

Suma Jothibasu is a graduate student and Surya Santoso directs the Laboratory for Advanced Studies in Electric Power and Integration of Renewable Energy Systems within the Department of Electrical Engineering, Cockrell School of Engineering, at the University of Texas Austin.


How Does Geography Figure Into the Full Cost of Electricity?


Not all power plants are the same. They certainly don’t cost the same to build or operate. But what if I told you that one number, dubbed the levelized cost of electricity (LCOE), puts it all into black and white for decision makers: This plant’s electricity is cheaper than that one’s, or it isn’t.

LCOE is the estimated amount of money that it takes for a particular power plant to produce a kilowatt-hour of electricity over its expected lifetime and is typically expressed as cents per kilowatt-hour or dollars per megawatt-hour.
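In formula terms, LCOE is the net present value of lifetime costs divided by the net present value of lifetime generation. A minimal sketch of that calculation, with all inputs assumed purely for illustration (they are not figures from the FCe- study):

```python
# Minimal LCOE sketch: discounted lifetime costs divided by discounted
# lifetime energy. All inputs below are illustrative assumptions.
def lcoe(capex, annual_opex, annual_kwh, lifetime_years, discount_rate):
    """Levelized cost of electricity, in $/kWh."""
    discounted_costs = capex + sum(
        annual_opex / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1)
    )
    discounted_energy = sum(
        annual_kwh / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1)
    )
    return discounted_costs / discounted_energy

# Hypothetical 1-kW plant: $1,500 up front, $20/year in O&M,
# 1,800 kWh/year output over 25 years, 6 percent discount rate.
print(f"LCOE of roughly {lcoe(1500, 20, 1800, 25, 0.06):.3f} $/kWh")
```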

LCOE makes it easy to decide which plant to build if you’re a utility or a governing agency. Except that LCOE misses a few important location-based factors, such as fuel delivery costs, construction costs, capacity factors, utility rates, financing terms, and other geographically distinct items that contribute to the cost of a kilowatt-hour.

Despite these shortcomings, LCOE has become the de facto standard for cost comparisons among the general public, policymakers, analysts, advocacy groups, and other stakeholders. One number is readily understood, easily bandied about, and even more easily compared to any other number.


Calculating the Full Cost of Electricity—Know Your History


For decades, the scale economies associated with large, centralized electricity-generation technologies encouraged vertical integration. Those economies also drove down the cost of electricity, fostered universal access, and provided for reliable electric service delivered by a single utility in a given region. That practice gave us the now-traditional, vertically integrated electric utility model.

Illustration: UT Austin/IEEE Spectrum
This simple diagram depicts an example of the traditional one-way structure of the vertically integrated utility business model. Electricity flows one-way: from utility to consumer.

From its beginning, the U.S. electricity industry emerged as a function of technological advancements, economies of scale, effective financial and regulatory structures that fostered capital investment, and new electric-powered loads. Over the course of a century, there have been successive waves of change in generation, transmission, distribution, market design, and industry regulation. While we expect electricity to continue to be an essential public good, and large-scale, centrally generated electricity to continue to be essential, we also expect traditional utility business and regulatory models to experience enormous stress for three primary reasons.

First, consider the continued development of more cost-competitive and lower-emission centralized generation such as wind farms, utility-scale solar, and natural gas–fired combined-cycle power plants. Traditional thermal generation technologies such as coal and nuclear are being challenged by new generating technologies that are more efficient, flexible (ramping), and modular (scalable). These newer technologies also offer lower emissions, shorter development times (two years for a solar farm versus 10-plus years for a nuclear plant), and potentially little to no fuel costs (free wind and sun).

Second, add in advancements in distributed energy resources (DERs) such as photovoltaic (PV) generation and storage.

Last but not least, changes in load patterns from energy efficiency, demand response, and customer self-generation add stress to generating and delivery resources owned and operated by traditional utilities.

This last item—self-generation—is potentially the biggest threat, as it goes against both the traditional utility business model and the competitive market structure as it exists today. The good news? There are many new alternative combinations of markets, regulations, and technologies possible (illustrated below). The Full Cost of Electricity (FCe-) study coordinated by the Energy Institute at the University of Texas at Austin explores them in its myriad white papers. (IEEE Spectrum is posting blogs from the UT researchers and linking to the white papers as they are released.)

Twenty-first-century electricity systems offer multiple pathways along which money and electricity can flow back and forth between consumer and utility.
Illustration: UT Austin/IEEE Spectrum

The transition to a new electricity system structure will be complex. Like all transitions, it introduces considerable uncertainty into an industry that has traditionally eschewed change, remained fairly stable, and clung to long-held incentives to be conservative so that it could meet its obligation to serve the public good.

These and other technological changes will continue to encourage the industry to adopt new technology and business models, spur policy makers to consider alternative regulatory and electricity market structures, and make electricity customers interested in pursuing self-generation that competes with traditional utilities in ways that may further destabilize the existing order.

The FCe-: History and Evolution of the U.S. Electricity Industry white paper [PDF] describes many of the most important, interrelated, and changing technoeconomic, finance, and policy factors that have affected the electric grid over the past century. If history is any guide, they will likely continue to influence the evolution of electric service and the grid this century.

David P. Tuttle is a research fellow at the University of Texas at Austin Energy Institute.

