The Brattle Group provides consulting services and expert testimony in economics, finance, and regulation to corporations, law firms, and public agencies. It employs internationally recognized experts and maintains strong partnerships with leading academics and highly credentialed industry specialists around the world. The Brattle Group has offices in Cambridge, Massachusetts; San Francisco; Washington, DC; New York; Rome; Madrid; and London.
News Article | May 11, 2017
Below is part two of our Is Bigger Best? report, released in September 2016. Conventional wisdom suggests the biggest wind and solar power plants will be cheapest, but where they deliver power, and who will own them, matters more. Be sure to read part one and come back for part three next week.

The question of scale economies in solar has been both a technological and an economic one. As mentioned before, the contention in the late 2000s was that concentrating solar thermal power plant technology would outstrip solar photovoltaics (PV) because the former was marginally more efficient (at the point of generation) and could incorporate energy storage. Frequently left out of that argument were the cost of transmission and the energy lost along the way. In 2010 comments to the California Public Utilities Commission on the now-constructed Ivanpah concentrating solar power plant (called the Genesis Solar Energy Project at the time), transmission and generation expert Bill Powers explained that the cost of electricity from Ivanpah was likely to be higher than from distributed solar PV: “There is no justification for…using an obsolete cost assumption to eliminate large-scale distributed PV as an alternative to the Genesis Solar Energy Project…The assertion that the high distributed generation case is significantly higher cost than the reference case was incorrect in June 2009 and is definitively obsolete in June 2010.” With energy losses averaging 7% and peaking at 14%, the marginally better solar resource at the plant’s remote location was lost in transmission, especially when there was ample rooftop space to accommodate local distributed solar. The Ivanpah plant finally came online in January 2014, supplying power at 20¢ per kilowatt-hour, although to date it has supplied less than two-thirds of its anticipated output.
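The arithmetic behind that loss argument is simple enough to sketch. The 7% average loss figure is from the text; the busbar prices below are illustrative assumptions, not figures from the report.

```python
# Illustrative sketch (example prices are assumptions): line losses mean every
# delivered kWh must be grossed up for the energy lost en route, which can
# erase a remote plant's resource advantage over local distributed PV.

def delivered_cost(busbar_cost, loss_fraction):
    """Cost per kWh actually delivered, given cost at the plant and line losses."""
    return busbar_cost / (1 - loss_fraction)

remote = delivered_cost(0.140, 0.07)  # remote plant: 14.0 cents/kWh at the busbar, 7% average losses
local = delivered_cost(0.145, 0.00)   # local rooftop PV: slightly costlier to generate, no line losses

print(f"Remote plant, delivered: {remote * 100:.1f} cents/kWh")
print(f"Local PV, delivered:     {local * 100:.1f} cents/kWh")
```

Under these assumed numbers, the local project wins on delivered cost despite being slightly more expensive at the point of generation.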
For comparison, the 20-year cost of energy from a distributed solar PV project completed in 2013 would have been 14.0¢ per kilowatt-hour, with a 15% profit margin. It’s also worth noting that the higher output from a concentrating solar thermal power plant is due in part to the use of natural gas to ramp up plant output in the morning. The Ivanpah facility consumed nearly 744,000 Mcf (thousand cubic feet) of natural gas in 2014, about what 8,400 Minnesota homes use in a year.

Another often-overlooked issue with concentrating solar thermal is water use. As with traditional power plants, concentrating solar thermal uses heat to make steam, and steam to turn turbines that generate electricity. In a 2011 post, ILSR noted that a concentrating solar power plant used nearly twice as much water as a coal-fired power plant if wet cooled, and nearly as much as a natural gas power plant even if dry cooled. Solar PV, on the other hand, uses no water to generate electricity, because sunlight is converted directly into electric current.

Over time, the cost parity of solar thermal electricity and solar PV disappeared, as the following chart shows. While PV costs have fallen rapidly, the cost of concentrating solar has not followed suit. The prospects for continued reduction in solar PV prices remain good, given impressively lower costs in Germany and Japan. At least half of the difference can be explained by the gap in deployment, with three times as much solar deployed in Germany and Japan as in the US. Other differences include “installation labor; permitting, inspection, and interconnection; [and] customer acquisition,” according to the Rocky Mountain Institute. The availability of energy storage was (and is) another touted advantage of concentrating solar thermal, but it’s unclear whether it can offset the significantly higher prices.
Thermal storage at concentrating solar power plants is much cheaper per megawatt-hour than batteries, and plants commonly have three to six hours of storage. But since the thermal energy has to heat water and create steam, the response time from energy storage to useful electricity is measured in minutes rather than seconds. Early uses of batteries, by contrast, tend to be in providing “ancillary services,” such as maintaining a consistent voltage on the grid. These services require a relatively small amount of total capacity but a quick response. Shifting production from day to night has not proven economical. On the other hand, as the prevalence of solar PV in California shifts the electricity peak into the later evening hours, thermal storage at concentrating solar plants could become more valuable.

So far, however, the challenges and costs of concentrating solar thermal have spurred a shift toward solar PV, even for large projects, resolving the technology debate in favor of mass-produced solar PV. The scale issue remains a live fight within solar PV itself. As mentioned in the introduction, the Brattle Group last year fired the latest salvo in the utility-scale versus distributed solar debate. The group argues that resources should be disproportionately invested in utility-scale PV, since it can produce electricity at half the cost of distributed PV.

In a set of 2016 reports on solar, Berkeley Labs and the Department of Energy’s SunShot Initiative provided data on distributed and utility-scale solar costs. This chart combines the two analyses, and shows that the sweet spot for low-cost solar development is in the middle, rather than at the ends, of the size spectrum. In the chart of upfront costs above, the largest utility-scale projects are nearly as costly as rooftop commercial-scale solar projects. However, utility-scale projects typically use panels that track the sun, with commensurately higher electricity output.
The following chart, of the inflation-adjusted levelized cost of electricity, offers a more accurate picture. We used the National Renewable Energy Laboratory’s System Advisor Model to generate a real levelized cost of electricity of 6.71¢ per kilowatt-hour for a $2.50-per-watt solar array (including the 30% federal tax credit), and adjusted accordingly for the other capital costs. Utility-scale projects (those 5 megawatts and above) are assumed to have tracking, with 30% higher output and therefore 30% lower levelized energy costs. This chart seems to support the Brattle Group’s contention that bigger solar is better, aside from projects exceeding 100 megawatts. But what’s still missing in this analysis is the price of competition. As noted in an ILSR analysis from 2015, utility-scale solar may cost less, but it’s also worth less to the electric grid because of its remote location.

The following chart replicates the levelized cost chart, but adds the market price against which these various sources of solar compete.3 It shows that at most smaller sizes, solar competes favorably with the retail electricity price. The national average residential electricity price used for this chart is close to 12¢ per kilowatt-hour, but it is 15¢ in California (and even higher in some Northeastern states). Commercial-scale solar also competes relatively well against average commercial retail electric prices of 10¢ per kilowatt-hour.4 Megawatt-scale projects, connecting to and competing in the wholesale market, face other new power generation, like natural gas, that produces electricity for 5¢ to 8¢ per kilowatt-hour. Worthy of note, the rise of community solar projects, typically between 250 and 1,000 kilowatts, looks to hit that sweet spot of cost and benefit, giving those without a sunny rooftop (or enough capital to finance their own project) a way to participate.
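The cost adjustment described above can be sketched in a few lines. The 6.71¢ per kilowatt-hour benchmark at $2.50 per watt and the 30% tracking bonus come from the text; the linear scaling with capital cost and the example price points are simplifying assumptions, not the SAM model itself.

```python
# Sketch of the levelized-cost scaling described above. Assumes LCOE scales
# linearly with upfront capital cost; the example $/W figures are illustrative.

BENCHMARK_COST = 2.50   # dollars per watt
BENCHMARK_LCOE = 6.71   # cents per kWh (real, levelized, with 30% federal ITC)

def lcoe_cents(cost_per_watt, tracking=False):
    value = BENCHMARK_LCOE * cost_per_watt / BENCHMARK_COST
    if tracking:
        value /= 1.30   # tracking: ~30% more output, so ~30% lower cost per kWh
    return value

print(f"Rooftop array at $3.50/W:           {lcoe_cents(3.50):.2f} cents/kWh")
print(f"Utility-scale at $1.80/W, tracking: {lcoe_cents(1.80, tracking=True):.2f} cents/kWh")
```

This also shows why upfront cost charts understate the utility-scale advantage: tracking lowers the cost per kilowatt-hour even when the cost per watt is similar.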
While solar at nearly any scale is competitive, the price of solar from large-scale projects does not include the cost of transmission for delivery, relevant for most projects over 5 megawatts. Writ large, the cost of transmission is rising: in California, transmission costs for the three major investor-owned utilities have been rising by nearly 10% per year. In contrast, there is ample available capacity on the distribution grid for smaller-scale solar projects. From the same post as the chart above (emphasis mine): A 2015 Energy Institute at Haas working paper, described here, performed a detailed analysis of Pacific Gas & Electric’s distribution grid and concluded that solar penetration equal to 100% of capacity on all circuits would require only a small cost to accommodate [less than 1/1000th of a percent of the utility’s operations and maintenance budget]. San Diego Gas & Electric (SDG&E) has estimated that its grid can accommodate about 1,000 megawatts of distributed generation, equal to around 20% of the utility’s peak demand. So big solar projects might produce somewhat cheaper electricity, but unlike your Amazon Prime membership, there’s no free delivery. And comparing utility-scale and distributed-scale solar misses an important point: they do not compete with each other on price.

A final contention in the size debate is whether, given the urgency of climate change, wind and solar can be deployed more quickly in small chunks or big ones. Although we can’t definitively answer this question, we offer two powerful anecdotes suggesting that big changes in renewable energy deployment come in packages of any size. Prior to 2007, Germany had installed about 2,900 megawatts of solar; prior to 2011, the US had installed a similar amount. Over the next five years, Germany installed 22 gigawatts of solar, with 75% of the projects smaller than 500 kilowatts.
In a similar timeframe, 2011–2015, the United States installed over 23 gigawatts of solar capacity, with just 42% of it in projects smaller than 1 megawatt. In other words, in scaling up solar, the size of individual projects didn’t matter. While the total capacity was similar, Germany’s focus on local ownership meant that much more of the economic benefit of its new solar capacity accrued to ordinary citizens instead of incumbent utilities.

In Denmark, electricity had long been the province of cooperatives, so when the “feed-in tariff” program of the early 1990s offered wind power a guaranteed grid connection and a fair price on a 20-year contract, many Danish citizens joined wind power cooperatives. Wind energy capacity surged from around 500 megawatts to over 3,000 megawatts, and 80% of this wind energy was owned by 150,000 Danish citizens (3% of the population). On a per-capita basis, this would be the same as adding 150,000 megawatts of wind power in the US (twice the total installed capacity at the end of 2015). In the Danish example, wind power grew much faster when connected to local ownership, even though typical projects were just 3 to 7 turbines, each about 500 kilowatts in size.

Full report available at ilsr.org.
News Article | May 8, 2017
For nearly a century, it’s been considered conventional wisdom that larger-scale power generation means lower-cost electricity. This wisdom is built on two basic theories of economies of scale. Below is part one of our Is Bigger Best? report, released in September 2016. Conventional wisdom suggests the biggest wind and solar power plants will be cheapest, but where they deliver power, and who will own them, matters more. Be sure to read parts two and three in the coming weeks.

First, there’s the simple fact that the usable volume of a power plant component grows faster than the material needed to build it. This simple illustration explains. The box on the left has a volume of 1x1x1 = 1 cubic foot. To assemble the box, you need 6 square pieces of material, each with an area of 1 square foot, for a total of 6 square feet. The box on the right has a volume of 2x2x2 = 8 cubic feet. The larger box can be assembled from 6 square pieces, each with an area of 2x2 = 4 square feet, for a total of 24 square feet. We’ve increased the volume of our container 8-fold, with only a 4-fold increase in material costs. As power plants became bigger in the first half of the 20th century, they captured this economy of scale in materials.

The second basic theory is that the average cost of a product decreases the more of it you make. This takes into account the scale economies in material costs (in building the factories), but also the notion that some overhead costs (such as annual registration fees, insurance, etc.) are fixed or grow more slowly than the total output of a business. Both of these theories were well supported by data in the early years of electricity generation in the 1900s, with coal, oil, and then nuclear power plants producing lower-cost power from larger plants. The advantage of size also lent credence to the conventional wisdom of monopoly utilities.
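The box arithmetic above is the classic square-cube law, and it can be checked in a few lines:

```python
# The materials economy of scale from the box example: doubling the edge
# length multiplies volume by 8, but the material (surface area) by only 4.

def box_stats(edge_ft):
    volume = edge_ft ** 3         # usable space, cubic feet
    material = 6 * edge_ft ** 2   # six square faces, square feet
    return volume, material

v1, m1 = box_stats(1)  # (1, 6)
v2, m2 = box_stats(2)  # (8, 24)
print(f"Volume: {v1} -> {v2} cubic feet ({v2 // v1}x)")
print(f"Material: {m1} -> {m2} square feet ({m2 // m1}x)")
print(f"Material per cubic foot: {m1 / v1:.0f} -> {m2 / v2:.0f}")
```

Material per unit of usable volume falls from 6 to 3 square feet, which is exactly the advantage early power plant builders captured by going bigger.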
Big power plants required large amounts of capital, and capital markets offered lower interest rates to companies that did not face the risk of competition for their ever-larger power plants. But after decades of success, the “bigger-is-better” mantra stopped generating returns on investment nearly 50 years ago. In super-large fossil fuel power plants, specialized equipment required excessively high temperatures and special materials that cost more than the marginal gains in efficiency were worth. This graphic, from the book Power Loss, illustrates the plateauing of power plant efficiency in the mid-1960s, as the challenges of operating giant power plants offset their economies of scale.

The plateau in plant efficiency from technical challenges was accompanied by a leveling off in the cost reductions of building bigger. Bigger power plants, evidence suggested, incurred higher indirect costs, such as much longer construction times. In the 1970s in particular, high inflation and other indirect costs made up as much as 60% of a power plant’s cost and made delays expensive. Despite the evidence about limits to scale economies, the conventional wisdom that bigger is better has persisted into the renewable energy industry. It’s particularly ironic, since the costly ever-bigger power plants of the 1970s led Congress to pass the 1978 Public Utility Regulatory Policies Act (PURPA), the federal law that opened the door to renewable energy alternatives to conventional power plants. This lesson seems lost on many observers of the renewable energy industry.

The economies of scale of renewable energy take three forms, slightly different from those for fossil fuels. In 2008, New York Times reporter Matthew Wald hit all three of these assumptions. He suggested that the major barrier to expanding the nation’s wind power was lack of transmission capacity.
To tap the country’s wind resources required building vast wind power projects in the windy Midwest and then shipping that power to population centers on the coasts, argued Wald (and others). In the same article, Wald described “immense solar-power stations in the nation’s deserts,” a reference to concentrating solar thermal power plants that focus sunlight with hundreds of mirrors to generate heat, then steam, then electricity. Like Wald, many observers thought that the quickest way to mass deployment of solar energy was building out many of these multi-hundred-megawatt facilities in the world’s deserts, then shipping that electricity via new transmission lines back to population centers. One initiative, called Desertec, even proposed to power all of Europe with concentrating solar thermal power plants in the North African desert. The adjacent image was circulated widely at the time, with the red squares representing the areas that could be covered with reflective mirrors to generate enough electricity to power the entire world, the EU25, or Germany. The arguments over scale have continued. Investor-owned utility Xcel Energy released a video in 2015 decrying “thinking small” in favor of “large scale solar projects that deliver energy more economically.” Just last year, a Brattle Group study suggested that utility-scale solar power plants were much less costly than distributed ones. These are just two shots fired in a larger battle over the size and scale of renewable energy deployment. The managers of electric utilities eventually realized there were limits to scale economies of fossil fuel power plants, in part because smaller-scale cogeneration and renewable energy power plants allowed under PURPA undercut the utility’s electricity costs. In renewable energy, the main issue is whether large, custom-built wind and solar projects can compete with small, mass-produced ones, when the former require access to big, expensive infrastructure that the latter do not. 
To address the economies of scale question for wind power, we have to consider the scale economies of a single turbine, of a group of turbines (called a wind farm or wind project), and of whether it’s better to chase the best resource or build (at smaller scale) close to demand. On the question of the single turbine, there are several ways to get more electricity out of it, chiefly taller towers and larger rotors. In a 2007 report, ILSR detailed the significant benefits of these changes (shown in the graphic to the right). Doubling the height of a wind turbine can reduce the cost of the electricity it produces by 17%; doubling the size of the rotors can do even more, reducing the power cost by 75%. Although average turbine height seems to have leveled off near 80 meters, there’s little sign that the scale economies of a single turbine have reached their limits. Data from the 2015 Wind Technologies Market Report shows a steady increase in rotor length and rated capacity, allowing individual wind turbines to produce more electricity. In other words, there are clear economies of scale in the size of a single wind turbine.

Given these scale economies, the next question is whether large wind farms or smaller ones make economic sense. Based on data released in 2010, the conventional wisdom seemed shattered: wind projects installed in 2007–09 actually exhibited dis-economies of scale, with larger projects costing more than projects sized between 5 and 20 megawatts (using 3 to 12 average-sized turbines). The 2010 data seems to be an aberration, however, as subsequent data aligns with the conventional wisdom of scale economies. One potential difference is that many regional transmission operators adopted cost-sharing provisions around 2010–11 that lifted the burden of transmission expansion from individual project developers and spread it among all electric customers in the region.
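The physics behind taller towers and longer blades can be sketched simply: output scales with swept area (rotor radius squared) and with the cube of wind speed, which itself rises with hub height. The wind-shear exponent used below (0.14) is a common rule-of-thumb assumption, not a figure from the ILSR report.

```python
# Rough scaling of a single turbine's output (illustrative assumptions):
# power ~ swept area * wind speed cubed, with speed rising with hub height
# per a power-law wind shear profile.

def relative_power(radius_scale=1.0, height_scale=1.0, shear_alpha=0.14):
    area_gain = radius_scale ** 2              # swept area ~ r^2
    speed_gain = height_scale ** shear_alpha   # power-law wind shear with height
    return area_gain * speed_gain ** 3         # power ~ area * v^3

print(f"Double the rotor radius: {relative_power(radius_scale=2):.2f}x power")
print(f"Double the hub height:   {relative_power(height_scale=2):.2f}x power")
```

Under these assumptions, doubling the rotor quadruples output while doubling the height adds roughly a third, consistent with the report's finding that rotor growth delivers the larger cost reduction.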
Since larger projects would have been more likely to incur transmission upgrades, this may explain at least part of their higher costs. The following chart shows the recent economies-of-scale data for wind farms by size, highlighting 2011 and 2015 data. It’s clear that scale economies have increased substantially at the breakpoint of 5 megawatts, with the smallest projects at nearly double the per-kilowatt cost of those 5 megawatts and larger. Although the five lines above show the gradual increase in economies of scale of larger wind farms, combining the lines into a five-year average is also instructive. The chart below shows the cost per kilowatt for wind farms of increasing size as a percentage of projects sized 5 to 20 megawatts. The lesson is that two-thirds of the economies of scale of wind farms are captured when projects exceed 5 megawatts of total capacity.

There are two caveats about data showing lower prices for larger projects: the price of competition and the cost of transmission. While nearly all commercial-scale wind projects sell electricity into the grid, the smallest projects may be competing against a different price than the larger ones. There’s some evidence from community-scale developers that the fair contract price for projects under 5 megawatts that connect near utility substations (receiving the “avoided cost” utilities are required to pay under PURPA) may be much higher than grid wholesale prices, because such a project avoids both generation and transmission costs.1 The following chart (converting the costs per kilowatt above into a 20-year price of electricity) illustrates how this avoided cost is much higher than the wholesale market price, sometimes called the “day-ahead locational marginal price.” It means that even the smallest wind power projects can be cost-effective.
This value advantage of small projects may be an opening for community-based wind projects that have previously been seen as uneconomical in comparison to large-scale ones. However, “community shared wind” has yet to enjoy the popularity of community shared solar, as noted in ILSR’s 2016 report. The second caveat about the advantages of scale is the cost of transmitting power to customers. All of the costs shown in the above charts include interconnection to the electric grid, but may not include costs to upgrade the transmission system to accommodate the new capacity. Larger projects are more likely to incur these system upgrade costs, which are typically spread among all electric customers. Therefore, it’s hard to disaggregate transmission costs and get an accurate picture of whether the largest wind projects are truly the most economical. So individual turbines show clear economies of scale, but for wind projects the data is less clear.

The third economies-of-scale issue, wind farm size and distance from the best wind resources, is also muddled. The windiest and most remote sites likely have the greatest amount of space for new wind projects, whereas projects sited close to consumers may have to be smaller. In ILSR’s 2007 report on wind economies of scale, we examined this issue and concluded that the cost of transmission can consume the advantage of building larger in a better wind resource. The following table provides some illustrations, with green values showing wind speed increases that can offset transmission costs, and red values showing where the cost to transmit outweighs the wind resource benefits (assuming a similarly sized wind power project). To get a sense of how these calculations play out in the real world, the following map shows that many large cities could benefit from getting electricity from wind farms within 400 miles, but that longer distances cannibalize the savings of higher wind speeds.
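The resource-versus-distance tradeoff just described can be sketched with a toy model. All of the numbers below (base cost, wind speed premium, per-mile line cost) are illustrative assumptions, not figures from the ILSR analysis; the point is only the shape of the tradeoff.

```python
# Toy model of the remote-wind tradeoff (all numbers are assumptions):
# production cost falls roughly with the cube of wind speed, while each
# mile of transmission adds delivery cost, so distance eventually
# cannibalizes a windier site's advantage.

def delivered_cost(base_cents, speed_ratio, miles, line_cents_per_100mi=0.5):
    production = base_cents / speed_ratio ** 3       # power ~ v^3, so cost ~ 1/v^3
    transmission = line_cents_per_100mi * miles / 100
    return production + transmission

local = delivered_cost(6.0, 1.00, 0)          # nearby project, average winds
near_remote = delivered_cost(6.0, 1.15, 300)  # 15% windier site, 300 miles out
far_remote = delivered_cost(6.0, 1.15, 700)   # same winds, 700 miles out

print(f"Local:          {local:.2f} cents/kWh")
print(f"Remote, 300 mi: {near_remote:.2f} cents/kWh")
print(f"Remote, 700 mi: {far_remote:.2f} cents/kWh")
```

With these assumed inputs, the windier site wins at 300 miles but loses at 700, mirroring the green-versus-red pattern in the table.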
For this map, we assumed that projects proximate to the city would be smaller (between 100 and 200 megawatts) and produce electricity that was about 3.5% more costly. The issue of transmission infrastructure is complicated by the fact that transmission planning tends to lack transparency, access for local communities, and serious consideration of alternatives. While the largest wind power projects may have a marginal price advantage, it’s also true that big wind farms, unlike smaller ones, aren’t compatible with ownership structures that deliver greater economic benefits to the local community. Since sub-5-megawatt wind projects may be able to compete at a different price point, community ownership may prove more economically lucrative (even with a slightly higher electricity price) than purchasing power from a remote wind farm.

There’s a clear economy of scale in the size of individual wind turbines and the construction of wind farms. However, the issue of energy transmission complicates the analysis: given the process and cost-sharing elements of transmission planning, it’s difficult to disaggregate these effects. There may also be benefits to the smallest wind (and solar) projects being able to access higher contract prices, making these community-scale projects more competitive.

Full report available at ilsr.org.
News Article | May 5, 2017
A funny thing happened on April 14. In a simple memo directing the Department of Energy to conduct a short-term study, the Trump Administration signaled that it was seeing advanced energy as a problem that needs to be solved. Whereas AEE has argued that homegrown advanced energy technologies like wind and solar, energy efficiency, demand response, energy storage, and advanced grid systems deserve a central role in the administration’s “America First” energy plan, the outline of DOE’s study portrays renewable energy, in particular, as a threat to electric system reliability, even “national security,” and it takes aim at federal, and even state, policies that have facilitated its growth. For the advanced energy industry, the memo was not just a study order, it was a shot across the bow. In the memo to his chief of staff, Energy Secretary Rick Perry directed DOE to perform a “60-day study” of certain “critical issues central to protecting the long-term reliability of the electric grid.” In the memo, Perry expressed concerns about the “erosion of critical baseload resources” due to “regulatory burdens introduced by previous administrations that were designed to decrease coal-fired power generation” as well as “market-distorting effects of federal subsidies that boost one form of energy at the expense of others.” He directed DOE specifically to examine “the extent to which continued regulatory burdens, as well as mandates and tax and subsidy policies, are responsible for forcing the premature retirement of baseload power plants.” To be clear, we believe that the reliability of the grid is sacrosanct – no policies should threaten the grid that all of us rely on every day. 
Still, to us at AEE, the assumption that advanced energy technologies are harming reliability was a bit mystifying, coming as it did from the former governor of Texas, the leading wind-power state in the country, and one that the Brattle Group, in a paper for AEE Institute, showcased for its experience maintaining reliable electric power service with a high penetration of variable renewable energy. Texas also has the most open and competitive electricity market in the country, ably managed by ERCOT, showing that renewable energy development need not undermine markets designed to provide affordable and reliable electric power. It became a bit less mystifying when it became known that this study was to be put in the hands of appointees who came to DOE from the Institute for Energy Research, a Washington, D.C.-based think tank known for its attacks on renewable energy policy. Here is a sample, from IER’s blog in September: “Allowing wind energy to find its natural place in the power grid would require elimination of the PTC and all state-level renewable energy mandates.” On April 28, AEE, AWEA, and SEIA sent a joint letter to Perry taking issue with the premises of his study order, noting that the growth of wind and solar power neither accounts for the challenges now facing coal-fired and nuclear power plants in the nation’s electricity markets nor represents any kind of threat to reliability. “We note that these homegrown energy resources are proven technologies that help support grid reliability,” the industry associations wrote. “These energy resources have already been integrated smoothly into the electric power system in large and increasing amounts, as demonstrated in countless studies and, more importantly, in real-world experience across the U.S., including in Texas. Furthermore, we note that policies supporting the deployment of these technologies are not playing an important role in the decline of coal and nuclear plants. 
Numerous studies have conclusively demonstrated that low natural gas prices and stagnant load growth are the principal factors behind the retirements in coal and nuclear plants.” The industry groups also called on DOE to “follow standard practice” and conduct the study in “an open and transparent manner,” noting that it is “customary” for agencies developing reports that provide policy recommendations to allow public comment on a draft, prior to the report being finalized. “Public input, including from energy market participants, grid operators, and regulators, would help ensure that any resulting recommendations from the study are based on the best available information,” wrote the industry associations. Whether or not the Administration will heed our calls for public input is an open question. Nevertheless, it is crucial that market participants present DOE with the wealth of existing data and analysis that actually demonstrates that advanced energy is improving the state of our electricity grid. We are under no illusion that the DOE study will be the end of the matter. Rather, we believe that the administration may try to use the study to provide a blueprint for attacks on advanced energy policy wherever the administration can target them: in Congress, at FERC, and even in the 29 states that have enforceable renewable energy standards (the “mandates” referenced in the Perry memo). In the coming weeks, our groups will be submitting to DOE – whether asked for or not – factual information on how our electricity system operates today, with a growing mix of variable and flexible resources to provide greater, not lesser, reliability. And if the administration insists on ignoring this information and pursuing an anti-advanced energy policy agenda in the name of saving “baseload” plants that aren’t in fact needed to meet electricity needs, we’ll be getting ready for a fight.
In 2015, The Brattle Group found that “ongoing technological progress and ongoing learning about how to manage the operations of the electric system will likely allow the integration not only of the levels of variable renewable capacity now in places like Texas and Colorado but even significantly larger amounts in the future.”
News Article | November 28, 2016
CHICAGO, Nov. 28, 2016 /PRNewswire/ -- Without the Quad Cities and Clinton nuclear plants in Illinois, consumers would pay $364 million more annually and over $3.1 billion more over the next ten years (on a present value basis) in electricity costs. Annually, this equates to $115 million...
News Article | March 4, 2016
New research suggests that in the future, one of the most lowly, boring, and ubiquitous of home appliances — the electric water heater — could come to perform a surprising array of new functions that help out the power grid, and potentially even save money on home electricity bills to boot. The idea is that these water heaters will increasingly become “grid interactive,” communicating with local utilities or other coordinating entities, and thereby providing services to the larger grid by modulating their energy use, or heating water at different times of the day. And these services may be valuable enough that their owners could even be compensated for them by their utility companies or other third-party entities. “Electric water heaters are essentially pre-installed thermal batteries that are sitting idle in more than 50 million homes across the U.S.,” says a new report on the subject by the electricity consulting firm the Brattle Group, prepared for the National Rural Electric Cooperative Association, the Natural Resources Defense Council, and the Peak Load Management Alliance. The report finds that net savings to the electricity system as a whole could be $200 per year per heater – some of which may be passed on to its owner – from enabling these tanks to interact with the grid and engage in a number of unusual but hardly unprecedented feats. One example would be “thermal storage,” which involves heating water at night when electricity costs less, thus decreasing demand on the grid during peak hours of the day. Of course, precisely what a water heater can do in interaction with the grid depends on factors like its size or water capacity, the state or electricity market you live in, the technologies with which the heater is equipped, and much more.
“Customers that have electric water heaters, those existing water heaters that are already installed can be used to supply this service,” says the Brattle Group’s Ryan Hledik, the report’s lead author. “You would need some additional technology to connect it to grid, but you wouldn’t need to install a new water heater.” Granted, Hledik says that in most cases, people probably won’t be adding technology to existing heaters, but rather swapping in so-called “grid enabled” or “smart” water heaters when they replace their old ones. In the future, their power companies might encourage or even help them to do so. Typically, a standard electric water heater — set to, say, 120 degrees — will heat water willy-nilly throughout the day, depending on when it is being used. When some water is used (say, for a shower), it comes out of the tank and more cold water flows in, which is then heated and maintained at the desired temperature. In contrast, timing the heating of the water — by, say, doing all of the heating at night — could involve either having a larger tank to make sure that the hot water doesn’t run out, or heating water to considerably higher temperatures and then mixing it with cooler water when it comes out to modulate that extra heat. Through such changes, water heaters will be able to act like a “battery” in the sense that they will be storing thermal energy for longer periods of time. It isn’t possible to then send that energy back to the grid as electrical energy, or to use it to power other household devices — so the battery analogy has to be acknowledged as a limited one (though the Brattle report, entitled “The Hidden Battery,” heavily emphasizes it). But the potentially large time-lag between the use of electricity to warm the water and use of the water itself nonetheless creates key battery-like opportunities, especially for the grid (where utility companies are very interested right now in adding more energy storage capacity). 
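The “thermal battery” framing can be made concrete with a little physics: the energy stored in a tank is just mass × specific heat × temperature rise. Below is a minimal sketch in Python; the tank size and temperatures are illustrative assumptions, not figures from the Brattle report.

```python
# Thermal energy stored in a residential electric water heater, treating the
# tank as a battery. Tank size and temperatures are illustrative assumptions.

GALLON_TO_KG = 3.785      # mass of 1 US gallon of water, in kg
SPECIFIC_HEAT = 4186      # specific heat of water, J/(kg*K)
JOULES_PER_KWH = 3.6e6    # 1 kWh = 3.6 million joules

def stored_energy_kwh(gallons, inlet_temp_c, setpoint_c):
    """Energy (kWh) needed to heat a full tank from inlet to setpoint."""
    mass_kg = gallons * GALLON_TO_KG
    joules = mass_kg * SPECIFIC_HEAT * (setpoint_c - inlet_temp_c)
    return joules / JOULES_PER_KWH

# An 80-gallon tank heated from 15 C to 60 C holds roughly 16 kWh of heat,
# on the order of a sizable home battery.
print(round(stored_energy_kwh(80, 15, 60), 1))  # → 15.8
```

Heating to a higher setpoint and mixing down at the tap, as described above, simply widens the temperature rise in this calculation and so increases the storable energy.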
It means, for instance, a cost saving if water is warmed late at night, when electricity tends to be the cheapest. It also means that the precise amount of electricity that the water heater draws to do its work at a given time can fluctuate, even as the heater will still get its job done. These services are valuable, especially if many water heaters can be aggregated together to perform them. That’s because the larger electricity grid sees huge demand swings based on the time of day, along with smaller, constant fluctuations. So if heaters are using the majority of their electricity at night when most of us are asleep, or if they’re aiding in grid “frequency regulation” through instantaneous fluctuations in electricity use that help the overall grid keep supply and demand in balance, then they are playing a role that can merit compensation. “If the program is well-designed, meaning in particular, you have a well-designed algorithm for controlling the water heater in response to these signals from the grid, then what’s really attractive about a water heating program is that you can run these programs in a way that customers will not notice any difference in their service,” says Hledik. In fact, using electric water heaters to provide some of these services has long been happening in the world of rural electric cooperatives — member-owned utilities that in many cases control the operation of members’ individual water heaters, heating water at night and then using the dollar savings to lower all members’ electricity bills. Take, as an example, Great River Energy, a Minnesota umbrella cooperative serving some 1.7 million people through 28 smaller cooperatives. The cooperative has been using water heaters as, in effect, batteries for years, says Gary Connett, its director of demand-side management and member services. 
“The way we operate these large volume water heaters, we have 70,000 of them that only charge in the nighttime hours, they are 85 to 120 gallon water heaters, they come on at 11 at night, and they are allowed to charge til 7 the next morning,” Connett explains. “And the rest of the day, the next 16 hours, they don’t come on.” Thus, the electricity used to power the heaters is cheaper than it would be if they were charging during the day, and everybody saves money as a result, Connett says. But that’s just the first step. Right now, Great River Energy is piloting a program in which water heaters charging at night also help provide grid frequency regulation services by slightly altering how much electricity they use. As the grid adds more and more variable resources like wind power, Connett says, using water heaters to provide a “ballast” against that variability becomes more and more useful. “These water heaters, I joke about, they’re the battery in the basement,” says Connett. “They’re kind of an unsung hero, but we’ve studied smart appliances, and I have to say, maybe the smartest appliance is this water heater.” Of course, those of us living in cities aren’t part of rural electric cooperatives. We generally buy our electricity from a utility company. But utilities also appear to be getting interested in these sorts of possibilities. The Brattle Group report notes ongoing pilot projects in the area with both the Hawaiian Electric Company and the Sacramento Municipal Utility District. Thus, in the future, it may be that our power companies try to sign us up for programs that would turn our water heaters into grid resources (and compensate us in some way for that, maybe through a rebate for buying a grid-interactive heater, or maybe by lowering our bills). Or, alternatively, in the future some people may be able to sign up with so-called demand response “aggregators” that pool together many residential customers and their devices to provide services to the grid. 
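The night-only schedule Connett describes reduces to a simple rule: charge from 11 p.m. to 7 a.m., stay off the other 16 hours. A minimal sketch of that window check follows; the real Great River Energy control logic is of course far more sophisticated, and the hour boundaries here are just the ones quoted above.

```python
# Night-only charging rule: heaters may draw power from 23:00 to 07:00.

def may_charge(hour):
    """True if `hour` (0-23) falls in the 11 p.m.-7 a.m. charging window."""
    return hour >= 23 or hour < 7

# 23:00 plus 00:00-06:59 gives 8 hours of charging per day.
print(sum(may_charge(h) for h in range(24)))  # → 8
```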
And as if that’s not enough, the Brattle Group report also finds that, since water heating is such a big consumer of electricity overall — 9 percent of all household use — these strategies could someday lessen overall greenhouse gas emissions. That would be especially the case if the heaters are being used to warm water during specific hours of the day when a given grid is more reliant on renewables or natural gas, rather than coal. Controlling when heaters are used could have this potential benefit, too. Granted, these are still pretty new ideas and the Brattle Group report says they need to be studied more extensively. But as Hledik adds, “I haven’t really come across anyone yet who thinks this is a bad idea.”
News Article | March 17, 2016
Earlier this month, I attended the 2016 Electric Reliability Council of Texas (ERCOT) Market Summit. The summit brought together thought leaders and stakeholders to discuss the future of Texas’ electric grid. Many of the discussions at the conference centered on the expected boom in new solar and wind energy capacity in Texas, and how ERCOT is planning to cope with the evolution of its electric grid and market as more and more renewable energy is added. While Texas will add a lot of new wind and solar capacity over the coming years, the grid operator is more than prepared to manage the reliability of the electric grid into the future.

Even More Wind Energy on the Horizon

Since the early 2000s, Texas has emerged as the national leader in wind energy. Last year, Texas sourced 11.7 percent of its electric energy from wind, with wind eclipsing nuclear energy, which provided 11.3 percent, for the first time ever. On an instantaneous basis, wind energy has provided about 45 percent of electric power on more than one occasion. On December 20 of last year, wind peaked at 44.7 percent of total power generation and provided about 40 percent of instantaneous generation for 17 straight hours. And on February 18 of this year, wind peaked at 45.1 percent of total generation and provided roughly 40 percent of instantaneous power generation for the entire day. Texas isn’t done adding wind energy yet. Over the next two years, Texas is expected to add more wind energy than ever before, thanks in part to the declining cost of wind turbines and the extension of the federal wind energy production tax credit, which gives wind energy an extra 2.3 cents for every kilowatt-hour of energy produced. With all of the new anticipated wind energy capacity, expect Texas to continue breaking wind energy integration records for the foreseeable future. For the first time, wind energy isn’t the only form of renewable energy expected to see significant growth in Texas over the coming years. 
Thanks to steadily decreasing costs for utility-scale solar plants and the extension of the federal investment tax credit, which covers 30 percent of upfront investment costs, Texas is expected to add 1,725 megawatts of new utility-scale solar capacity between now and 2017 — increasing the total amount of solar capacity installed more than six-fold. All of the anticipated solar installations in Texas are solar farms that use photovoltaic panels to convert sunlight directly into electricity. ERCOT doesn’t track the amount of solar capacity installed in the form of rooftop photovoltaic panels, so there might be even more solar energy installed than meets the eye.

A New Market Design for the Future

Texas benefits from a large fleet of flexible natural gas generation that has the capability to balance the output from wind and solar energy with the rest of the electric grid with relative ease. However, as the amount of wind and solar capacity increases, there will be less and less flexible generation available on a real-time basis to compensate for the additional intermittency introduced by wind and solar energy. Fortunately, ERCOT is already in the midst of a major electricity market redesign to ensure electric reliability even as wind and solar make up a larger and larger share of total electricity generation. To operate the grid reliably, it is important to perfectly balance electricity supply with demand in real time. Today, this balance is maintained by flexible generators that provide “ancillary services,” a collection of services procured by the grid operator to maintain electric reliability no matter what. 
ERCOT ancillary services consist of “regulation” (generators that adjust their output to maintain the second-to-second balance between supply and demand), “responsive reserve” (generators that are spinning and ready to supply power in case of a contingency), and “non-spinning reserve” (generators that are offline but ready to turn on in a pinch if needed). All of these services are procured in the market and are part of the total cost we pay for electricity. Beginning with a concept paper released in 2013, ERCOT proposed a newly designed ancillary services market better able to cope with rising solar and wind energy intermittency as conventional generation makes up a smaller share of overall generation. The new design proposes unbundling balancing services traditionally provided by fossil generators into separate services that more adequately address the needs of a renewables-heavy grid and are compatible with new grid-balancing technologies like energy storage. The newly designed ancillary services market doesn’t just help integrate wind and solar energy; it also makes the electricity market more economically efficient overall by breaking up conventional ancillary services into their component parts. An independent analysis from the Brattle Group, widely considered a thought leader in electricity markets, found that ERCOT’s proposed future ancillary services market would provide $137 million in cost savings over the next ten years, or roughly ten times the anticipated implementation cost of $12 to $15 million.

Texas Set to Lead the Way

With lots of solar and wind energy on the horizon, and a new electricity market design in the works, Texas is set to continue its role as a leader in the integration of renewable energy with the grid. This might come as a surprise considering the state’s frequent legal battles with the federal government over environmental regulations. 
It just goes to show how the technology-agnostic nature of competitive markets can lead to renewable energy growth even where local government is against carbon dioxide regulations of any kind.
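The cost-benefit arithmetic behind the Brattle finding cited above is easy to check: roughly $137 million in ten-year savings against an estimated $12-15 million implementation cost does work out to about ten-to-one. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the cited Brattle figures for ERCOT's
# proposed ancillary services redesign.

savings_m = 137.0                     # ten-year savings, $ millions
cost_low_m, cost_high_m = 12.0, 15.0  # implementation cost range, $ millions

ratio_conservative = savings_m / cost_high_m
ratio_optimistic = savings_m / cost_low_m

# Benefit-to-cost lands between roughly 9x and 11x, consistent with the
# "ten times" figure in the article.
print(round(ratio_conservative, 1), round(ratio_optimistic, 1))  # → 9.1 11.4
```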
News Article | December 4, 2016
Last Thursday, December 1st, the Illinois State Legislature passed a measure that will allow continued operation of two of the state’s six nuclear power plants. In a nail-biter more reminiscent of overtime at the Super Bowl, the Future Energy Jobs Bill (SB 2814) passed with less than an hour remaining in the legislative session. The bipartisan bill allows Exelon’s Clinton and Quad Cities nuclear power plants to remain open, saving 4,200 jobs and over 22 billion kWh of carbon-free electricity each year, more than all of the state’s renewables combined. These two plants were in jeopardy of closing because, even at a low cost of five cents or so per kWh, they were losing a combined $100 million per year; they could not compete with cheap natural gas and wind energy that is subsidized at 2.3¢/kWh. Illinois taxpayers subsidize solar energy at 21¢/kWh. This bill provides these nuclear plants with just 1¢/kWh, and only until market conditions change. Exelon had drafted a press release announcing the closure of the two plants that was to be issued last night if the bill failed. Instead, these plants will be operating for at least another 10 years, producing over 200 billion kWh of carbon-free energy. In addition to preserving nuclear energy as a way to support cleaner air, the measure also expands the state’s energy efficiency programs and makes changes to the state’s renewable portfolio standard sought by renewable advocates. The latest version of the bill removed incentives for southern Illinois coal-fired power plants that had been added to draw more support for the legislation. Also cut from the measure was a contentious billing system that based power bills on average peak use instead of overall use. Nuclear power produces over half of Illinois’ electricity, all with no carbon or other polluting emissions. 
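A quick sanity check of the article’s numbers, assuming for simplicity that the full 22 billion kWh earns the 1¢/kWh payment (the actual program mechanics and caps are more involved):

```python
# Rough arithmetic on the Illinois figures: support payments versus the
# plants' reported combined losses.

generation_kwh = 22e9    # combined annual output, over 22 billion kWh
support_per_kwh = 0.01   # $0.01 (1 cent) per kWh
annual_loss = 100e6      # reported combined loss, ~$100 million/year

annual_support = generation_kwh * support_per_kwh
print(round(annual_support / 1e6))   # → 220 ($ millions per year)

# The support comfortably exceeds the reported shortfall.
print(annual_support > annual_loss)  # → True
```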
The enormous negative impact of shutting down nuclear plants because of an artificial market finally got through to the Legislature, since the generating capacity of these nuclear plants would have to be replaced by natural gas or coal, doubling the state’s total carbon emissions and ensuring that the state would not meet its emissions goals anytime soon. This is just what happened in New England after the unnecessary closing of the Vermont Yankee Nuclear Power Plant in 2014. Its clean nuclear energy was replaced entirely by natural gas and out-of-state purchases, the local community was devastated economically, and electricity prices have increased. The rise in conventional air pollutants from moving from nuclear to coal or natural gas in Illinois would also have increased premature deaths. A recent study found that the use of nuclear energy worldwide has prevented about 1.8 million premature deaths from fossil fuel pollution, and could prevent up to several million additional ones in the near future. It’s why China is planning 400 new nuclear power plants by 2040 – China has led the world in coal deaths and severe air pollution since its surge in coal began in 1992. The fate of these Illinois nuclear plants had drawn the attention of the entire country, including leading climate scientists, since Illinois generates more zero-emissions electricity than any other state, 90% of which comes from nuclear power, and many climate scientists are in favor of nuclear power. Earlier this year, a coalition of scientists and conservationists, including famed climate scientist James Hansen, anti-nuclear activist turned nuclear proponent Michael Shellenberger, and Whole Earth Catalog founder Stewart Brand, sent an open letter to Illinois legislators asking them to “do everything in your power to keep all of Illinois’s nuclear power plants running for their full lifetimes.” Even the Sierra Club reluctantly supported this bill. 
Nuclear plants across the country are at risk of being closed prematurely, mainly because they are excluded from federal and state clean energy policies. First, the federal production tax credit subsidy for wind is not available to nuclear energy. This credit sometimes turns wholesale electricity prices negative by encouraging wind farms to overproduce during periods of low demand, when no one wants their electricity and the surplus threatens to overload the grid. Nuclear plants must pay to supply the grid during temporary wind surges, while wind farms continue earning money from the tax credit. It appears that individual states are beginning to see the advantages of keeping nuclear power viable. Recently, the New York Public Service Commission adopted a Clean Energy Standard that recognizes the economic and environmental benefits of commercial nuclear energy in that state, allowing two nuclear plants to remain open that were in the same precarious situation as the Clinton and Quad Cities plants. Connecticut faces a similar challenge. “The Future Energy Jobs Bill, now headed to the governor’s desk, preserves more than $1.2 billion in annual economic activity across Illinois, including 4,200 direct jobs at Clinton and Quad Cities and thousands more jobs that the plants support,” said Maria Korsnick, Nuclear Energy Institute’s chief operating officer. “The bill levels the playing field for nuclear energy with other carbon-free energy sources. Between them, the Clinton and Quad Cities facilities prevent the emission of more than 20 million metric tons of carbon dioxide a year.” This is more than twice the emissions of all the cars in Chicago and surrounding suburbs. The long-term savings from not having to replace the electricity supply that Clinton and Quad Cities reliably generate are substantial. 
A Brattle Group study found that keeping the Quad Cities and Clinton nuclear generating stations would save residential and business consumers $300 million in electricity costs every year they continue running. Illinois Governor Rauner is expected to sign the bill into law quickly. Dr. James Conca is a geochemist, an energy expert, an authority on dirty bombs, a planetary geologist and professional speaker. Follow him on Twitter @jimconca and see his book at Amazon.com
News Article | December 16, 2016
LITTLE ROCK, AR--(Marketwired - December 16, 2016) - On Dec. 7, America commemorated the 75th anniversary of the bombing of Pearl Harbor. Nine days later, an organization in Little Rock, Ark., will likewise celebrate 75 years of existence. On Dec. 16, 1941, in support of the American war effort, 11 electric utilities agreed to pool their resources to keep power flowing to Jones Mill -- an aluminum production facility outside Malvern, Ark. President Franklin Roosevelt's wartime goal to produce 50,000 airplanes per year had created the need for huge quantities of aluminum, and Jones Mill's operation would require 120 megawatts of power -- exceeding its home state's installed capacity of 100 MW at the time. From the utilities' partnership, Southwest Power Pool (SPP) was formed, and the new organization was successful in pooling power to support the plant. After the war, SPP continued as a leader providing safe, reliable power to U.S. homes. SPP today is a regional transmission organization (RTO): a not-for-profit, federally regulated service organization that ensures the reliable operation of a portion of the nation's power grid on behalf of its member companies, with more than 50,000 MW in capacity. SPP describes itself as the air-traffic controller of the power grid. Air-traffic controllers do not own the airports in which they operate or the planes they direct but are responsible for ensuring air travelers depart, fly and land safely. Similarly, SPP does not own the power stations it directs or the transmission lines across which electricity flows in its footprint, but it partners with generators, transmission owners, municipalities, power marketers, state and federal agencies, electric cooperatives and others to ensure the cost-effective and reliable delivery of power across a 14-state region. Though SPP works at the wholesale level and thus doesn't directly serve end users and ratepayers, it does benefit them. 
A recent study conducted by SPP and validated by the Brattle Group showed transmission investments in the SPP region had, on average, a benefit-to-cost ratio of 3.5-to-1. That means every dollar spent to build or upgrade transmission lines throughout SPP's region will ultimately produce $3.50 in electricity production cost savings and other benefits. In addition to planning transmission infrastructure, SPP facilitates the sale and purchase of electricity through its Integrated Marketplace, a wholesale electric market. SPP's marketplace launched in 2014 and has since reduced the cost of electricity in the organization's region by more than $1 billion. These and other services provide net benefits to SPP's members in excess of $1.4 billion annually at an overall benefit-to-cost ratio of more than 10-to-1. For the typical end-use customer using 1,000 kWh per month, that means $68 of benefits a year at a cost of just 62 cents monthly. Or, put another way, without the services SPP provides its members, a ratepayer's $100 electric bill would be $105.65. Throughout its 75 years, SPP has evolved and grown from an affiliation of 11 companies with a common goal in 1941 to an organization employing about 600 professionals in support of nearly 100 member companies across a region spanning from the Canadian border in the north to Louisiana in the south and from southeastern Missouri to northwestern Montana. SPP attributes its legacy of success to the strength of its stakeholder relationships. In the foreword to a book published this year chronicling SPP's history, its President and CEO Nick Brown said, "Reliability is job one for SPP. We exist to help our members keep the lights on, today and in the future. We do so not through hard work, innovation or efficiency, though each is a necessary component of our success. For SPP, reliability is accomplished through strong, healthy relationships with those we serve." 
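The per-customer arithmetic in the release can be reproduced directly; the small gap from the quoted $105.65 presumably reflects rounding in SPP’s original figures.

```python
# Reproducing SPP's per-customer numbers: $68/year of benefits for a
# 1,000 kWh/month customer, at a cost of 62 cents per month.

annual_benefit = 68.0   # $/year of benefits
monthly_cost = 0.62     # $/month of cost

annual_cost = monthly_cost * 12
bill_without_spp = 100.0 + annual_benefit / 12  # monthly benefit added back

print(round(annual_cost, 2))       # → 7.44  ($/year)
print(round(bill_without_spp, 2))  # → 105.67, vs. the quoted $105.65
```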
Because of the strength of those relationships, its legacy of success and deliberate focus on continuous improvement and building consensus among its members, SPP has every reason to think its future is just as bright as its history. Southwest Power Pool, Inc. manages the electric grid and wholesale energy market for the central United States. As a regional transmission organization, the nonprofit corporation is mandated by the Federal Energy Regulatory Commission to ensure reliable supplies of power, adequate transmission infrastructure and competitive wholesale electricity prices. Southwest Power Pool and its diverse group of member companies coordinate the flow of electricity across 60,000 miles of high-voltage transmission lines spanning 14 states. The company is headquartered in Little Rock, Ark. Learn more at www.spp.org. Acciona Wind Energy USA, LLC; American Electric Power (AEP Oklahoma Transmission Company, Inc.; AEP Southwestern Transmission Company, Inc.; Public Service Company of Oklahoma, Southwestern Electric Power Company); Arkansas Electric Cooperative Corporation; Basin Electric Power Cooperative; Board of Public Utilities of Kansas City, Kansas; Boston Energy Trading and Marketing, LLC; Calpine Energy Services, L.P.; Cargill Power Markets LLC; Central Power Electric Cooperative, Inc.; Cielo Wind Services, Inc.; City of Coffeyville; City of Independence, Missouri; City Utilities of Springfield; Clarksdale Public Utilities Commission; Cleco Power, LLC; Corn Belt Power Cooperative; CPV Renewable Energy Company, LLC; Dogwood Energy, LLC; DTE Energy Trading, Inc.; Duke Energy Transmission Holding Company, LLC; Duke-American Transmission Company, LLC; Dynegy Power Marketing, Inc.; East River Electric Power Cooperative, Inc.; East Texas Electric Cooperative, Inc.; EDP Renewables North America LLC; El Paso Marketing Company, LLC; Enel Green Power North America, Inc.; Entergy Asset Management; Entergy Services, Inc.; Exelon Generation Company, LLC; Flat 
Ridge 2 Wind Energy, LLC; Golden Spread Electric Cooperative, Inc.; Grain Belt Express Clean Line LLC; Grand River Dam Authority; Harlan Municipal Utilities; Heartland Consumers Power District; Hunt Transmission Services, LLC; ITC Great Plains, LLC; Kansas City Power & Light Company (KCP&L Greater Missouri Operations Company); Kansas Electric Power Cooperative, Inc.; Kansas Municipal Energy Agency; Kansas Power Pool (KPP); Lafayette Utilities System; Lea County Electric Cooperative, Inc.; Lincoln Electric System; Louisiana Energy and Power Authority; Luminant Energy Company, LLC; Mid-Kansas Electric Company, LLC; Midwest Energy, Inc.; Midwest Gen, LLC; Missouri Joint Municipal EUC; Missouri River Energy Services; Mountrail-Williams Electric Cooperative; Municipal Energy Agency of Nebraska; Nebraska Public Power District, NextEra Energy Resources, LLC; NextEra Energy Transmission, LLC; Noble Americas Gas & Power Corp; Northeast Nebraska Public Power District; Northeast Texas Electric Cooperative, Inc.; Northwest Iowa Power Cooperative; NorthWestern Energy; NRG Power Marketing, LLC; OGE Transmission, LLC; Oklahoma Gas and Electric Company; Oklahoma Municipal Power Authority; Omaha Public Power District, Plains and Eastern Clean Line LLC; Prairie Wind Transmission, LLC; Public Service Commission of Yazoo City; Public Service Company of Oklahoma; Rayburn Country Electric Cooperative; Shell Energy North America (US), L.P.; South Central MCN, LLC; Southwestern Electric Power Company; Southwestern Power Administration; Sunflower Electric Power Corporation; Tenaska Power Services Co.; Tex-La Electric Cooperative of Texas, Inc.; The Central Nebraska Public Power & Irrigation District; The Empire District Electric Company; Transource Energy, LLC; Transource Missouri, LLC; Tri-County Electric Cooperative, Inc.; Tri-State Generation and Transmission Association, Inc.; Westar Energy, Inc. 
(Kansas Gas and Electric Company); Western Area Power Administration - Upper Great Plains Region; Western Farmers Electric Cooperative; Williams Power Company, Inc.; Xcel Energy (Southwestern Public Service Company, Xcel Energy Southwest Transmission Company, LLC); XO Energy SW, LP.
News Article | October 4, 2016
Wind power is a success story, both in Texas and throughout the U.S. Recent commentary in the MIT Technology Review shares several captivating stories about the ways wind power benefits communities across Texas. Wind supports well-paying jobs and stable income for farmers and ranchers, and provides a drought-proof cash crop they can rely on when the rains don’t fall or the fields don’t produce. However, Martin’s final written product also gets some things wrong about wind power’s technology. This fact check clears up those misunderstandings. The strongest electricity system is one that uses a diversity of generating sources. That way, if one source fails, another remains online to help pick up the slack. That both keeps the lights on for consumers and protects their wallets against price spikes from declines in energy supply. That’s exactly what happened during 2014’s Polar Vortex weather event, when the extreme cold knocked several conventional plants offline. Because wind energy kept reliably generating electricity during the frigid cold spell, it helped save consumers across the Great Lakes and Mid-Atlantic regions over $1 billion in just two days. Likewise, this Bloomberg article reports that when New York’s Indian Point nuclear plant suddenly went offline this past December, “Wind turbines in the state came to the rescue, running close to capacity and compensating for the loss of the reactor.” We’ve also seen the benefits of a diversified electricity mix in Texas. ERCOT (grid operator for most of the state) data show the cost of wind’s variability is lower, in both total and dollar-per-megawatt-hour (MWh) terms, than the cost of accommodating conventional power plant failures. That’s because wind plant output changes gradually and predictably, so the changes can often be accommodated using inexpensive offline power plants (non-spinning reserves) that can start up over 10-30 minutes. 
In contrast, conventional power plants fail abruptly, requiring the use of expensive, fast-acting reserves. Nor is there a difference in the quality of the electricity wind farms produce; wind plants exceed the ability of conventional power plants to regulate voltage and frequency. ERCOT regularly uses wind plants’ fast and accurate frequency response control as the primary tool for keeping system frequency stable as electricity demand and supply fluctuate. Wind plants meet far more stringent standards for riding through voltage and frequency disturbances, standards that cannot be met by conventional power plants. Using their sophisticated power electronics, wind plants quickly and accurately regulate voltage, in many cases even when the turbines are not producing power.

Wind power is one of the biggest, fastest, cheapest ways to cut carbon pollution

As the nation’s largest energy user, Texas unsurprisingly has high carbon dioxide (CO2) emissions. And that’s been exactly the case: by a large margin, Texas’s carbon emissions have been the country’s highest for decades. The more relevant factor is that those emissions are trending downward, as the state moves to wind generation and other low-carbon forms of energy. This downward trend has continued despite a large increase in overall electricity consumption in Texas (driven by population growth, increased electricity demand for oil and gas production, etc.). The emissions intensity (CO2 per megawatt-hour (MWh)) of Texas electricity has been declining even more dramatically, as shown by data from the Energy Information Administration. Because wind is one of the biggest, fastest, cheapest ways to cut carbon pollution, it makes sense that Texas’s carbon emission intensity would decrease as more wind power comes online in the state. Indeed, wind energy in the Lone Star State cuts nearly 5.5 million cars’ worth of CO2 pollution every year. 
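The emissions-intensity metric cited here is simply CO2 emitted per MWh generated. A small sketch with purely illustrative numbers (not Texas data) shows how intensity falls when load growth is met by zero-carbon wind, even if total emissions are unchanged:

```python
# Emissions intensity: tons of CO2 per MWh of generation.

def intensity(total_co2_tons, total_mwh):
    """Grid emissions intensity in tons CO2 per MWh."""
    return total_co2_tons / total_mwh

# Illustrative only: demand grows 10%, and all of the growth is served by
# zero-carbon wind, so total emissions stay flat while intensity falls.
before = intensity(200e6, 400e6)
after = intensity(200e6, 440e6)
print(round(before, 3), round(after, 3))  # → 0.5 0.455
```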
Updating transmission pays for itself and saves consumers money

Modernizing America’s electricity grid to meet 21st century needs benefits all sources of electricity generation, and doing so often creates significant consumer savings. For example, Americans could save up to $47 billion on their electricity bills every year from better transmission planning, according to analysis from the Brattle Group. Likewise, the Southwest Power Pool, a grid manager in 14 states, reports that transmission upgrades would save $800 for each of its customers over the next four decades. Similarly, the Midcontinent Independent System Operator, which manages the grid in another 15 states, found improvements could save each person it serves $1,000 in the coming years. That’s why other parts of the country are following Texas’s lead, including much of the Midwest, from Oklahoma west to Colorado and Wyoming, north to the Dakotas, and east to Illinois and Indiana, all areas with very good wind resources. Texas has recognized that transmission pays for itself, and has always spread the cost of transmission for all energy sources across all users of the power system. A strong transmission system is essential for a free electricity market, as a congested power system hinders competition. The reality is that all U.S. energy sources receive government incentives, and wind has received a small portion of the overall amount. Since 1950, wind has accounted for less than 3 percent of all federal dollars spent on energy incentives, while fossil fuels and nuclear have led the way at 65 percent and 21 percent, respectively. The reality is that the more wind power has grown, the more Americans like it. Today, wind energy is widely deployed in 40 states, and places like Iowa, Kansas and South Dakota use wind to generate at least 20 percent of their electricity. Overall, a dozen states use wind to generate at least 10 percent. 
Voters in these states realize wind brings economic development, well-paying jobs and new revenue streams to their communities. That's why a recent poll found 91 percent of likely voters favor expanding wind power. While some may persist in spreading outdated or misleading information, the truth is wind is a clean, reliable, affordable solution for millions of American families and businesses.
News Article | February 15, 2017
Since mid-2016, the challenges facing the nation's nuclear fleet have only grown more pressing. Natural gas prices, despite recent volatility, remain very low, keeping nuclear revenues in competitive electricity markets low. Nuclear plants continue to announce retirement decisions, with the 2.2 GW, two-unit Indian Point retirement by mid-2021 being especially notable given its current profitability. More than 10% of the U.S.'s 2010 nuclear fleet is now retired or scheduled to retire within the next 8 years. Faced with the loss of the largest zero-carbon electricity source in the country, states are taking the lead in maintaining struggling nuclear facilities. Since New York finalized its ZEC program, Illinois has provided similar targeted nuclear support as part of broader energy legislation. Other states are considering following suit. While state action may be the most likely policy solution for struggling nuclear units, regional or federal policy solutions offer different and more comprehensive changes. Increasingly, regulatory power over utility-scale electricity generation has shifted from the states to FERC. The evolving regulatory roles of state commissions, ISOs, and FERC constrain and inform any major policy efforts to address the challenges facing the nation's nuclear fleet. As we discussed in Part 2, this shifting regulatory landscape limits how state legislatures and PUCs address nuclear retirements in individual states. At the same time, the new regulatory landscape provides the opportunity for policy solutions at the regional and federal level. The U.S. Congress, ISOs, regional programs, and FERC can all play unique roles in limiting retirements of existing nuclear facilities. In key ways, regional and federal solutions are qualitatively different from the state solutions analyzed in Part 2: critically, the 'higher' the regulatory avenue used, the more nuclear facilities and power plants in general that are affected.
Most states have only a handful of nuclear reactors, making it possible to micro-target struggling nuclear reactors even if doing so brings charges of favoritism. By comparison, regional and federal regulatory authorities have many nuclear reactors under their oversight. Due to political and regulatory constraints, any actions these regulators take may have to benefit all nuclear units, potentially increasing the cost of preventing retirements. The effects of any policy will be different for deregulated and rate-regulated nuclear units. Parts 1 and 2 highlighted the key differences between these two types of reactor compensation. A quick recap: of the two, deregulated reactors face the most pressing retirement risks. Nevertheless, many rate-regulated reactors face major retirement risks in 5-15 years without policy action. In this article, we review four potential energy policies that operate primarily on the regional or federal level and could stem the tide of nuclear retirements. This is the third article in a three-part series on existing nuclear electricity generation in the United States. Part 1 discusses major economic and policy challenges. Part 2 examines several specific actions states can take to prevent nuclear retirements. This article (Part 3) examines potential regional and federal policy solutions. Of the seven competitive wholesale electricity markets (ISOs) in the United States, four have some type of capacity market construct: PJM, ISO-NE, NYISO, and MISO. These markets emerged relatively recently and are still being actively designed. Although the rules behind each capacity market are complex, the concept is simple: while energy-only markets compensate generators for energy provided to the grid, capacity markets compensate generators for promising to provide capacity when dispatched by the ISO. Effectively, capacity markets substitute for the traditional role of state regulators in long-term system planning.
Capacity markets work to maintain long-term grid reliability and adequate resource supply. Energy-only markets optimize for short-term operation and, due to price volatility and market cycles, will often not provide sufficient revenue to keep power plants open in the short term even if they are economic in the mid or long term. By providing revenues up to three years into the future, existing capacity markets provide some long-term certainty for market revenues (the extent is debated). In the markets where they exist, prevailing capacity prices can thus shape overall market outcomes. Indeed, they already have. Around half of retired or retiring nuclear reactors are in the three ISOs with the most developed capacity markets: PJM, NYISO, and ISO-NE. Most states in these three ISOs are deregulated, and these nuclear units almost exclusively receive revenue from energy and capacity markets. Capacity markets are still being developed, are somewhat controversial, and have notable limitations. They are not markets as most people think of them; rather, they are administrative auctions. Based on ISO-developed and FERC-approved rules, grid operators run their own capacity auction processes. They determine the amount of capacity needed in the target year, receive bids for supplying that capacity, and determine the ultimate capacity clearing price. Typically, if a generator clears the auction, it is required to generate electricity when called upon by the grid operator, and it receives capacity revenues in a $/MW per time period format. The rules governing capacity auctions often play as much of a role in setting prices as competitive bids do. The ISOs determine what level of capacity needs to be procured, which generators are eligible, under what conditions suppliers can provide the capacity, how the auction price is determined, and more. In most capacity auctions, most plants plan on continuing to operate no matter what.
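The core clearing process described above (the operator sets a capacity target, collects supply offers, and the marginal offer sets a uniform price paid to every cleared megawatt) can be sketched in a few lines. This is a simplified illustration only: real ISO auctions use sloped demand curves, price floors, performance obligations, and zonal constraints, and the function and numbers here are hypothetical.

```python
def clear_capacity_auction(target_mw, bids):
    """Simplified uniform-price capacity auction.

    bids: list of (offer_price_per_mw, capacity_mw) tuples.
    Returns (clearing_price, cleared), where cleared lists the
    accepted (price, mw_accepted) offers.
    """
    remaining = target_mw
    cleared = []
    clearing_price = 0.0
    for price, mw in sorted(bids):          # cheapest offers clear first
        if remaining <= 0:
            break
        take = min(mw, remaining)           # marginal offer may clear partially
        cleared.append((price, take))
        clearing_price = price              # marginal offer sets the price
        remaining -= take
    return clearing_price, cleared

# Three offers competing to supply a 1,500 MW capacity target:
price, cleared = clear_capacity_auction(
    1500, [(0.0, 800), (50.0, 500), (120.0, 600)])
print(price)    # 120.0, set by the marginal (last-cleared) offer
print(cleared)  # every cleared MW is paid that same clearing price
```

The zero-price offer illustrates the price-taker behavior discussed below: a plant that intends to run regardless offers at (or near) zero to guarantee it clears, leaving the clearing price to be set by the few competitive bids and by the auction rules themselves.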
They are price takers, meaning that only a handful of plants bid competitively into the auctions. Hence the rules of the auction effectively determine the revenues the generators receive. ISO-NE provides a stark example: in its first seven capacity auctions, ISO-wide capacity prices cleared at the administrative floor. Two of the retired or retiring nuclear units in the country are in ISO-NE, making low capacity revenues a key factor in those specific retirements, and New England's remaining plants face some of the highest retirement risks in the country. There are several ways that capacity markets could be reformed to help address existing nuclear retirements. Any changes that occur in capacity markets need to recognize the rapidly changing technologies in electricity. Existing capacity markets are still young and developing, are focused on economic efficiency, and were (effectively) limited to thermal units. New and emerging energy technologies, particularly renewable energy and energy storage, will challenge overall market design and capacity markets specifically. Due to their major economic ramifications for generator revenues and customer costs, the policy process to drive changes in capacity markets is complex and contentious. The economic challenges facing some nuclear reactors in the short term and most reactors in the long term boil down to one problem: insufficient cost competitiveness with non-nuclear plants, due both to 'true' competition and to market design. In deregulated markets, lower electricity prices greatly reduce revenues for existing nuclear plants; in rate-regulated markets, low natural gas and renewable prices can offer lower-cost (and potentially lower-risk) alternatives. Government subsidies for nuclear plants could address nuclear plants' lack of cost competitiveness in both types of electricity markets. In the energy world, the term "subsidies" is used widely, with many definitions depending on the context.
For the purposes of this article, we refer to subsidies in a narrow sense: a subsidy is a direct government transfer from taxpayers used to meet some specific public policy objective. This definition includes direct grants or tax credits, but not something indirect or intangible like the debated Price-Anderson 'subsidy'. In both deregulated and rate-regulated markets, subsidies increase plant revenues. In deregulated markets, subsidies directly increase a plant's competitiveness in the market; at current market prices and nuclear costs, the most vulnerable nuclear plants would be profitable with a moderate subsidy. In rate-regulated markets, subsidies increase the relative cost competitiveness of existing nuclear reactors during commission and utility decision-making. A nuclear subsidy can be implemented at either the state or the federal level; either the relevant state legislature or Congress would need to pass legislation. Administratively, subsidies are relatively straightforward, with limited technical complications. The government decides on what basis to provide money: the plant's capacity, its generation, or some financial metric like investment or operating costs. Any nuclear subsidy would most likely be denominated in $/MWh, like existing production tax credits. Perhaps more than any other potential nuclear solution, subsidies for existing nuclear generation are likely to face significant political opposition. There are several major considerations that likely make subsidies infeasible. Of these five, the last is a major limitation. A general principle of US energy regulation (derived from the broader economy) is that consumers should be responsible for all costs associated with their service. Reality is far from this ideal. Nevertheless, this principle underlies the rate-shifting concerns of the net metering debates as well as environmental regulations that internalize external costs.
Unlike every other solution presented in this series (excepting perhaps nationalization), subsidies would violate this principle by shifting the cost burden to taxpayers. Subsidies are often more visible and transparent than regulatory actions, as they come directly from the legislature rather than from PUCs or the ISOs. The prospect of taxpayers subsidizing ratepayers is likely to engender significant political opposition to any existing-nuclear subsidy. From a legal standpoint, additional obstacles come into focus; planners must be careful in crafting government-sponsored subsidies. Where subsidies are found to be discriminatory, they are potentially illegal, so basic risk management could require that subsidy programs apply to every nuclear plant in a jurisdiction. For states, this might mean a nuclear plant or two would receive unnecessary subsidies to keep other plants online. A national nuclear subsidy would similarly provide revenues to the whole nuclear fleet, even though nuclear units in restructured markets are the most at risk. Because existing nuclear plants' challenges are largely economic, indirectly increasing energy prices by imposing a carbon tax on fossil generation could be ideal. Carbon pricing can be implemented at almost any level of energy policymaking: state, regional, and federal. There are two major carbon pricing schemes in the US today: California's cap-and-trade system and the Northeast's Regional Greenhouse Gas Initiative. Unlike subsidies, the financial effects of carbon pricing depend on a nuclear plant's regulatory environment. In all deregulated wholesale markets, carbon prices raise the costs of fossil generators, which are on the margin, increasing energy prices and driving higher revenues for nuclear facilities. Meanwhile, in rate-regulated markets, carbon prices make fossil generation less attractive compared to existing nuclear units but do not directly affect plant revenues.
Whereas subsidies in a rate-regulated market lead to more revenue for nuclear units, a carbon price would not. Over the short to mid-term, a moderate carbon price would likely be sufficient to keep all but the most uneconomic reactors online; the Brattle Group recently estimated that a $12-20/ton CO2 tax would be sufficient to prevent most additional retirements. Over time, the carbon price would likely need to rise. While carbon pricing is promising, it has so far proven ineffectual at preventing nuclear retirements. More than half of retired or retiring nuclear reactors are already located in areas subject to cap-and-trade (RGGI and California's program). With natural gas (not coal) dominating the margin in these markets, carbon prices in both of these trading schemes have been too low to increase power prices enough to benefit struggling nuclear facilities. The low carbon prices in RGGI and California arise from differing circumstances. In RGGI, policymakers have consistently set the cap too high, making CO2 permits especially cheap. In California, complementary policies reduce the carbon reductions required from the cap-and-trade scheme, also reducing CO2 permit prices. Politically, carbon pricing may be the most promising regional or federal solution presented in this article. Unlike other policies, it offers a strong opportunity for nuclear owners to build coalitions with non-nuclear interests. It is favored by regulators, industry, and many politicians. It will not happen nationwide under the current administration, but regional efforts may continue, and a national carbon price may be inevitable. While carbon pricing may be a more politically acceptable solution, it still faces political opposition that makes it unlikely in the short term. Tightening RGGI's or California's carbon cap could help nuclear plants in those specific markets but may be politically unviable; excess existing permits may keep prices depressed regardless.
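The pass-through logic behind these points is simple arithmetic: when a fossil unit sets the marginal energy price, a carbon price raises the clearing price by roughly the marginal unit's emissions rate times the carbon price. A back-of-the-envelope sketch, using typical emissions rates that are our assumptions rather than figures from the article:

```python
# Back-of-the-envelope: wholesale energy price uplift from a carbon price
# when a fossil unit is on the margin. Emissions rates below are typical
# approximate values, not figures from the article.
GAS_CC_T_PER_MWH = 0.4   # combined-cycle gas, roughly 0.4 t CO2/MWh
COAL_T_PER_MWH = 1.0     # coal steam unit, roughly 1.0 t CO2/MWh

def price_uplift(carbon_price_per_ton, marginal_emissions_t_per_mwh):
    """Approximate $/MWh increase in the energy clearing price."""
    return carbon_price_per_ton * marginal_emissions_t_per_mwh

# With gas (not coal) on the margin, a $15/ton carbon price lifts
# energy prices far less, which is why gas-dominated margins blunt
# the benefit to nuclear plants.
print(price_uplift(15, GAS_CC_T_PER_MWH))  # 6.0 $/MWh with gas marginal
print(price_uplift(15, COAL_T_PER_MWH))    # 15.0 $/MWh with coal marginal
```

Since a nuclear plant earns the clearing price on every MWh it sells, a mid-range carbon price with gas on the margin adds only a few dollars per MWh of revenue, consistent with the observation that RGGI and California prices have been too low to rescue struggling units.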
Perhaps the most radical policy proposal to keep the U.S.'s nuclear fleet online calls for government ownership and management of the fleet. In short, this option involves nationalization of private nuclear facilities to varying degrees. Although this idea has generated considerable interest, there has been limited discussion of what it would look like in practice. Nationalization occurs when a national government takes control of an existing private entity. In modern times, the US has practiced both temporary nationalization (AIG and General Motors) and permanent nationalization (Amtrak and the TSA). To use nationalization as a policy solution for struggling nuclear units, the federal government would purchase or take ownership of one or more existing nuclear units. Critically, nationalization does not have to mean the government forcing a mandatory purchase of a nuclear facility. The federal government (via an appropriate agency) could negotiate with a nuclear plant owner on a fair price to purchase the plant voluntarily. If such negotiations proved unsuccessful, the federal government could seize ownership of one or more existing nuclear reactors using its power of eminent domain; in that case, the government would need to compensate the owner of the nuclear reactor at a market rate. Either a voluntary or a mandatory nationalization program would almost certainly require an act of Congress to grant authority and supply any necessary funding. To a certain degree, nationalizing the nuclear fleet is not as radical as it might first sound. The federal government has significant technical, operational, and even institutional expertise in nuclear power: one of these units, Watts Bar 2, last year became the first new nuclear unit to come online in the US in two decades. Beyond nuclear power, the federal government already owns and operates much of the nation's hydropower; four Federal Power Marketing Administrations marketed 42% of the nation's existing hydropower in 2012.
Once the government owns some (or all) existing nuclear facilities, the key question is how markets compensate these plants for their generation. The economic challenges facing nuclear do not simply disappear when the plants are owned by the federal government; reduced profit incentives (and reduced borrowing costs) only somewhat reduce required market revenues. Most likely, nationalized nuclear plants would need to be compensated through some sort of cost-of-service regulation (without a need for a rate of return). If the plants received only market revenues, they would lose money, and those losses would ultimately fall on the federal taxpayer. As noted above, regulatory principles generally call for ratepayers, not taxpayers, to be responsible for compensating electricity costs. Since Congressional legislation is required for nationalization, Congress could well mandate in that same legislation that nationalized nuclear facilities receive cost-of-service compensation from wholesale power markets (i.e. ISOs/RTOs). As with other potential solutions, timing is a critical factor. Mandatory nationalization is a longer-term option for the nuclear fleet but is highly unlikely to occur in either the short or the mid-term (energy and cultural norms would have to change before policy could). However, it is possible that a voluntary nationalization program could occur relatively quickly at a targeted scale. Under such a voluntary program, federal power agencies could purchase and then operate select nuclear power plants that would otherwise retire. If structured well, such legislation could minimize direct costs to the taxpayer while also ensuring that nuclear facilities are fully valued for the public goods they provide. The post Addressing the Plight of Existing Nuclear Retirements, Part 3 appeared first on SparkLibrary.