News Article | March 1, 2016
Economics of Batteries for Stabilizing and Storage on Distribution Grids

It appears the world has ample fossil fuels for at least the next 100 years, even with a growing gross world product and population. A worldwide, full-scale transition away from fossil fuels likely would take at least 100 years. It would not be wise to subsidize the build-out of technologies that have very little potential to provide the world with abundant, low-cost energy. Any rational planning and design of energy systems, and of the systems of their users, should be based on the world’s fossil fuels being depleted in the distant future.

Economically viable technologies are evolving to enable more generation closer to the user. Generating capacity on a distribution grid could provide most of the energy consumed within that grid. Thus, the distribution grid would be less dependent on the high voltage grid, which, as a side benefit, would reduce energy losses on the high voltage grid. The role of the high voltage grid would decrease, but not be eliminated. Significant energy still would be generated and fed into the high voltage grid, such as from nuclear, hydro, wind, concentrated solar power (CSP) plants, and PV solar plants; fossil plants would gradually disappear.

This article has three parts. Part I mostly deals with the economics of battery systems. Part II mostly deals with a viable US energy source mix without fossil fuels. Part III deals with zeroing population, energy and gross world product growth rates, and then having negative growth rates, because those measures are even more important for a sustainable world than moving away from fossil fuels.

THE NEED FOR ENERGY STORAGE

Energy storage, “before and after the meter”, would need to be built out, because it is likely:

– The expansion of transmission systems will not proceed as quickly as required to keep up with the growth of variable, intermittent wind and solar energy, often because of cost and NIMBY concerns.
– Demand-side management options are at best uncertain means to manage the grid and cannot be relied upon as a substitute for increased investment in energy storage.

– Whereas fossil fuel-fired power plants can have up to several months of reserves (gas storage) or direct access to fuel (coal), there is no such strategic reserve for the often-occurring, protracted periods with insufficient wind and solar energy.

– The US power market will not operate as one grid, leading to bottlenecks whenever inter-grid energy balancing is required. For example, the Texas grid has only minor connections with the Eastern Interconnect and Western Interconnect.

In the future, there will be many PV solar systems tied to a distribution grid. Increasing the capacity, MW, of those PV solar systems would decrease distribution grid stability, especially during variable-cloudy weather.

Battery systems tied to distribution grids with many PV solar systems are used in California and Germany to smooth excessive energy variations on distribution grids, and on high voltage grids in case of excess energy generation. They act as dampers, which work as follows:

– The varying DC energy of the PV systems is fed as AC into the distribution grid.

– The battery systems maintain distribution grid stability by absorbing energy from, or providing energy to, the grid, as needed.

– DC to AC inverters of the battery systems are about 85%, 50%, and 10% efficient at 20%, 10% and 2% of rated output, respectively; i.e., 50% of the converted energy is lost as heat if charging and discharging occur at less than 10% of inverter capacity!

NOTE: Such charging and discharging has nothing to do with storing PV solar energy during the day for use at night, as is sometimes claimed.

Typically, in damping mode, the battery system would be charged to 60 to 70% of rated capacity, MWh, so it can be charged up to 90 to 95% and discharged down to 50 to 20%, depending on the battery type.
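The part-load inverter efficiencies quoted in the list above can be sketched with a simple linear interpolation. This is only an illustration of the three quoted points, not a vendor efficiency curve; the function name and interpolation scheme are assumptions.

```python
# Illustrative sketch (not a vendor model): linearly interpolating the
# part-load DC-to-AC inverter efficiencies quoted above -- about 85%,
# 50% and 10% efficient at 20%, 10% and 2% of rated output.

def inverter_efficiency(load_fraction):
    """Estimate inverter efficiency at a fraction of rated output,
    by linear interpolation between the article's quoted points."""
    points = [(0.02, 0.10), (0.10, 0.50), (0.20, 0.85)]
    if load_fraction <= points[0][0]:
        return points[0][1]
    if load_fraction >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= load_fraction <= x1:
            return y0 + (y1 - y0) * (load_fraction - x0) / (x1 - x0)

# At 10% of inverter capacity, half the converted energy is lost as heat.
print(inverter_efficiency(0.10))  # 0.5
```

The steep drop below 20% output is why damping-mode cycling, which occurs at small fractions of inverter capacity, wastes so much energy as heat.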
The charge controller, to preserve battery life, prevents charging above and discharging below these set points. Typically, in damping mode, the charging-discharging range stays well within the 60% to 70% band; i.e., this charging and discharging generates significant heat (energy is wasted), as it occurs at less than 10% of inverter capacity! As more PV systems are added to the distribution grid, additional battery capacity would be required.

Most articles on batteries, as applied to electric grids, often written by non-technical people, are not based on real-world data. As a result, unfounded optimism is spread regarding the economics of battery systems and their near-term implementation. This article uses the real-world operating limitations of the Chevy-Volt and TESLA lithium-ion batteries, and the TESLA Powerwall specification sheet data, to determine battery losses, operating limits and energy storage costs, c/kWh, of battery systems attached to distribution grids. Without such real-world data as a basis, erroneous conclusions would result.

Chevy-Volt: The 2014 Chevy-Volt has a 16.5 kWh battery, but it uses a maximum of about 10.8 kWh (about 65% of its capacity, a slightly greater percentage on subsequent models), because the battery controls are set to charge to about 90% and discharge to about 25% of rated capacity. The 10.8 kWh gives the Chevy-Volt an electric range of about 38 miles on a normal day, say about 70 F; less on colder and warmer days, and less as the battery ages.

TESLA Model S: The TESLA Model S uses 75.9 kWh of its 85 kWh battery for rare, extremely long trips, so-called “range driving”, or 75.9/85 = 89% of rated capacity, and uses 67.4 kWh for maximum “normal driving” discharges, or 67.4/85 = 79% of rated capacity. Almost all people use much less than the maximum “normal driving” range, because they take short trips and charge their vehicles on a daily basis, thereby preserving battery life.
Batteries are not fully charged, nor fully discharged; i.e., there is a range of charge, and using large ranges shortens battery life. For example, a 30-mile commute consumes about 10 kWh. The minor 10/85 = 12% discharge of a TESLA Model S battery allows TESLA to offer an 8-y/unlimited-miles warranty, whereas the major 10/16.5 = 61% discharge of a Chevy-Volt battery requires GM to offer an 8-y/100,000-miles warranty to minimize warranty costs. That warranty is for manufacturing defects; it does NOT cover performance. According to GM, the battery is expected to have a performance loss of about 15% over its 8-y warranty life, and more beyond that 8-y life.

COST OF OWNING AND OPERATING A TESLA AND NISSAN LEAF VEHICLE

TESLA: Below is a quick way and a more accurate way to determine the cost per mile of owning and operating a TESLA car for 8 years.

Assumptions: 85 kWh battery; battery warranty 8 years, unlimited miles; $80,000 new, $15,000 at 8 years; driven 100,000 miles in 8 years; 0.30 kWh/mile at customer meter; 10% free, on-road charging, 90% at-home charging at 0.20 $/kWh.

Quick way cost is 70.4 c/mile; with ignored costs included, about 90 c/mile.
Annual payment for amortizing $80,000 at 3%, 8 y, is $11,396, or (8 x 11396 - 15000)/100000 = 76.2 c/mile.
More accurate way cost is 81.6 c/mile; with ignored costs included, about 90 c/mile.

Nissan Leaf: Below is a quick way and a more accurate way to determine the cost per mile of owning and operating a Nissan Leaf car for 8 years.

Assumptions: 24 kWh battery; battery warranty 8 years, 100,000 miles; $30,000 new, $5,500 at 8 years; driven 100,000 miles in 8 years; 0.30 kWh/mile at customer meter; no on-road charging, 100% at-home charging at 0.20 $/kWh.

Quick way cost is 30.5 c/mile; with ignored costs included, about 40 c/mile.
Annual payment for amortizing $30,000 at 3%, 8 y, is $4,274, or (8 x 4274 - 5500)/100000 = 28.7 c/mile.
More accurate way cost is 34.7 c/mile; with ignored costs included, about 40 c/mile.
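The quick-way and more-accurate-way figures above can be reproduced with a few lines of arithmetic. The function names and structure below are illustrative; the inputs (3%/y financing over 8 years, 100,000 miles, 0.30 kWh/mile, 0.20 $/kWh at-home charging) are the article's own assumptions.

```python
# Sketch reproducing the "quick way" and "more accurate way" costs per
# mile above, before the article's "ignored costs".

def annuity(principal, rate, years):
    """Annual payment that amortizes principal at rate over years."""
    return principal * rate / (1 - (1 + rate) ** -years)

def cost_per_mile(price_new, resale, miles, home_fraction, amortize=False):
    """Cost in c/mile. amortize=True is the 'more accurate way',
    financing the purchase at 3%/y over 8 years."""
    if amortize:
        ownership = 8 * annuity(price_new, 0.03, 8) - resale
    else:  # "quick way": simple depreciation, financing ignored
        ownership = price_new - resale
    energy = miles * 0.30 * 0.20 * home_fraction  # at-home charging cost
    return 100 * (ownership + energy) / miles

# TESLA Model S: 90% at-home charging, 10% free on-road charging.
print(round(cost_per_mile(80000, 15000, 100000, 0.9), 1))        # 70.4
print(round(cost_per_mile(80000, 15000, 100000, 0.9, True), 1))  # 81.6
# Nissan Leaf: 100% at-home charging.
print(round(cost_per_mile(30000, 5500, 100000, 1.0), 1))         # 30.5
print(round(cost_per_mile(30000, 5500, 100000, 1.0, True), 1))   # 34.7
```

The difference between the two methods is simply the financing cost that the quick way ignores.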
Ignored costs: The cost of financing and amortizing (for the “quick way”), PLUS any costs for O&M of the car and the at-home charger, PLUS taxes, license and registration, PLUS any capacity degradation due to cycling, are ignored. Capacity degradation means it takes more energy to charge and discharge the battery, a shorter range for a given battery discharge, and a less lively throttle response during acceleration and uphill driving.

NOTE: Assuming a new owner buys an 8-y-old TESLA for $15,000, he likely would install a new battery for about 85 kWh x $125/kWh = $10,625, plus labor and materials, and disposal of the old battery. The price of a new TESLA likely would still be about $80,000 eight years from now, because increases in car costs likely would be offset by decreases in battery costs. If electric rates are high, and gasoline prices are low, EVs are not a good deal.

Batteries have a lesser RATE OF DISCHARGE to the DC motor of an EV on colder days, say 10 F, than on normal days, say 70 F. This causes the EV to act sluggish, especially with snow on the road or going uphill, and causes it to have a lesser range. Also, in cold climates, cars need a cabin heater, heated seats and heated outside mirrors.

Thermal management of lithium-ion battery systems is critical for electric vehicle performance. For example, an active system may be required to heat or chill a liquid before pumping it through the battery system to regulate the temperature throughout the system. On hot days, the chilled liquid absorbs heat from the batteries and rejects it through a radiator before going through the chiller again. On cold days, the heated liquid supplies heat to the batteries to ensure efficient charging, and to maintain a proper rate of discharge during driving.

TESLA markets a wall-hung, 7 kWh Powerwall battery @ $3,000, and a floor-mounted, 100 kWh Powerpack @ $25,000, or $250/kWh*, all of which use lithium-ion cells made by Panasonic. The 7 kWh unit is designed for daily charging/discharging.
Up to nine units can be connected, for a capacity of 63 kWh.

* TESLA utility-scale, turnkey, Powerpack battery systems would be about $400/kWh.

The INSTALLED cost of a 7 kWh unit is $3,000, factory FOB, + S & H + contractor markup of about 10 percent + $2,000 for an AC to DC inverter + misc. hardware + installation by 2 electricians, say 16 hours @ $60/h, for a total of about $6,500, or $929/kWh. TESLA offers a 10-y warranty for manufacturing defects; it does NOT cover performance. TESLA estimates 10% degradation in performance by year 10.

The battery-charging rate is usually defined as the battery capacity divided by a number. Rapid charging, such as C/1 or C/2, should be avoided, as it overheats the battery and reduces battery life. Normal charging, such as C/3 or C/4, preserves battery life.

– For a TESLA Model S, C/3 = 85/3 = 28.3 kWh per hour. About 8.8 kWh would be charged in one hour at 40 A and 220 V, or 28.3 kWh in 1.6 hours at 80 A and 220 V. The 28.3 kWh would enable a moderate-speed commute of about 28.3/0.3 = 94 miles. The energy from the user’s metered outlet would be about 1.1 x 28.3 = 31 kWh.

– If the energy were from a PV solar system, a 5 kW system would deliver about 3.75 kWh for an hour around noontime on a sunny day, of which 7/3 = 2.33 kWh could be charged into a 7 kWh unit.

The below four examples assume the TESLA 7 kWh Powerwall batteries actually store 10 kWh, of which 70%, or 7 kWh DC, is available once every day for 10 years, i.e., 3,650 cycles. If 7.656 kWh AC is fed via an AC to DC inverter into the battery, 7 kWh DC is charged (charging efficiency 0.914). If 7 kWh DC is discharged via a DC to AC inverter, 6.400 kWh AC is the usable energy (discharge efficiency 0.914). The AC-to-AC efficiency, 0.914 x 0.914 = 0.836, likely is less in the real world, due to other system energy losses and degradation of performance over time. An AC-to-AC efficiency of 0.80 or less would be more realistic. See URL.

Example No.
1, Store Daytime Solar Energy for Use at Night: The economics of this scheme is based on the unrealistic assumption that PV solar energy would be available to charge the batteries, to the maximum extent possible, each and every day for 10 years, and that all of that energy would be used at night. This would yield the worst-case economics. The below calculations are based on that assumption.

NOTE: The output of a solar PV system could be split: DC, via a charge controller, to the batteries; DC to an oversized hot water storage tank and other DC users in the house; and the remaining DC, via the solar PV system inverter, as AC to the house and the grid.

Assumptions: NO performance loss over the 10-y warranty life; one cycle per day, i.e., 3,650 cycles; daytime solar energy generated by the homeowner could have been sold to the utility at 30 c/kWh; homeowner avoids buying nighttime energy from the utility at 20 c/kWh; usable energy 6.400 kWh (discharge eff. = 6.4/7 = 0.914); charging energy 7.656 kWh (charging eff. = 0.914).

A quick way to estimate the minimum cost of storage with a 7 kWh unit: $6500/3650 = $1.78/d; dividing by the retrieved energy, 1.78/6.400 = 27.8 c/kWh; with ignored costs, the actual storage cost would be about 35 c/kWh.

Ignored costs: The cost of financing and amortizing, PLUS any costs for O&M and disposal, PLUS any capacity degradation due to cycling, PLUS other system losses, PLUS efficiency reductions from part-load operation of AC/DC and DC/AC inverters, are ignored.

Conclusion: Storing a quantity of high-value, on-peak solar energy during the day, to retrieve a smaller quantity of low-value, off-peak energy during the night, is not smart, unless the rate differential and/or subsidies are extremely high.

NOTE: For people living “off-the-grid”, it is essential to store solar energy during the day for use at night.

Example No.
2, Store Nighttime Grid Energy for Use During Daytime: The economics of this scheme is based on the unrealistic assumption that the batteries would be charged from the grid at night, to the maximum extent possible, each and every day for 10 years, and that all of that energy would be used during the day. The below calculations are based on that assumption.

Assumptions: NO performance loss over the Powerwall 10-y warranty life; one cycle per day, i.e., 3,650 cycles; off-peak cost of charging is 20 c/kWh; on-peak avoided cost is 30 c/kWh; usable energy 6.400 kWh (discharge eff. = 6.4/7 = 0.914); charging energy 7.656 kWh (charging eff. = 0.914).

A quick way to estimate the minimum cost of storage with a 7 kWh unit: $6500/3650 = $1.78/d; dividing by the retrieved energy, 1.78/6.400 = 27.8 c/kWh; with ignored costs, the actual storage cost would be about 35 c/kWh.

Ignored costs: The cost of financing and amortizing, PLUS any costs for O&M and disposal, PLUS any capacity degradation due to cycling, PLUS other system losses, PLUS efficiency reductions from part-load operation of AC/DC and DC/AC inverters, are ignored.

Conclusion: Storing a quantity of low-value, off-peak grid energy during the night, to retrieve a smaller quantity of high-value, on-peak energy during the day, is not smart, unless the rate differential and/or subsidies are extremely high.

Some utilities plan to install multiples of 100 kWh battery systems (utility-owned or customer-owned), and plan to distribute hundreds of 7 kWh battery systems on their distribution grids, to minimize grid disturbances due to PV systems and for peak shifting. These utilities may provide the wall-mounted battery systems to customers, whether they own a PV system or not. Battery systems on customer premises are called “before-the-meter” systems.
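The storage arithmetic used in the two examples above can be scripted in a few lines, using the article's own quantities (7.656 kWh AC in, 6.400 kWh AC out, $6,500 installed, one cycle per day for 10 years):

```python
# Recap of the Powerwall round-trip and quick-way storage-cost figures.

ac_in = 7.656                    # kWh AC fed in to charge 7 kWh DC
ac_out = 6.400                   # kWh usable AC after discharge
ac_to_ac = ac_out / ac_in        # ~0.836 round-trip AC-to-AC efficiency

cycles = 10 * 365                # 3,650 daily cycles over 10 years
installed_cost = 6500            # $ turnkey, installed
cents_per_kwh = 100 * (installed_cost / cycles) / ac_out  # ~27.8 c/kWh

print(round(ac_to_ac, 3), round(cents_per_kwh, 1))
```

Since the 27.8 c/kWh minimum already exceeds the 10 c/kWh on-peak/off-peak rate differential assumed in the examples, the conclusion follows before any of the ignored costs are even counted.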
For example, Green Mountain Power (GMP), a utility in Vermont, offers the following options:

– Customers can lease a 7 kWh TESLA Powerwall unit for $37.50 a month with no upfront cost, but by choosing this option they must allow GMP to access the battery to offset energy demand during peak hours. See below Item 1.

– Customers can purchase a 7 kWh unit for $6,500 (and be responsible for any O&M and disposal costs). The customer can choose to a) share the access with GMP and receive $31.76 in monthly credit, or b) not share, and use the system for backup and to offset his on-peak usage. See below Items 2 and 3.

The economics of this scheme is based on the unrealistic assumption that the batteries would be charged from the grid at night, to the maximum extent possible, each and every day for 10 years, and that all of that energy would be used during the day. This yields the best-case economics. The below calculations are based on that assumption.

1) GMP Owns, GMP Peak Shaving: Assuming GMP had access every day, and the battery is charged off-peak, the 10-y customer cost would be 37.50 x 120 = $4,500 in lease payments, to enable GMP to retrieve from storage 6.400 kWh x 3650 = 23,360 kWh of on-peak energy over 10 years. The economics of this scheme is all within GMP. Presumably, the customer still would have access for about 2 hours of backup during an outage.

2) Customer Owns, GMP Peak Shaving: Assuming GMP had access every day, the 10-y customer cost would be $6500 - $31.76 x 120 = $2,689, to enable GMP to retrieve from storage 23,360 kWh of on-peak energy over 10 years. The economics of this scheme is all within GMP. Presumably, the customer still would have access for about 2 hours of backup during an outage.

3) Customer Owns, Customer Peak Shaving: At present, the GMP customer rate is the same on-peak and off-peak. However, this may change. Accordingly, for this case, rates were assumed for illustration purposes.
See “Using Batteries to Store Nighttime Grid Energy for Use During the Day” regarding offsetting customer on-peak usage. The customer would have about 2 hours of backup during an outage. It is assumed the customer would not sell all of his solar energy at the current feed-in tariff of 19 cent/kWh, but would store it for use at night, thereby avoiding buying grid energy at the current 20 cent/kWh.

A quick way to estimate the minimum cost of storage with a 7 kWh unit: $6500/3650 = $1.78/d; dividing by the retrieved energy, 1.78/6.400 = 27.8 c/kWh; with ignored costs, the actual storage cost would be about 35 c/kWh.

Ignored costs: In cases 2 and 3, the cost of customer financing and amortizing, PLUS any costs for O&M and disposal, PLUS any capacity degradation due to cycling, PLUS other system losses, PLUS efficiency reductions from part-load operation of AC/DC and DC/AC inverters, are ignored.

Conclusion: There is no way these three “before-the-meter” schemes would ever pay for a Vermont homeowner customer, unless the on-peak/off-peak rate differential and/or government subsidies were very high.

In Germany, at the start of the ENERGIEWENDE in 2000, household electric rates were about 20 eurocent/kWh and PV solar feed-in tariffs were about 55 eurocent/kWh. German households reacted to this great deal by loading up their roofs with solar systems. About 7,400, 7,500 and 7,600 MW of solar systems were installed in 2010, 2011 and 2012, respectively. Since then, household rates have increased to about 30 eurocent/kWh (the second highest in Europe, after Denmark), due to various increases in taxes, surcharges and fees; PV solar feed-in tariffs have decreased to about 12 eurocent/kWh; and installations decreased to about 39,698 - 38,236 = 1,462 MW in 2015, despite much lower system costs/kW.
As it no longer pays to sell solar energy to the utility, some households have installed battery systems to use that energy themselves, which, as shown above, likely does not pay; but households install the battery systems anyway, because cash subsidies are at least 30% of the turnkey system cost, and because they may be somewhat ignorant of the real economics.

The economics of this scheme is based on the unrealistic assumption that PV solar energy would be available to charge the batteries, to the maximum extent possible, each and every day for 10 years, and that all of that energy would be used at night. The below calculations are based on that assumption.

NOTE: The output of a solar PV system could be split: DC, via a charge controller, to the batteries; DC to an oversized hot water storage tank and other DC users in the house; and the remaining DC, via the solar PV system inverter, as AC to the house and the grid.

Assumptions: NO performance loss over the 10-y warranty life; one cycle per day, i.e., 3,650 cycles; daytime solar energy generated by the homeowner could have been sold to the utility at 12 eurocent/kWh; homeowner avoids buying nighttime energy from the utility at 30 eurocent/kWh; usable energy 6.400 kWh (discharge eff. = 6.4/7 = 0.914); charging energy 7.656 kWh (charging eff. = 0.914).

The German turnkey cost of the TESLA 7 kWh unit likely would be about 25% higher than in the US, due to shipping, import duties, labor rates, value added taxes, etc., which would be offset by the 30% cash subsidy; i.e., 6500 x 1.25 = 8,125 euro, less 30% = 5,688 euro, or 1.558 euro/d, or 1.558/6.400 = 0.243 euro/kWh; with ignored costs, the actual storage cost would be about 0.30 euro/kWh.

Ignored costs: The cost of financing and amortizing, PLUS any costs for O&M and disposal, PLUS any capacity degradation due to cycling, PLUS other system losses, PLUS efficiency reductions from part-load operation of AC/DC and DC/AC inverters, are ignored.
Conclusion: The rate differential would need to be even higher to offset the remaining cost, and/or subsidies would need to be increased.

NOTE: For people living “off-the-grid”, it is essential to store solar energy during the day for use at night.

If a battery system is used for backup, it would need to have sufficient capacity to provide energy, kWh, during an outage, which may last from 1 to 36 hours. For a freestanding house using about 500 kWh per month, this may be up to 15 kWh, assuming some appliances remain turned off during the outage. That means several Powerwall units would be required. In that case, it would be much more cost-effective to have a 3 - 5 kW, propane-fired generator.

Utilities aim to reduce purchases of on-peak energy from the grid during peak demands. One way is by starting up diesel generators and open-cycle gas turbine generators, OCGTs, for a few hours each day. The levelized cost of energy, LCOE, of 50 MW OCGT peaking plants is about 19 – 22 c/kWh over their 30-year lives. The LCOE varies with the cost of capital, operating hours/y, fixed and variable O&M, efficiency, and gas prices/million Btu. As the average on-peak wholesale energy price over the next 30 years likely would be less than 19 – 22 c/kWh, the peaking plant would be operated at a loss, which is common for peaking plants.

Assumptions: The capital cost of a 50 MW OCGT peaking plant is about $50 million; 50% is private capital requiring a return at 10%/y; 50% is borrowed at 5%/y. Estimates of the major annual costs are as follows:

* At the current price of gas of less than $2/million Btu, the LCOE would be 17.57 c/kWh.

SoCal Edison is planning a 32-MWh (8 MW for 4 h), lithium-ion energy storage project in a region with a potential 4,500 MW of wind turbines. LG Chem, a South Korean company, is providing the batteries. ABB, a Swiss company, is providing the balance of plant. Project capital cost is $53.5 million (including $25 million as a cash subsidy from the USDOE), or $1,672/kWh.
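The project's $/kWh figure above follows directly from its stated cost and capacity; a minimal check:

```python
# SoCal Edison project above: $53.5 million for 32 MWh of storage.

project_cost = 53.5e6        # $ (includes the $25 million USDOE subsidy)
capacity_kwh = 32 * 1000     # 32 MWh expressed in kWh
cost_per_kwh = project_cost / capacity_kwh

print(round(cost_per_kwh))   # ~1672 $/kWh
```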
For comparison, the below project capital cost of a TESLA-Powerpack-based system is about $400/kWh, about 4 times less.

This URL has extensive detail regarding 12 case studies of stabilizing the grid with battery systems. Case Study No. 3 shows a 21 MW wind turbine system in Maui, Hawaii, needs an 11 MW lithium-ion battery system, capable of delivering 300 kWh for 4 hours, for balancing the wind energy. Capital cost is about $11 million. Estimated cycles are 8,000, and life is 20 years.

Project funds are $91 million, government, + $49 million, private, = $140 million. The project’s infrastructure includes an energy storage system; a 9-mile, 34.5-kilovolt powerline; an interconnection substation; a microwave communication tower; and a construction access road. Each generator pad requires about 2.4 acres of cleared area. The entire project covers 1,466 acres.

From the above, it is clear the turnkey installed cost, $/kWh, of a battery system based on TESLA’s 100 kWh Powerpack energy storage units would be several times less than that of any competitor.

Another way of reducing utility purchases of on-peak energy from the grid during peak demands is by means of battery systems. This approach is in its infancy, as battery prices per kWh have only recently decreased enough to make it more financially viable, compared with traditional peaking plants. Battery systems can meet peak demands with lower emissions than OCGTs, by charging during low-demand periods and discharging during peak-demand periods, which displaces the need to burn natural gas in a peaking plant. Battery systems can also perform regulating, filling-in and balancing services when not in peaking mode. These services are much less stressful, as they use a smaller range of the system capacity.

The LCOE of battery systems depends on the difference between wholesale on-peak and off-peak rates, c/kWh, the useful service life, y, the degradation of the batteries, %/y, and the range of charge/discharge, %.
As a minimum, the electric rate difference must be large enough to offset the “round-trip” losses of charging, discharging, and AC to DC and DC to AC conversion, which may be up to 18.6% of the off-peak energy fed into the battery system. The real-world loss likely would be at least 20%, due to other system losses.

Below is calculated the LCOE of a TESLA Powerpack-based, peak-shaving system, using the following assumptions:

– The battery system is to provide 100 MWh in 2 hours.
– Replacement battery cost in year 11 and year 21: about 50% of $250/kWh = $125/kWh.
– Removal, disposal, and installation of new batteries in year 11 and year 21: about 15% of new battery cost, or $37.5/kWh.

NOTE: About 100 MWh/0.80 = 125 MWh needs to be charged into the battery to recover 100 MWh, for a loss of 25 MWh/d. The annual cost of that loss is 365 x 25 x 75 = $684,375, at an assumed average wholesale price of $75/MWh over the next 30 years.

The battery capacity would need to be 100/(0.80 x 0.79 x 0.90) = 176 MWh.
The battery capital cost would be 176 x 1000 x 250 = $44.0 million.
The capital cost of balance of plant, BOP, would be about $24.0 million.
50% is private capital requiring a return at 10%/y; 50% is borrowed at 5%/y.
The capital cost of the turnkey battery SYSTEM would be about $68 million, or $387/kWh.

Estimates of the major annual costs are as follows:

Private amortizing of removal, disposal, and installation of new batteries, at 10%: $279,879
Borrowed amortizing of removal, disposal, and installation of new batteries, at 5%: $213,443

* This cost is only for batteries; not included are the cost of removing and disposing of the old batteries, installing the new ones, and any BOP upgrades.
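The sizing steps above can be sketched as a short script, using the article's own factors (0.80 AC-to-AC efficiency, 0.79 usable depth of discharge, 0.90 end-of-life capacity, $250/kWh batteries, $24 million BOP, $75/MWh average wholesale price):

```python
# Sizing the Powerpack-based peak-shaving system described above.

delivered_mwh = 100.0                                # MWh delivered in 2 hours
capacity_mwh = delivered_mwh / (0.80 * 0.79 * 0.90)  # ~176 MWh nameplate
battery_cost = capacity_mwh * 1000 * 250             # ~$44.0 million
bop_cost = 24.0e6                                    # balance of plant
system_cost_per_kwh = (battery_cost + bop_cost) / (capacity_mwh * 1000)

charged_mwh = delivered_mwh / 0.80                   # 125 MWh charged daily
annual_loss_cost = 365 * (charged_mwh - delivered_mwh) * 75  # at $75/MWh

# ~176 MWh, ~$387/kWh, ~$684,375/y of round-trip losses
print(round(capacity_mwh), round(system_cost_per_kwh), round(annual_loss_cost))
```

Note how the usable-depth and end-of-life factors roughly double the nameplate capacity, and hence the battery capital cost, relative to the 100 MWh actually delivered.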
Even though battery systems can perform other services when not in peak-shaving mode, the LCOE of a battery system (operating life of 10 to at most 15 years, versus about 30 years for OCGT peaking plants) would need to become about 20 c/kWh or less to cause utilities to replace older OCGT peaking plants (which likely are already paid for) with new battery systems, unless it is mandated by law, and heavily subsidized.

EXAMPLE OF AN ENERGY INTENSIVE INDUSTRY USING ENERGY FROM CSPs

Over the past 60 years, electric arc furnaces, EAFs, have increased their US production to about 55.2 million metric ton, or 62.6% of total steel production, in 2013. EAFs have capacities up to about 400 metric ton of steel per hour. The below calculation is for a 300-metric-ton unit.

At 400 kWh/metric ton, a 300-metric-ton industrial EAF requires about 120 MWh of energy to melt the steel, with a “power-on” time (the time steel is being melted with an arc) of about 37 minutes and a “power-off” time of about 20 minutes, for a total tap-to-tap time of about 57 minutes, to produce 300 metric ton of steel. At a capacity factor of 0.55, the EAF steel production would be 300 x 8760 x 0.55 = 1,445,400 metric ton/y, and energy consumption would be 120 x 8760 x 0.55 = 578,160 MWh/y, for 24/7/365 operation. The entire EAF mill has other energy inputs, which are ignored.

Electric arc steelmaking is economical where there is plentiful electricity and a well-developed electrical grid. In many locations, EAF mills operate during off-peak hours, when utilities have surplus power generating capacity and the price of electricity is less.

If the EAF mill were located in or near the US southwest, and a CSP plant with at least 10 hours of storage were to provide energy for continuous operation (capacity factor 0.48, at grid feed-in point), the required minimum CSP plant capacity would be 120 MWh/(37/60 h) = 195 MW. Such a CSP plant would require about 10 acre/MW, and cost about $9 million/MW.
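The EAF arithmetic above can be reproduced in a few lines (the variable names are illustrative; the inputs are the article's):

```python
# EAF sizing: 300 metric ton per heat at 400 kWh/t, ~37-minute power-on
# time, and a 0.55 capacity factor for 24/7/365 operation.

heat_tons = 300
kwh_per_ton = 400
heat_energy_mwh = heat_tons * kwh_per_ton / 1000           # 120 MWh per heat

capacity_factor = 0.55
annual_steel_tons = heat_tons * 8760 * capacity_factor     # ~1,445,400 t/y
annual_energy_mwh = heat_energy_mwh * 8760 * capacity_factor  # ~578,160 MWh/y

power_on_hours = 37 / 60
min_csp_mw = heat_energy_mwh / power_on_hours              # ~195 MW minimum

print(round(annual_steel_tons), round(annual_energy_mwh), round(min_csp_mw))
```

The minimum CSP capacity is set by the melt energy divided by the power-on time, since the plant must deliver 120 MWh within each 37-minute melting window.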
CSP energy production would be 194.6 x 8760 x 0.48 = 818,231 MWh/y, of which the EAF plant would use 578,160 MWh/y, and 240,071 MWh/y would be fed to the grid.

NOTE: The above 55.2 million metric ton of steel would need the equivalent of 55.2 million/1.445 million = 38 such CSP plants.

Part II of this article deals with various aspects of electrifying the US economy and moving away from fossil fuels. Some aspects of wind and solar energy are described. The importance of rotational inertia for grid stability is mentioned. Examples of energy production and capital cost of offshore wind energy in the UK, and of CSP with storage in Morocco, are provided. Examples of energy production and capital cost of large-scale wind energy in the Great Plains and CSP energy in the US southwest are provided. An example of a viable US energy mix, without fossil energy, is provided, including capacities and estimated capital costs.

WIND AND SOLAR ENERGY DEPEND ON OTHER GENERATORS AND ENERGY STORAGE

Wind and PV solar energy are weather-dependent, variable and intermittent; i.e., they are not steady, high-quality, dispatchable, 24/7/365 energy sources. In New England, Germany, etc.:

– Wind energy is near zero at least 25% of the hours of the year (it takes a wind speed of about 7 mph to start the rotors), and minimal most early mornings and most late afternoons. About 70% of annual wind energy is generated during October – April, and about 30% during May – September.

– PV solar energy is zero about 65% of the hours of the year, minimal early mornings and late afternoons, minimal much of the winter, and near zero with snow and ice on the panels. CSP with 10 hours of storage provides steady, high-quality, dispatchable, 24/7/365 energy.

– New England has poor winter conditions for PV solar energy, due to snow, icing and clouds. Monthly min/max PV solar ratios are about 1/4. On a daily basis, the worst winter day is as low as 1/25 of the best summer day.
– Often, both wind and PV solar are simultaneously at near-zero levels during many hours of the year. See URL, click on Renewables. In the Fuel Mix Chart you see the instantaneous wind and PV solar percentages.

– Germany has very poor winter conditions for PV solar energy, due to fog, snow, icing and clouds. Monthly min/max PV solar ratios were 1/14.9, 1/12.4, and 1/8.8 for 2013, 2014, and 2015, respectively.

That means, in New England, Germany, etc., without adequate and viable energy storage systems, almost ALL other existing generators must be kept in good running order, staffed, fueled, and ready to provide steady, high-quality, dispatchable, 24/7/365 energy. At higher wind energy percentages, a greater capacity of flexible generators would be required to operate at part load, and to ramp up and down, which is inefficient (more Btu/kWh, more CO2/kWh*), to provide energy for peaking, filling-in and balancing the variable PV solar and wind energy. See the below Synchronous Rotational Inertia and Grid Stability section.

* The CO2 reduction effectiveness of wind energy in Ireland, with an island grid, is about 52.6% at 17% annual wind energy on the grid. Peaking, filling-in and balancing of the wind energy is performed mostly with gas-fired, combined-cycle gas turbine generators, as it would be in New England, unless adequate-capacity HVDC lines to Canada were built to enable Hydro-Quebec to perform this service with near-CO2-free hydro energy.

Output Shortfall Due to System and Field Conditions in Germany: Below are two days with record PV solar output. The table shows a significant reduction in net output compared with installed capacity.

– System losses are built in, and due to conversion from DC to AC; about 17.5%.
– Other losses are due to system and field conditions, such as component and panel aging; panels that are dirty, shaded, incorrectly angled, or not south-facing; insolation (altitude, distance from equator, sun position); and weather conditions (fog, snow, ice, cloudiness, etc.). The real and reactive power, frequency, and voltage of the output of wind turbine plants are variable. These very short-term variations are due to a blade passing the mast*, about once per second, and to the varying wind speeds and directions across the plane swept by the rotor. A plant with multiple wind turbines would have a “fuzzy”, low-quality, unsteady output. These short-term variations are separate from those due to the weather, and usually need to be reduced, such as by reactive power compensation with synchronous-condenser systems, before feeding into a grid, especially “weak” grids, to avoid excessive grid disturbances. The Lowell Mountain wind turbine plant in Vermont is required to have a $10.5 million, 62-ton, synchronous-condenser system to minimize disruptions of the rural high voltage grid. * The passage of a rotor blade past the mast creates a burst of audible and inaudible sound of various frequencies; the base frequency is about 1 Hz, similar to a person’s heartbeat, and the harmonics, at 2, 4 and 8 Hz, are similar to the natural frequencies of other human organs. Infrasound interferes with the body’s natural biorhythms, and likely causes adverse health impacts on nearby people and animals, including DNA damage to nearby pregnant animals, and their fetuses and newborn offspring. Because infrasound travels long distances, a buffer zone of at least one mile would be required to reduce these adverse impacts on people. However, roaming animals would continue to be exposed. See wcfn.org URL. EXAMPLE OF OFFSHORE WIND ENERGY IN THE UK The UK is planning to build a 1,200 MW wind turbine plant, 75 miles offshore, in the North Sea. 
It will have 174 wind turbines, at 6.9 MW each, 623-ft tall. The capital cost will be $5.429 billion, or $4,524,000/MW, excluding subsidies and financing and amortization costs. The production would be about 1200 x 8760 x 0.45 = 4,730,400 MWh/y. The average output would be 0.45 x 1200 = 540 MW, but the output could range from near zero to about 1,100 MW. Energy will be sold at 20.3 c/kWh, whereas UK wholesale prices are 5.1 c/kWh. The difference, totaling $6.1 billion over the 25-year life, will be charged to users as a surcharge on their electric bills. Europe HAS to resort to such expensive wind energy production systems, because it has few onshore areas with adequate wind, and those areas are densely populated. The LCOE of such systems would significantly increase as high-cost renewable energy is used for owning, operating and maintaining them, i.e., as it replaces low-cost fossil energy. It would be extremely unwise for the US to have such expensive build-outs of wind turbine plants off the Atlantic coast, which would produce heavily subsidized energy at 20 – 25 c/kWh, because the capital cost of Great Plains build-outs would be less than $2 million/MW, and they would produce much greater quantities of energy at about 6 c/kWh, with minimal subsidies. EXAMPLES OF CSP ENERGY IN MOROCCO AND US SOUTHWEST Morocco: In November 2009 Morocco announced it would install 2000 MW of solar capacity by 2020; estimated capital cost $9 billion. The Moroccan Agency for Solar Energy (MASEN), a public-private venture, has invited expressions of interest in the design, construction, operation and maintenance, and financing of the first of five solar power stations. After completion, the 2000 MW solar project will provide 18% of Morocco’s annual electricity generation. Morocco, the only African country to have a power cable link to Europe, aims to benefit from the 400 billion euro ($440 billion) expected to come from the ambitious pan-continental Desertec Industrial Initiative. 
The capital cost of the first solar power station (510 MW of CSP plants, plus a 70 MW PV solar plant; total land area 6,178 acres, 10.7 acres/MW) is estimated at about $3.2 billion for the CSP plants (about $6.3 million/MW), plus about $250 million for the PV solar plant. Financing is about $1.2 billion at near-zero interest from the World Bank, et al., and about $2.0 billion from private sources. Together with accelerated depreciation, which reduces investors’ taxes, this lowers the effective cost of capital for the project to about 2 – 3%, enabling the energy to be sold at reduced cost/kWh under 25-y power purchase agreements, PPAs. Noor 1, commissioned February 2016; 500,000 single-axis, tracking parabolic mirrors; output 160 MW gross, 143 MW to grid; 3-h molten salt storage; fossil-fired boiler plant for CSP start-up and supplementary energy, as needed; wet cooling with water from a nearby reservoir; Dowtherm A at 293 C into solar field, 393 C out of solar field; capital cost $1.15 b; energy will be sold at 18.9 c/kWh. Noor 2; single-axis, tracking parabolic mirrors; output 200 MW, estimated 180 MW to grid; 7-h molten salt storage; dry cooling; energy will be sold at 14 c/kWh. Noor 3; mirrors focused on a tower; output 150 MW, estimated 135 MW to grid; 7 – 8 h molten salt storage; dry cooling; energy will be sold at 15 c/kWh. This configuration was included for comparison purposes. Noor 4; PV solar systems; output 70 MW. This configuration is included for comparison purposes, because the cost of utility-scale PV systems has declined to enable energy generation at an LCOE of less than 50% of CSP!! US Southwest: The Crescent Dunes CSP plant, tower-type, is located in the US southwest. Capacity: 110 MW; 10-h storage is required for continuous operation. Estimated production: 500,000 MWh/y of steady (voltage, frequency, phase-angle), dispatchable energy. CF = 500,000/(8,760 x 110) = 52%; a more likely CF would be 45 to 50 percent. 
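The capacity-factor arithmetic above can be reproduced in a few lines of Python (a sketch; `capacity_factor` is an illustrative helper, and the 110 MW and 500,000 MWh/y figures are taken from this article):

```python
# Capacity factor (CF) = actual annual output / maximum possible annual output.
HOURS_PER_YEAR = 8760

def capacity_factor(annual_mwh, capacity_mw):
    """Fraction of the year's maximum possible output actually produced."""
    return annual_mwh / (capacity_mw * HOURS_PER_YEAR)

# Crescent Dunes figures from the article: 110 MW nameplate, ~500,000 MWh/y estimated.
cf = capacity_factor(annual_mwh=500_000, capacity_mw=110)
print(f"CF = {cf:.1%}")  # ~51.9%, rounded to 52% in the article
```

The same helper applied to the offshore wind example (4,730,400 MWh/y, 1,200 MW) returns the 0.45 CF used there.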
Capital cost: $1.6 billion, or $14,545/kW, a very high cost. A quick way to calculate MINIMUM LCOE over 30 years = $1,600,000,000/(500,000 MWh x 30 years) = 10.7 c/kWh. If O&M, insurance, taxes, replacements, etc., and financing and paying interest on borrowed money, and owner’s return on investment, over 30 years are included, the likely LCOE would be about 16 – 18 c/kWh, less with subsidies, cash grants, tax benefits due to depreciation, etc. Remember, all of this is STANDARD, WELL-DEVELOPED technology, i.e., no cost-reducing breakthroughs can be expected. US ADVANTAGES REGARDING WIND AND SOLAR ENERGY Wind Energy: In the future, the Great Plains, from the Canadian to Mexican borders, could become the Saudi Arabia of wind energy. There could be at least 300,000 wind turbines with tall masts (higher capacity factor), at 3 MW each, at a turnkey, installed capital cost of about 300,000 x 3 x $2 million/MW = $1,800 billion, producing about 300,000 x 3 x 8760 x 0.35 = 2,759 TWh/y. The wind turbines would be connected with HVDC transmission lines to population centers in the eastern and western US. The annual average CF would be even greater if 120-meter masts became commonplace in the future. Solar Energy: Similarly, the US southwest, with thousands of square miles of flat, uninhabited, desert-like terrain, could become the Saudi Arabia of solar energy. There could be at least 10,000 square miles of CSP plants with at least 10 hours of high-temperature, thermal storage for 24-h operation, i.e., 10,000 x 640 acres/10 acres per MW = 640,000 MW, at a turnkey, installed capital cost of about 640,000 x $9 million/MW = $5,760 billion, producing 640,000 x 8760 x 0.48 = 2,691 TWh/y. CSP with 10-h storage would provide steady (voltage, frequency, phase-angle) energy, and it is dispatchable, a major improvement over PV solar and wind. During most of the daytime hours, energy would be stored in excess of what is needed to run the plant at a high percent of rated output. 
After the sun goes down, the plant would be run at 60% of rated output, or less, to ensure there is enough thermal energy left over for the next early morning. Hopefully, the sun will shine and the cycle repeats. If not, nuclear has to take up the slack, assuming fossil is on the way out. The CSP plants would be connected with HVDC transmission lines to population centers in the eastern and western US. Those plants would provide a major part of the US electrical energy requirements, plus a major part of the peaking, filling-in and balancing of variable, intermittent wind and PV solar energy, thereby reducing the need for expensive energy storage systems. Because the capital costs of PV solar systems have significantly declined during the past 5 years, PV solar systems with 10-h thermal storage in the US southwest likely would have a lower capital cost/MW, and likely a lower LCOE, than equivalent CSP systems with 10-h thermal storage. The DC energy of the PV solar systems would electrically heat the stored liquid. Europe has far fewer such natural advantages, because the windiest area around the North Sea would not produce sufficient wind energy, and almost all CSP plants would need to be located in the Sahara Desert, which would require protection from terrorists. In the event of a simultaneous multi-day partial wind lull in the Great Plains and a multi-day partial overcast condition in the US southwest, significant wind and solar energy, TWh, would not be generated. The energy production shortfall is estimated at (7.373 CSP + 3.511 PV + 7.560 wind)/2 = 9.222 TWh/d. This energy shortfall could be offset by a combination of a build-out of bio-synthetic fuel production and storage systems for fueling 60% efficient combined cycle gas turbine plants, PLUS electrical demand management. With additional transmission capacity, the US northeast could import additional energy from hydro plants in Canada via HVDC lines. 
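The shortfall estimate above is simply an average of the daily outputs at risk; as a sketch (the TWh/d figures are this article's, and halving them represents the "partial" lull/overcast assumption):

```python
# Estimated daily production under normal conditions, TWh/d (article's figures).
daily_twh = {"CSP": 7.373, "PV solar": 3.511, "Wind": 7.560}

# A "partial" multi-day lull/overcast is assumed to cut each source's output in half.
shortfall_twh_per_day = sum(daily_twh.values()) / 2
print(f"Estimated shortfall: {shortfall_twh_per_day:.3f} TWh/d")  # 9.222 TWh/d
```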
The capacity of the CCGTs would be 2000 TWh/(8760 x 0.80 capacity factor) = 285,388 MW, and the capital cost would be $314 billion, at $1.1 million/MW. The daily average production of the CCGTs would be 5.479 TWh/d of steady, high-quality, dispatchable, 24/7/365 energy. The bio-synthetic fuel production and storage systems capacity would need to be sufficient for at most one month of continuous operation, so storage could be drawn down during at least 2 closely spaced weather events, and be built up and maintained full during other times. The curtailment by means of demand management would be 9.222 – 5.479 = 3.742 TWh/d, about 3.742/33.686 = 11.1% of total daily generation. During a significant snowstorm or hurricane, businesses usually close, millions of people stay home, blackouts may occur, and energy consumption is reduced. As part of demand management, this form of temporary curtailment could become “business as usual” during significant wind lulls and overcast conditions. Non-essential activities, such as operating casinos in Las Vegas, could be curtailed, which would enable much of the Hoover Dam energy to be diverted to the rest of the US. National airline travel and heavy-duty truck travel could be curtailed to preserve synthetic fuels. National electric rates could be temporarily increased to 3 or 4 times normal to curtail consumption. Nationwide supply and demand management would not be possible without centralized management of the entire US grid. An essential element of such management would be a nationwide HVDC grid. The outputs of wind and solar plants can be converted to a high voltage DC current (eliminating the above-mentioned power, frequency and voltage variation issues encountered with AC transmission lines) and sent, at near the speed of light, via the nationwide HVDC overlay grid. 
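The CCGT sizing and curtailment arithmetic above can be checked as follows (a sketch; the 2,000 TWh/y, 0.80 capacity factor, $1.1 million/MW, and 9.222 TWh/d shortfall figures are this article's assumptions):

```python
HOURS_PER_YEAR = 8760

annual_twh = 2000          # TWh/y from bio-synthetic-fueled CCGT plants (article assumption)
cf = 0.80                  # assumed CCGT capacity factor
cost_per_mw = 1.1e6        # $/MW, turnkey capital cost

# Required capacity: convert TWh/y to MWh/y, divide by effective full-load hours.
capacity_mw = annual_twh * 1e6 / (HOURS_PER_YEAR * cf)
capital_usd = capacity_mw * cost_per_mw

daily_ccgt_twh = annual_twh / 365               # average CCGT production, TWh/d
shortfall_twh = 9.222                           # lull/overcast shortfall, TWh/d
curtailed_twh = shortfall_twh - daily_ccgt_twh  # remainder covered by demand management

print(f"{capacity_mw:,.0f} MW; ${capital_usd / 1e9:,.0f} billion; "
      f"curtail ~{curtailed_twh:.2f} TWh/d")
```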
The HVDC overlay grid would be connected to existing, local HVAC grids, just as the US national highway system was connected to existing, local highway and road systems. As part of the energy transition, and due to widely used, economically viable, HVDC technology, local HVDC grids would be built out. Local HVAC grids would exist in parallel with local HVDC grids, as more users would be powered with DC energy, such as EVs, heat pumps, electronic devices, etc. At all times, the US electrical system has thousands of fossil, nuclear and hydro generators in synchronous operation, at 3,600 rpm, to provide 60 Hz AC energy to the grid. Their steady, synchronous, rotational inertia is critical for grid stability. CSP plants, with thermal storage, have steady, synchronous, rotational inertia. Wind turbine plants have rotational inertia, but it is unsteady and not synchronous, which detracts from grid stability. PV solar plants have zero rotational inertia. As wind and PV solar energy increase on the grid, and fossil plants are decommissioned, sufficient synchronous rotational inertia needs to be in operation throughout the US for grid stability. As HVDC lines do not transmit the stabilizing function of rotational inertia, any future planning regarding the location of synchronous inertia needs to reflect that condition. That means the HVDC overlay grid must connect at many points to the Eastern, Western and Texas Interconnections, and generators and synchronous-condenser systems, with steady, synchronous, rotational inertia, must be distributed throughout the US. With fossil plants and their synchronous rotational inertia disappearing, nuclear plants, which can be located anywhere in the US, would be needed to replace their energy and their synchronous rotational inertia. With such an arrangement, energy generated anywhere, by any source, at any time, could be distributed anywhere in the US. 
Germany has been closing down its nuclear plants and older coal and gas plants, and has been building new, more efficient coal plants. The net effect likely would be less flexibility for balancing wind and solar energy, and less synchronous rotational inertia. However, during higher levels of wind and solar generation (sunny and windy periods), often coinciding with low night-time demands, Germany has to export its excess generation, because its own generators cannot balance it; curtailments would be a solution, but would not be politically acceptable. Instead, Germany is borrowing the spare balancing capacity of nearby grids, plus their synchronous rotational inertia, to help stabilize its domestic grid. See Note No. 6 in this URL. States should have enforced building codes requiring “zero-energy” and preferably “energy-surplus” construction for ALL NEW buildings to ensure building energy requirements are minimal. Such “energy-sipping” buildings would be energy efficient, Passivhaus-standard or better. Such buildings, with the addition of PV solar, and ground- or air-source heating and cooling systems, could easily become “energy-surplus” buildings. New residential, industrial, commercial, institutional and governmental buildings would produce most of their own energy by having PV solar systems on their roofs or parking lots, and ground- or air-source heat pump systems to offset building energy requirements, power electric heat pumps, and charge electric cars. The piping for the ground-source heat pump systems could be under the parking lots. Intel’s Folsom, CA, campus has a 6.5 MW PV solar carport on about 100 acres, which provides 16 charging stations and shade for about 3,000 vehicles. The energy efficiency measures, plus the distributed generation by buildings, would significantly reduce generation by large central plants connected to high voltage grids, and would reduce overall US energy requirements and fossil fuel CO2 emissions. 
See Part II at the end of the article. US primary energy for transportation was 27.1 quad in 2014, of which 21.4 quad was rejected as heat and 5.68 quad performed services to users. The energy categories are as shown in the below table. Air and Ships would require syn- and biofuels; most of Rail could be electric; some of Hv Truck could be electric battery; all of Lt Truck and LDVs could be battery. A quad = 10^15 Btu. In this article, by 2050, 18.5 quad is assumed to be replaced by 5.68/27.1 x 18.5 x 1.2 (battery loss) = 4.65 quad of electricity, or 1363.7 TWh. About 27.1 – 18.5 = 8.6 quad would be syn- and biofuels, which would provide services to users of 5.68/27.1 x 8.6 = 1.8 quad, or 528 TWh. Energy per Mile: In 2013, 38.4044 quad was used to generate 4065.965 TWh; less self use of 164.78 netted 3901.185 TWh; plus imports of 46.73 yielded 3947.915 TWh to the grid; less T&D losses of 253.580 netted 3694.335 TWh to user meters, or 12.6056 quad, resulting in an energy in/out ratio of 0.328. An EV requires about 0.30 kWh/mile, or 1024 Btu/mile, or 1024/0.328 = 3119 Btu/mile on a primary energy basis. Gasohol (10% ethanol/90% gasoline) contains about 120,900 Btu/gal. An ICE vehicle, @ 38.8 MPG, would use 120900/38.8 = 3116 Btu/mile. EPA MPG-Equivalent: For the EPA to claim the EV mileage is about 38.8/0.328 = “118 MPG-equivalent” is misleading, to say the least. The EPA-invented mileages are used to help manufacturers meet the federal CAFE requirement of 54.5 MPG, EPA-Combined, by 2025. Worldwide CAFE Standards: The three largest passenger car markets, representing two-thirds of global sales, have strong fuel economy standards in place: US, 54.5 mpg by 2025; EU, 56.9 mpg by 2021; China, 47.7 mpg by 2020. EXAMPLE OF A VIABLE FUTURE US ENERGY MIX WITHOUT FOSSIL ENERGY The energy providing services to users (energy coming out of radiators to heat buildings and going to wheels of vehicles, etc.) has been about 38 – 40 quad since 1998. 
That means that even though the US population and gross national product increased, the energy providing services to users remained about the same for 17 years, because users became more energy efficient, and shifted from energy-intensive goods to less energy-intensive services. In this article 11,353 TWh/y, or 38.738 quad, is assumed to be the energy providing services to users. The energy flow chart in the below URL shows energy providing services to users was 38.43 and 38.90 quad in 2013 and 2014, respectively. See below Energy and Capital Cost Projections table. There is no reason for this energy to increase in the future, if increased energy efficiency measures, plus additional taxes on resource- and energy-intensive activities, are implemented. However, at some point energy efficiency, etc., would reach a limit, and zero population growth, zero GNP growth and zero energy growth would be required for a sustainable future. Implementing these zero-growth percentages would be a much greater political challenge than eliminating fossil fuels from the energy mix. Eliminating Fossil Fuels: Fossil fuels, i.e., coal, petroleum, natural gas, were (18.0386 + 34.6132 + 26.8185 quad)/97.2804 quad = 81.7% of US primary energy in 2013. Eliminating them from the US energy system, by government mandate or due to depletion, is a serious issue. Fossil fuels have provided steady, high-quality, dispatchable, 24/7/365 energy to the US economy since 1800. Any future US energy mix must be able to do the same. Nuclear energy is steady, high-quality, dispatchable, 24/7/365 energy; it would be an essential and viable replacement for a significant part of the fossil energy. See below table. Energy and Capital Cost Projections: The 2050 projections in the below table are based on the above Jacobson Report energy projections, reduced by increased energy efficiency. The “overnight” capital costs are shown. “Overnight” assumes all is in place overnight, as if by magic wand. 
Various costs, such as financing and amortization, are ignored. Comparing projects on an overnight-versus-overnight basis is common practice. * The US energy system 2013 CF was 4,113 TWh/(8,760 x 1,060,000 MW) = 0.443; the 2050 CF would be 12,296 TWh/(8,760 x 3,289,539 MW) = 0.427. PV solar for 2013 is included in CSP for 2013. Other is bio, wood, geothermal, tide, wave and hydro; its potential to increase is very limited. Bio, with a very low energy density (less than 1.0 W/m2) and a very low ratio of energy return over energy invested, ERoEI, would take up too much valuable farmland area. ^ In 2050, a large quantity of the roughly 4,000 TWh/y of PV solar and onshore wind energy, per above table, would end up in storage and would be subject to about 20% losses. That means additional production capacity and energy production would be required to make up for energy losses in battery and other energy storage systems. The storage systems shown in the table are for normal operations, which do not cover extreme conditions, as described in the Demand Management section. + The estimate of the storage systems capital cost is based on average daily daytime and nighttime generation, with about 25% of the PV solar and wind energy entering the storage systems. For a delivered 100 units of energy, the battery capacity would need to be 100/(0.80 loss x 0.79 charging range x 0.90 aging) = 176 units. Storage systems turnkey unit cost is assumed at $400/kWh. For a more exact analysis, see Peaking, Filling-in and Balancing below. # For illustration purposes, if 1,000 units of thermal energy were collected by the solar field, and during daytime, 300 units were used to produce electrical energy at a 25% plant efficiency, then 75 units of electrical energy would be sent to the grid over 8 hours. 
If during nighttime, operating off storage, 700 units were used at 22%, then 154 units would be sent to the grid over 16 hours, for a total of 229 units of electrical energy to the grid over 24 hours. That means 77% of the collected energy would be process loss!! US Energy Mix Without Fossil Energy: A future US energy mix of the “electrified” economy would require its electrical generation of 4,066 TWh in 2013 to increase to 12,296 TWh by 2050, i.e., 3.0 times. As a result, solar would need to multiply (2691 + 1281)/21 = 189.2 times, wind 2759/168 = 16.4 times, nuclear 5004/789 = 6.3 times, and Other 559/401 = 1.4 times, if fossil fuels were not used. See above table. The mostly steady, high-quality, dispatchable, 24/7/365 energy of nuclear, plus CSP with storage, plus Other would be 40.7 + 21.8 + 4.5 = 67.1% in 2050, which would be about equal to the 66.1% of steady, high-quality, dispatchable, 24/7/365 fossil energy in 2013. The variable energy of wind, plus PV solar would be 22.4 + 10.4 = 32.9% in 2050, which is about where Germany will be in a few years. If Germany can manage 33% of variable, intermittent energy with its existing generators, connections to foreign grids, and minor additional energy storage systems, so can the US. See above table. Peaking, Filling-in and Balancing: Hour-by-hour spreadsheet analyses of changing wind and solar energy generation, and of the controllable outputs of other generators, for meeting energy demand, modified by demand management, for one whole year, 8760 rows, based on weather data of prior years, would be required to determine the times and quantities of energy in and out of storage, and storage system capacities for peaking, filling-in and balancing*. The analysis would determine the need for additional generating capacity for peaking, filling-in and balancing, for covering scheduled and unscheduled outages, and for covering extreme conditions, as described in the Demand Management section. 
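The storage-capacity rule of thumb quoted earlier (about 176 units of installed capacity per 100 units delivered) simply stacks the derating factors; a minimal sketch, using the article's assumed factors:

```python
# Derating factors from the article: round-trip loss, usable charging range, aging.
ROUND_TRIP_EFFICIENCY = 0.80  # ~20% loss in and out of storage
CHARGING_RANGE = 0.79         # usable state-of-charge window
AGING_MARGIN = 0.90           # allowance for capacity fade over the battery's life

def required_capacity(delivered_units):
    """Installed battery capacity needed to reliably deliver a given quantity of energy."""
    return delivered_units / (ROUND_TRIP_EFFICIENCY * CHARGING_RANGE * AGING_MARGIN)

print(round(required_capacity(100)))  # 176
```

At the assumed $400/kWh turnkey cost, each delivered kWh of daily throughput thus carries about 1.76 kWh of installed capacity behind it.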
* The quantities in and out of storage systems would be subject to an energy loss of about 20%. The outputs of load-following nuclear, CSP, and Other plants, etc., would be varied to share the burden of peaking, filling-in and balancing. Capital Costs for Energy Sector: A future “electrified” US economy, without fossil fuels, might have 11,353 TWh of energy to users by 2050, by making investments in NEW energy systems of at least $16,998 billion/35 y = $486 BILLION PER YEAR, during the 2016 – 2050 period. Not shown in the above table are about $100 billion per year for other costs, such as: – Financing and amortization of above energy sector capital costs, plus ongoing investments for replacements and refurbishments of the existing energy systems, as they would be needed during the transition period. – Refurbishments/decommissionings/replacements, BEFORE 2050, of the existing and newly built renewable energy systems, mostly wind and solar systems with short, say 20 – 25 year lives, i.e., replacements would be kicking in while the build-outs of new systems are proceeding. – Writing off the A to Z fossil infrastructures, upstream and downstream, and power plants, as they would become “stranded”. Those costs likely would be added to consumer electric bills and to the national debt to “hold harmless” the owners of those systems. Capital Costs for Other Economic Sectors: It would take about $200 billion per year to transform all other sectors of the US economy, for a total of about $486 + $100 + $200 = $786 BILLION PER YEAR. Here is a partial list of the items included for Other Economic Sectors: – All residential, commercial, institutional, governmental and industrial buildings would need to be upgraded for energy efficiency and modified for heating and cooling with heat pumps. – All light- and medium-duty vehicles would need to be plug-in electric (no hybrids) with charging stations everywhere. 
Most trains would be electrically powered, but heavy-duty trucks, ships and planes would use liquid fuels made with electricity. – Build-outs would be required for the large increases in the mined quantities of natural resources, and for the enlargement of upstream and downstream facilities and infrastructures required for building, and operating and maintaining, the new energy generating systems, energy storage systems, and grid systems. It is obvious that such a helter-skelter approach, i.e., implement all this by 2050, often proposed by non-technical politicians, government bureaucrats, and owners of subsidized renewable energy businesses, would not be politically or economically feasible. It would be much better to stretch the energy transition over a period of at least 100 years, as that would be a more feasible time period, because it would reduce capital costs from about $786 b/y to about $262 b/y. Part III of this article deals with energy transition capital costs, with world population, world energy consumption and gross world product, and with sustainable growthrates. The capital cost to remove fossil fuels from the US energy mix and electrify the US economy would be at least $786 b/y for the 2016 – 2050 period, based on moderate growthrates of population, energy consumption and GNP; with higher growthrates, the capital costs would be higher. For comparison, the US defense budget is about $600 b/y. Worldwide, the energy transition capital cost would be about $3.93 TRILLION PER YEAR for the 2016 – 2050 period, because the US economy is only 20% of the world economy. However, world capital costs would be higher, because world growthrates are higher than those of the US. All capital costs are “overnight”. See below World Population, World Energy Consumption, Gross World Product section. 
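Because the total capital requirement is fixed, the annual outlay scales inversely with the length of the transition period; a quick check of the $786 b/y versus $262 b/y comparison above (assuming the 35-year 2016 – 2050 program is stretched to about 105 years):

```python
US_ANNUAL_35Y = 786      # $ billion/y for a 35-year (2016-2050) US program
WORLD_ANNUAL_35Y = 3930  # $ billion/y for the equivalent worldwide program

# Stretching a 35-year program to ~105 years cuts the annual outlay to a third.
stretch = 35 / 105
print(round(US_ANNUAL_35Y * stretch))     # 262 ($b/y)
print(round(WORLD_ANNUAL_35Y * stretch))  # 1310 ($b/y)
```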
During COP-21, the 2015 UN Climate Conference in Paris, some proponents urged increasing worldwide energy transition spending from $285.9 billion* in 2015 to $1.0 trillion/y, which shows a significant lack of understanding of the magnitude of the worldwide transition. It would be much better to stretch the worldwide energy transition over a period of at least 100 years, as that would be a more feasible time period, because it would reduce worldwide capital costs from about $3.93 trillion/y to about $1.31 trillion/y. * US $44.1 b, Europe $48.8 b, Japan $43.6 b, China $102.9 b. Diverting this capital from activities that likely would produce profitable goods and services, to the build-outs of renewable energy systems that produce more expensive energy than from fossils, would make China’s economy less competitive, which would contribute to its slower economic growth. There are side benefits, such as cleaner air, etc., but they would take some decades to realize, and their economic benefits, such as lower health care expenses, could not be easily quantified. Below is a table of world population, world energy consumption, WEC, and gross world product, GWP, for various years. Any GW mitigation efforts would have to be sufficiently overarching to not only offset the GW effects of the growth factors in the table, but also simultaneously transform the entire world economy away from fossil fuels!! Regions, such as Europe, US, Japan, etc., with lower growthrates for population, energy consumption, and gross national product, would find it easier to make the transition away from fossil fuels than the regions with higher growthrates, such as China, India, etc. What would the world look like with a population 35%, world energy consumption 49%, and gross world product 226% greater than in 2010? See below table. The above $3.93 trillion per year is based on the moderate US growthrates for population and GNP, and zero increase in energy consumption by 2050. 
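The multipliers cited above (population 35%, energy consumption 49%, and GWP 226% greater than in 2010) follow from compounding the assumed annual growthrates over the 40 years from 2010 to 2050; a sketch:

```python
def growth_factor(rate_pct_per_year, years=40):
    """Compound growth factor over the period: (1 + r)^years."""
    return (1 + rate_pct_per_year / 100) ** years

# Assumed world growthrates for 2010-2050, from the article's table.
for name, rate in [("Population", 0.75), ("Energy (WEC)", 1.0), ("GWP", 3.0)]:
    print(f"{name}: x{growth_factor(rate):.2f}")  # x1.35, x1.49, x3.26
```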
If world growthrates were 0.75%/y for population, 1%/y for WEC, and 3%/y for GWP, as shown in the below table, those capital costs would be much greater. – Assumed world population growthrate is 0.75%/y for 2010 to 2050; growth factor 1.35. – Assumed WEC growthrate is 1%/y for 2010 to 2050; growth factor 1.49. – Assumed GWP growthrate is 3%/y for 2010 to 2050; growth factor 3.26. – Assumed goods/services ratios are as shown in the table. Unsustainable Growthrates: In 1800, before the advent of fossil fuels, world population, WEC and GWP were only small fractions of what they are today. The actual GWP multiplier is 234.5 in 1990$, but would be about 410 in 2015$. The environmental damage of each year is increasingly added to that of the prior years, as Nature has increasingly fallen behind with repairing the damage. The multipliers of actual and projected world population, WEC and GWP in the above table have been unsustainable for decades. The world’s central banks provided multi-trillion-dollar quantitative easing and reduced interest rates to near zero. Prices of energy and other natural resources are greatly reduced. Yet, the world economy is growing at less than 2%/y, i.e., despite the stimuli, not enough wealth is internally generated to sustain higher economic growth at traditional interest rates. Europe and Japan are growing at near-zero %/y, the US at about 2%/y, and China, India, and a few other nations at greater than 2%/y. This indicates the world’s economy needs to find a new equilibrium that is sustainable without these stimuli. In a business, that would mean shedding unproductive assets, getting out of low-margin or money-losing businesses, and cutting costs. That politically unpopular approach, in fact, IS the remedy for the world economy, because the current world economy has far outgrown the world’s physical capability to sustain it. More people would merely mean more poverty, more unrest/wars and more refugees. 
More energy consumption, with or without fossil fuels, and more GWP would merely mean more pollution and more environmental damage. The current paradigm of “growth forever” in a finite world has more than run its course, and measures of quantitative easing and interest-rate reduction to “jumpstart” the economy would make conditions worse rather than better, i.e., when in a hole… Sustainable Growthrates: Zeroing population, energy and GNP growthrates is even more important than moving away from fossil fuels, because the growing combination of other global warming and earth-destroying factors, such as deforestation, industrial agriculture, urbanization, worldwide shipping of goods and services, and the altering of the atmosphere and oceans with pollutants, would present an ever-growing existential threat to the survival of most of the flora and fauna of the world. Japan and Denmark have modern, high-level lifestyles and use about 50% less primary energy/$ of GDP than the US. Europe and Japan already have near-zero growthrates for population, energy consumption, and GNP. The whole world needs to follow their lead for a future sustainable world with a thriving flora and fauna. A thriving fauna and flora separates OUR world from ALL OTHER known planets. The worst is yet to come regarding the OTHER fauna and flora, which do not have modern, technological support systems. In 1800, before the advent of fossil fuels, there were about 1.2 billion people. Humans used those fuels to become dominant, and the collateral damage was the squashing of other species. By the time about 10 billion people realize what they have done, it will be decades too late. According to Dr. Paul Ehrlich, biochemist, these population, energy and GNP growthrates likely would need to be negative for many decades to enable the world’s flora and fauna (includes humans) to reestablish themselves on a sustainable path. 
He estimates the world can support at most one billion people in a sustainable manner, in harmony with a thriving fauna and flora. According to Dr. Edward Wilson, biologist, at least 50% of the world should be kept in its undisturbed state to ensure the survival of the flora and fauna. A future GWP would need a much greater proportion of locally produced goods; its ratio of goods to services would need to be about 30 to 70; and it should have maximal recycling and minimal use of newly mined resources. There would continue to be qualitative improvements within such a GWP.
News Article | April 13, 2016
Quotient Clinical, the Translational Pharmaceutics company, has expanded its clinical spray drying capability through the acquisition of a Niro Mobile Minor Spray Dryer. Quotient has a proven track record of developing spray-dried dispersions to overcome drug compound solubility issues, and the addition of a larger-scale spray dryer will allow the production of a range of batch sizes, from milligrams up to two kilograms. The new dryer will be sited at Quotient's new GMP facility at MediCity in Nottingham, UK, scheduled to open later in 2016. This expansion is a direct response to customer requests for ongoing product development support, including toxicology and later-stage clinical studies. Nikki Whitfield, VP of Pharmaceutical Sciences, commented: "Poor solubility is increasingly prevalent in drug pipelines across the industry. We have established a broad suite of formulation approaches within our Translational Pharmaceutics platform to address these complex solubility and bioavailability challenges, and this latest investment will allow us to efficiently scale up the production of optimized formulations to support our clients' downstream clinical development programs."
A team of investigators from Houston Methodist Research Institute may have transformed the treatment of metastatic triple-negative breast cancer by creating the first drug to successfully eliminate lung metastases in mice. The study appears this week in Nature Biotechnology. The majority of cancer deaths are due to metastases to the lung and liver, yet there is no cure. Existing cancer drugs provide limited benefit due to their inability to overcome biological barriers in the body and reach the cancer cells in sufficient concentrations. Houston Methodist nanotechnology and cancer researchers have addressed this problem by developing a drug that generates nanoparticles inside the lung metastases in mice. In this study, 50 percent of the mice treated with the drug had no trace of metastatic disease after eight months. That is equivalent to about 24 years of long-term survival following metastatic disease for humans. Due to the body's own defense mechanisms, most cancer drugs are absorbed into healthy tissue, causing negative side effects, and only a fraction of the administered drug actually reaches the tumor, making it less effective, says Mauro Ferrari, Ph.D., president and CEO of the Houston Methodist Research Institute. This new treatment strategy enables sequential passage of the biological barriers to transport the killing agent into the heart of the cancer. The active drug is only released inside the nucleus of the metastatic disease cell, avoiding the multidrug resistance mechanism of the cancer cells. This strategy effectively kills the tumor and provides significant therapeutic benefit in all mice, including long-term survival in half of the animals. This finding comes 20 years after Ferrari started his work in nanomedicine.
Ferrari and Haifa Shen, M.D., Ph.D., are co-senior authors on the paper, which describes the action of the injectable nanoparticle generator (iNPG), and how a complex method of transporting a nano-version of a standard chemotherapy drug led to never-before-seen results in mouse models with triple-negative breast cancer that had metastasized to the lungs. "This may sound like science fiction, like we've penetrated and destroyed the Death Star, but what we discovered is transformational. We invented a method that actually makes the nanoparticles inside the cancer and releases the drug particles at the site of the cellular nucleus. With this injectable nanoparticle generator, we were able to do what standard chemotherapy drugs, vaccines, radiation, and other nanoparticles have all failed to do," says Ferrari. Houston Methodist has developed good manufacturing practices (GMP) for this drug and plans to fast-track the research to obtain FDA approval and begin safety and efficacy studies in humans in 2017. "I would never want to overpromise to the thousands of cancer patients looking for a cure, but the data is astounding," says Ferrari, senior associate dean and professor of medicine, Weill Cornell Medicine. "We're talking about changing the landscape of curing metastatic disease, so it's no longer a death sentence." The Houston Methodist team used doxorubicin, a cancer therapeutic that has been used for decades but has adverse side effects on the heart and is not an effective treatment against metastatic disease. In this study, doxorubicin was packaged within the injectable nanoparticle generator, which is made up of many components. Shen, a senior member of the department of nanomedicine at Houston Methodist Research Institute, explains that each component has a specific and essential role in the drug delivery process. The first component is the nanoporous silicon material that naturally degrades in the body.
The second component is a polymer made up of multiple strands that contain doxorubicin. Once inside the tumor, the silicon material degrades, releasing the strands. Due to natural thermodynamic forces, these strands curl up to form nanoparticles that are taken up by the cancer cells. Once inside the cancer cells, the acidic pH close to the nucleus causes the drug to be released from the nanoparticles. Inside the nucleus, the active drug acts to kill the cell. "If this research bears out in humans and we see even a fraction of this survival time, we are still talking about dramatically extending life for many years. That's essentially providing a cure in a patient population that is now being told there is none," says Ferrari, who holds the Ernest Cockrell Jr. Presidential Distinguished Chair and is considered one of the founders of nanomedicine and oncophysics (physics of mass transport within a cancer lesion). The Houston Methodist team is hopeful that this new drug could help cancer physicians cure lung metastases from other origins, and possibly primary lung cancers as well. Additional researchers who collaborated with Ferrari and Shen on the Nature Biotechnology paper were: Rong Xu, Guodong Zhang, Junhua Mai, Xiaoyong Deng, Victor Segura-Ibarra, Suhong Wu, Jianliang Shen, Haoran Liu, Zhenhua Hu, Lingxiao Chen, Yi Huang, Eugene Koay, Yu Huang, Elvin Blanco, and Xuewu Liu (Department of Nanomedicine, Houston Methodist Research Institute, Houston, Texas); Jun Liu (Department of Pathology and Laboratory Medicine, The University of Texas-Houston Medical School); and Joe Ensor (Houston Methodist Cancer Center, Houston, Texas). The work was supported by grants from the Department of Defense (W81XWH-09-1-0212 and W81XWH-12-1-0414), the National Institutes of Health (U54CA143837 and U54CA151668), and The Cockrell Foundation. Source: Houston Methodist Research Institute
For life science and other regulated manufacturers, cleanroom maintenance is an important part of compliance. It is not enough to have written meticulous quality protocols and procedures for the maintenance of your equipment and facilities; it is equally important to control and manage all accompanying documentation that will show regulators that your cleanroom is compliant. For companies regulated by the FDA, requirements pertaining to cleanroom and controlled environment procedures can be found in predicate rules such as 21 CFR Part 211 for pharmaceutical companies and 21 CFR Part 820 for medical device manufacturers.1 These require documentation of standard operating procedures and instructions, as well as documented procedures for process changes related to equipment, buildings, and facilities. A recent warning letter to a medical device firm shows the importance of proper documentation and corresponding control of those documents. The FDA noted the company's failure to establish and maintain written procedures for avoiding contamination of equipment, as well as the incongruity between some of its cleanroom procedures and records of actual practice.2
Long document cycle times
To maintain a cleanroom and train its personnel, organizations generate numerous records and documents that need to be controlled and managed. Whether it is the SOP for sanitizing a cleanroom against particulate and pathogen contamination or written materials for training technicians on sterile gowning procedure, you need to be able to create, approve, and revise documents efficiently. The biggest problem for companies that use paper or hybrid (part electronic and part paper) processes lies in administration itself. When the cleanroom and quality departments create documents, they must route those files for review and approval either in person or through email. Follow-up is also conducted by email, phone, or in person.
Once approved, those documents are typically stored on electronic servers, printed in hard copy, compiled in binders, and stored in filing cabinets in a document room. It is difficult to manage hundreds or thousands of documents, especially if they undergo multiple revisions and regular updates. Companies typically cut their review and approval turnaround time significantly after automating their document control process. A manufacturer of nutritional supplements noted that its document-approval cycle time improved from months to days after it switched to an electronic document management system. This company automated 46,000 documents during the switch.3 With an electronic system in place, routing, follow-up, escalation, and distribution are all automatic, saving time and effort. Obtaining the approvals and signatures of stakeholders who are out in the field or scattered across various facilities is also easier, especially when users can participate in quality processes with mobile devices. An electronic document control system that allows cleanroom personnel to use a smartphone or a tablet instead of paper documents is more convenient and efficient for them, which in turn could help reduce document cycle times.
Lack of control in the change control process
Where companies usually run into trouble in the distribution of new revisions is in not knowing where all the copies of the old revision are. It helps to have a system that allows you to lock down uncontrolled copies and facilitates tracking of controlled copies. Make document review a part of your change control process. Most regulated companies have a document review policy to ensure that important quality documents are still applicable and accurate. A regular review process can help ensure that necessary changes have been documented and that the actual process and the documented process are in sync.
Training falls through the cracks
Once a quality document is approved (either for the first time or after an update), affected personnel should be notified and given access to the document so they can be trained on it prior to its effective date. For example, if your organization has implemented an extra wipe-down of the cleanroom at the end of every work day as a result of an audit observation, you have to make sure all cleanroom operators understand the change and are given enough time and appropriate tools to perform the new task. Not only that, you have to make sure your GMP records reflect this additional wipe-down for the benefit of your next audit. In a paper or hybrid quality system, it is easy for training to fall through the cracks because the document approval process is not connected to the training process. Someone has to make sure that when a quality document is approved and released, the corresponding training will be conducted. Automated quality systems integrate document control with a learning management system to ensure that training related to important documents will be implemented. Going back to our example above, once the SOP explaining the additional wipe-down is approved, the system will automatically notify all affected personnel of the change. It will seamlessly trigger the training of cleanroom technicians on the updated SOP. In regulated environments, document control is the foundation of quality. All procedures and processes that directly affect product quality and safety must be documented. In turn, all important documents (and the process for changing those documents) must be controlled. It is no different for the cleanrooms and controlled environments of organizations that want to ensure regulatory compliance.
References
1. For medical device companies, see 21 CFR 820.70 (Production and Process Controls): http://1.usa.gov/24YUmQ4.
For pharmaceutical companies, see 21 CFR 211.67 (Equipment Cleaning and Maintenance) and 21 CFR Part 211 Subpart C (Buildings and Facilities), Subpart D (Equipment), and Subpart J (Records and Reports): http://1.usa.gov/1YOQ35I.
2. FDA warning letter to Excelsior Medical Corp., issued on Nov. 7, 2014: http://1.usa.gov/1QWBS9k.
3. MasterControl Helps Weider Nutrition International Stay on Top of its Document Control and Change Management Processes: http://bit.ly/1TGFNx1.
Dave Hunter is product management director at MasterControl Inc. His extensive technology experience includes working for Microsoft, EDS, Intel, and TenFold for over 20 years. firstname.lastname@example.org
This article appeared in the issue of Controlled Environments.
This is the second and last part of a review of the draft World Health Organisation (WHO) guidance entitled Guidance on Good Data and Record Management Practices.1 In Part 1 of this review, I discussed the principles, risk management and the involvement of senior management in a data integrity program within regulated organizations. In this second part, we will discuss the role of suppliers and service providers, staff training, good documentation practices, designing systems for data quality and addressing data reliability issues. The pharmaceutical world is now built on outsourcing: research, development, API synthesis, manufacturing and analysis. The driver for this is cost reduction. However, although this is driven from a finance perspective, there remains one small problem: the accountability for the work remains with the company that outsourced it. Therefore, the company outsourcing work needs to ensure appropriate oversight of its suppliers and service providers. The guidance states explicitly in its first sentence that "Personnel should be trained in data integrity policies and agree to abide by them." This training needs to cover both paper and electronic working and, by implication, hybrid systems. Furthermore, the second sentence states that management has the responsibility to ensure that the training is carried out and that personnel understand the difference between the right and wrong ways of working and the consequences for anybody who works the wrong way. Training begins when an employee joins an organization and, as refresher training, needs to be repeated at frequent intervals. In addition, the guidance recommends layers of training, as supervisors and managers require training in measures to prevent and detect poor data management practices, e.g. electronic review of data, including the specific audit trails that monitor data changes.
Quality assurance (I'll not use the quality unit term used by the guidance) also needs this training, as they will also need to check work for adherence to the data integrity policies and procedures. Annoyingly and confusingly, the guidance abbreviates good documentation practice to GDP, which clashes immediately with the European Union Good Distribution Practice regulations. As WHO is based in Europe, you would have thought the writers might at least have looked outside their silo and been aware of this terminology. For more discussion on data integrity training, the author has written an article on the subject,2 as well as one on how to perform chromatographic integration in a regulated environment.3 Section 9, on the topic of good documentation practices, is the longest section in the guidance, and for good reason, as it is the heart of data integrity. The approach is based on the five ALCOA principles: attributable, legible (expanded to include traceable and permanent), contemporaneous, original and accurate. It is disappointing that the guidance does not include the four principles of ALCOA+: complete, consistent, enduring and available. As you'll see above, enduring has been snuck into ALCOA under the legible banner. Rather than a fudge, why not go the whole hog and work with the ALCOA+ principles? For example, consistent is a key requirement in both the FDA's 1993 Inspection of Pharmaceutical Quality Control Laboratories4 and Compliance Program Guide (CPG) 7346.832.5 Available is a key requirement following the changes to the FD&C Act in 2012, and it is actually mentioned in the WHO guidance when outsourcing work. Complete has an echo of the US GMP requirement for complete laboratory data in 211.194(a).6 Why not change? You know it makes sense. The layout of the section is shown in Figure 1.
The section is split into the five ALCOA terms and, under each one, there is a definition of the term and then a table containing the expectations for both paper and electronic records. Underneath this table is a discussion of special risk management considerations for the topic. Section 9 is a really, really useful part of the document. Read and understand this section. You will have to interpolate for hybrid systems, but some of the special-considerations discussion will help you in this respect. To give an example of the approach, we'll look at the first of the ALCOA principles. Attributable is defined as meaning that information is captured in the record so that it is uniquely identified as executed by the originator of the data, e.g. a person or a computer system. In paper records, initials or a handwritten signature can be used, as long as there is a link between the person and the initials / signature, which in the vast majority of organizations is the signature list. The guidance mentions the use of personal seals. This is typically not relevant for Europe and North America, but in some countries personal seals are used for signing documents. Security of the seal, so that it cannot be used by others, is crucial. For an electronic system, a unique user identity or an electronic signature can be used for attributing an action to an individual. Under the special considerations, a scanned handwritten signature cannot be used as an electronic signature, which should dissuade people from thinking that this is a simple way of implementing an electronic signature. Under the other four areas of the ALCOA criteria, there is a lot of good advice. After reading the whole of this section, you'll realize that some elements dealing with the review of records generated by hybrid systems are rather onerous and bureaucratic. Take some time to stand back and consider why it is better to work electronically and remove paper and transcription error checking from the analytical process.
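As an aside, the attributable expectation maps naturally onto how an electronic record can be structured. The following is a hypothetical sketch (all names invented, not any vendor's system): every entry carries a unique user identity and a contemporaneous timestamp, and corrections append to an audit trail rather than overwrite the original value.

```python
# Hypothetical sketch of "attributable" plus an audit trail: each value
# is stored with who recorded it and when; changes append, never overwrite.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Entry:
    value: str
    recorded_by: str       # unique user identity (attributable)
    recorded_at: datetime  # contemporaneous timestamp

@dataclass
class Record:
    entries: list = field(default_factory=list)

    def write(self, value: str, user: str) -> None:
        self.entries.append(Entry(value, user, datetime.now(timezone.utc)))

    @property
    def current(self) -> str:
        return self.entries[-1].value  # latest value

    def audit_trail(self):
        return [(e.value, e.recorded_by) for e in self.entries]

r = Record()
r.write("pH 6.8", "analyst_jdoe")
r.write("pH 7.0", "analyst_jdoe")  # correction: original entry preserved
print(r.current)        # pH 7.0
print(r.audit_trail())  # both entries, each attributed to a user
```

The point of the sketch is the append-only design: a reviewer can see the original value, the correction, and who made each, which is exactly what the guidance's review-of-audit-trails expectation relies on.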
Designing systems for data quality and managing data and records
Section 10, entitled Designing Systems to Ensure Data Quality, is less about the design of computer systems and more about the proper configuration of software and the consequent validation of the overall system for its intended purpose. The tasks include documenting configuration specifications for commercial off-the-shelf (COTS) systems, which are typically GAMP software category 3 systems unless the supplier's marketing department has twisted the meaning of the COTS acronym. System administration should be limited to independent personnel where technically feasible, but this raises major questions with the use of standalone systems in the majority of laboratories. Section 11 looks at managing data and records across the life cycle, and defines a simple life cycle as data collection and recording, data processing, data review and reporting, and data retention and retrieval. An alternative life cycle is available in the author's review of the MHRA data integrity guidance published in 2015.7 This section looks wider than chemical analysis and also considers clinical systems, with some recommended controls, e.g. patient confidentiality and blinding of data. In discussing the control of the data life cycle, a risk assessment of record vulnerability is recommended, and then controls to secure the records and their storage should be implemented. There is good advice on keeping business processes as simple as possible, and any work performed must be scientifically sound and documented to GXP principles. The last section of the guidance focusses on how an organization should investigate a data integrity problem. All pertinent data should be secured and staff interviewed to understand the nature of the failure and its root causes; where appropriate, corrective and preventative actions should be implemented.
If the problem was caused by falsification or wrong data management practices, then disciplinary action may result. The investigation is also about understanding how the issue impacts the product, patient or submission, and whether authorities need to be informed about the problem. All in all, this WHO guidance is much better than the corresponding MHRA guidance on data integrity issued earlier in 2015. The WHO guidance provides greater scope and more detail than the earlier document. There is a lot of practical advice that is of use to organisations. Although this has been a short review of the contents of the WHO document, reading the document yourself is important, as you may interpret it differently in the specific context of your job in your company. R.D. McDowall is Director, R D McDowall Limited. He may be reached at editor@ScientificComputing.com.