News Article | May 10, 2017
Site: www.forbes.com

Despite recent increases in oil and gas activity, some doubt that employment in the sector will ever reach the highs of 2014 again. Producers have been cutting costs and increasing productivity, partly thanks to an increased deployment of technology. Does this technological progress mean that some of the jobs lost in the sector over the last two years will not come back, even if high oil prices do?

Fears that technological progress might eliminate jobs are nothing new. While technology increases productivity and makes society wealthier, these gains aren't always equally shared. For some, it can be difficult to switch careers, and new jobs might not pay as well as the old ones. But the story of technological progress in the U.S. oil and gas industry is a hopeful one. Technology has a record of creating jobs, not destroying them. Today, lackluster hiring is due to cyclical pressures, not the replacement of people with machines.

In the mid-2000s, a singular innovation, the combination of hydraulic fracturing and horizontal drilling, dramatically lowered the cost of extracting oil and gas from shale. This cost reduction kicked off an energy revolution: the North American shale boom. For the next 10 years, further improvements in oil and gas extraction technology spurred unprecedented employment growth in both E&P firms and oilfield services (Figure 1). The timing of the shale boom could not have been better. While most of the country suffered high unemployment during the Great Recession and its aftermath, the oil patch was hiring. As our research at the Center for Energy Studies has shown, this helped create jobs at a time when the country needed them most.

However, just like the oil booms before it, this one came to an abrupt end. Oil prices began falling quickly in mid-2014. On November 27 of that year, OPEC decided to maintain oil production levels and allow prices to stay low. The incredible expansion of the American oil and gas industry stopped. As Figure 1 shows, employment fell precipitously in what the Bureau of Labor Statistics (BLS) calls the oil and gas extraction sector (E&P firms) and especially in oil and gas support activities (oilfield services firms).

Though employment levels in both extraction and support activities have followed similar trajectories, the paths of wages have diverged for several years. Average wages in support activities have been falling since the 2014 OPEC meeting, but they have been growing in the extraction sector (Figure 2). The opposite trajectories of extraction and support wages, while at first glance surprising, are consistent with classical economic theory. Theory sees wages as a measure of labor productivity: the marginal worker is paid the value he or she brings to the firm. Higher wages, therefore, are associated with higher productivity, and vice versa. Classical theory would suggest that as oil prices fell, firms would cut low-productivity workers and keep high-productivity ones. E&P firms, captured in the extraction sector, appear to be retaining their highest-value workers, who are more experienced and more expensive. In contrast, wages in oil and gas support activities began a steady decline in mid-2013, suggesting that on average, oilfield services produced less output per worker. Though employment fell significantly in oil and gas support activities starting in 2015, the drop was nowhere near as fast or as far as the drop in the number of rigs actively drilling for oil and gas (Figure 3).
A higher number of workers in support services per active rig meant that, on average, each hour of work in the sector was associated with fewer new wells than before. Figure 4 takes a closer look at the ratio of employees in oil and gas support activities to the number of active rigs. The ratio spiked immediately following the 2014 OPEC meeting, marked by the second vertical dashed line. It is far easier and less costly to idle rigs than to fire people, so the adjustment of employment in oil and gas support activities is slower than the adjustment in rig counts. This is especially true in times of uncertainty. During a downturn, if prices and investment are projected to come back up, a larger labor force gives firms the flexibility to ramp operations back up. In fact, during the price swings of 2008-2009, rig counts also dropped quickly, while layoffs were slower to materialize. This made the employee-to-rig ratio spike and allowed the sector to recover when prices rose and drilling activity picked back up.
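As a rough illustration of the ratio Figure 4 tracks, the sketch below computes employees per active rig from a small time series. The headcount and rig-count values are invented placeholders, not BLS or Baker Hughes data; only the shape of the arithmetic is the point.

```python
# Back-of-envelope illustration of the employee-to-rig ratio discussed above.
# All numbers below are hypothetical placeholders, not BLS or Baker Hughes data.

support_employees = {2014: 538_000, 2015: 453_000, 2016: 390_000}  # assumed headcounts
active_rigs       = {2014: 1_862,   2015: 978,     2016: 509}      # assumed rig counts

for year in sorted(support_employees):
    ratio = support_employees[year] / active_rigs[year]
    print(f"{year}: {ratio:,.0f} support employees per active rig")
```

Because rig counts fall much faster than headcount in this toy series, the ratio roughly doubles over two years, which is the spike pattern the article describes around Figure 4.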


News Article | December 31, 2015
Site: www.theenergycollective.com

Deploying solar power at the scale needed to alleviate climate change will pose serious challenges for today’s electric power system, finds a study performed by researchers at MIT and the Institute for Research and Technology (IIT) at Comillas University in Spain. For example, local power networks will need to handle both incoming and outgoing flows of electricity. Rapid changes in photovoltaic (PV) output as the sun comes and goes will require running expensive power plants that can respond quickly to changes in demand. Costs will rise, yet market prices paid to owners of PV systems will decline as more PV systems come online, rendering more PV investment unprofitable at market prices. The study concludes that ensuring an economic, reliable, and climate-friendly power system in the future will require strengthening existing equipment, modifying regulations and pricing, and developing critical technologies, including low-cost, large-scale energy storage devices that can smooth out delivery of PV-generated electricity.

Most experts agree that solar power must be a critical component of any long-term plan to address climate change. By 2050, a major fraction of the world’s power should come from solar sources. However, analyses performed as part of the MIT “Future of Solar Energy” report found that getting there won’t be straightforward. “One of the big messages of the solar study is that the power system has to get ready for very high levels of solar PV generation,” says Ignacio Pérez-Arriaga, a visiting professor at the MIT Sloan School of Management from IIT-Comillas.

Without the ability to store energy, all solar (and wind) power devices are intermittent sources of electricity. When the sun is shining, electricity produced by PVs flows into the power system, and other power plants can be turned down or off because their generation isn’t needed. When the sunshine goes away, those other plants must come back online to meet demand. That scenario poses two problems. First, PVs send electricity into a system that was designed to deliver it, not receive it. And second, their behavior requires other power plants to operate in ways that may be difficult or even impossible. The result is that solar PVs can have profound, sometimes unexpected impacts on operations, future investments, costs, and prices on both distribution systems — the local networks that deliver electricity to consumers — and bulk power systems, the large interconnected systems made up of generation and transmission facilities. And those impacts grow as the solar presence increases.

To examine impacts on distribution networks, the researchers used the Reference Network Model (RNM), which was developed at IIT-Comillas and simulates the design and operation of distribution networks that transfer electricity from high-voltage transmission systems to all final consumers. Using the RNM, the researchers built — via simulation — several prototype networks and then ran multiple simulations based on different assumptions, including varying amounts of PV generation.
In some situations, the addition of dispersed PV systems reduces the distance electricity must travel along power lines, so less is lost in transit and costs go down. But as the PV energy share grows, that benefit is eclipsed by the need to invest in reinforcing or modifying the existing network to handle two-way power flows. Changes could include installing larger transformers, thicker wires, and new voltage regulators or even reconfiguring the network, but the net result is added cost to protect both equipment and quality of service.

Figure 1 below presents sample results showing the impact of solar generation on network costs in the United States and in Europe. The outcomes differ, reflecting differences in the countries’ voltages, network configurations, and so on. But in both cases, costs increase as the PV energy share increases from 0 to 30 percent, and the impact is greater when demand is dominated by residential rather than commercial or industrial customers. The impact is also greater in less sunny regions. Indeed, in areas with low insolation, distribution costs may nearly double when the PV contribution exceeds one-third of annual load. The reason: When insolation is low, many more solar generating devices must be installed to meet a given level of demand, and the network needs to be ready to handle all the electricity flowing from those devices on the occasional sunny day.

One way to reduce the burden on distribution networks is to add local energy storage capability. Depending on the scenario and the storage capacity, at 30 percent PV penetration, storage can reduce added costs by one-third in Europe and cut them in half in the United States. “That doesn’t mean that deployment of storage is economically viable now,” says Pérez-Arriaga. “Current storage technology is expensive, but one of the services with economic value that it can provide is to bring down the cost of deploying solar PV.”

Another concern stems from methods used to calculate consumer bills — methods that some distribution companies and customers deem unfair. Most U.S. states employ a practice called net metering. Each PV owner is equipped with an electric meter that turns one way when the household is pulling electricity in from the network and the other when it’s sending excess electricity out. Reading the meter each month therefore gives net consumption or (possibly) net production, and the owner is billed or paid accordingly. Most electricity bills consist of a small fixed component and a variable component that is proportional to the energy consumed during the time period considered. Net metering can reduce or cancel the variable component, or even turn it negative. As a result, users with PV panels avoid paying most of the network costs — even though they are using the network and (as explained above) may actually be pushing up network costs. “The cost of the network has to be recovered, so people who don’t own solar PV panels on their rooftops have to pay what the PV owners don’t pay,” explains Pérez-Arriaga. In effect, the PV owners are receiving a subsidy that’s paid by the non-PV owners. Unless the design of network charges is modified, the current controversy over electricity bills will intensify as residential solar penetration increases. Therefore, Pérez-Arriaga and his colleagues are developing proposals for “completely overhauling the way in which the network tariffs are designed so that network costs are allocated to the entities that cause them,” he says.
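To make the billing mechanics concrete, here is a minimal sketch of a net-metered monthly bill. The fixed charge and volumetric rate are invented illustrative values, not any utility's actual tariff.

```python
# Minimal sketch of a net-metered monthly bill. The fixed charge and
# volumetric rate below are invented illustrative values, not a real tariff.

def monthly_bill(kwh_consumed: float, kwh_exported: float,
                 fixed_charge: float = 10.0,              # $/month, assumed
                 volumetric_rate: float = 0.15) -> float:  # $/kWh, assumed
    net_kwh = kwh_consumed - kwh_exported  # the meter records only the net flow
    return fixed_charge + volumetric_rate * net_kwh

print(f"${monthly_bill(600, 0):.2f}")    # no PV: $100.00
print(f"${monthly_bill(600, 550):.2f}")  # PV owner: $17.50, despite full network use
print(f"${monthly_bill(600, 700):.2f}")  # net producer: $-5.00, the bill turns negative
```

The sketch shows the point in the text: the variable component shrinks toward zero (or below) for PV owners, while the small fixed charge is the only part of their bill that still contributes to network costs.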
In other work, the researchers focused on the impact of PV penetration on larger-scale electric systems. Using the Low Emissions Electricity Market Analysis model — another tool developed at IIT-Comillas — they examined how operations on bulk power systems, the future generation mix, and prices on wholesale electricity markets might evolve as the PV energy share grows. Unlike deploying a conventional power plant, installing a solar PV system requires no time-consuming approval and construction processes. “If the regulator gives some attractive incentive to solar, you can just remove the potatoes in your potato field and put in solar panels,” Pérez-Arriaga says. As a result, significant solar generation can appear on a bulk power system within a few months. With no time to adjust, system operators must carry on using existing equipment and methods of deploying it to meet the needs of customers.

A typical bulk power system includes a variety of power plants with differing costs and characteristics. Conventional coal and nuclear plants are inexpensive to run (though expensive to build), but they don’t switch on and off easily or turn up and down quickly. Plants fired by natural gas are more expensive to run (and less expensive to build), but they’re also more flexible. In general, demand is met by dispatching the least expensive plants first and then turning to more expensive and flexible plants as needed.

For one series of simulations, the researchers focused on a power system similar to the one that serves much of Texas. Results presented in Figure 2 in the slideshow above show how PV generation affects demand on that system over the course of a summer day. In each diagram, yellow areas are demand met by PV generation, and brown areas are “net demand,” that is, remaining demand that must be met by other power plants. Left to right, the diagrams show increasing PV penetration. Initially, PV generation simply reduces net demand during the middle of the day. But when the PV energy share reaches 58 percent, the solar generation pushes down net demand dramatically, such that when the sun goes down, other generators must go from low to high production in a short period of time. Since low-cost coal and nuclear plants can’t ramp up quickly, more expensive gas-fired plants must step in to do the job. As a result, when PV systems are operating and PV penetrations are high, prices are low, and when they shut down, prices are high. Owners of PV systems thus receive the low prices and never the high. Moreover, their reimbursement declines as more solar power comes online, as shown by the downward-sloping blue curve in Figure 1 in the slideshow above.

Under current conditions, as more PV systems come online, reimbursements to solar owners will shrink to the point that investing in solar is no longer profitable at market prices. “So people may think that if solar power becomes very inexpensive, then everything will become solar,” Pérez-Arriaga says. “But we find that that won’t happen. There’s a natural limit to solar penetration after which investment in more solar will not be economically viable.” However, if goals and incentives are set for certain levels of solar penetration decades ahead, then PV investment will continue, and the bulk power system will have time to adjust. In the absence of energy storage, the power plants accompanying solar will for the most part be gas-fired units that can follow rapid changes in demand.
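The dispatch logic described above, cheapest plants first against "net demand" (demand minus PV output), can be sketched in a few lines. The plant list, capacities, and marginal costs below are invented for illustration; this is a toy merit-order model, not the study's far more detailed simulation.

```python
# Toy merit-order dispatch against "net demand" (demand minus PV output).
# Plant names, capacities, and marginal costs are invented for illustration.

plants = [  # (name, capacity in MW, marginal cost in $/MWh), cheapest first
    ("nuclear", 5_000, 10.0),
    ("coal",    8_000, 25.0),
    ("gas",    12_000, 45.0),
]

def dispatch(demand_mw: float, pv_mw: float):
    net = max(demand_mw - pv_mw, 0.0)   # PV is taken first, at zero marginal cost
    schedule, price = [], 0.0
    for name, cap, cost in plants:      # fill remaining demand cheapest-first
        take = min(cap, net)
        if take > 0:
            schedule.append((name, take))
            price = cost                # the marginal unit sets the price
        net -= take
    return schedule, price

# Sunny midday: low net demand, coal is marginal and the price is low.
print(dispatch(20_000, 12_000))  # -> nuclear + coal run, price $25/MWh
# Evening, PV gone: gas must step in and the price jumps.
print(dispatch(22_000, 0))       # -> gas is marginal, price $45/MWh
```

Even this toy version reproduces the article's key point: prices are low precisely when PV is producing, so PV owners sell into the low-price hours and never the high ones.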
Conventional coal and nuclear plants will play a diminishing role — unless new, more flexible versions of those technologies are designed and deployed (along with carbon capture and storage for the coal plants). If high subsidies are paid to PV generators or if PV cost diminishes substantially, conventional coal and nuclear plants will be pushed out even more, and more flexible gas plants will be needed to cover the gap, leading to a different generation mix that is well-adapted for coexisting with solar.

A powerful means of alleviating cost and operating issues associated with PVs on bulk power systems — as on distribution networks — is to add energy storage. Technologies that provide many hours of storage — such as grid-scale batteries and hydroelectric plants with large reservoirs — will increase the value of PV. “Storage helps solar PVs have more value because it is able to bring solar-generated electricity to times when sunshine is not there, so to times when prices are high,” Pérez-Arriaga says. As Figure 3 in the slideshow above demonstrates, adding storage makes investments in PV generation more profitable at any level of solar penetration, and in general the greater the storage capacity, the greater the upward pressure on revenues paid to owners. Energy storage thus can play a critical role in ensuring financial rewards to prospective buyers of PV systems so that the share of generation provided by PVs can continue to grow — without serious penalties in terms of operations and economics. Again, the research results demonstrate that developing low-cost energy storage technology is a key enabler for the successful deployment of solar PV power at a scale needed to address climate change in the coming decades.

This research was supported by the MIT Future of Solar Energy study and by the MIT Utility of the Future consortium. This article appears in the Autumn 2015 issue of Energy Futures, the magazine of the MIT Energy Initiative.


TORONTO, ONTARIO--(Marketwired - Dec. 20, 2016) - Galway Metals Inc. (TSX VENTURE:GWM) (the "Company" or "Galway") is pleased to provide an exploration update on the Clarence Stream property, on which, in August 2016, the Company reported that it had entered into Purchase and Option Agreements and staked claims to acquire a 100% undivided interest. The Company is also pleased to report that it has staked additional prospective claims contiguous to the north and south of Clarence Stream, and that, subject to regulatory approval, it has purchased the Lower Tower Hill Property from Globex Mining Enterprises Inc.

Clarence Stream is located 70 kilometres (km) south-southwest of Fredericton in southwestern New Brunswick, Canada. Galway's consolidated land position comprises at least 45 km of strike length of the Sawyer Brook Fault System within an overall strike length of 65 km (and a width of up to 28 km), which straddles several intrusives that are believed to have created the conditions necessary for gold deposition. Clarence Stream hosts Indicated Resources of 182,000 ounces of gold at 6.9 g/t (241,000 oz at 9.1 g/t uncut), plus Inferred Resources of 250,000 oz at 6.3 g/t (313,000 oz at 8.0 g/t uncut). The property also hosts antimony, with Indicated Resources totalling 7.3 million pounds at 2.9% Sb. For details on the Clarence Stream resource, refer to Roscoe Postle Associates' NI 43-101 report, dated September 7, 2012, on Galway Metals' website or on the Company's issuer profile on SEDAR.

Galway chose to increase its land position at Clarence Stream by 25%, to 54,564 hectares (134,830 acres) from 43,800 hectares previously, by staking 463 claim units and acquiring a further 11 claim units that comprise the Lower Tower Hill Property, because the Company's early exploration efforts have enhanced its views of the potential of the Clarence Stream gold district. Gold districts need major fault systems through which mineralized fluids can move and become trapped. These conditions exist at Clarence Stream, with the Sawyer Brook Fault System and the many intrusives located along its 45-plus km trend. Gold deposits around the world are commonly found by following up initial till sample anomalies, soil sample anomalies, boulders traced back to their source gold veins, and/or mineralized bedrock chip samples; Galway has all four. Galway bases its views, and the reasons for acquiring additional properties at Clarence Stream, on a number of factors.

Michael Sutton, Director and VP of Exploration for Galway Metals, said, "The systematic exploration that Galway has conducted, plus the compilation of results from previous operators and the governments of New Brunswick and Canada, are further evidence to back up our initial views that new discoveries may be made in the Clarence Stream gold district. It is not typical to identify large areas, spread across tens of kilometres, that are correspondingly anomalous for gold and other indicator elements in till, soil, boulder and chip samples, especially where a comparatively small two km stretch has already been shown by drilling to contain a high-grade gold resource. The new southern target is a bonus as we did not realize that no one had drilled the actual contact of the intrusive that is thought to be the source of the gold. Galway's findings indicate that Clarence Stream is worthy of significant additional systematic exploration."
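As a side note on how resource figures like those quoted above fit together: contained ounces are tonnage times grade, divided by 31.1035 grams per troy ounce. The sketch below back-calculates an implied tonnage from the quoted 182,000 oz at 6.9 g/t purely for illustration; the tonnage is not a figure from the RPA report.

```python
# Contained-metal arithmetic: ounces = tonnes * grade (g/t) / 31.1035 g per troy oz.
# The tonnage below is back-calculated from the quoted figures for illustration
# only; it is not taken from the RPA report.

GRAMS_PER_TROY_OZ = 31.1035

def contained_ounces(tonnes: float, grade_g_per_t: float) -> float:
    return tonnes * grade_g_per_t / GRAMS_PER_TROY_OZ

implied_tonnes = 182_000 * GRAMS_PER_TROY_OZ / 6.9   # invert the formula
print(f"{implied_tonnes:,.0f} t")                        # ~820,000 t implied
print(f"{contained_ounces(implied_tonnes, 6.9):,.0f} oz")  # recovers ~182,000 oz
```

The "uncut" figures in the same sentence are higher because grade capping (cutting extreme assays) lowers the average grade used in the resource estimate.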
New South Target

A new area has been delineated as having high potential, located immediately south of the Clarence Stream South Zone, between it and the intrusive that is thought to be the source of the mineralizing fluids (Figure 2). There are no known drill holes in this area, which is of equal size and contains the same types of geological contacts as the known South Zone deposits (which contain 78% of Clarence Stream's total resource ounces). The contact with the intrusive is of particular interest. In this high-potential area south of the South Zone, numerous boulders and bedrock chip samples have returned encouraging gold assays, including chip grabs taken by Freewest of 84.3 g/t, 22.8 g/t, 22.0 g/t, 12.1 g/t, 11.6 g/t, 9.3 g/t, and 6.3 g/t (Figure 3). This area also contains three of the nine highest gold-in-soil sample grades taken by Freewest, plus it coincides with a significant gold-in-till anomaly. Chip samples are selected samples and are not representative of the mineralization hosted on the property.

As Figure 3 shows, mineralization at Clarence Stream South is located along the contacts between gabbro intrusives and sediments. At least two additional gabbro intrusive dykes have been mapped, and several others are evident from magnetic interpretations, in the 300 metres between the end of the southernmost drilled hole and the large intrusive to the south. In that southernmost hole, outside the limits of the South Zone resource, an assay of 9.0 g/t Au over 0.5m is present at a gabbro contact. This zone was never followed up with drilling because, shortly after discovering it (in the 17th hole), Freewest discovered the main South Zone and subsequently drilled that for two km along strike.

Prospector George Murphy discovered several gold-bearing boulders in strategic locations for Galway. For example, in its early prospecting program, Galway sampled a well-mineralized boulder that returned 35.5 g/t Au, located in the five km undrilled gap between the Clarence Stream South deposit and the Jubilee property (Figure 1). This gap area is characterized by strong gold and arsenic till and soil anomalies along the projected trace of the Sawyer Brook Fault. Boulders sampled at Jubilee returned 16.3 g/t Au and 7.5 g/t Au. These samples are selected samples, are not in place, and are not representative of the mineralization hosted on the property. Drilling at Jubilee returned up to 11.3 g/t Au over 0.5m, 1.1 g/t Au over 23.9m (including 10.1 g/t Au over 1.4m), and 2.1 g/t Au over 8.5m (including 8.3 g/t Au over 1.4m; all interval true widths are unknown). In addition, near the end of Wolfden's prospecting at Clarence Stream, it identified three boulders immediately northeast of Jubilee that assayed 16.5 g/t Au, 13.5 g/t Au and 7.9 g/t Au and that also corresponded well with gold and arsenic till and soil anomalies.

Chip grab sampling of bedrock veins at various locations along the Clarence Stream South Zone has returned assays such as 173.0 g/t, 81.1 g/t, 42.1 g/t, 30.7 g/t, 14.0 g/t and 6.0 g/t, and chip grab sampling of bedrock veins at various locations immediately south of the South Zone returned assays such as 84.3 g/t, 22.8 g/t, 22.0 g/t, 12.1 g/t, 11.6 g/t, 9.3 g/t, and 6.3 g/t. At Lower Tower Hill, bedrock chip samples in trenches returned gold assays of 89.1 g/t and 50.4 g/t over 0.8m.
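Intercepts reported as "X g/t over Y m (including ...)" are length-weighted averages. As a sketch of that arithmetic, the snippet below backs out what the rest of the Jubilee interval quoted above must average for the reported numbers to be internally consistent; the breakdown is an illustration, not reported data.

```python
# Drill intercepts like "1.1 g/t over 23.9 m (including 10.1 g/t over 1.4 m)"
# are length-weighted averages. A minimal sketch of the arithmetic:

def composite(intervals):  # intervals: list of (grade g/t, length m)
    total_len = sum(length for _, length in intervals)
    return sum(grade * length for grade, length in intervals) / total_len

# Back out what the rest of the interval must average for the quoted
# numbers to be consistent (an illustration, not reported data):
full_gram_metres = 1.1 * 23.9    # ~26.3 gram-metres over the whole interval
incl_gram_metres = 10.1 * 1.4    # ~14.1 gram-metres from the high-grade core
rest = (full_gram_metres - incl_gram_metres) / (23.9 - 1.4)
print(f"remaining 22.5 m averages ~{rest:.2f} g/t")        # ~0.54 g/t

print(f"{composite([(10.1, 1.4), (rest, 22.5)]):.2f} g/t")  # reproduces ~1.10 g/t
```

This is why the "including" sub-interval matters: a short high-grade core can carry most of the gram-metres in an otherwise modest composite.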
With the exception of drilling at Jubilee and along the South and North Zones where the Clarence Stream resource is situated, boulders and bedrock chip samples found at Clarence Stream have not been followed up with drilling, even when they correspond with strong gold and arsenic till and soil anomalies. These represent drill targets for Galway's exploration program.

Galway Metals has undertaken a very aggressive soil sampling program consisting of more than 10,000 samples located along 12 km of the Sawyer Brook Fault System, and from discrete areas located to the north of it (at distances from the main Sawyer Brook structure similar to that of the North Zone). In addition to the soil sampling undertaken by previous operators, Clarence Stream has now been systematically sampled over a strike length of 24 km, although Galway awaits results covering approximately eight km from Leverville. This sampling was undertaken to cover areas that contained high gold, arsenic and bismuth till samples that were previously taken by the New Brunswick and Canadian governments. Till samples were taken on the corners of each claim throughout southwest New Brunswick, which equates to one sample every 400 metres. Such strong till anomalies and stream sediment anomalies on/near the Clarence Stream deposit led to its discovery (Figure 1). Follow-up soil sampling, which led to the identification of drill targets, resulted in the delineation of the North and South deposits, along with some other anomalous showings that remain untested by drilling. These samples were taken at 25 metre intervals along lines 100 metres apart. In the current program, similar sample intervals were used.

The soil results are similar to those found over zones where the resource is located at Clarence Stream, and in some respects better, in the following ways: Of the nine Freewest soils over 200 ppb Au, three are found in the undrilled New South Target area located south of the South Zone, with the highest at 385 ppb Au. As discussed above (New South Target), these strong soil anomalies south of the South Zone may prove to be important, as their location coincides well with strong chip samples identified by Freewest that have not been followed up with drilling, and they are located closer to the major intrusive in the area, which is believed to be the source of the gold-bearing fluids. Five of the six other Freewest soils above 200 ppb Au are found in the vicinity of the North Zone, with two of these samples, which have not been drill tested, grading above 400 ppb Au (Figure 4); the remaining Freewest sample above 200 ppb Au is located near the northern border with Jubilee. Two of the current top seven soils identified by Galway, including the 694 ppb soil assay, are located immediately north of the soon-to-be-acquired Lower Tower Hill property, which contains trench chip samples of 89.1 g/t Au and 50.4 g/t Au over 0.8m.

Soil samples are the brown soils directly below the roots and other organic matter that contain chemically (and mechanically) concentrated gold and other elements, whereas till samples are located below the soils in glacial till (gravel) that contains gold and other elements that were mechanically transported by glaciers. The tills in the region are generally thin (1-5 metres) and are thought to have been transported short distances (generally less than 350m).
The Leverville soil survey (Figure 5) was designed to cover an area 8.5 km to 18.5 km west of, and along strike of, the Clarence Stream South Zone. At least seven anomalous linear trends have been delineated, with a high assay of 287 ppb Au. The Leverville grid area is located south of Lower Tower Hill and covers the Sawyer Brook Fault System in that area. Assays from about 20% of the soil samples taken by Galway along the Leverville grid have been received.

The Pleasant Ridge soil survey (Figure 6) was designed to cover an area two km to four km east of, and along strike of, the Clarence Stream South Zone and on strike with the Sawyer Brook Fault System. At least six anomalous linear trends parallel with the Sawyer Brook Fault have been delineated, one of which appears to be along the contact with the intrusive to the south that is thought to be the source of the mineralizing fluids. The high assay is 205 ppb Au. Interestingly, two additional linear trends that strike perpendicular to the general northeast-southwest direction of the fault have been identified at Pleasant Ridge. Galway geologists are uncertain whether these perpendicular trends, which are 1.4 km and 1.6 km in length, are related to cross faults that are known to exist at Clarence Stream and that may be mineralized.

Five limited soil surveys were carried out to cover areas 10 km, 12 km, and 13 km west of the Clarence Stream South Zone, and 8 km and 13 km west of the North Zones (Figure 7). These surveys targeted till samples that are anomalous in arsenic.

Drilling is now ongoing, with 11 holes completed; assays from the first hole returned 6.3 g/t Au over 1.0m (1.0m TW; within 2.1 g/t Au over 6.0m (5.9m TW)). Strong quartz veining was intersected in this hole, with abundant antimony, pyrrhotite and arsenopyrite. This intersection is located 47m below a previous intersection of 4.2 g/t Au over 5.0m (4.9m TW), and 84m above a previous intersection of 8.0 g/t Au over 12.0m (10.4m TW). Complete assays are pending for the remaining holes. The holes are targeting both extensions to the resource and the upgrading of resources from inferred (100m centres) to indicated categories (as in the case of hole CS-331). Two mineralized horizons have been drilled, with all holes in the South Zone. Mineralization in the South Zone is steeply dipping and in close proximity to the Sawyer Brook Fault. Other veins intersected in the first hole returned 1.7 g/t Au over 0.8m, and 0.8 g/t Au over 1.0m.

Robert Hinchcliffe, President and CEO of Galway Metals, said, "After just four months of ownership, we are extremely pleased with the amount of progress our team has made in New Brunswick, with the till and soil surveys and boulder and bedrock chip sampling suggesting that this 45-plus km Break-related property could well be a major new gold district. We think it is an emerging gold trend with limited systematic exploration, which it deserves."

As mentioned above, Galway has entered into an agreement to acquire 100% of the Lower Tower Hill Property (Figure 1) from Globex Mining Enterprises (TSX: GMX; G1M - Frankfurt, Stuttgart, Berlin, Munich, Tradegate, and Lang & Schwartz stock exchanges; GLBXF - OTCQX International) for 260,000 shares, subject to regulatory approval, plus a 2.5% Gross Metal Royalty on those claims. This strategic property hosts historic trenching that returned chip samples of 89.1 g/t Au and 50.4 g/t Au over 0.8m.
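For readers unfamiliar with the "TW" (true width) figures quoted above: a hole that crosses a zone obliquely intersects more metres of core than the zone is actually thick, and the first-order correction is trigonometric. In the sketch below, the intersection angle is back-calculated from the quoted 6.0m interval purely for illustration; it is not reported survey data, and real resource work also accounts for hole deviation and the zone's measured orientation.

```python
# "True width" (TW) corrects a drilled interval for the angle at which the
# hole crosses the zone: TW = drilled_length * sin(angle between the core
# axis and the plane of the zone). A perpendicular hit (90 deg) gives TW = length.
import math

def true_width(drilled_m: float, angle_deg: float) -> float:
    return drilled_m * math.sin(math.radians(angle_deg))

# The quoted 6.0 m interval with 5.9 m TW implies a nearly perpendicular
# intersection (illustrative back-calculation, not reported survey data):
angle = math.degrees(math.asin(5.9 / 6.0))
print(f"implied intersection angle ~{angle:.0f} degrees")  # ~80 degrees
print(f"{true_width(6.0, angle):.1f} m")                   # recovers ~5.9 m
```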
The mineralization is similar to that existing at Clarence Stream and is also thought to be intrusive-related (in this case to an intrusive to the west, where a chip sample graded 20.2 g/t Au) and shear-related. The NW soil grid borders the soon-to-be-acquired Lower Tower Hill property on the north and contains two soils grading more than 200 ppb Au, including the highest ever reported in any survey in the area, 694 ppb Au. The new acquisition is 250 hectares in size and consists of 11 contiguous claim units with dimensions of roughly two km by 1.5 km. Galway now completely surrounds this newly acquired claim block.

Because of the success in delineating targets with soil samples where till samples are anomalous, and the realization that the prospective New South Target has never been drilled despite strongly anomalous gold-in-till, soil and chip samples, Galway has recently staked 463 additional claim units representing 10,497 hectares. The combination of recently staked claims plus the Lower Tower Hill acquisition amounts to a 25% increase in Galway's Clarence Stream property, to 54,564 hectares. Two groups of staked claims were added to the land package, located to the north and south (Figure 1), which widens Galway's property package to up to 28 km. The claims were staked to cover anomalous till samples.

The following is taken from various sections of RPA's NI 43-101 report on the Clarence Stream Property, dated September 7, 2012. Clarence Stream is located near the boundary of the Gander and Avalon terranes of the Canadian Appalachians. In southwest New Brunswick, the boundary between these major terranes is obscured by Palaeozoic age sedimentary rocks of the Mascarene Basin and the St. Croix terrane, which are the primary hosts of gold mineralization at Clarence Stream. The Sawyer Brook Fault separates these two groups of metasedimentary rocks and is interpreted as a dextral strike-slip fault that may be part of a regional, belt-parallel fault system.

The Clarence Stream deposits can be characterized as intrusion-related, quartz-vein-hosted gold deposits. These deposits consist of quartz veins and quartz stockwork within brittle-ductile fault zones that include adjacent crushed, altered wall rocks and veinlet material. The mineralized systems are hosted in intrusive and metasedimentary rocks within high-strain zones controlled by regional fault systems. Pyrite, base metal sulphides, and stibnite occur in these deposits along with anomalous concentrations of bismuth, arsenic, antimony and tungsten. Alteration in the host rocks is confined within a few metres of quartz veins and occurs mainly in the form of sericitization and chloritization. Gold-bearing minerals at Clarence Stream include aurostibite (AuSb2), electrum (20%-34% Ag), native gold, arsenopyrite (FeAsS), gudmundite (FeSbS), berthierite (FeSb2S4), jamesonite (Pb4FeSb6S14), and stibnite (Sb2S3). Pyrite (FeS2) and pyrrhotite (Fe1-xS) are common but not associated with gold.

Gold mineralization has been discovered in two main areas of the Clarence Stream property, each with unique host rocks and deposit geometry. The South Zone lies immediately to the northwest of the Saint George (Magaguadavic) Batholith, while the Anomaly-A (North) Zone lies 3.5 km further northwest. The South Zone lies within a steeply dipping, east-northeast-trending high-strain zone. RPA outlined 38 individual lenses over a strike length of two km, to a maximum depth of 350 metres.
Gold mineralization is commonly hosted in quartz veins, quartz stockwork, and along the contacts and within sheared and altered metagabbro and microgranite sills and dikes that crosscut the metasedimentary rocks of the Waweig Formation. There is a strong spatial relationship between veining and the microgranitic dikes and sills that, in detail, crosscut and post-date the gabbro. Evidence suggesting that the South Zone is related to the Saint George (Magaguadavic) Batholith includes the close spatial relationship of gold mineralization with the batholith, the presence of hornfels and of veined and altered auriferous microgranite dikes, and high concentrations of Bi, As and Sb.

RPA outlined five lenses within a one km by two km area known as Anomaly-A (North Zone). The lenses are primarily hosted within metagreywacke and argillite of the Kendal Mountain Formation. The AD-MW Lens, which dominates the mineralized veins in the North Zone, forms a bowl-shaped structure with an average vertical thickness of approximately three metres that outcrops at surface and reaches a depth of 100 metres. The geometry of the Murphy Lens is less understood due to widely spaced drilling. Gold generally occurs in areas of strong quartz veining and cataclasite. Stringer and semi-massive stibnite, arsenopyrite, and pyrite are common. Traces of sphalerite, chalcopyrite, and visible gold occur locally. The best gold values are found in shallow-dipping, sediment-hosted quartz veins and stockwork exhibiting brecciation and the emplacement of a second generation of sulphides, and in clear hairline quartz veinlets.

In compliance with National Instrument 43-101, Mr. Mike Sutton, P.Geo., is the Qualified Person who supervised the preparation of the scientific and technical disclosure in this news release. All core, chip/boulder samples, and soil samples are assayed by Activation Laboratories, 41 Bittern Street, Ancaster, Ontario, Canada, which has ISO/IEC 17025 accreditation. All core is under watch from the drill site to the core processing facility. All samples are assayed for gold by fire assay with gravimetric finish, and other elements are assayed using ICP. The Company's QA/QC program includes the regular insertion of blanks and standards into the sample shipments, as well as instructions for duplication. Standards, blanks and duplicates are inserted at one per 20 samples. Approximately five percent (5%) of the pulps and rejects are sent for check assaying at a second lab, with the results averaged and intersections updated when received. Core recovery in the mineralized zones has averaged 99%. Some samples discussed in this news release are discrete samples taken from boulders of float or outcrop; they are not necessarily representative.

Galway Metals is well capitalized, with approximately CAD$9.7 million at September 30, 2016, after accounting for the Clarence Stream and Estrades acquisitions. The Company began trading on January 4, 2013, after the successful spinout to existing shareholders from Galway Resources following the completion of the US$340 million sale of that company. With substantially the same management team and Board of Directors, Galway Metals is keenly intent on creating value similar to that achieved with Galway Resources.

CAUTIONARY STATEMENT: Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy of this news release.
No stock exchange, securities commission or other regulatory authority has approved or disapproved the information contained herein. This news release contains forward-looking information which is not comprised of historical facts. Forward-looking information involves risks, uncertainties and other factors that could cause actual events, results, performance, prospects and opportunities to differ materially from those expressed or implied by such forward-looking information. Forward-looking information in this news release includes statements made herein with respect to, among other things, the Company's objectives, goals or future plans, potential corporate and/or property acquisitions, exploration results, potential mineralization, exploration and mine development plans, timing of the commencement of operations, and estimates of market conditions. Factors that could cause actual results to differ materially from such forward-looking information include, but are not limited to, exploration results being less favourable than anticipated, capital and operating costs varying significantly from estimates, delays in obtaining or failures to obtain required governmental, environmental or other project approvals, political risks, uncertainties relating to the availability and costs of financing needed in the future, changes in equity markets, inflation, changes in exchange rates, fluctuations in commodity prices, delays in the development of projects, risks associated with the defence of legal proceedings and other risks involved in the mineral exploration and development industry, as well as those risks set out in the Company's public disclosure documents filed on SEDAR. Although the Company believes that management's assumptions used to develop the forward-looking information in this news release are reasonable, including that, among other things, the Company will be able to identify and execute on opportunities to acquire mineral properties, exploration results will be consistent with management's expectations, financing will be available to the Company on favourable terms when required, commodity prices and foreign exchange rates will remain relatively stable, and the Company will be successful in the outcome of legal proceedings, undue reliance should not be placed on such information, which only applies as of the date of this news release, and no assurance can be given that such events will occur in the disclosed time frames or at all. The Company disclaims any intention or obligation to update or revise any forward-looking information contained herein, whether as a result of new information, future events or otherwise, except as required by applicable securities laws.


News Article | November 28, 2016
Site: www.theenergycollective.com

Regardless of the outcome of the meeting on 30 November, the future of OPEC looks uncertain. The organisation is facing a perfect storm, squeezed as it is between the revolution in shale oil, which has increased global supply and brought down prices, and the prospect of a global peak in demand stemming from climate policies and the falling costs of alternatives. Some have even declared the death of OPEC, but according to Thijs Van de Graaf, professor at the Ghent Institute for International Studies, this is premature. He believes it is more likely that OPEC will evolve from an output-setting cartel into an information clearing house. In fact, he writes, OPEC was never a cartel to begin with – and certainly never a powerful one.

OPEC is facing some of the most severe threats in its 56-year history. The ‘fracking revolution’ has unlocked large swaths of new oil and gas supplies, contributing to a global glut. Alternative energy technologies are seeing impressive falls in costs, with solar photovoltaic prices dropping more than 60 percent between 2009 and 2016. A new climate treaty was adopted by 195 nations in December 2015, aiming to limit climate change to ‘well below’ 2 degrees Celsius, which would render the bulk of fossil fuel reserves ‘unburnable’. On top of that, the dramatic fall in oil prices since mid-2014, after a four-year period of relatively stable and high prices, has exposed the economic fragility of many OPEC countries that are heavily reliant on revenues from the foreign sales of crude oil, most notably Venezuela, which saw its economy shrink by 5.7% in 2015.

The self-proclaimed cartel has failed to adopt a coherent, united stance in response to these challenges. At a dramatic meeting in November 2014, OPEC opted to let market forces play out. The inability of OPEC to agree to production cuts triggered a battle for market share, both inside and outside the cartel (see Figure 1). An attempt to forge a ‘production freeze’ (not to be conflated with a production cut) between OPEC countries and Russia at a meeting in Doha in April 2016 utterly failed. The talks collapsed at the 11th hour after Saudi Arabia refused to sign a deal without Iran, which in turn did not want to participate in a production freeze, arguing it needed to recapture market share lost while it was under international sanctions. Another attempt in September 2016 seemed more successful, with OPEC countries agreeing to adopt a production target ‘ranging between 32.5 and 33.0 mb/d’. On 30 November, the OPEC members will make a formal decision on this plan. But even if OPEC reasserts itself at this meeting, questions remain, in particular (i) whether OPEC countries will actually follow through on this commitment, and (ii) whether such a production cut could have major knock-on effects on global prices, in light of the large inventory overhang that needs to be cleared first.

What is more, the battle for market share is only half the story. With its vast and cheap oil reserves, Saudi Arabia has long been wary of ‘demand destruction’ and wants to keep oil consumers hooked on oil, as was illustrated in a US Department of State cable that was made public by Wikileaks. ‘Saudi officials are very concerned that a climate change treaty would significantly reduce their income,’ James Smith, the U.S. ambassador to Riyadh, wrote in a 2010 memo to U.S. Energy Secretary Steven Chu.
‘Effectively, peak oil arguments have been replaced by peak demand.’ It thus seems reasonable to assume that for Saudi officials, low oil prices also serve as a hedge against a rising tide of fuel economy, biofuels, electric vehicles, natural gas vehicles, advances in energy storage, et cetera.

Some analysts suggest that the cartel’s failure to reach a united position ‘is not merely a sign that its influence is at a cyclical low ebb, but rather a portent of a more structural shift into irrelevance.’ By announcing a national plan to wean the kingdom’s economy off oil revenue, Saudi Arabia is said to be ‘sounding the group’s death knell.’ Even within the organisation itself, the view is taking root that the club is in decay. At the May 2016 OPEC board of governors meeting in Vienna, a representative from a ‘non-Gulf Arab country’ pronounced OPEC dead. Predictions of OPEC’s demise have a long history, of course, and so far they have always proven to be exaggerated. Yet some analysts, such as Ed Morse from the investment bank Citigroup, maintain that ‘this time around might well be different,’ because the shale revolution has heralded a ‘new oil order.’ Certainly it’s difficult to deny that OPEC does indeed face a dramatically altered external environment, brought about by three main trends: the fracking revolution and the risk of prolonged low oil prices, tightening climate policies, and cheaper alternatives to oil.

The conventional view of energy geopolitics has long been underpinned by the expectation that global demand for oil will continue to grow unabated. The geopolitics of energy was then framed as a struggle for access to scarce oil and gas reserves, a dominant image that is still often reproduced in the media. That common wisdom has now changed. The new geopolitics of energy is characterised by abundance rather than scarcity, even at low prices. In fact, OPEC countries might not be able to burn through all their fossil fuel reserves due to climate change regulation, leaving them with stranded assets. Key trends in efficiency, fuel-switching and market saturation point in the direction of a demand peak for oil instead of a supply peak. Oil producers are coming to realise that oil in the ground is not like ‘money in the bank’ but that these resources might someday be less valuable than oil produced and sold in the short term.

The first crack in this conventional view of energy geopolitics arose with the recent shale and fracking revolution, which has unlocked large new oil and gas deposits for commercial extraction. To be sure, tight oil and shale gas production comes at a price compared to conventional extraction, in terms of higher exploration and production costs, a lower energy return on investment (EROI), and grave environmental and social risks. These costs and externalities, though, have not prevented the rapid and vast boom of the shale gas and tight oil industry in the United States, which alone added almost 4 mb/d of oil to the world’s oil production between 2007 and 2015 (see Figure 1). The IEA expects a number of countries to follow in the footsteps of the United States, with China likely in the vanguard, though it will take a few more years before their efforts to tap shale gas and tight oil deposits at a large scale bear fruit.
There are other emerging sources of supply besides shale oil, including biofuels, oil sands, deepwater deposits, and growing conventional production from countries like Iraq, which might substantially increase the global reserve base. Coupled with OPEC’s rising internal demand and stagnant or even falling upstream capacity, the group’s share of the export market might be eroded over time. But the advent of the shale and tight oil industry stands out for three reasons.

First, by unlocking vast resources that had long been deemed uneconomical, fracking technology has dispelled ‘peak oil’ worries just as rising climate concerns have begun to cast doubt on the long-term outlook for oil demand growth. This has fueled speculation that a huge ‘carbon bubble’ is in the making, that large amounts of oil would have to ‘stay in the ground’, and that some of OPEC’s resources might end up being ‘stranded assets’. This might change the revenue-maximising strategy of low-cost producers like Saudi Arabia and give them an incentive to speed up, rather than slow down, oil extraction.

Second, the shale revolution accelerates the eastward migration of the global oil market, whereby the center of gravity of oil consumption, and hence of oil trade flows, is decidedly shifting to the so-called ‘East of Suez’ region. That leaves oil exporters competing with each other for an increasingly concentrated Asian market, which is itself dominated by supergiant Chinese oil trading companies with considerable market power. This situation provides another deterrent to OPEC production cuts.

Last but not least, the shale oil revolution has changed the cost curve and the elasticity of oil supply. The fracking industry operates on a much shorter investment cycle than the conventional oil industry: upfront costs are relatively low, decline rates are steep, and lead times and payback times are short. There is no real exploration process to speak of because the location and broad characteristics of the main plays are well known. The time from an investment decision to actual production is measured in months, rather than years, making the tight oil industry far more nimble and responsive to price signals.

On the demand side, the Paris Agreement concluded in December 2015 might prove to be a game-changer. Even though the text of the Agreement nowhere mentions the words ‘oil’, ‘gas’, ‘energy’, ‘fossil fuels’ or even ‘carbon’, the deal effectively implies a complete overhaul of the world’s energy mix. By agreeing on the political goal of limiting the average global surface temperature increase to ‘well below’ 2°C above preindustrial levels, and even trying to keep it below 1.5°C, the Paris Agreement boils down to a commitment to phase out fossil fuels before the end of the century.

Under a scenario where fossil fuel use is reduced to limit global warming to 2°C, oil will probably be phased out more slowly than coal, which is far more polluting and has more substitutes. Yet oil will certainly not be able to expand at the rate it used to. The IEA’s latest 450 Scenario, which is consistent with a 50% chance of less than 2°C of global warming, projects global oil demand to peak at 93.7 million b/d in 2020 and thereafter fall to 74.1 million b/d by 2040. The oil industry’s decades-old expansion would thus come to a halt and give way to permanent decline: oil would become an ex-growth sector. This could trigger a ‘race to sell oil’ among petrostates.
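What that scenario implies numerically is easy to check with a compound-growth calculation. Below is a minimal sketch in Python; the function is generic, and the inputs are simply the 450 Scenario figures quoted above:

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two endpoints."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# 450 Scenario figures quoted above: oil demand peaks at 93.7 mb/d in 2020,
# then falls to 74.1 mb/d by 2040.
post_peak = cagr(93.7, 74.1, 2040 - 2020)
print(f"Implied post-peak change: {post_peak:.2%} per year")  # about -1.2%
```

A sustained decline of roughly 1.2% a year over two decades would indeed be a stark reversal for an industry accustomed to near-uninterrupted growth.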
McGlade and Ekins have calculated that, globally, a third of oil reserves, half of gas reserves and over 80% of current coal reserves should remain unused from 2010 to 2050 in order to have a better-than-even chance of meeting the 2°C target. These ‘unburnable reserves’ do not decrease very much in a scenario with widespread deployment of carbon capture and storage (CCS). For example, the amount of ‘unburnable’ oil inches down only slightly, from 35% to 33% of all reserves, if CCS is widely deployed. The modest effect of CCS reflects three factors: CCS will take decades to scale up globally, the technique might not be more cost-effective than renewables or nuclear, and it is not entirely carbon-free. Table 3 depicts the shares of fossil fuel reserves that should be kept in the ground to have a medium chance of limiting warming to 2°C in a scenario with CCS deployment.

Admittedly, there are good reasons to be skeptical about the actual results of COP21. However, even if the 2°C goal is not met, there are significant drivers that could lead to a peak in global oil demand, including lower economic growth (especially in emerging markets), the falling cost of renewables and electricity storage, the emergence of prosumers with a keen interest in electric vehicles, the spread of more stringent policies to mitigate air pollution or water stress, and the growing decoupling between oil consumption and economic growth due to greater efficiency.

In short, the writing is on the wall that oil will never again grow at its historic rates. As Figure 2 shows, in only 9 of the past 50 years did global demand for oil contract. In all other years it grew, quite often by more than 3% a year. Over the whole period (1966-2015), the compound annual average growth rate of oil demand was 1.94%. Across the IEA’s scenarios (2013-2040), this rate slows to 0.88% (Current Policies Scenario), 0.43% (New Policies Scenario), or even -0.85% (450 Scenario).

In light of these challenges, observers have declared OPEC dead as a cartel. There are four major reasons why this view is misguided.

First, OPEC never really was a cartel, let alone an omnipotent one. It only began to set production targets in 1982, 22 years after it was founded, and even then it was not very successful. Despite OPEC’s efforts to function as a cartel, the oil price plummeted in the first half of the 1980s. Most, if not all, OPEC countries cheated on their allocated production targets until Saudi Arabia’s patience was exhausted and the Kingdom decided to flood the market in 1986, in order to regain market share and punish the cheaters. Colgan (2014) finds that the cartel overproduced a staggering 96 percent of the time in the period 1982-2009.

Second, OPEC’s reserves are not stranded yet. OPEC still commands the largest conventional oil reserves, and the Gulf members in particular have very low production costs. If oil is stranded due to climate policies, the casualties will most likely be the most expensive, risky and polluting fields, such as the Arctic, ultra-deepwater fields, and the tar sands. Cherp et al. (2013) find that a peak in oil demand due to climate policies could even lead to a higher concentration of production in the hands of those states holding the largest conventional oil reserves, which are generally cheaper and less carbon-intensive.

Third, OPEC has demonstrated remarkable flexibility and resilience during its lifetime.
The organisation has survived various price crashes, as well as the emergence of the North Sea in the 1970s, Alaska in the 1980s, offshore oil production in the 1990s and biofuels in the 2000s—all of which were seen as existential threats. Crucially, OPEC even hung together when the Saudis inflicted a lot of pain on their fellow cartel members in the late 1980s. Most curiously, OPEC oil ministers have continued to meet in Vienna even when they were at war with each other, such as Iraq and Iran in the 1980s (see picture 1), Iraq and Kuwait in the 1990s, and Iran and Saudi Arabia today (who are fighting proxy wars in Yemen and Syria). As Antoine Halff, a former IEA oil market specialist, has convincingly argued: ‘OPEC has changed and the idea that giving up on supply management means repudiating what the group is all about only focuses on a limited period of its history and confuses one stage of policy with the essence of the group.’

In this flexibility also lies the key to understanding why OPEC is the only commodity organisation to have survived, whereas earlier commodity agreements (including those for tin, coffee, and natural rubber) have faltered and disappeared. OPEC has no legal clause prescribing how to intervene in specific market conditions, which allows it to respond flexibly to changing circumstances.

Finally, OPEC will not wither away quickly because it still proves useful for its member countries, as is most vividly illustrated by the recent re-entries of Indonesia and Gabon to the club. The re-entry of Indonesia is most remarkable since the country has become a net importer. For its members, OPEC is useful as a forum for sharing information, as a forum for deliberation and, most notably, as a source of prestige.

There is a persistent rational myth that OPEC is a powerful cartel. International media are obsessed with OPEC meetings, outcomes and declarations, even if the group’s (long-term) impact on oil prices is heavily disputed. But what these reports miss is that the output-setting function is not the primary reason for OPEC’s existence. OPEC is just as much a high-level, influential international organisation of oil exporters. OPEC’s endurance amidst the tectonic changes that have taken place in the global petroleum market has been called an ‘enigma of world politics’, and the organisation itself has been referred to as a ‘striking anachronism’.

Students of international relations know, however, that international organisations rarely die. Robert Keohane famously stated that international institutions are ‘easier to maintain than to construct.’ There are many examples of international organisations that have outlived their original mandate. Think of NATO’s resilience after the end of the Cold War, the World Bank’s endurance after the postwar reconstruction of Europe, or the Bank for International Settlements surviving the Great Depression and the Second World War. In the same vein, it is conceivable that OPEC survives the transition to a post-carbon society, as long as it finds a niche for itself that proves valuable to its member states.

Other international energy organisations, such as the IEA, have been working for years to adapt to major shifts in the global energy landscape. The IEA has been quite successful in this regard, and is touted as a model for the reform of other global institutions. Yet other international energy bodies have been much slower to adapt, and some stick around without being very meaningful.
A case in point is the Energy Charter Treaty, which has been in complete disarray since Russia’s formal withdrawal in 2009, despite recent attempts to reinvigorate the organisation. The key question thus becomes whether OPEC countries will engage in a far-reaching examination of the organisation’s mission and toolbox, or whether the club will sink into oblivion. A systematic account of the history of global energy governance has shown that oil exporters might engage in institutional innovation when they are dissatisfied with the level of their oil revenues. The current low oil prices might thus provide a window of opportunity to reform OPEC.

Over the short to medium term, OPEC might continue to serve as a forum to facilitate attempts at managing oil supply. For all the doubts expressed about it, the recent Algiers agreement signals that there still exists a willingness to intervene and stabilise oil markets, in spite of the rhetoric that the oil market should now manage itself.

Over the longer term, as the world shifts to cleaner fuels, OPEC could provide a valuable framework for exchanging critical information among member states about the implications of this shift. This could mean technical cooperation on technologies such as CCS, which may play a role in the transition and turn OPEC countries’ depleted oil and gas wells into another source of income. But it could also entail the sharing of best practices on how to make a national economy less dependent on the revenues from the foreign sales of crude. Despite many attempts to diversify petro-economies, there are only scant examples of success (e.g., Indonesia, Malaysia and Dubai), and it is questionable whether these models can be replicated. OPEC’s Secretariat could become an information clearing house, sharing knowledge of what works and what does not in particular circumstances.

Thijs Van de Graaf [thijs.vandegraaf@ugent.be] is an Assistant Professor of International Politics at the Ghent Institute for International Studies, Ghent University. His research focuses on global energy governance and energy policy. This article is an edited and shortened version of a fully annotated academic paper to be published in the journal Energy Research and Social Science.


News Article | January 1, 2016
Site: phys.org

Most experts agree that solar power must be a critical component of any long-term plan to address climate change. By 2050, a major fraction of the world's power should come from solar sources. However, analyses performed as part of the MIT "Future of Solar Energy" report found that getting there won't be straightforward. "One of the big messages of the solar study is that the power system has to get ready for very high levels of solar PV generation," says Ignacio Pérez-Arriaga, a visiting professor at the MIT Sloan School of Management from IIT-Comillas.

Without the ability to store energy, all solar (and wind) power devices are intermittent sources of electricity. When the sun is shining, electricity produced by PVs flows into the power system, and other power plants can be turned down or off because their generation isn't needed. When the sunshine goes away, those other plants must come back online to meet demand. That scenario poses two problems. First, PVs send electricity into a system that was designed to deliver it, not receive it. And second, their behavior requires other power plants to operate in ways that may be difficult or even impossible. The result is that solar PVs can have profound, sometimes unexpected impacts on operations, future investments, costs, and prices on both distribution systems—the local networks that deliver electricity to consumers—and bulk power systems, the large interconnected systems made up of generation and transmission facilities. And those impacts grow as the solar presence increases.

To examine impacts on distribution networks, the researchers used the Reference Network Model (RNM), which was developed at IIT-Comillas and simulates the design and operation of distribution networks that transfer electricity from high-voltage transmission systems to all final consumers. Using the RNM, the researchers built—via simulation—several prototype networks and then ran multiple simulations based on different assumptions, including varying amounts of PV generation.

In some situations, the addition of dispersed PV systems reduces the distance electricity must travel along power lines, so less is lost in transit and costs go down. But as the PV energy share grows, that benefit is eclipsed by the need to invest in reinforcing or modifying the existing network to handle two-way power flows. Changes could include installing larger transformers, thicker wires, and new voltage regulators or even reconfiguring the network, but the net result is added cost to protect both equipment and quality of service.

Figure 1 below presents sample results showing the impact of solar generation on network costs in the United States and in Europe. The outcomes differ, reflecting differences in the countries' voltages, network configurations, and so on. But in both cases, costs increase as the PV energy share increases from 0 to 30 percent, and the impact is greater when demand is dominated by residential rather than commercial or industrial customers. The impact is also greater in less sunny regions. Indeed, in areas with low insolation, distribution costs may nearly double when the PV contribution exceeds one-third of annual load. The reason: When insolation is low, many more solar generating devices must be installed to meet a given level of demand, and the network needs to be ready to handle all the electricity flowing from those devices on the occasional sunny day.
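The insolation effect can be made concrete with a little arithmetic. In the sketch below (the load and capacity factors are hypothetical round numbers, not values from the study), the nameplate PV capacity needed to serve a fixed annual load scales inversely with the capacity factor, yet the network must still be sized for full output on a clear midday:

```python
HOURS_PER_YEAR = 8760

def pv_capacity_needed(annual_load_mwh: float, capacity_factor: float) -> float:
    """Nameplate PV capacity (MW) required to serve a given annual load (MWh)."""
    return annual_load_mwh / (capacity_factor * HOURS_PER_YEAR)

annual_load = 100_000  # MWh/year on an illustrative distribution network
for region, cf in [("sunny region", 0.22), ("low-insolation region", 0.11)]:
    mw = pv_capacity_needed(annual_load, cf)
    print(f"{region}: {mw:.0f} MW of PV installed; "
          f"network must absorb up to ~{mw:.0f} MW on a clear midday")
```

Halving the capacity factor doubles the installed capacity, and with it the peak reverse flow the wires and transformers must be reinforced to carry.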
One way to reduce the burden on distribution networks is to add local energy storage capability. Depending on the scenario and the storage capacity, at 30 percent PV penetration, storage can reduce added costs by one-third in Europe and cut them in half in the United States. "That doesn't mean that deployment of storage is economically viable now," says Pérez-Arriaga. "Current storage technology is expensive, but one of the services with economic value that it can provide is to bring down the cost of deploying solar PV."

Another concern stems from methods used to calculate consumer bills—methods that some distribution companies and customers deem unfair. Most U.S. states employ a practice called net metering. Each PV owner is equipped with an electric meter that turns one way when the household is pulling electricity in from the network and the other when it's sending excess electricity out. Reading the meter each month therefore gives net consumption or (possibly) net production, and the owner is billed or paid accordingly.

Most electricity bills consist of a small fixed component and a variable component that is proportional to the energy consumed during the time period considered. Net metering can have the effect of reducing, canceling, or even turning the variable component into a negative value. As a result, users with PV panels avoid paying most of the network costs—even though they are using the network and (as explained above) may actually be pushing up network costs. "The cost of the network has to be recovered, so people who don't own solar PV panels on their rooftops have to pay what the PV owners don't pay," explains Pérez-Arriaga. In effect, the PV owners are receiving a subsidy that's paid by the non-PV owners.

Unless the design of network charges is modified, the current controversy over electricity bills will intensify as residential solar penetration increases. Therefore, Pérez-Arriaga and his colleagues are developing proposals for "completely overhauling the way in which the network tariffs are designed so that network costs are allocated to the entities that cause them," he says.
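To make the billing mechanics concrete, here is a minimal sketch of a net-metered monthly bill under the two-part tariff described above. The tariff numbers are hypothetical round figures, not rates from the study:

```python
def monthly_bill(kwh_imported: float, kwh_exported: float,
                 fixed_charge: float = 10.0, rate: float = 0.15) -> float:
    """Two-part tariff with net metering: a fixed charge plus a volumetric
    charge on *net* consumption. Exports offset imports one for one, so the
    variable component can shrink to zero or even go negative."""
    net_kwh = kwh_imported - kwh_exported
    return fixed_charge + rate * net_kwh

print(f"No PV:        ${monthly_bill(900, 0):7.2f}")     # $145.00
print(f"PV owner:     ${monthly_bill(900, 850):7.2f}")   # $17.50
print(f"Net exporter: ${monthly_bill(900, 1000):7.2f}")  # -$5.00
```

The PV owner in the middle case still imports 900 kWh through the network but pays almost none of the network costs, which is exactly the cross-subsidy Pérez-Arriaga describes.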
In other work, the researchers focused on the impact of PV penetration on larger-scale electric systems. Using the Low Emissions Electricity Market Analysis model—another tool developed at IIT-Comillas—they examined how operations on bulk power systems, the future generation mix, and prices on wholesale electricity markets might evolve as the PV energy share grows.

Unlike deploying a conventional power plant, installing a solar PV system requires no time-consuming approval and construction processes. "If the regulator gives some attractive incentive to solar, you can just remove the potatoes in your potato field and put in solar panels," Pérez-Arriaga says. As a result, significant solar generation can appear on a bulk power system within a few months. With no time to adjust, system operators must carry on using existing equipment and methods of deploying it to meet the needs of customers.

A typical bulk power system includes a variety of power plants with differing costs and characteristics. Conventional coal and nuclear plants are inexpensive to run (though expensive to build), but they don't switch on and off easily or turn up and down quickly. Plants fired by natural gas are more expensive to run (and less expensive to build), but they're also more flexible. In general, demand is met by dispatching the least expensive plants first and then turning to more expensive and flexible plants as needed.

For one series of simulations, the researchers focused on a power system similar to the one that services much of Texas. Results presented in Figure 2 in the slideshow above show how PV generation affects demand on that system over the course of a summer day. In each diagram, yellow areas are demand met by PV generation, and brown areas are "net demand," that is, remaining demand that must be met by other power plants. Left to right, the diagrams show increasing PV penetration.

Initially, PV generation simply reduces net demand during the middle of the day. But when the PV energy share reaches 58 percent, the solar generation pushes down net demand dramatically, such that when the sun goes down, other generators must go from low to high production in a short period of time. Since low-cost coal and nuclear plants can't ramp up quickly, more expensive gas-fired plants must cut in to do the job.

That change has a major impact on prices on the wholesale electricity market. Each owner who sends a unit of electricity into the bulk power system at a given time gets paid the same amount: the cost of producing a unit of electricity at the last plant that was turned on, thus the most expensive one. So when PVs come online, expensive gas-fired plants shut off, and the price paid to everyone drops. Then when the sun goes away and PV production abruptly disappears, gas-fired plants are turned back on and the price goes way up. As a result, when PV systems are operating and PV penetrations are high, prices are low, and when they shut down, prices are high. Owners of PV systems thus receive the low prices and never the high. Moreover, their reimbursement declines as more solar power comes online, as shown by the downward sloping blue curve in Figure 1 in the slideshow above.

Under current conditions, as more PV systems come online, reimbursements to solar owners will shrink to the point that investing in solar is no longer profitable at market prices. "So people may think that if solar power becomes very inexpensive, then everything will become solar," Pérez-Arriaga says. "But we find that that won't happen. There's a natural limit to solar penetration after which investment in more solar will not be economically viable."

However, if goals and incentives are set for certain levels of solar penetration decades ahead, then PV investment will continue, and the bulk power system will have time to adjust. In the absence of energy storage, the power plants accompanying solar will for the most part be gas-fired units that can follow rapid changes in demand. Conventional coal and nuclear plants will play a diminishing role—unless new, more flexible versions of those technologies are designed and deployed (along with carbon capture and storage for the coal plants). If high subsidies are paid to PV generators or if PV cost diminishes substantially, conventional coal and nuclear plants will be pushed out even more, and more flexible gas plants will be needed to cover the gap, leading to a different generation mix that is well-adapted for coexisting with solar.
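The price swings described above follow directly from merit-order dispatch, in which the most expensive plant running sets the price paid to all. Here is a minimal sketch; the plant sizes and costs are hypothetical, chosen only to mimic the coal, nuclear and gas stack described above:

```python
# Dispatch stack ordered from cheapest to most expensive:
# (name, capacity in GW, marginal cost in $/MWh). Hypothetical numbers.
STACK = [("nuclear", 10, 10), ("coal", 15, 25), ("gas", 20, 60)]

def clearing_price(demand_gw: float, pv_gw: float) -> float:
    """Marginal cost of the last plant dispatched to cover net demand."""
    net = max(demand_gw - pv_gw, 0.0)  # PV output serves demand first
    for _name, capacity, cost in STACK:
        net -= capacity
        if net <= 0:
            return cost  # the most expensive unit running sets the price
    raise ValueError("demand exceeds available capacity")

print(clearing_price(demand_gw=40, pv_gw=20))  # midday sun: 25 $/MWh, gas off
print(clearing_price(demand_gw=40, pv_gw=0))   # after sundown: 60 $/MWh, gas on
```

PV owners are paid only in the cheap midday hours and are absent from the expensive evening ones, which is why their average revenue falls as penetration rises.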
"Storage helps solar PVs have more value because it is able to bring solar-generated electricity to times when sunshine is not there, so to times when prices are high," Pérez-Arriaga says. As Figure 3 in the slideshow above demonstrates, adding storage makes investments in PV generation more profitable at any level of solar penetration, and in general the greater the storage capacity, the greater the upward pressure on revenues paid to owners. Energy storage thus can play a critical role in ensuring financial rewards to prospective buyers of PV systems so that the share of generation provided by PVs can continue to grow—without serious penalties in terms of operations and economics. Again, the research results demonstrate that developing low-cost energy storage technology is a key enabler for the successful deployment of solar PV power at a scale needed to address climate change in the coming decades. Explore further: PV production grows despite a crisis-driven decline in investment More information: The Remuneration Challenge: New Solutions for the Regulation of Electricity Distribution Utilities Under High Penetrations of Distributed Energy Resources and Smart Grid Technologies: mitei.mit.edu/system/files/20141015-The-Remuneration-Challenge-MIT-CEEPR-No-2014-005.pdf A Framework for Redesigning Distribution Network Use of System Charges Under High Penetration of Distributed Energy Resources: New Principles for New Problems. mitei.mit.edu/system/files/20141028_UOF_DNUoS-FrameworkPaper.pdf


News Article | August 22, 2016
Site: news.mit.edu

Researchers at MIT are making fluorescent polymer gels that change color when they’re shaken, heated, exposed to acid, or otherwise disrupted. Given such a response, these novel materials could be effective sensors for detecting changes in structures, fluids, or the environment. To create the gels, the researchers combine a widely used polymer with a metal that fluoresces and a chemical that can bind the two together. Mixed into a solvent, the metal and binder instantly self-assemble, grabbing the polymer molecules and pulling them together to form a gel. By using different metals, the researchers can control the physical properties of the gel as well as the color of light it emits. In a series of tests, the gels emitted a color-coded response to a variety of subtle external stimuli and later returned to their pre-stressed state and color.

Natural organisms display some remarkable behaviors. The mussel, for example, produces strong fibers that allow it to attach tightly to boats, rocks, and other underwater surfaces. But those fibers are changeable. Pull on an attached mussel, and the stiff fibers become stretchy. Let go, and the fibers go back to their original stiff state, “self-repairing” any damage that’s occurred. In contrast, human-made materials are typically not very dynamic, and when they break, the damage is irreversible.

Niels Holten-Andersen, assistant professor in the Department of Materials Science and Engineering (DMSE) and the Doherty Professor in Ocean Utilization, has long been interested in mussel fibers and the component that’s key to their success — the metal coordination complex. This structure consists of a single metal ion (a charged particle) with several chemically bound arms, or “ligands,” radiating outward. The ligands are made of organic (carbon-containing) molecules and can attach to other molecules, enabling the complex to serve as a crosslink that binds materials together. Given that capability, the metal coordination complex plays an important role in many biological systems, including the human body, where it catalyzes enzyme-controlled reactions and binds oxygen to hemoglobin in blood.

Holten-Andersen says he’s always been fascinated by the way that nature assembles materials, putting together proteins and sugars and fatty acids in creative ways to form complex dynamic structures. “We can’t copy nature’s materials. For example, it’s difficult to synthesize proteins in the lab,” he says. “But we can see how nature builds its materials and why they work the way they do. We can then try to mimic the way nature has done it but using simple, inexpensive building blocks that we know how to make.”

To Holten-Andersen, polymer building blocks seemed like a good bet. “We know how to make simple, cheap, green polymers in large quantities,” he says. So four years ago, he decided to try making polymer gels held together by metal coordination complexes built on transition metals — a family of elements he’d frequently seen in biological settings. Initial results were promising. The polymer molecules, metals, and ligands instantly self-assembled into gels, and the mechanical properties and emitted colors of the gels depended on the transition metal used.

Encouraged by those results, Holten-Andersen decided to try a different family of metals, the lanthanides. Like the transition metals, the lanthanides — often referred to as the rare-earth elements — provide a host of interesting and complicated behaviors.
But they have one additional intriguing characteristic: They fluoresce. Shine ultraviolet light on a lanthanide, and it becomes excited and emits light at a characteristic wavelength. “By using the lanthanides, we could still control the properties of our gels, but now we’d have light emission that would reflect any changes in those properties,” Holten-Andersen says. “With those two features intimately coupled, any time the physical properties were disturbed — say, by a change in the temperature of the nearby air or the pH of the surrounding water — the color emitted would change.” Such a polymer gel could report on its own state and serve as an excellent sensor. For example, it could be used as a coating that monitors the structural integrity of pipes, cables, and other underwater structures critical to offshore oil and gas and wind farm operations.

Before beginning to work with polymers, Holten-Andersen wanted to confirm — as others had shown — that mixing lanthanide ions and ligands in a solvent would produce light-emitting fluids. Accordingly, he and his team in the Laboratory for Bio-Inspired Interfaces — Pangkuan Chen, postdoc in DMSE; Qiaochu Li and Scott Grindy, both DMSE graduate students; and rising senior Rebecca Gallivan of DMSE and rising junior Caroline Liu of mechanical engineering — combined terpyridine, a commercially available ligand material, with selected lanthanides in a solvent. As Figure 1 in the slideshow above demonstrates, the mixtures produced liquids that fluoresce under ultraviolet light in the characteristic colors of the three lanthanides: blue for lanthanum, red for europium, and green for terbium. Those results confirm that the complexes formed as expected.

But a mixture emitting pure white light would make a far better sensor: It’s easier to see white light turn slightly green than it is to see green light become a little less green. To their surprise, the researchers found that producing a white-light-emitting fluid was simple. Since white light is actually a combination of many colors, they just needed to mix together their blue, red, and green light-emitting fluids. As shown in Figure 1 in the slideshow above, putting together equal parts of the three colored fluids produced a glowing white liquid.

The researchers next exposed their white-light-emitting fluid to a series of external stimuli to see if they’d get a color-coded response — and they did. For example, when they gently heated the fluid from room temperature to 55 degrees Celsius, the emitted light gradually changed color. When they let it cool down, the white light returned. The ligand and metal ions had come apart when they were heated and then reassembled when they were allowed to cool. The fluid also proved sensitive to wide-ranging changes in pH.

“So we found that this simple blue-red-green approach to making a white-light-emitting system indeed leads to materials that respond to a variety of stimuli, and with that response comes a change in color,” says Holten-Andersen. The fluids might therefore serve as good sensors for detecting chemical variations within a liquid or for observing velocity gradients in fluid flow experiments — differences in flows that now must be determined indirectly by simulation.

In the next series of tests, the researchers tried incorporating their lanthanide ions and terpyridine ligands into a widely used polymer called polyethylene glycol, or PEG. At the beginning of the experiments, the polymer molecules coupled with ligands were free-floating in a solvent.
“We then mixed in one of our lanthanide metals, and after some gentle shaking, the mixture changed from a fluid to a fluorescent gel,” says Holten-Andersen. The metal ions and ligands had self-assembled, linking the polymers together. Once again, they found that gels based on different lanthanides emitted different colors, and combining them in various ratios produced shifts in color.

The lead image above shows a series of gels made using europium and terbium. (It turned out they didn’t need lanthanum because the ligand itself emits blue.) The sample at the far left is all europium, therefore red; the one at the far right is all terbium, therefore blue-green; and those in between are made with various ratios of the two. Bright white luminescence appears in the second sample from the right, when the mixing ratio of terbium to europium is 96 to 4. The samples demonstrate the simplicity of designing “metallogels” with a broad spectrum of colors.

Like the fluids, the gels proved to be sensitive detectors of changes in temperature and pH. But perhaps the most dramatic response came when the gels were sonicated, that is, disrupted by exposure to high-frequency sound waves. Figure 2 above shows changes in the white-light-emitting gel during immersion in an ultrasonic bath. In the sample taken after 5 minutes, the gel is partially broken down into a fluid. The gel that remains retains its white luminescence, while the fluid gives off blue light — emitted by the now-unbound ligands. After another 11 minutes of shaking, the conversion of the sample from gel to fluid is complete, and the blue light of the ligands dominates.

And again, given time, the white-light gel reassembles. “When we let the blue fluid rest overnight, the polymers found each other again, and it turned back into a gel and made white light,” says Holten-Andersen. “That was very exciting for us because it really shows in principle that as a proof-of-concept, our approach works under these conditions. We can make a material that emits white light, reports its own failure, and then recovers. So it’s a self-reporting material that’s also self-healing.”

Holten-Andersen and his team are now investigating the use of their materials as coatings that can sense structural failure as well as pH and temperature changes — a capability that will be valuable in many energy and environmental systems. Current work focuses on coatings for underwater cables used to transport electric power from offshore wind turbines to shore.

The researchers are also planning more fundamental studies. There’s a lot of interest in making materials that can change in response to various outside stimuli and then autonomously repair, returning to their original state. The availability of such self-healing materials would reduce the need to fabricate replacements for them over time. Knowing how to build self-healing materials, however, requires knowing how those materials fail and repair in the field, and that’s difficult to study, says Holten-Andersen. He hopes their new materials may help. The chemical bonds in the metal-coordinate crosslinks have a remarkable ability to break and then re-form — and to announce that activity with changes in light emission. Guided by those light changes plus high-resolution imagery, the researchers may be able to get new insights into when, where, and how the material breaks and then comes back together.

Holten-Andersen stresses that we still have lots to learn from nature.
“We’re just scratching the surface in understanding nature, given the technology we now have to look at it,” he says. For example, he believes that we’re far from finding all the metal coordination complexes that nature uses. They could occur in other natural materials with remarkable properties — perhaps in spider silk, which is tough, elastic, resilient, and one of the strongest materials known. “It’s hard to see these metals,” he says. “They appear in tiny concentrations and a single molecule at a time. But I think metal coordination complexes are much more prevalent in nature’s materials than we are currently aware of.” And coordination complexes are just one among many tricks that nature has developed over millions of years to help organisms deal with challenging environmental conditions.

This research was sponsored by the MIT Energy Initiative (MITEI) Seed Fund Program and by MIT Sea Grant via the Doherty Professorship in Ocean Utilization. Student researcher Caroline Liu received support from the Energy Undergraduate Research Opportunities Program through MITEI with funding from Lockheed Martin, a Sustaining Member of MITEI. This article appears in the Spring 2016 issue of Energy Futures, the magazine of the MIT Energy Initiative.
