News Article | February 15, 2017
MINNEAPOLIS--(BUSINESS WIRE)--Xcel Energy announced today it is partnering with the Financial Services Information Sharing and Analysis Center (FS-ISAC) to create a new threat information sharing community. The new community will enhance the energy sector’s access to cyber and physical security intelligence. “Ensuring the integrity of critical infrastructure demands constant vigilance, and it is a responsibility we take seriously,” said Ben Fowke, chairman, president and CEO, Xcel Energy. “We’re proud to take the lead in the creation of a new community that will provide additional tools to address potential cyber and physical risks to our business.” The new electric grid-centered intelligence community is called the Energy Analytic Security Exchange (EASE) and will be coordinated by the FS-ISAC Sector Services team on behalf of energy sector members. For 18 years, FS-ISAC has been a trusted organization that provides cyber and physical risk intelligence to the worldwide financial services industry. FS-ISAC will share its expertise, technology and its access to intelligence partnerships with the new community. “We’re excited to welcome Xcel Energy as the initial member of the Energy Analytic Security Exchange and look forward to quickly adding more members to this community. Xcel Energy has an impressive understanding of the threats faced by the energy industry and it is committed to protecting itself and its customers. FS-ISAC Sector Services looks forward to helping the EASE sharing community enhance the security and resilience of the sector,” said Cindy Donaldson, COO, FS-ISAC Sector Services. Utilities have experience sharing security information within their sector. For example, the Electricity ISAC is a highly valuable security resource and it is managed by the North American Electric Reliability Corporation (NERC).
By supplementing its sharing network to include the resources of FS-ISAC Sector Services, the energy sector will have access to additional timely and relevant information, research and analysis, and threat intelligence from other industries and sources. “We’re always working to provide safe, reliable and cost-effective energy to customers. Having additional security systems at our disposal helps to ensure the reliability of the system and protects customers,” said Fowke. About Xcel Energy Xcel Energy (NYSE: XEL) provides the energy that powers millions of homes and businesses across eight Western and Midwestern states. Headquartered in Minneapolis, the company is an industry leader in responsibly reducing carbon emissions and producing and delivering clean energy solutions from a variety of renewable sources at competitive prices. For more information, visit xcelenergy.com or follow us on Twitter and Facebook. About FS-ISAC Sector Services The Sector Services division of the Financial Services Information Sharing and Analysis Center (FS-ISAC) is a not-for-profit entity committed to facilitating the creation and growth of threat sharing organizations across many sectors. Its mission is to support organizations in sharing timely, relevant, and actionable cyber and physical threat intelligence, with the goal of strengthening overall resilience and promoting information sharing across sectors. Its experts have 18 years of experience with threat sharing communities across industry sectors including financial services, oil and gas, legal, real estate and retail.
News Article | February 16, 2017
A University of Central Florida professor is working with NASA to figure out a way to extract metals from the Martian soil - metals that could be fed into a 3-D printer to produce the components of a human habitat, ship parts, tools and electronics. "It's essentially using additive-manufacturing techniques to make constructible blocks. UCF is collaborating with NASA to understand the science behind it," said Pegasus Professor Sudipta Seal, who is interim chair of UCF's Materials Science and Engineering program, and director of the university's Advanced Materials Processing & Analysis Center and NanoScience Technology Center. NASA and Seal will research a process called molten regolith electrolysis, a technique similar to how metal ores are refined here on Earth. Astronauts would be able to feed Martian soil - known as regolith - into a chamber. Once heated to nearly 3,000 degrees Fahrenheit, the electrolysis process would produce oxygen and molten metals, both of which are vital to the success of future human space exploration. Seal's expertise will also help determine the form of those metals best suited to commercial 3-D printers. NASA intern Kevin Grossman, a graduate student from Seal's group, is also working on the project, which is funded by a NASA grant. Grossman said he hopes future projects in similar areas can grow the current partnership between UCF and the research groups at NASA's Kennedy Space Center. NASA is already working on sending humans to the Red Planet in the 2030s. The agency has begun developing plans for life-support systems and other technology. NASA isn't alone. Elon Musk, billionaire founder of SpaceX and Tesla Motors, is working on his own plan. Mars One, a Dutch nonprofit, is touting a plan to send dozens of volunteers from around the world on a one-way trip to colonize Mars.
They all agree that for sustainable Mars exploration to work, they must be able to use resources on Mars that would otherwise require costly transportation from Earth - a concept known as in situ resource utilization. That's where Seal's research comes in. "Before you go to Mars, you have to plan it out," Seal said. "I think this is extremely exciting." UCF has a long relationship with NASA, dating back to the first research grant ever received by the university, then known as Florida Technological University. Other UCF faculty members continue researching in situ resource utilization. Phil Metzger of UCF's Florida Space Institute is working with commercial space mining company Deep Space Industries to figure out a way to make Martian soil pliable and useful for 3-D printing. The same company has tapped Metzger and UCF colleague Dan Britt to develop simulated asteroid regolith that will help them develop hardware for asteroid mining.
News Article | March 2, 2017
MCLEAN, Va.--(BUSINESS WIRE)--As the number of Americans who live and work in urban areas rises, the federal government, state and municipal authorities are making the improvement of transportation systems a top priority. The development of intelligent transportation systems, such as greater automation in cars and increased connectivity among vehicles, will help to ease the burden that transportation currently places on individuals and communities alike. To facilitate research in this key focus area and accelerate the implementation of 21st century transportation solutions, the U.S. Department of Transportation’s Intelligent Transportation Systems (ITS) Joint Program Office (JPO) has awarded Booz Allen Hamilton one of two spots on a $202 million, five-year, Indefinite Delivery, Indefinite Quantity (IDIQ) contract for non-personal technical support services. The firm will assist ITS JPO staff in executing research, development and deployment activities, as well as help ITS JPO comply with legislative and regulatory requirements. “The ITS JPO brings together a number of federal agencies to address some of the key transportation challenges facing our country today. Booz Allen is proud to continue our relationship with the ITS JPO and to support the office as it works to achieve its key objectives for the coming years,” said Dr. Christopher Hill, a principal and leader of the surface transportation business at Booz Allen. “Intelligent systems hold the potential to alleviate a wide range of issues associated with transportation, from traffic accidents and fatalities, to congestion and environmental pollution. 
Our team is excited about the opportunity to be at the forefront of progress in the ITS field.” Booz Allen has been supporting the Transportation Department’s efforts to make transportation systems smarter, safer and more efficient for the past five years, providing both technical capabilities - such as work on data sets for connected vehicles and performance measures for dedicated short range communications (DSRC) technology - and functional support, including strategic planning and chronicling the history of ITS. Booz Allen also assisted ITS in the development of its 2015-2019 strategic plan. Under the new contract, Booz Allen will bring a highly qualified and experienced team drawing from the firm’s transportation-focused Communities of Practice, which include Vehicle Automation and Unmanned Systems, Enterprise Data and Advanced Analytics, Vehicle Cybersecurity, and Smart Cities and Connected Society. These Communities of Practice reflect the depth of Booz Allen’s work and innovation in surface transportation, including for organizations like the Automotive Information Sharing and Analysis Center (Auto-ISAC), which with support from Booz Allen recently developed best practices for vehicle cybersecurity on behalf of the auto industry. Booz Allen Hamilton (NYSE: BAH) has been at the forefront of strategy and technology for more than 100 years. Today, the firm provides management and technology consulting and engineering services to leading Fortune 500 corporations, governments, and not-for-profits across the globe. Booz Allen partners with public and private sector clients to solve their most difficult challenges through a combination of consulting, analytics, mission operations, technology, systems delivery, cybersecurity, engineering, and innovation expertise. With international headquarters in McLean, Virginia, the firm employs more than 23,000 people globally, and had revenue of $5.41 billion for the 12 months ended March 31, 2016.
To learn more, visit BoozAllen.com.
News Article | February 24, 2017
WASHINGTON, Feb. 23, 2017 /PRNewswire-USNewswire/ -- The Automotive Information Sharing and Analysis Center (Auto-ISAC) welcomes Bosch, Cooper Standard, Honeywell, Hyundai Mobis, Lear Corporation, LG Electronics and NXP Semiconductors as original equipment supplier members. The inclusion...
News Article | February 15, 2017
We used a data-assimilating ocean circulation inverse model (OCIM) (refs 2, 16) to estimate the mean ocean circulation during three different time periods: pre-1990, the decade of the 1990s, and the period 2000–2014, which we refer to respectively as the 1980s, 1990s and 2000s. For each time period, we assimilated observations of five tracers: potential temperature, salinity, the chlorofluorocarbons CFC-11 and CFC-12, and Δ¹⁴C. Potential temperature and salinity data were taken from the 2013 World Ocean Database, Ocean Station Data and Profiling Floats data sets. The observations were binned by time period and then averaged onto the model grid. Quality control was performed by removing outliers (more than four inter-quartile ranges above the upper quartile) at each depth level in the model. This removed less than 0.1% of the observations. CFC-11, CFC-12 and Δ¹⁴C observations were taken from the Global Ocean Data Analysis Project version 2 (GLODAPv2) database (ref. 30). These data were already quality-controlled. We used an earlier version of the GLODAPv2 database, but checking it against the newest release we find that the correlation R² of the fit between the CFC-11 and CFC-12 observations in each version is over 0.99. The only major difference between the version used and the newest version of GLODAPv2 is that the latter includes data from two additional cruise tracks in the Indian Ocean. The CFC-11 and CFC-12 observations were binned by time period and then averaged onto the model grid. We assimilated Δ¹⁴C observations only where they were paired with a near-zero CFC-11 or CFC-12 measurement (CFC-11 < 0.05 pmol kg⁻¹, CFC-12 < 0.025 pmol kg⁻¹). This was done to remove Δ¹⁴C observations that may have been contaminated by bomb-produced ¹⁴C, since we model only the ‘natural’ (pre-1955 bomb) component of Δ¹⁴C. These Δ¹⁴C observations constrain the ventilation of deep water masses, and the same Δ¹⁴C observations were used in each of the three assimilation periods. Extended Data Fig. 7 shows the spatial distribution of the CFC observations for each decadal period, as well as the temporal distribution of observations of CFCs, temperature, and salinity. The spatial distributions of temperature and salinity are not shown, but all regions are well sampled for all time periods. Almost all of the transects with CFC observations in the 1990s were re-occupied with repeat hydrographic surveys during the 2000s. During the 1980s, in contrast, several large areas are missing CFC observations. In particular, during the 1980s there are no CFC observations in the Pacific and Indian sectors of the Southern Ocean. For these sectors, the inferred circulation changes from the 1980s to the 1990s must therefore be interpreted cautiously. Nonetheless, the model-predicted weakening of the Southern Ocean CO₂ sink during the 1990s is in good agreement with independent studies using atmospheric inverse models (ref. 10) and prognostic ocean general circulation models (refs 8, 19). This suggests that the more densely sampled temperature and salinity data, in conjunction with CFC data from elsewhere, may be able to compensate for a lack of CFC data in the Southern Ocean during the 1980s. The sporadic nature of the oceanographic observations, particularly the CFC measurements (with some transects being occupied only about once per decade), makes the data assimilation susceptible to temporal aliasing. The error bars reported here do not take into account the uncertainty due to this potential aliasing of interannual variability into the data-assimilated circulations. Aliasing errors are likely to be largest for the smallest regions, and those with the sparsest observations. This must be kept in mind when interpreting the results of the assimilation model, particularly those on smaller spatial scales (for example, the regional CO₂ fluxes of Fig. 2). On the other hand, these aliasing effects will be minimized when integrating over larger areas.
Thus we would expect, for example, that the global CO₂ fluxes diagnosed by the assimilation model will be largely free from aliasing errors. Finally, we note that in the Arctic Ocean and Mediterranean Sea, a combination of the small basin area and lack of data constraints causes the model CO₂ simulations to exhibit some numerical artefacts. We therefore do not include these regions in our analysis. We use an inversion procedure previously used to estimate the climatological mean state of the ocean circulation (refs 2, 16), and follow the methods used in those studies with a few exceptions, as detailed here. Here we break the assimilation down into three time periods: pre-1990, 1990–1999 and 2000–2014. We use the same dynamical forcing (wind stress and baroclinic pressure gradient forcing) for each time period. Then, tracer data from each period are assimilated independently to arrive at an estimate of the mean ocean circulation state during each period. This guarantees that the diagnosed circulation differences between time periods are due solely to information carried in the oceanographic tracer fields themselves, and not to assumptions about changes in external forcing. For each assimilation time period, we adjust a set of control parameters to minimize the misfit between observed and modelled tracer concentrations (refs 2, 16). We note that this method yields a diagnostic, rather than predictive, estimate of ocean circulation within each assimilation time period. The approach therefore differs from that of standard coupled climate models such as those participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). The CMIP5 models rely on the accuracy of external forcing and model physics to produce an accurate ocean state estimate. They therefore have relatively high spatial resolution (approximately 0.5°–1°), resolve temporal variability on sub-daily timescales, and employ relatively sophisticated model physics.
The OCIM, on the other hand, does not rely so much on the accuracy of external forcing or internal physics, but rather on the assimilation of global tracer data sets to produce an accurate ocean state estimate. To make this data assimilation tractable, the OCIM has relatively coarse resolution (2°), does not resolve temporal variability within assimilation time periods, and uses simplified linearized physics (ref. 2). The advantage of the OCIM relative to CMIP5 models is that the resulting circulation estimate is consistent with the observed tracer distributions, while the disadvantage is in its relatively coarse resolution and assumption of steady-state within each assimilation period. In the OCIM, tracer concentrations C are simulated by solving the transport equation

∂C/∂t = AC + S(C),    (1)

where A is a matrix transport operator built from the model-estimated horizontal and vertical velocities and imposed diffusive terms, and S(C) is a source–sink term. For the tracers simulated here the only sources and sinks are due to air–sea exchange, and except for the radioactive decay of ¹⁴C they are conservative away from the surface layer. The source–sink term for these tracers takes the form

S(C) = (K/δz₁)(C_sat − C),    (2)

which is non-zero only in the surface layer of the model (of thickness δz₁). The piston velocity K and the surface saturation concentration C_sat vary for each tracer. For potential temperature and salinity, K = δz₁/(30 days), and C_sat is carried as a control (optimizable) parameter (ref. 16), which is allowed to vary between assimilation time periods but is held constant within each time period. For CFC-11 and CFC-12, K is modelled as a quadratic function of wind speed 10 m above the sea surface, u₁₀ (ref. 31):

K = a·u₁₀²·(1 − f)·(Sc/660)^(−1/2),    (3)

where a is a constant piston-velocity coefficient (consistent with a wind speed in metres per second and a piston velocity in centimetres per hour), f is the fractional sea-ice cover, and Sc is the temperature-dependent Schmidt number.
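The quadratic wind-speed dependence of the CFC piston velocity can be sketched in a few lines. This is a minimal Python illustration, not the authors' code; the function name is invented, and the exact form K = a·u₁₀²·(1 − f)·(Sc/660)^(−1/2), with the Schmidt number normalized to 660, is assumed from the standard quadratic (Wanninkhof-type) gas-transfer parameterization:

```python
def cfc_piston_velocity(u10, f_ice, sc, a=0.27):
    """Quadratic wind-speed piston velocity for CFC air-sea exchange.

    u10   : wind speed 10 m above the sea surface (m/s)
    f_ice : fractional sea-ice cover (0 to 1)
    sc    : temperature-dependent Schmidt number (dimensionless)
    a     : piston-velocity coefficient (cm/h per (m/s)^2)

    Returns the piston velocity in cm/h. Ice cover linearly suppresses
    gas exchange, and the Schmidt-number factor normalizes to Sc = 660.
    """
    return a * u10**2 * (1.0 - f_ice) * (sc / 660.0) ** -0.5

# Example: 8 m/s winds over ice-free water with Sc = 660
k = cfc_piston_velocity(8.0, 0.0, 660.0)  # 0.27 * 64 = 17.28 cm/h
```

With a = 0.27 (the value used for most simulations here), complete ice cover (f = 1) shuts off gas exchange entirely, consistent with the (1 − f) scaling.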
The 10-m wind speed and fractional sea-ice cover are taken from NCEP reanalysis for 1948–2014 and averaged for each year. For u₁₀ the annual average is computed from daily values following the OCMIP-2 procedure (ref. 32), which takes into account short-term variability in wind speeds. The surface saturation concentrations C_sat for CFC-11 and CFC-12 are computed from the observed time- and latitude-dependent atmospheric CFC-11 and CFC-12 concentrations (ref. 33) using a temperature- and salinity-dependent solubility (ref. 34). For the solubility we use time-independent temperatures and salinities from the 2009 World Ocean Atlas annual climatology (refs 35, 36). For CFC-11, our simulation runs from 1945 to 2014, and for CFC-12 from 1936 to 2014. Values for u₁₀ and f before 1948 are set to their 1948 values. Natural radiocarbon is modelled in terms of the ratio R = Δ¹⁴C/1,000 + 1. The source–sink term of R takes the form

S(R) = (1 − R)/τ_g − R/τ_r.    (4)

The first term on the right-hand side represents the air–sea exchange with a well-mixed atmosphere of R = 1 (that is, Δ¹⁴C = 0‰) with a timescale τ_g = 5 years, and is applied only in the top model layer. This simple parameterization neglects spatial variability in ¹⁴C fluxes due to varying surface DIC and/or CO₂ fluxes, but is judged adequate for our purposes, because the Δ¹⁴C constraint is needed only to constrain the approximate ventilation age distribution of the deep ocean, so that a reasonable distribution of respired DIC can be simulated. The second term on the right-hand side of equation (4) represents the radioactive decay of ¹⁴C, with e-folding time τ_r = 8,266 years, and is active throughout the water column. Biological sources and sinks of Δ¹⁴C are neglected, because they have been shown to have a small effect on Δ¹⁴C (ref. 37). For most of the simulations here, we used a piston velocity coefficient of a = 0.27, following ref. 38.
To test the sensitivity of our results to this value, we ran a set of assimilations with a increased by 30%, which is closer to the original OCMIP-2 value of a = 0.337 (ref. 32). In these assimilations we also reduced the air–sea equilibration timescale τ for the radiocarbon simulation by 30%, to be consistent with the higher assumed piston velocity. To get a sense of the uncertainty due to prescribed diffusivities, we also ran the model with different values of the isopycnal and vertical diffusivities, K_i and K_v. In all, we ran five different models with different values of a, K_i and K_v. Supplementary Table 1 summarizes the fit to observations for each of these models for each assimilation period. Extended Data Figs 8 and 9 show the zonally averaged difference between model-simulated and observed potential temperature (Extended Data Fig. 8) and CFC-11 (Extended Data Fig. 9) for the Atlantic and Pacific basins during each assimilation time period. The model–data residuals are small (generally less than 1 °C for potential temperature, and 0.5 pmol kg⁻¹ for CFC-11), but there are some biases. In the Atlantic, simulated potential temperatures are slightly too high in the northern subtropical thermocline, in the Southern Ocean upwelling region, and in the region of Antarctic Intermediate Water formation. Potential temperatures are slightly too low in the North Atlantic and in most of the thermocline. In the Pacific, these patterns are similar (Extended Data Fig. 8). Cooler-than-observed high latitudes are to be expected owing to the lack of a seasonal cycle in the OCIM, which biases temperatures towards end-of-winter values. The most obvious bias in the CFC-11 field is a slight (about 0.25 pmol kg⁻¹) underprediction throughout most of the upper ocean. More negative biases (about 1 pmol kg⁻¹) occur in the surface of the Southern Ocean, the North Atlantic and the North Pacific (Extended Data Fig. 9).
These negative biases could indicate that the CFC-11 piston velocity that we used for most simulations is too small. Because the same piston velocity was used for all assimilation periods, this would not affect the inferred circulation-driven changes in the CO₂ sink. Importantly, the spatial patterns of the model–data residuals are similar in all three assimilation time periods. This temporal coherence in the model–data residuals indicates that the inferred circulation changes do not introduce spurious biases into the assimilation. Our approach approximates the decadal variability of the ocean circulation by fitting a steady-state circulation independently for each time period. We thus neglect both interannual variability within, and temporal variations before, the assimilation period. However, the integrated effect of all previous circulation changes is encoded in the tracer distributions of the assimilation period, and therefore indirectly ascribed to an effective decadal circulation representative of the assimilation period. To test whether these separate steady-state circulations for each time period capture the effects of the time-varying circulation, we used the data-assimilated circulations to simulate ocean CFC-11 concentrations, changing the circulation on the fly from decade to decade as the CFC-11 is propagated to the period of interest. We find that this approach fits the CFC-11 observations in each period much better than an unchanging circulation (Extended Data Fig. 10), which indicates that an unchanging circulation from decade to decade is not consistent with the tracer data. This also indicates that changing the circulation on the fly from decade to decade, as we did in our CO₂ simulations (see below), provides a good approximation to the effect of the continuously changing circulation of the ocean.
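The decade-to-decade switching of the circulation during tracer propagation can be illustrated schematically. This is a hypothetical Python sketch under stated assumptions: the function and variable names are invented, the explicit Euler step is for illustration only, and the actual OCIM solves the transport equation with implicit matrix methods on a much larger state vector:

```python
import numpy as np

def propagate_tracer(transport_ops, source, t_start, t_end, dt=0.1):
    """Propagate a tracer while switching the transport operator by decade.

    transport_ops : dict with keys "1980s", "1990s", "2000s", each an
                    (n, n) matrix transport operator A
    source        : function source(t) -> length-n source/sink vector S
    t_start, t_end: simulation interval in years
    dt            : time step in years (explicit Euler, illustrative)
    """
    n = transport_ops["1980s"].shape[0]
    C = np.zeros(n)  # zero initial tracer, e.g. CFC-11 before its release
    t = t_start
    while t < t_end:
        # Pick the decadal circulation appropriate to the current year
        if t < 1990:
            A = transport_ops["1980s"]
        elif t < 2000:
            A = transport_ops["1990s"]
        else:
            A = transport_ops["2000s"]
        C = C + dt * (A @ C + source(t))  # dC/dt = A C + S
        t += dt
    return C
```

Run with the same operator in every decade, this reduces to the unchanging-circulation case used as the comparison baseline above.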
To investigate the influence of changing ocean circulation on the oceanic CO₂ sink we first simulated the pre-industrial carbon distribution (before 1765) by assuming that the ocean was in equilibrium with an atmospheric CO₂ concentration of 278 parts per million. We then simulated the transient evolution of dissolved inorganic carbon (DIC) from 1765 to 2014 using observed atmospheric CO₂ concentrations as a boundary condition (ref. 2). For this simulation, the ocean circulation is assumed to be steady-state before 1990 at its 1980s estimate, and is then switched abruptly to the assimilated circulations for the 1990s and 2000s. We acknowledge the approximate nature of this approach—the real ocean circulation changes gradually. We therefore present only decadally averaged results for the 1980s, 1990s and 2000s, rather than focusing on particular years. We estimated uncertainty by varying the parameters of the carbon-cycle model over a wide range of values. In all, we ran 32 simulations with different combinations of parameters governing the production and remineralization of particulate and dissolved organic carbon and calcium carbonate (Supplementary Table 2). Combined with five separate circulation estimates, we have 160 state estimates from which the uncertainties are derived. For all simulations we used the OCMIP-2 formulation of the ocean carbon cycle (ref. 39), implemented for the matrix transport model as described elsewhere (ref. 40). The governing equation for the oceanic DIC concentration is

∂[DIC]/∂t = A[DIC] + J_v + J_g + J_b,    (5)

where A is the matrix transport operator; J_v is the virtual flux of DIC due to evaporation and precipitation; J_g represents the air–sea gas exchange of CO₂; and J_b are the biological transformations of DIC (uptake and remineralization of particulate and dissolved organic carbon). To compute the gas exchange fluxes of CO₂ we must also simulate alkalinity—the equation for alkalinity follows equation (5) but without the air–sea exchange term.
For our simulations, the only terms that vary from one time period to the next are A (owing to variability in ocean circulation) and J_g (owing to variability in the atmospheric CO₂ concentration and in the gas exchange piston velocity). The virtual fluxes and biological fluxes of DIC are held constant over time at their pre-industrial values, so that we can isolate the effects of ocean circulation variability on the oceanic CO₂ sink. Air–sea CO₂ gas exchange occurs in the surface layer and is given by

J_g = (K/δz₁)([CO₂]_sat − [CO₂]),    (6)

where the piston velocity K is parameterized following equation (3). The CO₂ saturation concentrations are computed using observed temperature and salinity and the observed atmospheric pCO₂. For the results presented in the main-text figures and in Extended Data Figs 3 and 4, we ignored changes in the solubility of CO₂ due to changes in SST and salinity, in order to isolate changes in ocean CO₂ uptake due to ocean circulation variability. For these simulations we calculated [CO₂]_sat using the mean SST and salinity from the 2009 World Ocean Atlas objectively mapped climatologies (refs 35, 36). Atmospheric pCO₂ is taken from ref. 41 for the years 1765–2012, and from the Mauna Loa CO₂ record for the years 2013–2014. The virtual fluxes J_v and the biological carbon fluxes J_b follow the OCMIP-2 design (ref. 39), and are implemented for the matrix transport model using a Newton solver as described elsewhere (ref. 40). Model parameters governing the biological cycling of carbon are listed in Supplementary Table 2. We allow for uncertainty in the parameters z_c (the compensation depth, above which DIC uptake is parameterized by restoring to observed PO₄ concentrations and multiplying by the globally constant C:P ratio r_C:P); the decay rate κ of labile dissolved organic phosphorus; the exponent b in the assumed power-law dependence of particle flux on depth (ref. 42); the CaCO₃:POC ‘rain ratio’ r; and the e-folding depth d for CaCO₃ dissolution.
These parameters are varied over a wide range to account for the range of values found in the literature (refs 32, 39, 40, 43, 44, 45, 46, 47, 48, 49, 50), and are presented in Supplementary Table 2. Note that we do not vary σ, the fraction of production routed to dissolved organic phosphorus, because previous studies found that variations in κ and σ have very similar effects on DIC and alkalinity distributions (ref. 40). It is therefore sufficient to vary only κ. We also do not vary r_C:P or the rain ratio r, as their values vary spatially in reality and are probably sensitive to the circulation, which controls nutrient availability. These complexities are ignored here for expediency, and the biological cycling of DIC is assumed to be constant and unchanging, in order to isolate the direct effects of circulation changes. To isolate the effects of circulation variability on the oceanic CO₂ sink (as in Figs 2 and 3), we ran two additional simulations which held the circulation at 1980s levels during the 1990s, and at 1990s levels during the 2000s. The anomalous CO₂ flux attributed to changing circulation during the 1990s was calculated as the difference between the 1990s CO₂ fluxes for the simulation in which the circulation was switched in 1990, and those for the simulation in which the circulation remained at 1980s levels. Likewise, the anomalous CO₂ flux attributed to changing circulation during the 2000s was calculated as the difference between the 2000s CO₂ fluxes for the simulation in which the circulation was switched in 2000, and those for the simulation in which the circulation remained at 1990s levels. To diagnose the contribution of thermal effects on air–sea CO₂ fluxes, we also ran a suite of simulations in which we allowed [CO₂]_sat to vary from one decade to the next owing to changes in SST.
For these simulations, we calculated the decadally averaged SST for the 1980s, 1990s and 2000s from two different reconstructions, the Centennial In situ Observation-Based Estimates (COBE) (ref. 51) and the Extended Reconstructed Sea Surface Temperature version 4 (ERSSTv4) (ref. 52). For each decade, we calculated the anomaly with respect to the 1980s, and then added this anomaly to the climatological SST used in the model during the 1990s and 2000s. This yielded two separate reconstructed SST histories, which were used to compute the CO₂ saturation in separate simulations. Each simulation was run with each of the five different versions of our circulation model, yielding ten state estimates from which uncertainties were derived. The results of these simulations were then compared to otherwise identical simulations in which SSTs were held constant, and the difference between the two was attributed to thermal effects on CO₂ solubility. These differences are presented in Extended Data Fig. 5. Data for the assimilation model were obtained from the World Ocean Database 2013 (temperature and salinity), available at https://www.nodc.noaa.gov/OC5/WOD13/, and from the GLODAPv2 database (ref. 30) (radiocarbon and CFCs), archived at the Carbon Dioxide Information Analysis Center (CDIAC; http://cdiac.ornl.gov/oceans/GLODAPv2/). Mapped SST (ref. 36) and salinity (ref. 35) climatologies were obtained from the 2009 World Ocean Atlas at https://www.nodc.noaa.gov/OC5/WOA09/pr_woa09.html. The NOAA_ERSST_v4 (ref. 52) and COBE-SST2 (ref. 51) data are provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their website at http://www.esrl.noaa.gov/psd/. NCEP reanalysis data were obtained from http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surfaceflux.html. The Mauna Loa CO₂ record used in our carbon cycle model is available at the NOAA Earth System Research Laboratory at http://www.esrl.noaa.gov/gmd/ccgg/trends/.
Data from the SOCOM project (refs 4, 5, 15, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63) are available at http://www.bgc-jena.mpg.de/SOCOM/. All data used to create the figures in this paper will be archived at CDIAC (http://cdiac.ornl.gov/). Code may be obtained by contacting T.D. (firstname.lastname@example.org).
News Article | February 20, 2017
SEOUL, South Korea--(BUSINESS WIRE)--Hyundai Mobis (KRX:012330) has joined the global alliance for a coordinated response to cyber threats, such as hacking, in the age of connected and smart cars. Hyundai Mobis announced on January 16 that it had become a regular member of Auto-ISAC, the Automotive Information Sharing and Analysis Center for automotive cyber security. Auto-ISAC is a private organization founded in July 2015 by 15 global automakers to coordinate a joint industry response to the rise of automotive hacking threats, such as duplicating smart keys and starting cars remotely. Major global automakers, including Hyundai and Kia Motor Co., have signed up, alongside global automotive suppliers. Regular members can attend quarterly meetings and acquire a variety of information and technological know-how related to cyber security. Auto-ISAC is driven by the collective intelligence of its member companies. Its first priority is sharing information on vehicle security incidents, both online and offline. Members collect cases, analyze current vulnerabilities and attack patterns, and devise new solutions; the findings are compiled into manuals and distributed. Last July, Auto-ISAC published seven guidelines for responding to cyber threats on its website, covering practical topics such as risk assessment, threat management and detection, defense against threats, and incident response. Member companies are already sharing a variety of information on vehicle hacking attacks. Confirmed cases include remote access to the in-vehicle communication system to control the transmission, the door locks and the instrument cluster. These remote hacking attacks pose an ever greater threat as smart cars and connected cars become more prevalent in the future automotive industry.
The convergence of automobiles and IT is accelerating, and communication between vehicles, and between vehicles and external networks, is intensifying. As a result, the targets and scope of cyber-attacks are expanding. Automakers and automotive suppliers are reinforcing security from the product design stage onward; nevertheless, it cannot be denied that these systems remain vulnerable to new kinds of hacking. Convinced that the greatest value must be placed on the safety of drivers and passengers as vehicles rely more heavily on electronic components and become more high-tech, Hyundai Mobis plans to do its utmost to reinforce its own security systems so that it can proactively respond to cyber security threats.
News Article | February 24, 2017
Bosch, Cooper Standard, Honeywell, Hyundai Mobis, Lear Corporation, LG Electronics and NXP Semiconductors have joined the Automotive Information Sharing and Analysis Center (Auto-ISAC) as original equipment supplier members. Auto-ISAC was formed by automakers in August 2015 to establish a global information sharing community to promote vehicle cybersecurity. Auto-ISAC operates as a central hub for sharing, tracking and analyzing intelligence about potential cyber threats, vulnerabilities and incidents related to the connected vehicle; its secure intelligence sharing portal allows members to anonymously submit and receive information that helps them more effectively respond to cyber threats. In 2016, Auto-ISAC published the Automotive Cybersecurity Best Practices Executive Summary, which outlines Auto-ISAC’s informational guides that cover organizational and technical aspects of vehicle cybersecurity, including incident response, collaboration and engagement with appropriate third parties, governance, risk management, security by design, threat detection and protection, and training and awareness.
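The article notes that the sharing portal lets members submit threat information anonymously. To make that concrete, here is a hypothetical sketch of the kind of record such a portal might accept; the field names are invented for illustration, and the actual Auto-ISAC portal format is not described in the article. The key design point is that the record deliberately carries no submitter identity.

```python
# Hypothetical sketch of an anonymized threat-intelligence record for an
# Auto-ISAC-style sharing portal. All field names are invented examples.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ThreatReport:
    category: str                # e.g. "vulnerability", "incident"
    affected_component: str      # e.g. "telematics unit"
    description: str
    indicators: list = field(default_factory=list)
    # Deliberately no submitter/member field: submissions stay anonymous.

def serialize(report: ThreatReport) -> str:
    """Serialize a report for submission to the sharing portal."""
    return json.dumps(asdict(report))
```

Anonymity by construction (the schema simply has nowhere to put an identity) is one common way sharing communities lower the barrier to reporting incidents.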
News Article | February 24, 2017
EINDHOVEN, Netherlands, Feb. 24, 2017 (GLOBE NEWSWIRE) -- Today NXP Semiconductors N.V. (NASDAQ:NXPI), the world’s largest supplier of automotive semiconductors and a leader in automotive cybersecurity, announced that it has joined the Automotive Information Sharing and Analysis Center (Auto-ISAC). Auto-ISAC was formed by automakers to establish a secure platform for sharing, tracking and analyzing intelligence about cyber threats and potential vulnerabilities around the connected vehicle. Auto-ISAC operates as a central hub that allows members to anonymously submit and receive information to help them more effectively counter cyber threats in real time. The automobile industry recognizes that the autonomous driving ecosystem — which includes wireless technologies that enable communications, telematics, digital broadcast reception, and advanced driver assistance systems (ADAS) — introduces risks of potential attack by hackers. NXP has joined the Auto-ISAC organization to help develop best cybersecurity practices for the automotive industry. Auto-ISAC published the Automotive Cybersecurity Best Practices Executive Summary, which outlines Auto-ISAC’s development of informational guides that cover organizational and technical aspects of vehicle cybersecurity, including governance, risk management, security by design, threat detection and incident response. Auto-ISAC also implements training and promotes collaboration with third parties. In the United States, 98 percent of vehicles on the road are represented by member companies in the Auto-ISAC. “Cybersecurity for the automotive industry can only be addressed if carmakers, security experts, and government bodies join forces,” said Lars Reger, CTO of NXP Automotive. “NXP, as a market leader in cybersecurity technology for eGovernment and banking applications, will bring its deep know-how into this organization.
Cars require four layers of protection: secure interfaces that connect the vehicle to the external world; secure gateways that provide domain isolation; secure networks that provide secure communication between electronic control units (ECUs); and secure processing units that manage all the features of the connected car. NXP is the leader in these critical areas and looks forward to sharing its expertise and collaborating with our industry partners to shape a secure future for the automated car.” NXP Semiconductors N.V. (NASDAQ:NXPI) enables secure connections and infrastructure for a smarter world, advancing solutions that make lives easier, better and safer. As the world leader in secure connectivity solutions for embedded applications, NXP is driving innovation in the secure connected vehicle, end-to-end security & privacy and smart connected solutions markets. Built on more than 60 years of combined experience and expertise, the company has 31,000 employees in more than 33 countries and posted revenue of $9.5 billion in 2016. Find out more at www.nxp.com. NXP and the NXP logo are trademarks of NXP B.V. All other product or service names are the property of their respective owners. All rights reserved. © 2017 NXP B.V.
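The four-layer defense-in-depth model quoted above can be pictured as a chain of checks a message must pass before it reaches an ECU. The sketch below is purely illustrative and not NXP's implementation; the layer predicates and message fields are invented to show the structure of the idea (interface, gateway, network, processing unit).

```python
# Illustrative sketch of the four-layer protection model described in the
# article. Every check and field name here is an invented example.

def secure_interface(msg):
    # Layer 1: the external connection must be authenticated.
    return msg.get("authenticated", False)

def secure_gateway(msg):
    # Layer 2: the gateway only admits messages into known domains.
    return msg.get("domain") in {"infotainment", "powertrain"}

def secure_network(msg):
    # Layer 3: in-vehicle traffic must carry an integrity tag (MAC).
    return "mac" in msg

def secure_processing(msg):
    # Layer 4: the processing unit only executes signed commands.
    return msg.get("signed", False)

LAYERS = [secure_interface, secure_gateway, secure_network, secure_processing]

def accept(msg):
    """A message reaches an ECU only if every layer accepts it."""
    return all(layer(msg) for layer in LAYERS)
```

The point of layering is that a hacker who defeats one check (say, the external interface) is still stopped by the ones behind it.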
News Article | February 15, 2017
RESTON, Va., Feb. 15, 2017 /PRNewswire/ -- The Financial Services Information Sharing and Analysis Center (FS-ISAC) has launched a new intelligence-sharing community that will help provide real-time cyber and physical threat information to the energy sector. The community, called the Energy Analytic Security Exchange (EASE), will be coordinated by the FS-ISAC Sector Services team on behalf of energy sector members.
News Article | February 21, 2017
Habitats for the first astronauts to Mars could be 3D printed by extracting and refining metals from the soil (Credit: Team Space Exploration Architecture and Clouds Architecture Office). Given the cost of transporting goods to Mars, the first human colonists of the Red Planet will need to pack lightly – but it's going to take a lot of equipment to get that settlement set up. Building habitats, tools and parts out of local resources on arrival would be an ideal solution, but Mars is a pretty barren place. So researchers from NASA and the University of Central Florida (UCF) are investigating how metals could be extracted from the Martian soil, refined, and used as "ink" to 3D print vital components. NASA has already outlined its roadmap to getting humans to Mars, which involves studying what kinds of resources the first settlers could harvest from the planet. The less we need to cart from Earth, the better, with the agency saying that finding ways to live off the land could save over US$100,000 per kilogram (2.2 lb) per launch. It's known as in situ resource utilization, and that's the goal of this new project. With that in mind, the NASA and UCF team plans to study a process called molten regolith electrolysis as a way to build structures locally. Regolith – the loose Martian soil – could be placed inside a chamber and heated to almost 1,650 °C (3,000 °F), before electrolysis melts down the metals and, as a bonus, produces much-needed oxygen as well. That molten metal can then be used in a 3D printer to create parts or pieces of shelter on demand, like the Sfero igloo concept. "It's essentially using additive-manufacturing techniques to make constructible blocks," says Sudipta Seal, director of UCF's Advanced Materials Processing & Analysis Center. "UCF is collaborating with NASA to understand the science behind it."
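The savings figure quoted above makes for simple back-of-the-envelope arithmetic: at over US$100,000 saved per kilogram not launched from Earth, every kilogram printed from regolith pays off quickly. The sketch below uses that quoted figure as a floor; the 500 kg component mass in the usage note is a made-up example, not from the article.

```python
# Back-of-the-envelope calculation of the in situ resource utilization
# saving cited in the article: over US$100,000 per kilogram per launch.

SAVING_PER_KG_USD = 100_000  # figure quoted in the article (a lower bound)

def launch_saving(mass_kg):
    """Minimum dollars saved by printing `mass_kg` of hardware on Mars
    instead of launching it from Earth."""
    return mass_kg * SAVING_PER_KG_USD
```

For example, printing a hypothetical 500 kg shelter component locally rather than shipping it would save at least 500 × $100,000 = $50 million per launch.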
Using 3D printers to construct liveable housing isn't as outlandish a concept as it seems: the world's first 3D printed office building has gone up in Dubai (where else?) and a Chinese firm used the technique to build 10 houses in a single day. Replicating that success with local Martian materials is a new challenge, of course, but that's the point of the new project.