News Article | May 9, 2017
ERT, a global data and technology company that minimizes uncertainty and risk in clinical trials, today announced the acquisition of ImageIQ, originally established as Cleveland Clinic’s Biomedical Imaging and Analysis Center. The acquisition enables ERT to offer advanced, end-to-end clinical trial imaging analysis using best-in-class technology that delivers compliant data for use in clinical development. “After conducting an exhaustive industry search, we determined the cloud-based imaging technology invented at Cleveland Clinic is more advanced than traditional imaging solutions available today,” said James Corrigan, President and CEO of ERT. “At the core of this technology is the ability to capture high quality, compliant data without the dependency on human intervention and bias. Providing this level of accuracy will help our customers eliminate more risk and uncertainty from the development process.” Quantitative, objective software analysis enables ERT Imaging to deliver more accurate and verifiable imaging results than subjective readings commonly relied upon with standard scoring systems. Coupled with comprehensive clinical trial services, ERT will provide imaging solutions across key therapeutic areas to generate compliant, high-quality imaging results while reducing site and sponsor burden. “It’s extremely rewarding to extend the Cleveland Clinic’s innovative technology and expertise in custom imaging analysis to researchers around the world,” said Jack Miner, Managing Director at Cleveland Clinic Ventures. “We are pleased that ERT can now offer this next generation imaging solution to their global biopharmaceutical research and medical device clients to help them achieve high quality data and accelerate their clinical development programs.” For more information on ERT Imaging, visit ERT.com/imaging. 
About ERT ERT is a global data and technology company that minimizes uncertainty and risk in clinical trials so that our customers can move ahead with confidence. With more than 45 years of clinical and therapeutic experience, ERT balances knowledge of what works with a vision for what’s next, so it can adapt without compromising standards. Powered by the company’s EXPERT® technology platform, ERT’s solutions enhance trial oversight, enable site optimization, increase patient engagement, and measure the efficacy of new clinical treatments while ensuring patient safety. Over the past four years, more than half of all FDA drug approvals came from ERT-supported studies. Pharma companies, Biotechs, and CROs have relied on ERT solutions in 9,500+ studies spanning three million patients to date. By identifying trial risks before they become problems, ERT enables customers to bring clinical treatments to patients quickly – and with confidence. For more information, go to ert.com or follow us on LinkedIn and Twitter.
News Article | May 21, 2017
Dave Snowdon, CTO and founder of Metamako, said: "Australia has developed into the leading innovation and technology hub for Asia-Pacific and it's great to be shortlisted from an impressive list of FinTech specialists. Right from the outset, when Metamako launched four years ago, our goal was to bring the fastest network solutions to the global financial markets. Being a relatively new company, it's a real honour to be recognised for building a global market presence as an Australian business, and it's really special to be in the inaugural Finnie list." Metamako's clients are in the US, APAC and Europe, and in the last two years it has opened offices in New York, London and Tokyo. Its clients include financial institutions such as banks and exchanges, among them the Australian Stock Exchange (ASX) and LMAX, a UK-based FX exchange. Metamako has a number of global partners, including the US-based Westcon Group and the Australian Pro IT. Snowdon added: "Over 95% of our business is exports, with our products being designed, developed and built in Australia, which makes us very proud of our contribution to the Australian economy. Our technology-centric team of 35 people in Sydney, based in the Stone & Chalk FinTech hub, is very diverse." In independent benchmark tests carried out by the highly respected Securities Technology Analysis Center LLC (STAC®), Metamako set records for Layer 1 switches, averaging just 5 nanoseconds for each switch hop. Metamako is the leading specialist in deterministic ultra-low latency devices for the trading community, exchanges and telco providers. Founded in 2013, the company aims to simplify networks, reduce latency and increase flexibility. The founders, Scott Newham, Dave Snowdon and Charles Thomas, have extensive experience engineering high-performance hardware and software for financial markets, and other users for whom keeping latency to a minimum is vitally important. 
Metamako's solutions have built-in intelligence and are rich in features, using state-of-the-art technology to keep latency to an absolute minimum. MetaConnect 96 is the latest in a range of high-performance network products which Metamako has brought to market.
News Article | May 15, 2017
The Maritime and Port Security ISAO and Wapack Labs announce a collaborative partnership to advance real-time access to sector-specific cyber threat intelligence for Maritime & Port owners and operators and the supply chains that support them. -- The Maritime and Port Security Information Sharing and Analysis Organization (MPS-ISAO) and Wapack Labs announce today a collaborative partnership to advance real-time access to sector-specific cyber threat intelligence for Maritime & Port owners and operators and the supply chains that support them. The MPS-ISAO, a non-profit organization, officially launched in May 2016, is dedicated to a mission of enabling and sustaining Maritime & Port cyber resilience. This is accomplished through the availability of MPS-ISAO real-time cyber threat intelligence including Maritime & Port community contributed information and multi-directional (cross-sector) information sharing and coordinated response, working in collaboration with the U.S. Department of Homeland Security and the International Association of Certified ISAOs (IACI), and academic, technology and security strategic collaborative partners. The partnership announced today with Wapack Labs expands access to sector-specific cyber intelligence, analysis of community data via strict information sharing protocols, and response capabilities for Maritime & Port stakeholders and their supply chains. Deborah Kobza, MPS-ISAO Executive Director, states, "The Maritime & Port sector is increasingly vulnerable and actively being attacked by a variety of adversaries including nation states, organized crime, hacktivists and insider threats focused on espionage, human trafficking, financial gain, supply chain disruption, identity and intellectual property theft, or to gain a competitive advantage. 
Many physical and cyber systems used in ports and maritime, such as navigation/GPS, physical security, communication, energy, environmental controls, industrial control systems (ICS), emergency controls, operations, cargo tracking, terminal operations, and cruise transportation, represent cyber attack targets. This partnership with Wapack Labs advances the capability of Maritime & Port stakeholders to move from a reactive to proactive cyber resilience stance." Wapack Labs joined the MPS-ISAO's invitation-only webinar in March, "Interconnectedness in the Maritime Industry? First Let Me Tell You a Story," to present their private research which identified a financially motivated cyber adversary who has compromised thousands of port and maritime organizations and over a million user accounts. The MPS-ISAO and Wapack Labs will use this cyber intelligence research as a jumping-off point to increase industry awareness and protection. Christy Coffey, Director of Strategic Alliances, adds, "Wapack Labs is a perfectly suited partner for the MPS-ISAO. Their unique combination of cyber threat intelligence production with deep maritime and ports roots increases the level of early threat awareness that we can provide to our stakeholders. Wapack Labs have been tracking adversaries targeting this industry for a few years now, and so having them on our watch provides immediate gains." Wapack Labs bolsters the MPS-ISAO's ability to deliver Cyber Intelligence as a combination of industry-specific and personalized cyber threat intelligence, shared multi-directional sector and cross-sector information, advanced analytics, coordinated response, and training on topics of high interest. By participating in the MPS-ISAO, Maritime & Port stakeholders grow their understanding of vulnerabilities and risk so that they can proactively protect their organizations. "We are excited to be working with the MPS-ISAO", said Jeffery Stutzman, a co-founder and CEO of Wapack Labs. 
"It's imperative that we elevate cyber awareness in this important industry, and get ahead of threat actors. The MPS-ISAO - with the help of Wapack Labs' Cyber Threat Analysis Center (CTAC) - are force multipliers - real game changers in Maritime and Port industry cybersecurity." A 2016 report published by the U.S. Department of Homeland Security/Office of Cyber and Infrastructure Analysis (DHS/OCIA), "Consequences to Seaport Operations From Malicious Cyber Activity", states that a "cyber attack at a port or aboard a ship could result in lost cargo, port disruptions, and physical and environmental damage", and a disruption to U.S. ports can have a cascading effect on "Critical Manufacturing, Commercial Facilities, Food and Agriculture, Energy, Chemical, and Transportation Systems". This report includes a "Seaport Economics" section that details economic data points associated with sea trade. About the MPS-ISAO: Headquartered at the Global Situational Awareness Center (GSAC) at NASA/Kennedy Space Center, the MPS-ISAO is private sector-led, working in collaboration with government to advance Port and Maritime cyber resilience. Its core mission is to enable and sustain a safe, secure and resilient Maritime and Port Critical Infrastructure through security situational intelligence, bi-directional information sharing, coordinated response, and best practice adoption supported by role-based education. The MPS-ISAO is a founding member of the International Association of Certified ISAOs (IACI). More information at: www.mpsisao.org. About Wapack Labs Corporation: Wapack Labs, located in New Boston, NH, is a privately held cyber intelligence company delivering in-depth strategic cyber threat activities, intelligence, analysis, reporting and indicators. Products are delivered through collaborative portals, private messaging and email, in multiple human-readable and machine-to-machine forms. 
Since 2011, Wapack Labs has focused on tracking and profiling cyber adversaries, their tools, targets and attack methods, and on delivering intelligence to subscribers in a way that can be quickly applied to the protection of computers, networks, and business operations. More information at: www.wapacklabs.com.
News Article | May 16, 2017
San Jose/London, May 16th, 2017: Kx Systems (Kx), a subsidiary of First Derivatives (FD) plc and provider of the industry-leading kdb+ time series database, and Vexata, the leader in high performance enterprise storage systems, announced record-shattering results in independent testing by the Securities Technology Analysis Center (STAC®). With kdb+ running on Intel x86 multi-core processors using the Vexata Array with NVMe Flash SSDs, the solution set new records in 8 of 17 baseline STAC-M3™ benchmarks (the Antuco suite), and 14 of 24 benchmarks in the STAC-M3 scaling suite (Kanaga). The joint solution was able to achieve 36.8 GB/s of effective application-level throughput in bandwidth-intensive year-high bid tests. The solution also shattered existing records for several read-IO intensive queries like volume-weighted average bid, as well as balanced compute and IO workloads like statistical calculations.* In the era of Big Fast Data analytics, kdb+ has set industry benchmarks for speed and stability in high performance applications. It is widely used in the financial services industry for trading and risk management platforms and across a range of markets such as manufacturing, pharma, the Industrial Internet of Things, utilities and retail, which face similarly demanding data challenges. Kx is also at the forefront of the use of predictive analytics, virtual reality, artificial intelligence and machine learning techniques to drive operational intelligence and provide actionable insights. The Vexata Array, built on Vexata’s breakthrough Real Time Architecture, enables enterprises to realize an order of magnitude higher performance and scale from their database and analytics platforms. Vexata Arrays can be deployed simply and seamlessly into existing SAN environments and alongside existing storage. The Vexata Array is in production and available in a range of capacities at market-leading economics. 
Glenn Wright, Systems Architect for Kx, added: “These results show the mutual benefits gained by placing the latest technology alongside kdb+. What stood out are the streaming I/O performance results, alongside some very good results for discrete market data set queries. This should be particularly appealing to those customers wishing to consolidate their market data on a single storage device being shared between multiple analytics systems, for example between different business functions or business units of an organization.” Peter Lankford, Director of STAC, said: "Trading firms designed the STAC-M3 benchmark suite to represent a common set of performance-related challenges in financial time-series analytics. Competition requires capital markets organizations around the world to analyze more data in less time. This combination of Kx software and the Fibre Channel-based Vexata NVMe Flash Array established many new STAC-M3 records while running at large scale." Zahid Hussain, CEO of Vexata, said: “We are very pleased to be working closely with Kx to provide a compelling high performance solution stack that enterprises can immediately deploy for their stock-ticker analytics. The STAC-M3 results clearly showcase the unique performance benefits achievable through the Kx-Vexata solution.” Mark Sykes, COO at Kx, said: “Kx’s customers are always looking to increase performance for complex analytics on very large datasets. The approach taken by Vexata allows them to do this without the need to invest in a large storage system in order to achieve the results they want, lowering their total cost of ownership.” * Records set in all n-year high bid bandwidth benchmarks. Result of 36.8 GB/s set in the 5-year high bid bandwidth benchmark: STAC-M3.β1.1T.5YRHIBID.MBPS. Records set in 10 of 15 volume-weighted average bid benchmarks in the Kanaga suite (STAC-M3.β1.*.VWAB-12D.HO.TIME). 
Records set in the aggregated statistics benchmark (STAC-M3.β1.10T.STATS-AGG.TIME) and all four benchmarks of statistics over unpredictable intervals (STAC-M3.β1.*.STATS-UI.TIME). Detailed benchmark reports are available at www.STACresearch.com/news/2017/05/03/KDB170421. For more information about Kx please visit www.kx.com. For general enquiries, write to firstname.lastname@example.org or contact: About FD and Kx FD is a global technology provider with 20 years of experience working with some of the world’s largest finance, technology and energy institutions and employs over 1,700 people worldwide. The Group’s Kx technology is a leader in high-performance, in-memory computing, streaming analytics and operational intelligence. It delivers the best possible performance and flexibility for high-volume, data-intensive analytics and applications for multiple industries including finance, pharmaceuticals and manufacturing. About Vexata Vexata offers breakthrough enterprise storage solutions that enable transformative performance and scale from your database and analytics applications. With its Vexata Array family of solid state storage systems using NVMe Flash and now 3D XPoint™ SSDs, Vexata systems deploy simply and seamlessly into existing storage environments. Vexata was founded in December 2013 by a team of experienced entrepreneurs with a proven record of delivering meaningful innovation to the enterprise. Our investors include Intel Capital, Lightspeed Ventures, Mayfield Fund and Redline Capital. Vexata’s All NVMe Arrays are in production and available at market-leading economics. For more information, please visit www.vexata.com or email us at email@example.com. “STAC” and all STAC names are trademarks or registered trademarks of the Securities Technology Analysis Center, LLC.
News Article | April 18, 2017
MCLEAN, Va.--(BUSINESS WIRE)--BAE Systems’ Peder Jungck has been named president of the Information Technology - Information Sharing and Analysis Center (IT-ISAC), an influential not-for-profit organization composed of member companies dedicated to enhancing cyber security by sharing threat information and collaborating on effective mitigations of cyber risk. IT-ISAC members include C-suite technology and security leaders from the world’s largest technology companies, including Intel, Oracle, and Hewlett Packard Enterprise. IT-ISAC members actively collaborate to protect their enterprises and the collective global information infrastructure. The exclusive, industry-only forum also works closely with the U.S. Department of Homeland Security to help companies around the world minimize threats, manage risk, and provide near real-time responses to real-world cybersecurity challenges. “IT-ISAC engages a global network of subject-matter experts from the world’s leading technology companies to enhance cross-industry awareness of emerging cyber threats,” said Jungck, chief technology officer of BAE Systems’ Intelligence & Security sector. “The organization is itself a cyber defense force multiplier that is helping to protect global commerce and enhance international security.” Cyber security is of paramount importance for BAE Systems. The company shares more information about cyber threats than any other member of the defense industrial base. BAE Systems also made international news in May 2016 for its strategic cyber threat intelligence (CTI) sharing partnership with Fujitsu of Japan. “As a best practice, BAE Systems harnesses all of the data surrounding cyber-attack strings, etc. that target our network,” Jungck said. “When we identify and neutralize these threats to our own network, we can share this cyber threat data with our industry partners. 
Industry collaboration through crowdsourcing is an effective way to share the rewards of a safer cyberspace, at a reduced cost. CTI sharing is the logical first step for any organization seeking to implement a holistic cyber defense strategy.” Jungck has more than 20 years of experience within the IT industry, dealing with information assurance, secure computing, and network security challenges. Over the course of his career, Jungck has served as a CTO of a Silicon Valley venture capital firm and led a variety of IT and cybersecurity businesses, which have developed large-scale managed service offerings providing trusted IT infrastructure and cyber defense for commercial enterprises, telecommunications carriers, and the U.S. government. In recognition of his work, Jungck has earned 26 patents in networking and security and has published a book and peer-reviewed works on secure computing and software defined networking. He has also spent considerable time and effort with start-ups and developing industry communities related to cyber, including serving as an early board member of Cyber Maryland, a member of the National Initiative for Cybersecurity Education’s NICE365 Industry Advisory Board, a Security Innovation Network (SINET) 16 Advisor, and a STARS Mentor for Mach37 (Virginia Cyber Accelerator). BAE Systems provides intelligence and security services to manage big data, inform big decisions, and support big missions. BAE Systems delivers a broad range of solutions and services including intelligence analysis, cyber operations, IT, systems development, systems integration, and operations and maintenance to enable militaries and governments to recognize, manage, and defeat threats. The company takes pride in supporting critical national security missions that protect the nation and those who serve.
News Article | May 4, 2017
The Financial Sector is serious about defending its institutions from cyber attack; GRIMM is here to help. GRIMM was an Innovation Sponsor at the Financial Services - Information Sharing and Analysis Center (FS-ISAC) Annual Summit earlier this week in Florida. GRIMM, a security engineering company based out of Arlington, VA, has been a leader in offering specialized security services for financial institutions since 2013. The Iranian DDoS of American banks in 2012 was a great awakening for the Financial Sector, making clear that it needed to defend its enterprises against cyber attack. But knowing is only half the battle. “The bottom line is that financial institutions are still at risk of serious security issues,” said Bryson Bort, Founder and CEO of GRIMM. “Malicious account takeovers, ATM transaction interception, fake or deceptive wire transfers — the list of possible attacks against financial institutions is only limited by the imagination. We need to stop defending against the exercises and assessments and begin defending against the threats.” GRIMM has worked with the Financial Sector, conducting comprehensive enterprise-wide security reviews for financial institutions for several years now. Through these reviews, GRIMM provides a true understanding of security vulnerabilities, a solution for fixing them, and training for the workforce to defend against them. These reviews are appropriate for firms that already have an internal, risk-based approach for addressing cybersecurity but need a “SWAT” team of experts to test their systems, find the holes, and help them fix the holes before their attackers find them. But what about firms that don’t know “where to begin” when it comes to developing and implementing a process for addressing, measuring, and tracking enterprise cybersecurity? 
Developing an internal, risk-based approach to address cybersecurity tends to be chaotic, resource intensive, and cumbersome — an overwhelming issue for small to mid-sized financial institutions. Because of these gaps, last summer GRIMM expanded its Financial Sector services by offering consulting to financial institutions in order to provide them a starting point — so that they have a customized, repeatable plan for addressing key cybersecurity issues as they arise and a clear understanding of roles and responsibilities. GRIMM also worked with a Fortune 50 company that needed a more rigorous assessment to co-develop the first true enterprise risk and threat assessment framework, “CROSSBOW.” CROSSBOW assesses an enterprise at scale using complete threat models that combine communications, capabilities, deployment methods, and the attacker’s tactics, techniques, and procedures (TTPs). The Financial Sector is consistently going to be at high risk for cyber attacks — and raising the barrier to entry for attacking smaller financial institutions is the first step toward better defending the sector as a whole. About GRIMM: GRIMM offers security engineering and consulting services backed by research and development, delivering the art of the possible in cybersecurity. The team services government and commercial clients from a diverse range of industries. For more information about our application security or consulting services, please contact firstname.lastname@example.org.
News Article | February 16, 2017
A University of Central Florida professor is working with NASA to figure out a way to extract metals from the Martian soil - metals that could be fed into a 3-D printer to produce the components of a human habitat, ship parts, tools and electronics. "It's essentially using additive-manufacturing techniques to make constructible blocks. UCF is collaborating with NASA to understand the science behind it," said Pegasus Professor Sudipta Seal, who is interim chair of UCF's Materials Science and Engineering program, and director of the university's Advanced Materials Processing & Analysis Center and NanoScience Technology Center. NASA and Seal will research a process called molten regolith electrolysis, a technique similar to how metal ores are refined here on Earth. Astronauts would be able to feed Martian soil - known as regolith - into a chamber. Once heated to nearly 3,000 degrees Fahrenheit, the electrolysis process would produce oxygen and molten metals, both of which are vital to the success of future human space exploration. Seal's expertise will also help determine which form of those metals is most suitable for commercial 3-D printers. NASA intern Kevin Grossman, a graduate student from Seal's group, is also working on the project, which is funded by a NASA grant. Grossman said he hopes future projects in similar areas can grow the current partnership between UCF and the research groups at NASA's Kennedy Space Center. NASA is already working on sending humans to the Red Planet in the 2030s. The agency has begun developing plans for life-support systems and other technology. NASA isn't alone. Elon Musk, billionaire founder of SpaceX and Tesla Motors, is working on his own plan. Mars One, a Dutch nonprofit, is touting a plan to send dozens of volunteers from around the world on a one-way trip to colonize Mars. 
They all agree that for sustainable Mars exploration to work, they must be able to use resources on Mars that would otherwise require costly transportation from Earth - a concept known as in situ resource utilization. That's where Seal's research comes in. "Before you go to Mars, you have to plan it out," Seal said. "I think this is extremely exciting." UCF has a long relationship with NASA, dating back to the first research grant ever received by the university, then known as Florida Technological University. Other UCF faculty members continue researching in situ resource utilization. Phil Metzger of UCF's Florida Space Institute is working with commercial space mining company Deep Space Industries to figure out a way to make Martian soil pliable and useful for 3-D printing. The same company has tapped Metzger and UCF colleague Dan Britt to develop simulated asteroid regolith that will help them develop hardware for asteroid mining.
News Article | February 24, 2017
WASHINGTON, Feb. 23, 2017 /PRNewswire-USNewswire/ -- The Automotive Information Sharing and Analysis Center (Auto-ISAC) welcomes Bosch, Cooper Standard, Honeywell, Hyundai Mobis, Lear Corporation, LG Electronics and NXP Semiconductors as original equipment supplier members. The inclusion...
News Article | February 15, 2017
We used a data-assimilating ocean circulation inverse model (OCIM) (refs 2, 16) to estimate the mean ocean circulation during three different time periods: pre-1990, the decade of the 1990s, and the period 2000–2014, which we refer to respectively as the 1980s, 1990s and 2000s. For each time period, we assimilated observations of five tracers: potential temperature, salinity, the chlorofluorocarbons CFC-11 and CFC-12, and Δ14C. Potential temperature and salinity data were taken from the 2013 World Ocean Database, Ocean Station Data and Profiling Floats data sets. The observations were binned by time period and then averaged onto the model grid. Quality control was performed by removing outliers (more than four inter-quartile ranges above the upper quartile) at each depth level in the model. This removed less than 0.1% of the observations. CFC-11, CFC-12 and Δ14C observations were taken from the Global Ocean Data Analysis Project version 2 (GLODAPv2) database (ref. 30). These data were already quality-controlled. We used an earlier version of the GLODAPv2 database, but checking it against the newest release we find that the correlation R2 of the fit between the CFC-11 and CFC-12 observations in each version is over 0.99. The only major difference between the version used and the newest version of GLODAPv2 is that the latter includes data from two additional cruise tracks in the Indian Ocean. The CFC-11 and CFC-12 observations were binned by time period and then averaged onto the model grid. We assimilated Δ14C observations only where they were paired with a near-zero CFC-11 or CFC-12 measurement (CFC-11 < 0.05 pmol kg−1, CFC-12 < 0.025 pmol kg−1). This was done to remove Δ14C observations that may have been contaminated by bomb-produced 14C, since we model only the ‘natural’ (pre-1955 bomb) component of Δ14C. These Δ14C observations constrain the ventilation of deep water masses, and the same Δ14C observations were used in each of the three assimilation periods. Extended Data Fig. 7 shows the spatial distribution of the CFC observations for each decadal period, as well as the temporal distribution of observations of CFCs, temperature, and salinity. The spatial distributions of temperature and salinity are not shown, but all regions are well sampled for all time periods. Almost all of the transects with CFC observations in the 1990s were re-occupied with repeat hydrographies during the 2000s. During the 1980s, in contrast, several large areas are missing CFC observations. In particular, during the 1980s there are no CFC observations in the Pacific and Indian sectors of the Southern Ocean. For these sectors, the inferred circulation changes from the 1980s to the 1990s must therefore be interpreted cautiously. Nonetheless, the model-predicted weakening of the Southern Ocean CO2 sink during the 1990s is in good agreement with independent studies using atmospheric inverse models (ref. 10) and prognostic ocean general circulation models (refs 8, 19). This suggests that the more densely sampled temperature and salinity data, in conjunction with CFC data from elsewhere, may be able to compensate for a lack of CFC data in the Southern Ocean during the 1980s. The sporadic nature of the oceanographic observations, particularly the CFC measurements (with some transects being occupied only about once per decade), makes the data assimilation susceptible to temporal aliasing. The error bars reported here do not take into account the uncertainty due to this potential aliasing of interannual variability into the data-assimilated circulations. Aliasing errors are likely to be largest for the smallest regions, and those with the sparsest observations. This must be kept in mind when interpreting the results of the assimilation model, particularly those on smaller spatial scales (for example, regional CO2 fluxes of Fig. 2). On the other hand, these aliasing effects will be minimized when integrating over larger areas. 
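The quality-control screen described earlier in this section (discarding values more than four inter-quartile ranges above the upper quartile, applied independently at each model depth level) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the data layout and function names are assumptions.

```python
import statistics

def screen_outliers(values, k=4.0):
    """Discard any value more than k inter-quartile ranges above the
    upper quartile; all other values are retained in their input order."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    cutoff = q3 + k * (q3 - q1)
    return [v for v in values if v <= cutoff]

# Screen each model depth level independently, as described in the text.
# obs_by_depth maps a depth-level index to the binned observations there.
obs_by_depth = {
    0: [2.1, 2.3, 2.2, 2.4, 9.9],   # 9.9 is a spurious value
    1: [1.0, 1.1, 0.9, 1.2],
}
screened = {z: screen_outliers(v) for z, v in obs_by_depth.items()}
```

With the example data above, only the spurious 9.9 at depth level 0 exceeds the cutoff (upper quartile 2.4 plus four times the inter-quartile range 0.2), matching the text's observation that the screen removes only a tiny fraction of the data.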
Thus we would expect, for example, that the global CO2 fluxes diagnosed by the assimilation model will be largely free from aliasing errors. Finally, we note that in the Arctic Ocean and Mediterranean Sea, a combination of the small basin area and lack of data constraints causes the model CO2 simulations to exhibit some numerical artefacts. We therefore do not include these regions in our analysis. We use an inversion procedure previously used to estimate the climatological mean state of the ocean circulation (refs 2, 16), and follow the methods used in those studies with a few exceptions, as detailed here. Here we break the assimilation down into three time periods: pre-1990, 1990–1999 and 2000–2014. We use the same dynamical forcing (wind stress and baroclinic pressure gradient forcing) for each time period. Then, tracer data from each period are assimilated independently to arrive at an estimate of the mean ocean circulation state during each period. This guarantees that the diagnosed circulation differences between time periods are due solely to information carried in the oceanographic tracer fields themselves, and not to assumptions about changes in external forcing. For each assimilation time period, we adjust a set of control parameters to minimize the misfit between observed and modelled tracer concentrations (refs 2, 16). We note that this method yields a diagnostic, rather than predictive, estimate of ocean circulation within each assimilation time period. The approach therefore differs from that of standard coupled climate models such as those participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). The CMIP5 models rely on the accuracy of external forcing and model physics to produce an accurate ocean state estimate. They therefore have relatively high spatial resolution (approximately 0.5°–1°), resolve temporal variability on sub-daily timescales, and employ relatively sophisticated model physics. 
The OCIM, on the other hand, does not rely so much on the accuracy of external forcing or internal physics, but rather on the assimilation of global tracer data sets to produce an accurate ocean state estimate. To make this data assimilation tractable, the OCIM has relatively coarse resolution (2°), does not resolve temporal variability within assimilation time periods, and uses simplified linearized physics2. The advantage of the OCIM relative to CMIP5 models is that the resulting circulation estimate is consistent with the observed tracer distributions, while the disadvantage is its relatively coarse resolution and assumption of steady state within each assimilation period. In the OCIM, tracer concentrations C are simulated by solving the transport equation

∂C/∂t + A C = S(C),     (1)

where A is a matrix transport operator built from the model-estimated horizontal and vertical velocities and imposed diffusive terms, and S(C) is a source–sink term. For the tracers simulated here the only sources and sinks are due to air–sea exchange, and except for the radioactive decay of 14C they are conservative away from the surface layer. The source–sink term for these tracers takes the form

S(C) = K (C_sat − C)/δz,     (2)

which is non-zero only in the surface layer of the model (of thickness δz). The piston velocity K and the surface saturation concentration C_sat vary for each tracer. For potential temperature and salinity, K = δz/(30 d), and C_sat is carried as a control (optimizable) parameter16 that is allowed to vary between assimilation time periods, but is held constant within each time period. For CFC-11 and CFC-12, K is modelled as a quadratic function of the wind speed 10 m above the sea surface, u_10 (ref. 31):

K = a (1 − f) u_10^2 (Sc/660)^(−1/2),     (3)

where a is a constant piston-velocity coefficient (consistent with a wind speed in metres per second and a piston velocity in centimetres per hour), f is the fractional sea-ice cover, and Sc is the temperature-dependent Schmidt number. 
The 10-m wind speed and fractional sea-ice cover are taken from the NCEP reanalysis for 1948–2014 and averaged for each year. For u_10 the annual average is computed from daily values following the OCMIP-2 procedure32, which takes into account short-term variability in wind speeds. The surface saturation concentrations (C_sat) for CFC-11 and CFC-12 are computed from the observed time- and latitude-dependent atmospheric CFC-11 and CFC-12 concentrations33 using a temperature- and salinity-dependent solubility34. For the solubility we use time-independent temperatures and salinities from the 2009 World Ocean Atlas annual climatology35, 36. For CFC-11, our simulation runs from 1945 to 2014, and for CFC-12 from 1936 to 2014. Values for u_10 and f before 1948 are set to their 1948 values. Natural radiocarbon is modelled in terms of the ratio R = Δ14C/1,000 + 1. The source–sink term of R takes the form

S(R) = (1 − R)/τ_g − R/τ_14.     (4)

The first term on the right-hand side represents air–sea exchange with a well-mixed atmosphere of R = 1 (that is, Δ14C = 0‰) with a timescale τ_g = 5 years, and is applied only in the top model layer. This simple parameterization neglects spatial variability in 14C fluxes due to varying surface DIC and/or CO2 fluxes, but is judged adequate for our purposes, because the Δ14C constraint is needed only to determine the approximate ventilation-age distribution of the deep ocean, so that a reasonable distribution of respired DIC can be simulated. The second term on the right-hand side of equation (4) represents the radioactive decay of 14C, with e-folding time τ_14 = 8,266 years, and is active throughout the water column. Biological sources and sinks of Δ14C are neglected, because they have been shown to have a small effect on Δ14C (ref. 37). For most of the simulations here, we used a piston-velocity coefficient of a = 0.27, following ref. 38. 
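The quadratic wind-speed dependence of the CFC piston velocity is straightforward to express in code. The helper below is a sketch; the argument names are our own, and the example values are illustrative:

```python
def piston_velocity(a, u10, f_ice, schmidt):
    """Quadratic wind-speed gas-transfer velocity:
    K = a * (1 - f_ice) * u10^2 * (Sc/660)^(-1/2).
    With u10 in m/s and a ~ 0.27, K comes out in cm/hr."""
    return a * (1.0 - f_ice) * u10 ** 2 * (schmidt / 660.0) ** -0.5

# Example: 8 m/s winds, ice-free water, Schmidt number 660 (Sc factor = 1):
K = piston_velocity(a=0.27, u10=8.0, f_ice=0.0, schmidt=660.0)  # about 17.3 cm/hr
```

Note how full sea-ice cover (f_ice = 1) shuts off gas exchange entirely, and how doubling the wind speed quadruples the transfer velocity.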
To test the sensitivity of our results to this value, we ran a set of assimilations with a increased by 30%, which is closer to the original OCMIP-2 value of a = 0.337 (ref. 32). In these assimilations we also reduced the value of τ_g in the radiocarbon simulation by 30%, to be consistent with the higher assumed piston velocity. To gauge the uncertainty due to the prescribed diffusivities, we also ran the model with different values of the isopycnal and vertical diffusivities, K_i and K_v. In all, we ran five different models with different values of a, K_i and K_v. Supplementary Table 1 summarizes the fit to observations for each of these models during each assimilation period. Extended Data Figs 8 and 9 show the zonally averaged difference between model-simulated and observed potential temperature (Extended Data Fig. 8) and CFC-11 (Extended Data Fig. 9) for the Atlantic and Pacific basins during each assimilation time period. The model–data residuals are small (generally less than 1 °C for potential temperature, and 0.5 pmol kg⁻¹ for CFC-11), but there are some biases. In the Atlantic, simulated potential temperatures are slightly too high in the northern subtropical thermocline, in the Southern Ocean upwelling region, and in the region of Antarctic Intermediate Water formation. Potential temperatures are slightly too low in the North Atlantic and in most of the thermocline. The patterns are similar in the Pacific (Extended Data Fig. 8). Cooler-than-observed high latitudes are to be expected owing to the lack of a seasonal cycle in the OCIM, which biases temperatures towards end-of-winter values. The most obvious bias in the CFC-11 field is a slight (about 0.25 pmol kg⁻¹) underprediction throughout most of the upper ocean. Larger negative biases (about 1 pmol kg⁻¹) occur at the surface of the Southern Ocean, the North Atlantic and the North Pacific (Extended Data Fig. 9). 
These negative biases could indicate that the CFC-11 piston velocity we used for most simulations is too small. Because the same piston velocity was used for all assimilation periods, however, this would not affect the inferred circulation-driven changes in the CO2 sink. Importantly, the spatial patterns of the model–data residuals are similar in all three assimilation time periods. This temporal coherence in the model–data residuals indicates that the inferred circulation changes do not introduce spurious biases into the assimilation. Our approach approximates the decadal variability of the ocean circulation by fitting a steady-state circulation independently for each time period. We thus neglect both interannual variability within, and temporal variations before, the assimilation period. However, the integrated effect of all previous circulation changes is encoded in the tracer distributions of the assimilation period, and is therefore indirectly ascribed to an effective decadal circulation representative of that period. To test whether these separate steady-state circulations capture the effects of the time-varying circulation, we used the data-assimilated circulations to simulate ocean CFC-11 concentrations, changing the circulation on the fly from decade to decade as the CFC-11 is propagated to the period of interest. We find that this approach fits the CFC-11 observations in each period much better than an unchanging circulation does (Extended Data Fig. 10), which indicates that a circulation that does not change from decade to decade is inconsistent with the tracer data. It also indicates that changing the circulation on the fly from decade to decade, as we did in our CO2 simulations (see below), provides a good approximation to the effect of the continuously changing circulation of the ocean. 
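The "on the fly" circulation switching amounts to a time-stepping loop that swaps the transport operator at a decade boundary. The 2 × 2 operators and the source term below are illustrative stand-ins, not OCIM output:

```python
import numpy as np

def step(C, A, source, dt):
    """One forward-Euler step of dC/dt = -A C + source."""
    return C + dt * (-A @ C + source)

# Illustrative 2x2 stand-ins for the decadal transport operators:
A_1980s = np.array([[0.2, -0.1], [-0.1, 0.2]])    # "stronger" exchange
A_1990s = np.array([[0.1, -0.05], [-0.05, 0.1]])  # "weaker" exchange
source = np.array([0.05, 0.0])                    # tracer enters the surface box

C = np.zeros(2)
dt = 0.1
for year in np.arange(1980.0, 2000.0, dt):
    A = A_1980s if year < 1990.0 else A_1990s     # switch the operator in 1990
    C = step(C, A, source, dt)
# C now reflects the tracer's history under both circulations in sequence,
# not just the circulation of the final decade.
```

Propagating the tracer through each decade's circulation in turn is what lets a transient tracer like CFC-11 discriminate between a changing and an unchanging circulation.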
To investigate the influence of changing ocean circulation on the oceanic CO2 sink, we first simulated the pre-industrial carbon distribution (before 1765) by assuming that the ocean was in equilibrium with an atmospheric CO2 concentration of 278 parts per million. We then simulated the transient evolution of dissolved inorganic carbon (DIC) from 1765 to 2014 using observed atmospheric CO2 concentrations as a boundary condition2. For this simulation, the ocean circulation is assumed to be steady before 1990 at its 1980s estimate, and is then switched abruptly to the assimilated circulations for the 1990s and 2000s. We acknowledge the approximate nature of this approach, since the real ocean circulation changes gradually. We therefore present only decadally averaged results for the 1980s, 1990s and 2000s, rather than focusing on particular years. We estimated uncertainty by varying the parameters of the carbon-cycle model over a wide range of values. In all, we ran 32 simulations with different combinations of the parameters governing the production and remineralization of particulate and dissolved organic carbon and calcium carbonate (Supplementary Table 2). Combined with five separate circulation estimates, this gives 160 state estimates from which the uncertainties are derived. For all simulations we used the OCMIP-2 formulation of the ocean carbon cycle39, implemented for the matrix transport model as described elsewhere40. The governing equation for the oceanic DIC concentration is

∂[DIC]/∂t + A [DIC] = J_v + J_g + J_b,     (5)

where A is the matrix transport operator; J_v is the virtual flux of DIC due to evaporation and precipitation; J_g represents the air–sea gas exchange of CO2; and J_b represents the biological transformations of DIC (uptake and remineralization of particulate and dissolved organic carbon). To compute the gas-exchange fluxes of CO2 we must also simulate alkalinity; the equation for alkalinity follows equation (5) but without the air–sea exchange term. 
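The ensemble bookkeeping above (32 parameter combinations times 5 circulation estimates, giving 160 state estimates) is consistent with varying five biogeochemical parameters over two settings each, since 2^5 = 32; that pairing is our reading, and every numerical value below is a placeholder, not a value from Supplementary Table 2:

```python
from itertools import product

# Five parameters, two illustrative settings each -> 2**5 = 32 combinations;
# crossed with 5 circulation models -> 160 state estimates.
param_choices = {
    "z_c":   (73.0, 100.0),     # compensation depth (m), placeholder values
    "kappa": (0.25, 0.5),       # DOP decay rate (1/yr), placeholder values
    "b":     (0.7, 1.0),        # particle-flux power-law exponent, placeholders
    "r":     (0.05, 0.1),       # CaCO3:POC rain ratio, placeholder values
    "d":     (2000.0, 4000.0),  # CaCO3 dissolution e-folding depth (m)
}
circulations = ["base", "a+30%", "Ki_alt", "Kv_alt", "Ki_Kv_alt"]  # 5 models

combos = list(product(*param_choices.values()))
state_estimates = [(circ, combo) for circ in circulations for combo in combos]
```

Enumerating the full cross-product up front makes it easy to run the ensemble members independently and pool them afterwards into an uncertainty estimate.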
For our simulations, the only terms that vary from one time period to the next are A (owing to variability in the ocean circulation) and J_g (owing to variability in the atmospheric CO2 concentration and in the gas-exchange piston velocity). The virtual fluxes and biological fluxes of DIC are held constant over time at their pre-industrial values, so that we can isolate the effects of ocean circulation variability on the oceanic CO2 sink. Air–sea CO2 gas exchange occurs in the surface layer and is given by

J_g = K ([CO2]_sat − [CO2])/δz,     (6)

where the piston velocity K is parameterized following equation (3). The CO2 saturation concentrations are computed using observed temperature and salinity and the observed atmospheric pCO2. For the results presented in the main-text figures and in Extended Data Figs 3 and 4, we ignored changes in the solubility of CO2 due to changes in SST and salinity, in order to isolate changes in ocean CO2 uptake due to ocean circulation variability. For these simulations we calculated [CO2]_sat using the mean SST and salinity from the 2009 World Ocean Atlas objectively mapped climatologies35, 36. Atmospheric pCO2 is taken from ref. 41 for the years 1765–2012, and from the Mauna Loa CO2 record for the years 2013–2014. The virtual fluxes J_v and the biological carbon fluxes J_b follow the OCMIP-2 design39, and are implemented for the matrix transport model using a Newton solver as described elsewhere40. Model parameters governing the biological cycling of carbon are listed in Supplementary Table 2. We allow for uncertainty in the parameters z_c (the compensation depth, above which DIC uptake is parameterized by restoring to observed PO4 concentrations and multiplying by the globally constant ratio of C to P, r_C:P); the decay rate κ of labile dissolved organic phosphorus; the exponent b in the assumed power-law dependence of particle flux on depth42; the CaCO3:POC 'rain ratio' r; and the e-folding depth d for CaCO3 dissolution. 
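The air–sea gas-exchange term is, at heart, a one-line calculation applied in each surface grid cell. The function below is a sketch with illustrative units and values; in the real model the piston velocity comes from the wind-speed parameterization rather than being passed in directly:

```python
def co2_gas_exchange_tendency(K, co2_sat, co2_surf, dz_surf):
    """Surface-layer DIC tendency from air-sea CO2 exchange:
    J_g = K * ([CO2]_sat - [CO2]) / dz_surf, nonzero only in the top layer.
    Illustrative units: K in m/yr, concentrations in mmol/m^3, dz in m."""
    return K * (co2_sat - co2_surf) / dz_surf

# Undersaturated surface water takes up CO2 (positive DIC tendency):
J_g = co2_gas_exchange_tendency(K=1000.0, co2_sat=10.0, co2_surf=9.0, dz_surf=50.0)
```

The sign convention follows directly from the formula: undersaturated water ([CO2] below [CO2]_sat) gains DIC from the atmosphere, and supersaturated water loses it.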
These parameters are varied over a wide range to account for the range of values found in the literature32, 39, 40, 43, 44, 45, 46, 47, 48, 49, 50, and are presented in Supplementary Table 2. Note that we do not vary σ, the fraction of production routed to dissolved organic phosphorus, because previous studies found that variations in κ and σ have very similar effects on DIC and alkalinity distributions40. It is therefore sufficient to vary only κ. We also do not vary r or r_C:P, as their values vary spatially in reality and are probably sensitive to the circulation, which controls nutrient availability. These complexities are ignored here for expediency, and the biological cycling of DIC is assumed to be constant and unchanging, in order to isolate the direct effects of circulation changes. To isolate the effects of circulation variability on the oceanic CO2 sink (as in Figs 2 and 3), we ran two additional simulations that held the circulation at its 1980s state during the 1990s, and at its 1990s state during the 2000s. The anomalous CO2 flux attributed to changing circulation during the 1990s was calculated as the difference between the 1990s CO2 fluxes of the simulation in which the circulation was switched in 1990 and those of the simulation in which the circulation remained at its 1980s state. Likewise, the anomalous CO2 flux attributed to changing circulation during the 2000s was calculated as the difference between the 2000s CO2 fluxes of the simulation in which the circulation was switched in 2000 and those of the simulation in which the circulation remained at its 1990s state. To diagnose the contribution of thermal effects to air–sea CO2 fluxes, we also ran a suite of simulations in which we allowed [CO2]_sat to vary from one decade to the next owing to changes in SST. 
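The attribution logic reduces to differencing paired simulations. The flux numbers below are invented purely to show the sign convention:

```python
# Decadal-mean CO2 fluxes (Pg C/yr, negative = ocean uptake); the numbers
# here are made up solely to illustrate the differencing.
flux_1990s_switched = -2.1   # circulation switched to its 1990s state in 1990
flux_1990s_held = -2.4       # circulation held at its 1980s state

# Anomalous flux attributed to the circulation change during the 1990s:
anomaly_1990s = flux_1990s_switched - flux_1990s_held
# A positive anomaly (less negative flux) means the circulation change
# weakened the ocean carbon sink during that decade.
```

Because the two runs differ only in the circulation, any other source of decadal variability cancels in the difference.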
For these simulations, we calculated the decadally averaged SST for the 1980s, 1990s and 2000s from two different reconstructions, the Centennial In situ Observation-Based Estimates (COBE)51 and the Extended Reconstructed Sea Surface Temperature version 4 (ERSSTv4)52. For each decade, we calculated the anomaly with respect to the 1980s, and then added this anomaly to the climatological SST used in the model during the 1990s and 2000s. This yielded two separate reconstructed SST histories, which were used to compute the CO2 saturation in separate simulations. Each simulation was run with each of the five different versions of our circulation model, yielding ten state estimates from which uncertainties were derived. The results of these simulations were then compared to otherwise identical simulations in which SSTs were held constant, and the difference between the two was attributed to thermal effects on CO2 solubility. These differences are presented in Extended Data Fig. 5. Data for the assimilation model were obtained from the World Ocean Database 2013 (temperature and salinity), available at https://www.nodc.noaa.gov/OC5/WOD13/, and from the GLODAPv2 database30 (radiocarbon and CFCs), archived at the Carbon Dioxide Information Analysis Center (CDIAC; http://cdiac.ornl.gov/oceans/GLODAPv2/). Mapped SST36 and salinity35 climatologies were obtained from the 2009 World Ocean Atlas at https://www.nodc.noaa.gov/OC5/WOA09/pr_woa09.html. The NOAA_ERSST_v452 and COBE-SST251 data are provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their website at http://www.esrl.noaa.gov/psd/. NCEP reanalysis data were obtained from http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surfaceflux.html. The Mauna Loa CO2 record used in our carbon-cycle model is available from the NOAA Earth System Research Laboratory at http://www.esrl.noaa.gov/gmd/ccgg/trends/. 
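The SST-anomaly construction can be sketched with small arrays standing in for the gridded COBE/ERSSTv4 fields; all values are illustrative:

```python
import numpy as np

# Tiny stand-ins for gridded SST fields (deg C); values are illustrative.
sst_clim = np.array([20.0, 10.0, 2.0])    # model's climatological SST
sst_1980s = np.array([19.8, 10.1, 1.9])   # reconstruction, 1980s decadal mean
sst_1990s = np.array([20.0, 10.3, 2.0])   # reconstruction, 1990s decadal mean

# Anomaly relative to the 1980s, added onto the model climatology to give
# the SST used when computing CO2 saturation in the 1990s simulation:
anomaly_1990s = sst_1990s - sst_1980s
sst_model_1990s = sst_clim + anomaly_1990s
```

Working with anomalies rather than the raw reconstructed fields keeps the model's climatological base state intact while letting only the decade-to-decade SST change affect the CO2 solubility.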
Data from the SOCOM project4, 5, 15, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63 are available at http://www.bgc-jena.mpg.de/SOCOM/. All data used to create the figures in this paper will be archived at CDIAC (http://cdiac.ornl.gov/). Code may be obtained by contacting T.D. (email@example.com).