News Article | March 30, 2017
Site: www.scientificcomputing.com

The San Diego Supercomputer Center (SDSC) at the University of California San Diego and the Simons Foundation's Flatiron Institute in New York have reached an agreement under which the majority of SDSC's data-intensive Gordon supercomputer will be used by Simons for ongoing research following completion of the system's tenure as a National Science Foundation (NSF) resource on March 31. Under the agreement, SDSC will provide high-performance computing (HPC) resources and services on Gordon for the Flatiron Institute to conduct computationally based research in astrophysics, biology, condensed matter physics, materials science, and other domains.

The two-year agreement, with an option to renew for a third year, takes effect April 1, 2017. The Flatiron Institute will have annual access to at least 90 percent of Gordon's system capacity. SDSC will retain the rest for use by other organizations, including UC San Diego's Center for Astrophysics & Space Sciences (CASS), SDSC's OpenTopography project, and various projects within the Center for Applied Internet Data Analysis (CAIDA), which is based at SDSC.

"We are delighted that the Simons Foundation has given Gordon a new lease on life after five years of service as a highly sought-after XSEDE resource," said SDSC Director Michael Norman, who also served as the principal investigator for Gordon. "We welcome the Foundation as a new partner and consider this to be a solid testimony regarding Gordon's data-intensive capabilities and its myriad contributions to advancing scientific discovery."

"We are excited to have a big boost to the processing capacity for our researchers and to work with the strong team from San Diego," said Ian Fisk, co-director of the Scientific Computing Core (SCC), which is part of the Flatiron Institute.

David Spergel, director of the Flatiron Institute's Center for Computational Astrophysics (CCA), said, "CCA researchers will use Gordon both for simulating the evolution and growth of galaxies and for the analysis of large astronomical data sets. Gordon offers us a powerful platform for attacking these challenging computational problems."

The POLARBEAR project and its successor, The Simons Array, led by UC Berkeley and funded first by the Simons Foundation and then in 2015 by the NSF under a five-year, $5 million grant, will continue to use Gordon as a key resource. "POLARBEAR and The Simons Array, which will deploy the most powerful CMB (Cosmic Microwave Background) radiation telescope and detector system ever made, are two NSF-supported astronomical telescopes that observe the CMB, in essence the leftover 'heat' from the Big Bang in the form of microwave radiation," said Brian Keating, a professor of physics at UC San Diego's Center for Astrophysics & Space Sciences and a co-PI for the POLARBEAR/Simons Array project.

"The POLARBEAR experiment alone collects nearly one gigabyte of data every day that must be analyzed in real time," added Keating. "This is an intensive process that requires dozens of sophisticated tests to assure the quality of the data. Only by leveraging resources such as Gordon are we able to continue our legacy of success."

Gordon also will be used in conjunction with the Simons Observatory, a five-year, $40 million project awarded by the Foundation in May 2016 to a consortium of universities led by UC San Diego, UC Berkeley, Princeton University, and the University of Pennsylvania. In the Simons Observatory, new telescopes will join the existing POLARBEAR/Simons Array and Atacama Cosmology Telescopes to produce an order of magnitude more data than the current POLARBEAR experiment. An all-hands meeting for the new project will take place at SDSC this summer.

The result of a five-year, $20 million NSF grant awarded in late 2009, Gordon entered production in early 2012 as one of the 50 fastest supercomputers in the world, and the first to use massive amounts of flash-based memory. That made it many times faster than conventional HPC systems, while having enough bandwidth to help researchers sift through tremendous amounts of data. Gordon also has been a key resource within NSF's XSEDE (Extreme Science and Engineering Discovery Environment) project. The system will officially end its NSF duties on March 31 following two extensions from the agency. By the end of February 2017, Gordon had supported research and education by more than 2,000 command-line users and over 7,000 gateway users, primarily through resource allocations from XSEDE.

One of Gordon's most data-intensive tasks was to rapidly process raw data from almost one billion particle collisions as part of a project to help define the future research agenda for the Large Hadron Collider (LHC). Gordon provided auxiliary computing capacity by processing massive data sets generated by one of the LHC's two large general-purpose particle detectors used to find the elusive Higgs particle. The around-the-clock data processing run on Gordon was completed in about four weeks' time, making the data available for analysis several months ahead of schedule.


Mikians J.,Polytechnic University of Catalonia | Dhamdhere A.,CAIDA | Dovrolis C.,Georgia Institute of Technology | Barlet-Ros P.,Polytechnic University of Catalonia | Sole-Pareta J.,Polytechnic University of Catalonia
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Identifying the statistical properties of the Interdomain Traffic Matrix (ITM) is fundamental for Internet techno-economic studies but challenging due to the lack of adequate traffic data. In this work, we utilize a Europe-wide measurement infrastructure deployed at the GÉANT backbone network to examine some important spatial properties of the ITM. In particular, we analyze its sparsity and characterize the distribution of traffic generated by different ASes. Our study reveals that the ITM is sparse and that the traffic sent by an AS can be modeled by a LogNormal or a Pareto distribution, depending on whether the corresponding traffic experiences congestion. Finally, we show that there exist significant correlations between different ASes, mostly due to relatively few highly popular prefixes. © 2012 IFIP International Federation for Information Processing.
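
To make the distribution comparison above concrete, here is a minimal sketch assuming per-AS traffic volumes are available as a plain array; the synthetic data, parameters, and SciPy-based fitting procedure are my own illustration, not the paper's methodology.

    # Hypothetical illustration: compare LogNormal vs. Pareto fits to per-AS traffic volumes.
    # The synthetic sample and parameters below are assumptions for the sketch, not values from the paper.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    traffic_bytes = rng.lognormal(mean=20, sigma=2, size=1000)  # stand-in for measured per-AS volumes

    # Fit both candidate distributions to the same sample.
    lognorm_params = stats.lognorm.fit(traffic_bytes, floc=0)
    pareto_params = stats.pareto.fit(traffic_bytes, floc=0)

    # Compare goodness of fit via log-likelihood (higher is better).
    ll_lognorm = np.sum(stats.lognorm.logpdf(traffic_bytes, *lognorm_params))
    ll_pareto = np.sum(stats.pareto.logpdf(traffic_bytes, *pareto_params))

    print(f"LogNormal log-likelihood: {ll_lognorm:.1f}")
    print(f"Pareto    log-likelihood: {ll_pareto:.1f}")
    print("Better fit:", "LogNormal" if ll_lognorm > ll_pareto else "Pareto")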


News Article | October 31, 2016
Site: www.techrepublic.com

Government, academic, and private-sector officials are collaborating on new ways to prevent and mitigate distributed denial-of-service (DDoS) attacks, based on research years in the making but kicked into high gear by this month's massive takedown of domain name system provider Dyn.

SEE: Aerohive's new IoT security solution could have blocked Dyn DDoS attacks, company claims (TechRepublic)

The largest attacks in summer 2015 were about 400 gigabits per second, but September 2016 saw an attack on security blogger Brian Krebs of more than 600 Gbps, while Dyn said its own attack may have exceeded 1.2 terabits per second. Government-led research is focusing on the 1-terabit range but with systems that can scale higher, a need already apparent given the proliferation of vulnerable Internet of Things devices too easily commandeered by malicious hackers. That means there is plenty of job security for Dan Massey, a computer science Ph.D. serving as program manager for the U.S. Department of Homeland Security Advanced Research Projects Agency Cyber Security Division. In August 2015, Massey began evaluating and funding new anti-DDoS efforts at the National Institute of Standards and Technology (NIST), private companies, and universities, which share the goal of getting innovative techniques into commercially feasible pilot projects no later than summer 2018. Some are already underway, Massey and others said.

SEE: New World Hackers group claims responsibility for internet disruption (CBS News)

Funded projects include attack information sharing methods from the University of Southern California, University of California-Los Angeles, and University of Oregon. The latter implements a unique peer-to-peer method of letting networks share information about traffic patterns, as does Portland, Ore.-based research contractor Galois. Colorado State University is developing a way to distribute the tasks of packet filtering and intelligence gathering; the University of Delaware and others, including IBM, are focusing on identifying new kinds of attacks; and the University of Houston is looking at on-demand network capacity for handling attacks when they hit. In addition, Waterford, Va.-based Waverley Labs and the Cloud Security Alliance are working on whitelisting methods to make a network accept only approved traffic. NIST is collaborating with the University of California-San Diego to determine whether the software for stopping DDoS attacks would hurt network performance.

Other anti-DDoS measures are already common for large companies, such as load balancing so that different parts of a network can pick up the slack if others go down, having multiple DNS providers for the same reason, and educating end users on safe internet usage, said security experts at Akamai, Radware, and certification specialist (ISC)2. It is unclear why the Dyn network seemingly did not balance itself, although mainstream websites such as The New York Times were able to quickly handle the problem by switching to different DNS servers.

Perhaps the longest-standing way to combat DDoS attacks is the Internet Engineering Task Force's Best Current Practice #38 (BCP-38, in networking parlance), which emerged in 2000. Implementing this standard prevents a network from sending packets with forged IP addresses. However, there is no way for internet authorities to enforce the use of such standards, especially outside of the United States, officials at UCSD's Center for Applied Internet Data Analysis (CAIDA) said.
CAIDA operates the Spoofer Project, software based on 2005 research from the Massachusetts Institute of Technology (MIT) that lets users see whether their network allows forged packets. CAIDA currently reports that 75% of 435 million tested IP addresses are unspoofable, although that is a hard-to-imagine fraction of the internet's roughly 340 undecillion (3.4 x 10^38) possible IP addresses, CAIDA manager of scientific projects Josh Polterock noted. Nor is there any need to update the 16-year-old BCP-38 standard, because it works fine if people would just use it, explained security expert Jay Ashworth, who manages the BCP-38 Wiki.

Paul Mockapetris, famous in networking circles as the father of DNS, is also thinking outside the box. "Rather than a handful of addresses for contacting [companies such as] Dyn, we need to think about creating multiple paths for getting DNS information between the creator and consumers of that information. This won't be popular with the business models of DNS providers... but we need to make attacks on the naming infrastructure per se several orders of magnitude harder, so we can depend on DNS services to aid in the defense," he stated. Charging for packets is one preventative possibility, said Mockapetris, now chief scientist of Carlsbad, Calif.-based ThreatSTOP. He agreed that reputation systems and carrier-level filtering, as already in use by some of the world's largest networking companies, could be useful, and added that he would love to see more use of virtualization so applications such as banking could be walled off from common web surfing. Either way, he concluded, "We certainly need more dreams and innovation if the internet is to succeed."
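
To illustrate what BCP-38-style filtering amounts to, the short sketch below checks whether an outbound packet's source address belongs to the prefixes a network actually originates; the prefix list and addresses are invented for the example, and this is not code from the Spoofer Project or the BCP-38 document.

    # Illustrative sketch of BCP-38-style egress filtering: drop outbound packets whose
    # source address is not within the prefixes this network originates.
    # The prefix list and packet addresses are made-up examples.
    import ipaddress

    LOCAL_PREFIXES = [ipaddress.ip_network(p) for p in ("198.51.100.0/24", "203.0.113.0/24")]

    def permit_egress(src_ip: str) -> bool:
        """Return True only if the source address belongs to one of our own prefixes."""
        addr = ipaddress.ip_address(src_ip)
        return any(addr in prefix for prefix in LOCAL_PREFIXES)

    for src in ("198.51.100.7", "192.0.2.99"):  # the second address is spoofed
        action = "forward" if permit_egress(src) else "drop (possible spoofed source)"
        print(f"{src}: {action}")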


Yu Y.,University of California at Los Angeles | Afanasyev A.,University of California at Los Angeles | Clark D.,Massachusetts Institute of Technology | Claffy K.,CAIDA | And 2 more authors.
ICN 2015 - Proceedings of the 2nd International Conference on Information-Centric Networking | Year: 2015

Securing communication in network applications involves many complex tasks that can be daunting even for security experts. The Named Data Networking (NDN) architecture builds data authentication into the network layer by requiring all applications to sign and authenticate every data packet. To make this authentication usable, the decision about which keys can sign which data and the procedure of signature verification need to be automated. This paper explores the ability of NDN to enable such automation through the use of trust schemas. Trust schemas can provide data consumers an automatic way to discover which keys to use to authenticate individual data packets, and provide data producers an automatic decision process about which keys to use to sign data packets and, if keys are missing, how to create keys while ensuring that they are used only within a narrowly defined scope ("the least privilege principle"). We have developed a set of trust schemas for several prototype NDN applications with different trust models of varying complexity. Our experience suggests that this approach has the potential of being generally applicable to a wide range of NDN applications. © 2015 ACM.
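
As a rough illustration of the trust-schema idea, the toy sketch below maps data-name prefixes to the key-name prefixes allowed to sign them; the rule format and names are my own simplification, not the schema language defined in the paper.

    # Toy illustration of schema-driven authentication: a rule maps a data-name prefix
    # to the key-name prefix permitted to sign it. Rules and names are hypothetical,
    # not the trust-schema syntax from the NDN paper.
    TRUST_RULES = [
        # (data name prefix, key name prefix permitted to sign it)
        ("/blog/alice/article/", "/blog/alice/KEY/"),
        ("/blog/admin/",         "/blog/KEY/"),
    ]

    def key_authorized_for(data_name: str, key_name: str) -> bool:
        """Return True if some rule allows key_name to sign data_name."""
        return any(
            data_name.startswith(data_prefix) and key_name.startswith(key_prefix)
            for data_prefix, key_prefix in TRUST_RULES
        )

    print(key_authorized_for("/blog/alice/article/42", "/blog/alice/KEY/1"))    # True
    print(key_authorized_for("/blog/alice/article/42", "/blog/mallory/KEY/1"))  # False: unauthorized signer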


Zseby T.,Fraunhofer Institute for Open Communication Systems | Claffy K.,CAIDA
Computer Communication Review | Year: 2012

On May 14-15, 2012, CAIDA hosted the first international Workshop on Darkspace and UnSolicited Traffic Analysis (DUST 2012) to provide a forum for discussion of the science, engineering, and policy challenges associated with darkspace and unsolicited traffic analysis. This report captures threads discussed at the workshop and lists resulting collaborations.


Motiwala M.,Georgia Institute of Technology | Dhamdhere A.,CAIDA | Feamster N.,Georgia Institute of Technology | Lakhina A.,Guavus Inc.
Computer Communication Review | Year: 2012

We develop a holistic cost model that operators can use to help evaluate the costs of various routing and peering decisions. Using real traffic data from a large carrier network, we show how network operators can use this cost model to significantly reduce the cost of carrying traffic in their networks. We find that adjusting the routing for a small fraction of total flows (and total traffic volume) significantly reduces cost in many cases. We also show how operators can use the cost model both to evaluate potential peering arrangements and for other network operations problems.
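
As a back-of-the-envelope illustration of the kind of comparison such a cost model supports, the sketch below contrasts 95th-percentile transit billing with a flat peering port fee; the prices, port fee, and traffic samples are assumptions for the example, not figures or formulas from the paper.

    # Hypothetical comparison of carrying a flow over paid transit vs. a settlement-free peer.
    # Prices, port fee, and traffic samples are invented for this sketch.
    import numpy as np

    def transit_cost(mbps_samples, price_per_mbps=0.50):
        """Typical transit billing: 95th percentile of 5-minute utilization times unit price."""
        return float(np.percentile(mbps_samples, 95)) * price_per_mbps

    def peering_cost(port_fee=500.0):
        """Settlement-free peering modeled as a flat monthly port/cross-connect fee."""
        return port_fee

    rng = np.random.default_rng(1)
    flow_mbps = rng.gamma(shape=4.0, scale=300.0, size=8640)  # one month of 5-minute samples

    print(f"Transit cost: ${transit_cost(flow_mbps):,.0f}/month")
    print(f"Peering cost: ${peering_cost():,.0f}/month")
    # Moving this flow to the peer pays off only if the freed transit commit actually shrinks.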


Claffy K.C.,CAIDA
Computer Communication Review | Year: 2011

Exhaustion of the Internet addressing authority's (IANA) available IPv4 address space, which occurred in February 2011, is finally exerting exogenous pressure on network operators to begin to deploy IPv6. There are two possible outcomes from this transition. IPv6 may be widely adopted and embraced, causing many existing methods to measure and monitor the Internet to be ineffective. A second possibility is that IPv6 languishes, transition mechanisms fail, or performance suffers. Either scenario requires data, measurement, and analysis to inform technical, business, and policy decisions. We survey available data that have allowed limited tracking of IPv6 deployment thus far, describe additional types of data that would support better tracking, and offer a perspective on the challenging future of IPv6 evolution.
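
One small, commonly used signal of IPv6 deployment is whether a domain publishes AAAA records; the sketch below probes that signal for a placeholder domain list and is my own illustration rather than a measurement method described in the paper.

    # Illustrative probe of one IPv6-deployment signal: does a domain publish an AAAA record?
    # The domain list is a placeholder; real studies use large, systematically chosen samples.
    import socket

    DOMAINS = ["example.com", "example.org"]

    for domain in DOMAINS:
        try:
            infos = socket.getaddrinfo(domain, None, family=socket.AF_INET6)
            v6_addrs = sorted({info[4][0] for info in infos})
            print(f"{domain}: AAAA present -> {v6_addrs}")
        except socket.gaierror:
            print(f"{domain}: no AAAA record found")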


Claffy K.C.,CAIDA
Computer Communication Review | Year: 2011

In June 2011 I participated on a panel on network neutrality hosted at the June cybersecurity meeting of the DHS/SRI Infosec Technology Transition Council (ITTC) [1], where "experts and leaders from the government, private, financial, IT, venture capitalist, and academia and science sectors came together to address the problem of identity theft and related criminal activity on the Internet." I recently wrote up some of my thoughts on that panel, including what network neutrality has to do with cybersecurity.


Claffy K.C.,CAIDA
Computer Communication Review | Year: 2011

I recently published an essay on CircleID [14] describing my thoughts on ICANN's recent decision to launch .XXX and the larger new gTLD program this year. Among other observations, I describe how .XXX marks a historical inflection point, where ICANN's board formally abandoned any responsibility to present an understanding of the ramifications of probable negative externalities ("harms") in setting its policies. That ICANN chose to relinquish this responsibility puts the U.S. government in the awkward position of trying to tighten the few inadequate controls that remain over ICANN, and leaves individual and responsible corporate citizens in the unenviable yet familiar position of bracing for the consequences.


Claffy K.C.,CAIDA
Computer Communication Review | Year: 2011

On February 10-12, 2011, CAIDA hosted the third Workshop on Active Internet Measurements (AIMS-3) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with the previous two AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security research communities. For three years, the workshop has fostered interdisciplinary conversation among researchers, operators, and government, focused on analysis of goals, means, and emerging issues in active Internet measurement projects. The first workshop emphasized discussion of existing hardware and software platforms for macroscopic measurement and mapping of Internet properties, in particular those related to cybersecurity. The second workshop included more performance evaluation and data-sharing approaches. This year we expanded the workshop agenda to include active measurement topics of more recent interest: broadband performance; gauging IPv6 deployment; and measurement activities in international research networks.
