AT&T Research Labs

NJ, United States


Duarte E.P., Federal University of Paraná | Hiltunen M., AT&T Research Labs
Proceedings of the International Conference on Dependable Systems and Networks | Year: 2015

Software-Defined Networking (SDN) and Network Function Virtualization (NFV) are two technologies that have already had a deep impact on computer and telecommunication networks. SDN decouples network control from forwarding functions, making network control directly programmable and abstracting the underlying infrastructure from applications and network services. NFV is a network architecture concept in which IT virtualization techniques implement network node functions as building blocks that may be combined, or chained, together to create communication services. Together, SDN and NFV make it simpler and faster to deploy and manage new services, avoiding the cost and long time frame required to design and implement hardware-based network services. SDN and NFV also introduce numerous dependability challenges. In terms of reliability, the challenges range from the design of reliable new SDN and NFV technologies to the adaptation of classical network functions to these technologies; the effective, dependable deployment of the virtual network on the physical substrate is particularly important. In terms of security, the challenges are enormous, as SDN and NFV are meant to be the very fabric of both the Internet and private networks: threats, privacy concerns, authentication issues, and isolation all mean that defining a truly secure virtualized network requires work on multiple fronts. The program of DISN'2015 consists of 3 technical papers and 2 keynotes, which are briefly described. © 2015 IEEE.
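The "chaining" idea central to NFV can be illustrated with a minimal sketch: virtual network functions as composable building blocks. All function names and addresses below are illustrative assumptions, not taken from the DISN'2015 program.

```python
# Minimal sketch of NFV-style service chaining: virtual network
# functions (VNFs) are plain callables composed into a service chain.
# Names and addresses are illustrative only.

def firewall(packet):
    # Drop packets from a blocked source; pass the rest through.
    if packet.get("src") in {"10.0.0.66"}:
        return None
    return packet

def nat(packet):
    # Rewrite the source address to the gateway's public address.
    return dict(packet, src="203.0.113.1")

def chain(*vnfs):
    # Compose VNFs in order; a None result short-circuits the chain.
    def service(packet):
        for vnf in vnfs:
            packet = vnf(packet)
            if packet is None:
                return None
        return packet
    return service

service = chain(firewall, nat)
print(service({"src": "192.0.2.7", "dst": "198.51.100.5"}))
# A blocked source is dropped before ever reaching the NAT:
print(service({"src": "10.0.0.66", "dst": "198.51.100.5"}))  # None
```

Because each function is an independent software block, swapping or reordering services is a one-line change, which is the deployment-agility benefit the abstract describes.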

Gerriets T., Justus Liebig University | Walberer M., University of Cologne | Nedelmann M., Justus Liebig University | Doenges S., Justus Liebig University | And 7 more authors.
Journal of Neuroscience Methods | Year: 2010

Subtle cerebral air microembolisation (CAM) is a typical complication of various medical interventions such as open heart surgery or angiography and can cause transient or permanent neurological and neuropsychological deficits. Evaluation of the underlying pathophysiology requires animal models that allow embolisation of air bubbles of defined diameter and number. Herein we present a method for the production of gas bubbles of defined diameter and their injection into the carotid artery of rats. The number of gas microemboli injected is quantified digitally using a high-speed optical image capturing system and custom-made software. In a first pilot study, 0, 50, 100, 400 and 800 gas bubbles of 160 μm in diameter were injected into the carotid artery of rats. Offline evaluation revealed a high constancy of the bubble diameters (mean 159.95 ± 9.25 μm, range 144–188 μm) and of the number of bubbles injected. First preliminary data indicate that with an increasing number of bubbles embolised, more animals revealed neurological deficits and (particularly at higher bubble counts) brain infarctions on TTC staining. Interestingly, animals without overt infarcts on TTC staining also displayed neurological deficits in an apparently dose-dependent fashion, indicating subtle brain damage by air embolism. In conclusion, the method presented allows the injection of air bubbles of defined number and diameter into the cerebral arteries of rats. This technique facilitates animal research in the field of air embolisation. © 2010 Elsevier B.V.

Alzoubi H.A., Case Western Reserve University | Rabinovich M., Case Western Reserve University | Spatscheck O., AT&T Research Labs
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

While some IPv6-enabled Web sites such as Google require an explicit opt-in by IPv6-enabled clients before serving them over the IPv6 protocol, we quantify the performance implications of unilateral enabling of IPv6 by a Web site. In this approach, the Web site enables dual-stack IPv4/6 support and resolves DNS queries for IPv6 addresses with the IPv6 addresses of its Web servers, and legacy DNS queries for IPv4 addresses with the IPv4 addresses. Thus, clients indicating a willingness to communicate over IPv6 are allowed to do so immediately. Although the existence of an end-to-end IPv6 path between these clients and the Web site is currently unlikely, we found no evidence of a performance penalty (subject to the 1-second granularity of our measurement) for this unilateral IPv6 adoption. We hope our findings will help facilitate the IPv6 transition and prove useful to sites considering their IPv6 migration strategy. © 2013 Springer-Verlag Berlin Heidelberg.
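The client-side behavior the study relies on can be sketched as follows: a client asking for AAAA records signals willingness to use IPv6 and is handed IPv6 addresses, while legacy A queries get IPv4 addresses. The resolver results and addresses below are synthetic placeholders, not measurements from the paper.

```python
import socket

# Sketch of dual-stack address selection: given resolver results as
# (family, address) pairs, an IPv6-willing client tries IPv6 first,
# while a legacy client only ever sees the IPv4 answers.
# The addresses are illustrative (documentation prefixes).

def pick_addresses(results, prefer_ipv6=True):
    """results: list of (family, address) tuples as a resolver might
    return; returns addresses ordered IPv6-first when preferred."""
    v6 = [addr for fam, addr in results if fam == socket.AF_INET6]
    v4 = [addr for fam, addr in results if fam == socket.AF_INET]
    return v6 + v4 if prefer_ipv6 else v4 + v6

results = [
    (socket.AF_INET, "192.0.2.10"),
    (socket.AF_INET6, "2001:db8::10"),
]
print(pick_addresses(results))         # IPv6 tried first
print(pick_addresses(results, False))  # legacy IPv4-first order
```

In practice a real client would obtain such pairs from `socket.getaddrinfo`, which already returns both families for a dual-stack hostname; the point of the sketch is that the ordering choice, not the site's DNS answers, determines which protocol is attempted.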

Evans W., University of British Columbia | Gansner E., AT&T Research Labs | Kaufmann M., University of Tübingen | Liotta G., University of Perugia | And 2 more authors.
Computational Geometry: Theory and Applications | Year: 2013

We introduce and study a generalization of the well-known region-of-influence proximity drawings, called (ε1,ε2)-proximity drawings. Intuitively, given a definition of proximity and two real numbers ε1≥0 and ε2≥0, an (ε1,ε2)-proximity drawing of a graph is a planar straight-line drawing Γ such that: (i) for every pair of adjacent vertices u, v, their proximity region "shrunk" by the multiplicative factor 1/(1+ε1) does not contain any vertices of Γ; (ii) for every pair of non-adjacent vertices u, v, their proximity region "expanded" by the factor (1+ε2) contains some vertices of Γ other than u and v. In particular, the locations of the vertices in such a drawing do not always completely determine which edges must be present/absent, giving us some freedom of choice. We show that this generalization significantly enlarges the family of representable planar graphs for relevant definitions of proximity drawings, including Gabriel drawings, Delaunay drawings, and β-drawings, even for arbitrarily small values of ε1 and ε2. We also study the extremal case of (0,ε2)-proximity drawings, which generalize the well-known weak proximity drawing paradigm. © 2013 Elsevier B.V.
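For the Gabriel case, the proximity region of u, v is the disk with diameter uv, and the two conditions can be checked directly. The sketch below is an illustrative rendering of conditions (i) and (ii) under that standard definition; function names and the sample point set are assumptions, not from the paper.

```python
import math

# (eps1, eps2)-Gabriel conditions, sketched. The Gabriel region of
# u, v is the disk with diameter uv; it is shrunk by 1/(1+eps1) for
# adjacent pairs and expanded by (1+eps2) for non-adjacent pairs.

def in_scaled_gabriel_disk(p, u, v, scale):
    # Disk centred at the midpoint of uv, radius |uv|/2 * scale.
    centre = ((u[0] + v[0]) / 2, (u[1] + v[1]) / 2)
    radius = math.dist(u, v) / 2 * scale
    return math.dist(p, centre) <= radius

def valid_adjacent(points, u, v, eps1):
    # (i) the shrunk region contains no vertex of the drawing.
    shrink = 1 / (1 + eps1)
    return all(p in (u, v) or not in_scaled_gabriel_disk(p, u, v, shrink)
               for p in points)

def valid_nonadjacent(points, u, v, eps2):
    # (ii) the expanded region contains some vertex other than u, v.
    grow = 1 + eps2
    return any(p not in (u, v) and in_scaled_gabriel_disk(p, u, v, grow)
               for p in points)

points = [(0.0, 0.0), (2.0, 0.0), (1.0, 0.9)]
# The third point lies inside the exact Gabriel disk of the first
# two, but outside the disk shrunk by 1/(1+0.2):
print(valid_adjacent(points, (0.0, 0.0), (2.0, 0.0), 0.0))  # False
print(valid_adjacent(points, (0.0, 0.0), (2.0, 0.0), 0.2))  # True
```

The flip from False to True as ε1 grows is exactly the "freedom of choice" the abstract mentions: with ε1 > 0, the edge may be drawn even though a vertex sits in the exact Gabriel region.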

Qin X., University of Texas at San Antonio | Kelley B., University of Texas at San Antonio | Saedy M., AT&T Research Labs
2015 10th System of Systems Engineering Conference, SoSE 2015 | Year: 2015

In distributed storage for Big Data systems, there is a need for exact-repair, high-bandwidth codes. The challenge for exact repair in Big Data storage is to simultaneously enable very high-bandwidth repair using Map-Reduce and simple coding schemes, while also combining robust maximum distance separable (MDS) exact repair. MDS repair is reserved for the rare but exceptional outlier error patterns requiring optimum erasure-code reconstruction. We construct optimum fast-bandwidth repair for Big Data sources. Our system uses Map-Reduce exact-repair reconstruction; the algorithm combines MDS with a second fast decode algorithm in a cloud environment. We illustrate cloud experiments for optimum fast-bandwidth reconstruction for 1-Exabyte Big Data in the cloud and demonstrate cloud results for Poisson error-rate arrival models. Unlike prior methods, we jointly solve the problem of fast-bandwidth repair for burst-memory error patterns and for code rates up to - in a real-time error-model framework for Big Data. Furthermore, simulations indicate this method outperforms prior fast-bandwidth approaches for burst errors. We also illustrate a Map-Reduce algorithm optimized for fast-bandwidth repair in Big Data storage in clouds. © 2015 IEEE.
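The notion of exact repair can be shown with the simplest possible erasure code, a single XOR parity block: any one lost block is rebuilt bit-for-bit from the survivors. This toy stands in for, and is far simpler than, the paper's MDS construction and Map-Reduce pipeline; all data below are illustrative.

```python
from functools import reduce

# Toy exact repair with one XOR parity block (the simplest erasure
# code). A stripe of k data blocks plus their XOR parity tolerates
# the loss of any single block.

def xor_blocks(blocks):
    # Byte-wise XOR of equal-length blocks.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def add_parity(blocks):
    # Append the parity block to the data blocks to form a stripe.
    return blocks + [xor_blocks(blocks)]

def repair(stripe, lost_index):
    # Exact repair: XOR the surviving blocks to rebuild the lost one,
    # since every byte appears an even number of times otherwise.
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors)

stripe = add_parity([b"big ", b"data", b"!!!!"])
print(repair(stripe, 1))  # b'data'
```

In a Map-Reduce setting, the XOR over survivors is naturally parallel: mappers read surviving blocks in place and a reducer folds them together, which is the shape of the high-bandwidth repair path the abstract describes; true MDS codes (e.g. Reed-Solomon) extend this to multiple simultaneous losses at higher decode cost.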

Panta R.K., AT&T Research Labs | Bagchi S., Purdue University
IEEE Transactions on Parallel and Distributed Systems | Year: 2012

Wireless reprogramming of sensor nodes is an essential requirement for long-lived networks because software functionality needs to change over time. The amount of information transmitted wirelessly during reprogramming should be minimized to reduce reprogramming time and energy. In this paper, we present a multihop incremental reprogramming system called Hermes that transfers the delta between the old and new software over the network and lets the sensor nodes rebuild the new software from the received delta and the old software. Hermes reduces the delta by mitigating the effects of function and global-variable shifts caused by the software modifications, then compares the binary images at the byte level to create a small delta to be sent over the wireless network to all the nodes. For the wide range of software-change scenarios we experimented with, we find that Hermes transfers up to 201 times less information than Deluge, the standard reprogramming system for TinyOS, and 64 times less than an existing incremental reprogramming system by Jeong and Culler. © 2012 IEEE.
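The core delta mechanism can be sketched in a few lines: the host encodes the new image as "copy from old" and "add new bytes" operations, and each node replays them against its old image. Python's `difflib.SequenceMatcher` stands in here for Hermes' byte-level differ; the opcode scheme and sample images are illustrative, not Hermes' actual format.

```python
import difflib

# Sketch of delta-based reprogramming: compute a small delta between
# old and new binary images, ship only the delta, and rebuild the
# new image on the node from (old image + delta).

def make_delta(old, new):
    sm = difflib.SequenceMatcher(a=old, b=new, autojunk=False)
    delta = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            delta.append(("copy", i1, i2))     # reuse bytes the node has
        else:
            delta.append(("add", new[j1:j2]))  # ship only changed bytes
    return delta

def apply_delta(old, delta):
    out = bytearray()
    for op in delta:
        if op[0] == "copy":
            out += old[op[1]:op[2]]
        else:
            out += op[1]
    return bytes(out)

old = b"func_a(); func_b(); func_c();"
new = b"func_a(); func_b2(); func_c();"
delta = make_delta(old, new)
assert apply_delta(old, delta) == new
```

A small source change that shifts function or variable addresses would normally invalidate long "copy" runs across the rest of the binary, which is why Hermes mitigates those shifts before diffing: the fewer "add" operations remain, the smaller the delta transmitted to every node.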
