Williamsburg, VA, United States

Patent
College of William and Mary | Date: 2015-09-22

Gesture-enabled remote control is implemented using a portable device having motion sensors and a wireless transmitter. An activation signal is received when the motion sensors detect a first prescribed motion of the portable device. A neutral orientation is then assigned to the portable device. The neutral orientation is defined by a position of the portable device at the time the activation signal is received. A control signal is generated when the motion sensors detect one of a plurality of prescribed movements of the portable device occurring within a prescribed window of time. Each prescribed movement includes movement of the portable device away from the neutral orientation and then back to the neutral orientation. The control signal is formatted for wireless transmission to an electronic device for control thereof.
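The abstract above describes a small state machine: an activation gesture captures a neutral orientation, and subsequent excursions away from and back to that orientation, made within a time window, are translated into control commands. The sketch below illustrates that flow in Python; the class name, gesture labels, command table, and two-second window are hypothetical illustrations, not the patented implementation.

```python
# Minimal sketch of the gesture flow described above (not the patented
# implementation). Gesture labels, thresholds, and the command table are
# hypothetical.
import time

ACTIVATION_GESTURE = "double_shake"      # assumed "first prescribed motion"
GESTURE_WINDOW_S = 2.0                   # assumed prescribed window of time

# Prescribed movements: away from the neutral orientation and back again.
GESTURE_TO_COMMAND = {
    "tilt_right_and_back": "NEXT_CHANNEL",
    "tilt_left_and_back":  "PREV_CHANNEL",
    "tilt_up_and_back":    "VOLUME_UP",
    "tilt_down_and_back":  "VOLUME_DOWN",
}

class GestureRemote:
    def __init__(self, transmitter):
        self.transmitter = transmitter
        self.neutral = None              # orientation captured at activation
        self.window_start = None

    def on_motion_event(self, gesture, orientation):
        """gesture: label produced by the motion-sensor pipeline;
        orientation: (roll, pitch, yaw) at the time of the event."""
        now = time.monotonic()
        if gesture == ACTIVATION_GESTURE:
            # Activation: the current pose becomes the neutral orientation.
            self.neutral = orientation
            self.window_start = now
            return
        if self.neutral is None:
            return                       # not activated yet
        if now - self.window_start > GESTURE_WINDOW_S:
            self.neutral = None          # window expired; require re-activation
            return
        command = GESTURE_TO_COMMAND.get(gesture)
        if command is not None:
            # Format a control signal and hand it to the wireless transmitter.
            self.transmitter.send({"cmd": command, "neutral": self.neutral})
```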


Patent
Nanjing University, College of William and Mary | Date: 2015-11-10

Systems, methods and techniques are provided for interacting with mobile devices using a camera-based keyboard. The system comprises a processor system including at least one processor. The processor system is configured to at least capture, via the camera, a plurality of images of the keyboard and at least one hand typing on the keyboard. Based on the plurality of captured images, the processor system is further configured to locate the keyboard, extract at least a portion of the keys on the keyboard, extract a hand, and detect a fingertip of the extracted hand. A keystroke may then be detected and localized by tracking the detected fingertip in at least one of the plurality of captured images, and a character corresponding to the localized keystroke may be determined.
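As a rough illustration of the pipeline stages named above (hand extraction, fingertip detection, keystroke localization), the sketch below uses OpenCV; the skin-color range, the topmost-point fingertip heuristic, and the key bounding boxes are assumptions rather than the patented method, and the findContours call assumes the OpenCV 4.x signature.

```python
# Minimal sketch of the camera-based keystroke pipeline (stage ordering only,
# not the patented method). Colour ranges and heuristics are hypothetical.
import cv2
import numpy as np

def extract_hand_mask(frame_bgr):
    """Rough hand segmentation by skin colour in HSV space (assumed approach)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower, upper = np.array([0, 30, 60]), np.array([25, 180, 255])
    return cv2.inRange(hsv, lower, upper)

def detect_fingertip(hand_mask):
    """Take the topmost point of the largest contour as the fingertip."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    x, y = min(hand.reshape(-1, 2), key=lambda p: p[1])  # smallest row = topmost
    return int(x), int(y)

def locate_keystroke(fingertip, key_boxes):
    """Map a fingertip position to a key if it falls inside that key's box."""
    if fingertip is None:
        return None
    fx, fy = fingertip
    for char, (x, y, w, h) in key_boxes.items():
        if x <= fx < x + w and y <= fy < y + h:
            return char
    return None
```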


Carone C.D.,College of William and Mary
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2013

The possibility of explaining the current value of the muon anomalous magnetic moment in models with an additional U(1) gauge symmetry that has kinetic mixing with hypercharge is increasingly constrained by dark photon searches at electron accelerators. Here we present a scenario in which the couplings of new, light gauge bosons to standard model leptons are naturally weak and flavor nonuniversal. A vector-like state that mixes with standard model leptons serves as a portal between the dark and standard model sectors. The flavor symmetry of the model assures that the induced couplings of the new gauge sector to leptons of the first generation are very small and that lepton-flavor-violating processes are adequately suppressed. The model provides a framework for constructing ultraviolet complete theories in which new, light gauge fields couple weakly and dominantly to leptons of the second or third generations. © 2013 Elsevier B.V.
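For readers unfamiliar with the term, the kinetic mixing referred to above is conventionally written as the gauge-invariant operator below; this is a textbook-standard expression (sign and normalization conventions for the mixing parameter vary) and is not reproduced from the paper itself.

```latex
% Standard kinetic-mixing term between hypercharge and a new U(1) gauge field
% (sign and normalization conventions vary):
\mathcal{L} \supset -\frac{\epsilon}{2}\, B_{\mu\nu} X^{\mu\nu},
\qquad B_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu \ \text{(hypercharge)},
\quad X_{\mu\nu} = \partial_\mu X_\nu - \partial_\nu X_\mu \ \text{(new } U(1)\text{)}.
```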


Carlson C.E.,College of William and Mary
Progress in Particle and Nuclear Physics | Year: 2015

The proton size, specifically its charge radius, was thought to be known to about 1% accuracy. Now a new method probing the proton with muons instead of electrons finds a radius about 4% smaller, and to boot gives an uncertainty limit of about 0.1%. We review the different measurements, some of the calculations that underlie them, and some of the suggestions that have been made to resolve the conflict, and give a brief overview of new related experimental initiatives. At present, however, the resolution to the problem remains unknown. © 2015 Elsevier B.V. All rights reserved.
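To make the quoted percentages concrete, the comparison below uses representative charge-radius values from the literature; the specific numbers are an assumption added here for illustration, not taken from this abstract.

```latex
% Representative (assumed) proton charge-radius values and the ~4% gap:
r_p^{(e)} \approx 0.88\ \mathrm{fm} \ \text{(electron-based)}, \qquad
r_p^{(\mu\mathrm{H})} \approx 0.84\ \mathrm{fm} \ \text{(muonic hydrogen)},
\qquad \frac{r_p^{(e)} - r_p^{(\mu\mathrm{H})}}{r_p^{(e)}} \approx 4\%.
```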


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: SEES Coastal | Award Amount: 407.67K | Year: 2016

This research examines the potential for achieving sustainability in coastal systems where natural resources are impacted by both climate change and human responses to climate change. Chesapeake Bay shorescapes (shoreline zones that include riparian, intertidal, and near-shore shallow water areas) are used as a test bed because sea level rise creates risks for shoreline property owners and shoreline marshes. Property owner perception of and response to the risk vary, and their choice of a shoreline protection approach (armoring, living shoreline, do nothing) has consequences for the capacity of shoreline marshes to continue to provide ecosystem services. Impacts on marsh services, which include supporting fisheries and improving water quality, affect the larger estuarine system, with consequences for the many users of that system. This in turn leads to regulation by government officials, who have their own perceptions of these issues. Modeling the decision-making process of shoreline property owners, the ecological consequences of shoreline management decisions, and the perceptions of officials developing or implementing natural resource policies will reveal opportunities and options for steering the combined human and natural system toward desired outcomes.

The goal of the project is to discover the elements of the shorescape social-ecological system that have the greatest influence on the attainment of sustainable outcomes. The research describes the trajectory of Chesapeake Bay shorescapes in terms of changes in the amount, distribution, and character of shoreline types. The primary drivers of these changes are rising sea levels and the actions of shoreline property owners to combat erosion. An analysis of existing information on shoreline conditions, property owner characteristics, and property management decisions is used to model future shoreline management choices. A series of field investigations comparatively quantifies multiple ecosystem functions (habitat provision, primary production, nutrient and carbon storage) of living shorelines and natural marshes along a continuum of estuarine shorescape settings and projects future shifts in function under varying management scenarios. This information is compiled in a marsh function model that can take input from the shoreline management model. Future outcomes are then forecast under scenarios with alternative sea level rise and management conditions. Surveys of government officials responsible for policy development and implementation document the operative feedbacks from the ecological consequences of property owner decisions. Synthesis of this information identifies the characteristics of shorescape socio-ecological systems that enhance or detract from their ability to achieve sustainable outcomes. Formal guidance for coastal managers and policy makers will be developed using the results of this integrative approach to sustainability in Chesapeake Bay shorescapes.
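As a schematic of how a shoreline-management model could feed a marsh-function model under alternative sea-level-rise scenarios, the sketch below couples two toy models; every function, probability, and coefficient is a hypothetical placeholder rather than the project's actual models.

```python
# Minimal sketch of coupling a shoreline-management model to a marsh-function
# model under alternative sea-level-rise scenarios (illustrative only; all
# coefficients are hypothetical).
import random

def choose_management(owner, sea_level_rise_m):
    """Toy owner decision model: higher perceived risk favours armoring."""
    risk = owner["risk_perception"] + 2.0 * sea_level_rise_m
    if risk > 1.5:
        return "armor"
    if risk > 0.8:
        return "living_shoreline"
    return "do_nothing"

def marsh_function(choice, sea_level_rise_m):
    """Toy marsh-function score (habitat, production, storage), scaled 0..1."""
    base = {"armor": 0.2, "living_shoreline": 0.7, "do_nothing": 0.9}[choice]
    return max(0.0, base - 0.5 * sea_level_rise_m)

def run_scenario(owners, sea_level_rise_m):
    choices = [choose_management(o, sea_level_rise_m) for o in owners]
    scores = [marsh_function(c, sea_level_rise_m) for c in choices]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    random.seed(1)
    owners = [{"risk_perception": random.random()} for _ in range(1000)]
    for slr in (0.0, 0.3, 0.6):   # metres of sea-level rise in each scenario
        print(f"SLR={slr:.1f} m -> mean marsh function {run_scenario(owners, slr):.2f}")
```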


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: MAJOR RESEARCH INSTRUMENTATION | Award Amount: 300.00K | Year: 2016

Quantum Chromodynamics (QCD) is the theory of the strong interaction, the fundamental force in Nature that binds quarks and gluons into subatomic particles (hadrons), such as protons and neutrons. The study of QCD properties is one of the central scientific challenges in nuclear physics. Nuclear physicists use computational methods and large-scale simulations to further the detailed understanding of hadronic structure. These simulations make use of state-of-the-art computational resources, and researchers continue to improve their high-performance computing (HPC) methods to take advantage of the latest breakthroughs in computer technology. This project will allow the acquisition of a parallel computer cluster incorporating the next-generation Intel multi-core processors (Knights Landing) and a high-performance communication network. The Knights Landing architecture has the potential to revolutionize scientific computing by providing large computing resources at low cost and using significantly less power than conventional computer clusters. This computer cluster will be housed in a state-of-the-art computing center in the new Integrated Science Center at the College of William & Mary and will form the centerpiece of high-performance computing on the W&M campus.

The primary scientific driver for the system is to use lattice QCD methods to build a solid foundation for nuclear physics based on the Standard Model of elementary particle physics. With increased computational power and recent algorithmic developments, the research carried out with this system will advance the lattice QCD program. The system will also be used as a test bed for the development of new algorithms for lattice QCD calculations. These algorithms will be designed in collaboration with computer scientists specifically for the modern multi-core architectures that are currently the trend in high-performance computing. Furthermore, this computer cluster will serve as the center of interdisciplinary research between computer scientists and physicists and will catalyze the integration of research and education through the training of students and postdoctoral researchers in the groups of the senior investigators and in the wider academic community at William and Mary.

This project advances the objectives of the National Strategic Computing Initiative (NSCI), an effort aimed at sustaining and enhancing the U.S. scientific, technological, and economic leadership position in High-Performance Computing (HPC) research, development, and deployment.


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: COMPUTER SYSTEMS | Award Amount: 299.99K | Year: 2016

Large, distributed systems are now ubiquitous and form part of sustainable IT solutions for a broad range of customers and applications. Data centers in the private or public cloud and high performance computing systems are two examples of complex, highly distributed systems: the former are used by almost everyone on a daily basis, while the latter are used by computational scientists to advance science and engineering. High availability and reliability of these complex systems are important for the quality of the user experience. Efficient management of such systems contributes to their availability and reliability, and relies on a priori knowledge of the timing of the collective demands of users and of certain performance measures (e.g., usage, temperature, power) of various system components.

This project aims to provide a systematic methodology for improving the operational efficiency of complex, distributed systems by developing neural networks that can efficiently and accurately predict the incoming workload at both fine and coarse time scales. Such workload prediction can dramatically improve the operational efficiency of data centers and high performance systems by driving proactive management strategies that specifically aim to enhance reliability. For data centers, the focus is on reducing automatically triggered performance tickets by proactively managing virtual machine resizing and migration. For high performance computing systems, the focus is on predicting hardware faults to autonomically improve the scheduler's efficiency, direct cooling, and improve performance and memory bandwidth.
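A minimal sketch of the sliding-window style of workload prediction described above is shown below, using a small scikit-learn neural network on a synthetic utilization trace; the window size, layer widths, and trace are assumptions, not the project's models or data.

```python
# Minimal sketch of sliding-window workload prediction with a small neural
# network (illustrative; parameters and the synthetic trace are assumptions).
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW = 12                     # past samples used to predict the next one

def make_windows(trace, window=WINDOW):
    """Turn a 1-D utilisation trace into (past-window, next-value) pairs."""
    X = np.array([trace[i:i + window] for i in range(len(trace) - window)])
    y = np.array([trace[i + window] for i in range(len(trace) - window)])
    return X, y

if __name__ == "__main__":
    # Synthetic CPU-utilisation trace: daily periodicity plus noise.
    t = np.arange(2000)
    trace = 0.5 + 0.3 * np.sin(2 * np.pi * t / 288) + 0.05 * np.random.randn(len(t))

    X, y = make_windows(trace)
    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500)
    model.fit(X[:split], y[:split])

    pred = model.predict(X[split:])
    mae = np.mean(np.abs(pred - y[split:]))
    print(f"mean absolute error on held-out windows: {mae:.3f}")
    # A proactive manager could resize or migrate VMs when predicted
    # utilisation crosses a threshold, instead of reacting to tickets.
```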


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: COMPUTER SYSTEMS | Award Amount: 249.55K | Year: 2016

Memory is a key component of computers. Modern parallel architectures are beginning to employ heterogeneous memory to improve both the latency and the bandwidth of the memory subsystem. Typically, a heterogeneous memory system consists of a fast and a slow component and requires explicit software management. This puts unique burdens on programmers and compilers, especially in the domains of high-performance computing (HPC) and big data analytics. Addressing these challenges is crucial to the evolution of computer systems and therefore key to future scientific discovery, economic prosperity, and national security.

To address these challenges in exploiting heterogeneous memory, this project targets co-designing the compiler and the operating system (OS): it explores i) a new compiler design that supports heterogeneity-aware data management and ii) new OS facilities for efficient data movement and fair memory sharing. Overall, this project serves as a stepping stone toward a long-term vision: taming emerging heterogeneous hardware by tightly integrating programming and OS support.
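The kind of heterogeneity-aware data-management decision this co-design would automate can be pictured with the toy placement policy below: the hottest objects (most accesses per byte) go to the fast tier until it fills, and the rest spill to the slow tier. The capacities, object names, and access counts are hypothetical, and a real system would make this decision in the compiler and OS rather than in Python.

```python
# Minimal sketch of heterogeneity-aware data placement (illustrative only;
# capacities, object sizes and access counts are hypothetical).

FAST_CAPACITY = 16 * 1024**3     # e.g. 16 GiB of fast (HBM-like) memory

def place_objects(objects, fast_capacity=FAST_CAPACITY):
    """Greedy placement: hottest bytes (accesses per byte) go to fast memory."""
    placement, used = {}, 0
    ranked = sorted(objects, key=lambda o: o["accesses"] / o["size"], reverse=True)
    for obj in ranked:
        if used + obj["size"] <= fast_capacity:
            placement[obj["name"]] = "fast"
            used += obj["size"]
        else:
            placement[obj["name"]] = "slow"
    return placement

if __name__ == "__main__":
    objects = [
        {"name": "lookup_table", "size": 2 * 1024**3, "accesses": 10_000_000},
        {"name": "input_buffer", "size": 20 * 1024**3, "accesses": 1_000_000},
        {"name": "hot_indices",  "size": 1 * 1024**3, "accesses": 50_000_000},
    ]
    for name, tier in place_objects(objects).items():
        print(f"{name}: {tier} memory")
```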


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: CRII CISE Research Initiation | Award Amount: 175.00K | Year: 2017

Graphics Processing Units (GPUs) are becoming an inevitable part of every computing system because of their ability to deliver orders-of-magnitude faster and more energy-efficient execution. However, the necessary and continuous scaling of GPUs in terms of performance and energy efficiency will not be an easy task. Prior work has shown that the two biggest impediments to this scaling are limited memory bandwidth and excessive data movement across different levels of the memory hierarchy. In order to alleviate these two issues, die-stacking technology is gaining momentum in the realm of high-performance, energy-efficient GPU computing. This technology not only enables very high memory bandwidth for better performance but also provides support for processing-near-memory (PNM) to reduce data movement, access latencies, and energy consumption. Although these technologies seem promising, the architectural support and execution models for PNM-based GPUs, and their implications for the entire system design, have largely been unexplored. This project takes a fresh look at the design and execution model of a PNM-enabled GPU, which consists of multiple memory stacks, each incorporating a 3D-stacked logic layer that can contain multiple PNM GPU cores and other uncore components. Considering that GPUs are becoming an inevitable part of every computing system, ranging from warehouse-scale computers to wearable devices, the insights resulting from this research can have a long-term positive impact on GPU-based computing. The findings of this research will be incorporated into existing and new undergraduate and graduate courses, which will directly help in educating and training students, including women and students from diverse backgrounds and minority groups.

First, a detailed design space exploration will be performed, studying the impact and interactions of different design choices related to PNM cores (e.g., register file, SIMD width, pipeline components, warp occupancy), uncore components at the logic layer (e.g., caches), and stacked memory (e.g., number of memory stacks). Second, a computation distribution framework (CDF) will be developed to answer: a) when is it preferable to map computations to PNM cores, b) which PNM cores and which computations should be chosen, and c) how can both PNM and regular GPU cores be exploited effectively? The CDF will leverage different static and runtime strategies to address such questions and push the envelope of energy efficiency and performance even further. The proposed research components will be evaluated via a wide range of GPGPU applications. If successful, the findings of this research will better equip PNM-enabled GPUs to effectively alleviate the two major bottlenecks: memory bandwidth and energy.
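One simple way to picture the mapping decision the CDF must make is a roofline-style rule: bandwidth-bound kernels run on PNM cores near the data, while compute-bound kernels run on the regular GPU cores. The sketch below illustrates that rule; the bandwidths, compute ratio, and example kernels are hypothetical, not measured PNM-GPU parameters or the project's actual framework.

```python
# Minimal sketch of a computation-distribution decision of the kind the CDF
# would make (illustrative only; all parameters below are hypothetical).

PNM_BANDWIDTH_GBS  = 640.0    # assumed in-stack bandwidth seen by PNM cores
HOST_BANDWIDTH_GBS = 320.0    # assumed off-stack bandwidth seen by GPU cores
GPU_COMPUTE_RATIO  = 4.0      # assumed compute advantage of regular GPU cores

def map_kernel(flops, bytes_moved):
    """Map a kernel by arithmetic intensity: bandwidth-bound work goes to
    PNM cores near the data; compute-bound work goes to regular GPU cores."""
    intensity = flops / max(bytes_moved, 1)          # FLOPs per byte
    # Toy roofline-style threshold separating the two regimes.
    threshold = GPU_COMPUTE_RATIO * HOST_BANDWIDTH_GBS / PNM_BANDWIDTH_GBS
    return "pnm_cores" if intensity < threshold else "gpu_cores"

if __name__ == "__main__":
    kernels = {
        "stream_copy":  (1e9, 8e9),     # ~0.125 FLOP/byte, bandwidth-bound
        "dense_matmul": (2e12, 2e10),   # ~100 FLOP/byte, compute-bound
    }
    for name, (flops, bytes_moved) in kernels.items():
        print(f"{name}: run on {map_kernel(flops, bytes_moved)}")
```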


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: Secure & Trustworthy Cyberspace | Award Amount: 204.35K | Year: 2016

Common smartphone authentication mechanisms such as PINs, graphical passwords, and fingerprint scans offer limited security. They are relatively easy to guess or spoof, and are ineffective when the smartphone is captured after the user has logged in. Multi-modal active authentication addresses these challenges by frequently and unobtrusively authenticating the user via behavioral biometric signals, such as touchscreen interaction, hand movements, gait, voice, and phone location. However, these techniques raise significant privacy and security concerns because the behavioral signals used for authentication represent personally identifiable data and often expose private information such as user activity, health, and location. Because smartphones can be easily lost or stolen, it is paramount to protect all sensitive behavioral information collected and processed on these devices. One approach for securing behavioral data is to perform off-device authentication via privacy-preserving protocols. However, our experiments show that the energy required to execute these protocols, implemented using state-of-the-art techniques, is unsustainably high and leads to very quick depletion of the smartphone's battery.

This research advances the state of the art in privacy-preserving active authentication by devising new techniques that significantly reduce the energy cost of cryptographic authentication protocols on smartphones. Further, this research takes into account signals that indicate that the user has lost possession of the smartphone, in order to trigger user authentication only when necessary. The focus of this project is in sharp contrast with existing techniques and protocols, which have been largely agnostic to energy consumption patterns and to the user's possession of the smartphone post-authentication. The outcome of this project is a suite of new cryptographic techniques and possession-aware protocols that enable secure, energy-efficient active authentication of smartphone users. These cryptographic techniques advance the state of the art in privacy-preserving active authentication by re-shaping individual protocol components to take into account the complex energy tradeoffs and network heterogeneity integral to modern smartphones. Finally, this project will focus on novel techniques to securely offload computation related to active authentication from the smartphone to a (possibly untrusted) cloud, further reducing the energy footprint of authentication. The proposed research will thus make privacy-preserving active authentication practical on smartphones, from both an energy and a performance perspective.
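The possession-aware triggering idea can be pictured as a lightweight gate in front of the expensive protocol: cheap signals estimate whether the owner may have lost possession, and only then is the energy-hungry privacy-preserving authentication run. The sketch below illustrates this; the signal names, weights, and threshold are hypothetical, and the protocol itself is a stub.

```python
# Minimal sketch of possession-aware triggering (illustrative only; the
# signals, weights and threshold are hypothetical, and the expensive
# privacy-preserving protocol is represented by a stub).

POSSESSION_LOSS_THRESHOLD = 0.6

# Assumed weights for signals suggesting the owner may no longer hold the phone.
SIGNAL_WEIGHTS = {
    "phone_set_down":        0.3,
    "handed_to_other_grip":  0.5,
    "left_trusted_location": 0.4,
    "long_idle_period":      0.2,
}

def possession_loss_score(active_signals):
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in active_signals))

def maybe_authenticate(active_signals, run_protocol):
    """Invoke the (energy-hungry) authentication protocol only when the
    possession-loss score crosses the threshold, saving battery otherwise."""
    score = possession_loss_score(active_signals)
    if score >= POSSESSION_LOSS_THRESHOLD:
        return run_protocol()            # e.g. off-device privacy-preserving check
    return True                          # keep current session, no protocol run

if __name__ == "__main__":
    def stub_protocol():
        print("running privacy-preserving authentication")
        return True
    maybe_authenticate({"phone_set_down", "left_trusted_location"}, stub_protocol)
    maybe_authenticate({"long_idle_period"}, stub_protocol)
```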
