Center for Computation and Technology

Baton Rouge, LA, United States

Khurana S., Center for Computation and Technology | Brener N., Center for Computation and Technology | Karki B., Center for Computation and Technology | Benger W., Institute for Astro and Particle Physics | Ritter M., Institute of Basic Science in Civil Engineering
20th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2012 - Conference Proceedings | Year: 2012

This paper presents a multi-scale color coding technique that enhances the visualization of scalar fields along integration lines in vector fields. In particular, this multi-scale technique enables one to see detailed variations in selected small ranges of the scalar field while at the same time allowing one to observe the entire range of its values. This type of visualization, showing small variations as well as the entire range of values, is usually not possible with uniform color coding. The multi-scale approach, which is linear within each division of the scale (piecewise linear), is a general visualization technique that can be applied to many different scalar fields of interest. As an example, in this paper we apply it to the visualization of fluid flow mixing indicators along pathlines in computational fluid dynamics (CFD) simulations. Pathlines are the trajectories of fluid particles over time in the CFD simulations, and applying multi-scale color coding to the pathlines brings out quantities of interest in the flow such as curvature, torsion, and a specific measure of length generation, which are indicators of the degree of mixing in the fluid system. In contrast to uniform color coding, this multi-scale scheme can display small variations in the mixing indicators and still show the entire range of values of these indicators. In this paper, the mixing indicators are computed and displayed only along line structures (pathlines) in the flow field rather than at all points in the flow field.
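A minimal sketch of the piecewise-linear remapping idea follows; the scale divisions, focus range, and the share of the colormap given to that range are illustrative assumptions, not parameters from the paper. Each scalar is remapped to a colormap parameter so that most of the color resolution is spent on a chosen sub-range while the full range stays visible.

import numpy as np

def multiscale_param(values, lo, hi, focus_lo, focus_hi, focus_share=0.7):
    """Map scalars in [lo, hi] to a colormap parameter in [0, 1], devoting
    `focus_share` of the colormap to the sub-range [focus_lo, focus_hi].
    The mapping is linear within each of the three divisions."""
    v = np.clip(np.asarray(values, dtype=float), lo, hi)
    # Split the remaining color budget between the two outer divisions
    # in proportion to their widths.
    outer = (focus_lo - lo) + (hi - focus_hi)
    left_share = (1.0 - focus_share) * (focus_lo - lo) / outer
    right_share = 1.0 - focus_share - left_share
    t = np.empty_like(v)
    below = v < focus_lo
    inside = (v >= focus_lo) & (v <= focus_hi)
    above = v > focus_hi
    t[below] = left_share * (v[below] - lo) / (focus_lo - lo)
    t[inside] = left_share + focus_share * (v[inside] - focus_lo) / (focus_hi - focus_lo)
    t[above] = left_share + focus_share + right_share * (v[above] - focus_hi) / (hi - focus_hi)
    return t  # feed into any uniform colormap, e.g. matplotlib's viridis

# Example: curvature samples along a pathline, with fine detail wanted in [0.1, 0.3].
curvature = np.random.default_rng(0).uniform(0.0, 2.0, size=200)
t = multiscale_param(curvature, lo=0.0, hi=2.0, focus_lo=0.1, focus_hi=0.3)

Uniform color coding is the special case where focus_share equals the relative width of the focus range; increasing it is what makes the small variations visible without discarding the rest of the range.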


Shams S., Center for Computation and Technology | Platania R., Center for Computation and Technology | Lee K., Center for Computation and Technology | Park S.-J., Center for Computation and Technology
Proceedings - International Conference on Distributed Computing Systems | Year: 2017

Recent advances in deep learning have enabled researchers across many disciplines to uncover new insights about large datasets. Deep neural networks have shown applicability to image, time-series, textual, and other data, all of which are available in a plethora of research fields. However, their computational complexity and large memory overhead require advanced software and hardware technologies to train neural networks in a reasonable amount of time. To make this possible, there has been an influx of deep learning software that aims to leverage advanced hardware resources. In order to better understand the performance implications of deep learning frameworks over these different resources, we analyze the performance of three frameworks, Caffe, TensorFlow, and Apache SINGA, over several hardware environments. This includes scaling up and out with single- and multi-node setups using different CPU and GPU technologies. Notably, we investigate the performance characteristics of NVIDIA's state-of-the-art hardware technology, NVLink, and of Intel's Knights Landing, the most advanced Intel product for deep learning, with respect to training time and utilization. To the best of our knowledge, this is the first work concerning deep learning benchmarking with NVLink and Knights Landing. Through these experiments, we provide an analysis of the frameworks' performance over different hardware environments in terms of speed and scaling. As a result of this work, we give better insight into both using and developing deep learning tools that cater to current and upcoming hardware technologies. © 2017 IEEE.
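A minimal sketch of the kind of wall-clock measurement such a comparison rests on is shown below; the train_one_epoch callable, epoch count, and image count are placeholders, not the paper's actual benchmark harness.

import time

def benchmark_training(train_one_epoch, epochs, images_per_epoch, label):
    """Time one framework/hardware combination and report throughput."""
    durations = []
    for _ in range(epochs):
        start = time.perf_counter()
        train_one_epoch()  # one pass over the training set in Caffe, TensorFlow, or SINGA
        durations.append(time.perf_counter() - start)
    mean = sum(durations) / len(durations)
    # Repeating this over CPU, GPU/NVLink, and Knights Landing nodes gives a
    # speed-and-scaling comparison of the kind described above.
    print(f"{label}: {mean:.1f} s/epoch, {images_per_epoch / mean:.1f} images/s")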


Ullmer B., Louisiana State University | Ullmer B., Center for Computation and Technology | Dell C., Louisiana State University | Dell C., Center for Computation and Technology | And 25 more authors.
Proceedings of the 5th International Conference on Tangible Embedded and Embodied Interaction, TEI'11 | Year: 2011

Casiers are a class of tangible interface elements that structure the physical and functional composition of tangibles and complementary interactors (e.g., buttons and sliders). Casiers allow certain subsets of interactive functionality to be accessible across diverse interactive systems (with and without graphical mediation, employing varied sensing capabilities and supporting software). We illustrate examples of casiers in use, including iterations around a custom walk-up-and-use kiosk, as well as casiers operable across commercial platforms of widely varying cost and capability. © 2011 ACM.


Thota A., Center for Computation and Technology | Thota A., Louisiana State University | Luckow A., Center for Computation and Technology | Jha S., Center for Computation and Technology | Jha S., Rutgers University
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2011

Replica-exchange (RE) algorithms are used to understand physical phenomena ranging from protein folding dynamics to binding affinity calculations. They represent a class of algorithms that involve a large number of loosely coupled ensembles and are thus amenable to using distributed resources. We develop a framework for RE that supports different replica pairing (synchronous versus asynchronous) and exchange coordination mechanisms (centralized versus decentralized) and that can use a range of production cyberinfrastructures concurrently. We characterize the performance of both RE algorithms at unprecedented scales, in terms of both the number of replicas and the typical number of cores per replica employed, on production distributed infrastructure. We find that the asynchronous algorithms outperform the synchronous algorithms, even though details of the specific implementations are important determinants of performance. © 2011 The Royal Society.
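A minimal sketch of a single synchronous exchange round, using the standard Metropolis criterion for swapping neighbouring temperature replicas; the energies, temperatures, and pairing scheme are illustrative, and the framework in the paper additionally handles asynchronous pairing and distributed exchange coordination.

import math
import random

def attempt_neighbour_swaps(energies, temperatures, kB=1.0, rng=random):
    """energies[i] is the current potential energy of the replica held at
    temperatures[i]. Swap neighbouring replicas with the Metropolis
    acceptance probability min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    beta = [1.0 / (kB * T) for T in temperatures]
    accepted = []
    for i in range(len(energies) - 1):
        delta = (beta[i] - beta[i + 1]) * (energies[i] - energies[i + 1])
        if delta >= 0 or rng.random() < math.exp(delta):
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
            accepted.append((i, i + 1))
    return accepted

In a synchronous scheme all replicas finish a simulation segment before this exchange step runs; in an asynchronous scheme replicas exchange as they become ready, which is what lets the asynchronous variant tolerate stragglers on distributed resources.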


Miceli C., Center for Computation and Technology | Miceli M., Center for Computation and Technology | Rodriguez-Milla B., Center for Computation and Technology | Jha S., Center for Computation and Technology | Jha S., Louisiana State University
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2010

Grids, clouds and cloud-like infrastructures are capable of supporting a broad range of data-intensive applications. Interesting and unique performance issues appear as the volume of data and the degree of distribution increase. New scalable data-placement and management techniques, as well as novel approaches to determine the relative placement of data and computational workload, are required. We develop and study a genome sequence matching application that is simple to control and deploy, yet serves as a prototype of a data-intensive application. The application uses a SAGA-based implementation of the All-Pairs pattern. This paper aims to understand some of the factors that influence the performance of this application and the interplay of those factors. We also demonstrate how the SAGA approach can enable data-intensive applications to be extensible and interoperable over a range of infrastructures. This capability enables us to compare and contrast two different approaches for executing distributed data-intensive applications: simple application-level data-placement heuristics versus distributed file systems. © 2010 The Royal Society.
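A minimal sketch of the All-Pairs pattern the application is built on, with a local process pool standing in for the SAGA-managed distributed tasks; match_score is a placeholder kernel, not the actual sequence-matching code.

from concurrent.futures import ProcessPoolExecutor

def match_score(query, reference):
    # Placeholder comparison kernel, e.g. an alignment or k-mer match score.
    return sum(1 for q, r in zip(query, reference) if q == r)

def all_pairs(queries, references, workers=4):
    """Evaluate match_score over every (query, reference) pair."""
    pairs = [(q, r) for q in queries for r in references]
    qs, rs = zip(*pairs)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = pool.map(match_score, qs, rs)
    return dict(zip(pairs, scores))

The data-placement question studied in the paper is then where each chunk of reference data lives relative to the workers: moved by application-level heuristics, or served through a distributed file system.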


Korobkin O., Center for Computation and Technology | Allen G., Center for Computation and Technology | Brandt S.R., Center for Computation and Technology | Bentivegna E., Max Planck Institute for Physics | And 6 more authors.
Proceedings of the TeraGrid 2011 Conference: Extreme Digital Discovery, TG'11 | Year: 2011

This paper describes the Alpaca runtime tools. These tools leverage the component infrastructure of the Cactus Framework in a novel way to enable runtime steering, monitoring, and interactive control of a simulation. Simulation data can be observed graphically, or by inspecting values of variables. When GPUs are available, images can be generated using volume ray casting on the live data. In response to observed error conditions or automatic triggers, users can pause the simulation to modify or repair data, or change runtime parameters. In this paper we describe the design of our implementation of these features and illustrate their value with three use cases. © 2011 ACM.
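A generic sketch of the trigger-pause-repair control flow described above; this is not the Cactus/Alpaca API, and the trigger conditions and callbacks are illustrative only.

def evolve(state, step, triggers, on_pause, max_steps):
    """Advance a simulation while checking registered trigger conditions;
    when one fires, hand control to on_pause for inspection or repair."""
    for n in range(max_steps):
        step(state)                          # one evolution step
        for name, condition in triggers.items():
            if condition(state):             # e.g. NaN check, constraint violation, user request
                on_pause(name, n, state)     # inspect variables, repair data, change parameters
    return state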


Ko S.-H., National Supercomputing Center | Kim N., Center for Computation and Technology | Jha S., Center for Computation and Technology
ASME-JSME-KSME 2011 Joint Fluids Engineering Conference, AJK 2011 | Year: 2011

We propose numerical approaches to reduce the sampling noise of a hybrid computational fluid dynamics (CFD)-molecular dynamics (MD) solution. A hybrid CFD-MD approach provides a higher-resolution solution near the solid obstacle and better efficiency than a pure particle-based simulation technique. However, applications up to now have been limited to extreme velocity conditions, since the magnitude of the statistical error in sampling particle velocities is very large compared to the continuum velocity. Given the technical difficulty of indefinitely increasing the MD domain size, we propose and experiment with a number of numerical alternatives to suppress the excessive sampling noise when solving moderate-velocity flow fields. These are the sampling of multiple replicas, virtual stretching of sampling layers in space, and linear fitting of multiple temporal samples. We discuss the pros and cons of each technique in view of solution accuracy and computational cost. Copyright © 2011 by ASME.
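Two of the listed alternatives admit a compact sketch, given below under illustrative array shapes and window sizes: averaging samples over independent MD replicas, and fitting a line through a window of temporal samples instead of using the latest noisy value.

import numpy as np

def replica_average(samples):
    """samples: shape (n_replicas, n_bins), sampled mean velocities per bin.
    Averaging R independent replicas reduces the statistical noise roughly as 1/sqrt(R)."""
    return samples.mean(axis=0)

def fitted_velocity(times, samples):
    """Least-squares linear fit v(t) = a*t + b over a window of temporal
    samples; return the fitted value at the most recent time."""
    a, b = np.polyfit(times, samples, deg=1)
    return a * times[-1] + b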


Li X., Center for Computation and Technology | Iyengar S.S., Florida International University
ACM Computing Surveys | Year: 2015

We review the computation of 3D geometric data mapping, which establishes one-to-one correspondence between or among spatial/spatiotemporal objects. Effective mapping benefits many scientific and engineering tasks that involve the modeling and processing of correlated geometric or image data. We model mapping computation as an optimization problem with certain geometric constraints and go through its general solving pipeline. Different mapping algorithms are discussed and compared according to their formulations of objective functions, constraints, and optimization strategies. © 2014 ACM.
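Schematically, the optimization problem surveyed there takes the following generic form (the notation below is assumed for illustration, not taken from the survey): a map f between a source domain M and a target domain N minimizes a distortion energy subject to geometric constraints while remaining one-to-one,

\min_{f \colon M \to N} \; E_{\mathrm{distortion}}(f)
\quad \text{subject to} \quad C_i(f) = 0, \; i = 1, \dots, k,
\qquad f \ \text{bijective},

where E_distortion penalizes, for example, angle or area distortion, and the constraints C_i encode landmark correspondences or boundary conditions.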
