Scientific Computing and Imaging Institute

Salt Lake City, UT, United States


Fiederer L.D.J.,University Hospital Freiburg | Fiederer L.D.J.,Albert Ludwigs University of Freiburg | Vorwerk J.,University of Münster | Lucka F.,University of Münster | And 15 more authors.
NeuroImage | Year: 2016

Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we quantify for the first time the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7 T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10⁶ nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which substantially reduced computation times, and quantified the importance of the blood vessel compartment by computing the forward and inverse errors that result from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approx. 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as the insula or the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach magnitudes similar to those previously reported for neglecting white matter anisotropy, the CSF or the dura - structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will help improve the accuracy of EEG source analyses, particularly when high accuracy is required in brain areas with dense vasculature. © 2016 The Authors.
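
The FEM transfer-matrix step mentioned in this abstract is a standard device in EEG forward modeling: instead of one large FEM solve per source, one linear solve per electrode yields a small transfer matrix, after which every forward solution is a matrix-vector product. The sketch below illustrates the idea on a toy sparse system with NumPy/SciPy; the matrix, electrode indices and dipole load are all made up, and this is not the authors' pipeline.

```python
# Minimal sketch of the FEM transfer-matrix idea for EEG forward modeling.
# Assumption: a toy sparse, symmetric "stiffness" matrix K stands in for the
# real FEM system assembled on the head mesh; electrode_nodes restricts the
# potential vector to the electrode positions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n_nodes, n_electrodes = 2000, 64
rng = np.random.default_rng(0)

A = sp.random(n_nodes, n_nodes, density=1e-3, random_state=0)
K = sp.csc_matrix((A + A.T) + 10.0 * sp.eye(n_nodes))   # symmetric, diagonally dominant

electrode_nodes = rng.choice(n_nodes, n_electrodes, replace=False)

# Transfer matrix T = R K^{-1}: one linear solve per electrode. Because K is
# symmetric, solving K t = e_i gives row i of K^{-1}.
solve = spla.factorized(K)
T = np.empty((n_electrodes, n_nodes))
for row, e in enumerate(electrode_nodes):
    rhs = np.zeros(n_nodes)
    rhs[e] = 1.0
    T[row] = solve(rhs)

# The forward solution for any dipole load vector b is now just T @ b,
# instead of a full FEM solve per source position.
b = np.zeros(n_nodes)
b[123], b[124] = 1.0, -1.0          # toy dipole load vector
electrode_potentials = T @ b
print(electrode_potentials.shape)   # (64,)
```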


Nichols J.A.,University of Utah | Roach K.E.,University of Utah | Fiorentino N.M.,University of Utah | Anderson A.E.,University of Utah | Anderson A.E.,Scientific Computing and Imaging Institute
Annals of Biomedical Engineering | Year: 2017

Use of subject-specific axes of rotation may improve predictions generated by kinematic models, especially for joints with complex anatomy, such as the tibiotalar and subtalar joints of the ankle. The objective of this study was twofold. First, we compared the axes of rotation between generic and subject-specific ankle models for ten control subjects. Second, we quantified the accuracy of generic and subject-specific models for predicting tibiotalar and subtalar joint motion during level walking using inverse kinematics. Here, tibiotalar and subtalar joint kinematics measured in vivo by dual fluoroscopy served as the reference standard. The generic model was based on a cadaver study, while the subject-specific models were derived from each subject's talus reconstructed from computed tomography images. The subject-specific and generic axes of rotation were significantly different. The average angle between the modeled axes was 12.9° ± 4.3° at the tibiotalar joint and 24.4° ± 5.9° at the subtalar joint. However, predictions from both models did not agree well with the dynamic dual-fluoroscopy data, with errors ranging from 1.0° to 8.9° for the generic model and from 0.6° to 7.6° for the subject-specific models. Our results suggest that methods relying on talar morphology to define subject-specific axes may be inadequate for accurately predicting tibiotalar and subtalar joint kinematics. © 2017 Biomedical Engineering Society
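
As a small illustration of the axis comparison reported above (illustrative only, with made-up vectors rather than study data), the angle between a generic and a subject-specific axis of rotation reduces to the angle between two direction vectors, ignoring sign:

```python
# Hedged sketch: angle between two modeled joint axes of rotation.
# The vectors are placeholders, not values from the study.
import numpy as np

generic_axis = np.array([0.105, 0.174, 0.979])   # hypothetical generic tibiotalar axis
subject_axis = np.array([0.302, 0.047, 0.952])   # hypothetical subject-specific axis

def angle_between_axes(a, b):
    """Angle in degrees between two rotation axes (direction sign ignored)."""
    cosang = abs(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(f"{angle_between_axes(generic_axis, subject_axis):.1f} deg")
```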


SEATTLE, WA--(Marketwired - Nov 15, 2016) - Qumulo, the leader in data-aware scale-out NAS, showcased continued company momentum, with impressive year-over-year growth, customer traction and continuous product innovation. Today, the company introduced Qumulo Core 2.5, a new version of its award-winning software. In addition, Qumulo announced it has joined the Hewlett Packard Enterprise (HPE) OEM Partner Program and made Qumulo Core software available on HPE Apollo servers.

"New technologies are rendering last-generation infrastructure obsolete," said Peter Godman, co-founder and CEO of Qumulo. "Qumulo's incredible customer traction, rapid pace of innovation and rock-solid product give enterprises a superior option for large-scale unstructured data. Whether on-premises or cloud, customers trust Qumulo with their most valuable asset, which is data."

Qumulo is Recognized for Growth, Customer Traction and Industry Leadership

Since its inception, Qumulo has doubled customer count and bookings year-over-year. Qumulo customers across media and entertainment, life and earth sciences, higher education, telecommunications and other industries are making breakthroughs with data-intensive and mission-critical workloads. Today, more than 100 enterprise customers trust Qumulo with their mission-critical workflows, including Atomic Fiction, Carnegie Institute for Science, Deluxe, Densho, FotoKem, Institute for Health Metrics and Evaluation (IHME) at the University of Washington, Hyundai MOBIS, MSG Networks, Sinclair Oil, Sportvision, TELUS Studios, UConn Health, University of Utah Scientific Computing and Imaging Institute, Vaisala, ZOIC Studios, and many more.

Qumulo has amassed more than 10 industry awards for leadership and company culture, commitment to the channel, and product innovation. These awards include Puget Sound Business Journal Washington's Best Workplaces, CRN Channel Chiefs, TechTarget SearchStorage/Storage Magazine Products of the Year, CRN Emerging Technology Vendors, and Seattle Business Magazine's Tech Impact Award, amongst others.

Qumulo Continues Rapid Pace of Innovation, Introduces Qumulo Core 2.5 and Availability of Qumulo Core on HPE Apollo Servers

Today, Qumulo is announcing Qumulo Core 2.5 scale-out file and object storage software on HPE Apollo servers, availability of snapshots for extended enterprise data protection, erasure coding improvements delivering 80% efficiency, and additional data-aware software features with throughput analytics and intelligent caching of metadata on SSD.

Qumulo Core is now available on HPE Apollo servers, offering flexibility for enterprise customers that want next-generation scale-out file and object storage software on-premises or for private cloud workloads. Built for the highest levels of performance and efficiency, Qumulo Core on HPE Apollo servers is a future-proof solution for storing and managing hundreds of petabytes and tens of billions of files and objects. Now that Qumulo has demonstrated its flexibility through hardware independence, portability of Qumulo Core into private and public clouds will provide enterprises with a complete software-only scale-out NAS solution for storing and managing data anywhere, while gaining complete awareness of their data footprint at incredible scale.

Qumulo Core 2.5 delivers snapshots, providing even greater data protection and allowing end users to recover quickly from mistakes. Customers can create fine-grained policies and take billions of snapshots with easy access via NFS and SMB.

Qumulo Core 2.5 makes it possible for enterprises to understand system throughput through the lens of data. Customers can easily get a graphical representation of the file system layout with indicators of throughput and IOPS heat and an at-a-glance view of capacity and activity, helping system administrators understand how their file storage is being used. Drill-down access to see what paths and clients are "hot" in real time provides easy troubleshooting of performance issues. Qumulo Core Throughput Hotspots is a graphical representation of hot paths in the system, illuminating the load on the file system to help customers better understand system usage and troubleshoot performance issues.

Qumulo Takes the Lead with Torrid Pace of Innovation

Qumulo's embrace of agile development methodology has resulted in 24 rock-solid versions of its software per year. Every software release is the product of over three hundred builds, one thousand code check-ins and five million automated tests. Qumulo's upgrade methodology allows the company to respond quickly to customer requirements as their needs change and seamlessly deliver new features to the market faster. The company has introduced more than 60 production-ready releases of Qumulo Core scale-out file and object storage software to date.

Supporting Quotes

"Building software that can handle large-scale data storage and management challenges is no easy task, and very few companies operate at that level. Qumulo is truly accomplishing this feat in a world that is drowning in vast amounts of unstructured data. Qumulo benefits from the rich experience and innovation of its founders and has now engineered a brand new state-of-the-art file system that runs on commodity-based x86 hardware and is capable of working at this rarefied level. Along with massively scalable storage and high-performance processing, they included real-time analytic/data-aware functions so enterprises can easily understand and manage the data that is their business's lifeblood." - Jeff Kato, Senior Analyst & Consultant, Taneja Group

"Managing data with Qumulo is so simple it's hard to describe the impact. It's given us tremendous ROI in terms of time saved and problems eliminated, and having that reliable storage we can finally trust makes us eager to use it more broadly throughout the company." - John Beck, IT Manager, Hyundai MOBIS

Connect with Qumulo at SuperComputing 2016

Today, Qumulo also announced its presence at SuperComputing 2016, taking place November 14-17 in Salt Lake City. Qumulo will be sponsoring, exhibiting, and demonstrating Qumulo Core 2.5 at booth #743. To schedule one-on-one meetings with Qumulo representatives at SuperComputing, fill out this form. Follow the company on Twitter at https://twitter.com/qumulo

About Qumulo

Qumulo, headquartered in Seattle, pioneered data-aware scale-out NAS, enabling enterprises to manage and store enormous numbers of digital assets through real-time analytics built directly into the file system. Qumulo Core is a software-only solution designed to leverage the price/performance of commodity hardware coupled with the modern technologies of flash, virtualization and cloud. Qumulo was founded in 2012 by the inventors of scale-out NAS, and has attracted a team of storage innovators from Isilon, Amazon Web Services, Google, and Microsoft. Qumulo has raised $100 million in three rounds of funding from leading investors. For more information, visit www.qumulo.com


Tierny J.,Telecom ParisTech | Daniels II J.,New York University | Nonato L.G.,University of Sao Paulo | Pascucci V.,Scientific Computing and Imaging Institute | Silva C.T.,New York University
IEEE Transactions on Visualization and Computer Graphics | Year: 2012

Creating high-quality quad meshes from triangulated surfaces is a highly nontrivial task that necessitates consideration of various application-specific quality metrics. In our work, we follow the premise that automatic reconstruction techniques may not generate outputs meeting all the subjective quality expectations of the user. Instead, we put the user at the center of the process by providing a flexible, interactive approach to quadrangulation design. By combining scalar field topology and combinatorial connectivity techniques, we present a new framework, following a coarse-to-fine design philosophy, which allows for explicit control of the subjective quality criteria on the output quad mesh, at interactive rates. Our quadrangulation framework uses the new notion of Reeb atlas editing to define, with a small number of interactions, a coarse quadrangulation of the model that captures the main features of the shape, with user-prescribed extraordinary vertices and alignment. Fine-grained tuning is easily achieved with the notion of connectivity texturing, which allows for the specification of additional extraordinary vertices and explicit feature alignment, to capture the high-frequency geometries. Experiments demonstrate the interactivity and flexibility of our approach, as well as its ability to generate quad meshes of arbitrary resolution with high-quality statistics, while meeting the user's own subjective requirements. © 1995-2012 IEEE.
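
The scalar field topology ingredient in this framework rests on classifying critical points of a piecewise-linear scalar field on the surface. The generic sketch below shows that classification via lower/upper link connectivity; it assumes a closed manifold triangle mesh with tie-broken scalar values and is not the Reeb-atlas editing machinery itself.

```python
# Minimal sketch: classify each vertex of a triangulated surface as a minimum,
# maximum, saddle, or regular point of a scalar field f, by counting connected
# components of its lower and upper links. Assumes a closed surface mesh.
import numpy as np
from collections import defaultdict

def classify_critical_points(triangles, f):
    # Link edges of each vertex: the edge opposite to it in every incident triangle.
    link_edges = defaultdict(list)
    for a, b, c in triangles:
        link_edges[a].append((b, c))
        link_edges[b].append((a, c))
        link_edges[c].append((a, b))

    def below(u, v):                 # break ties in f by vertex index
        return (f[u], u) < (f[v], v)

    def n_components(vertices, edges):
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, w in edges:
            if u in parent and w in parent:
                parent[find(u)] = find(w)
        return len({find(v) for v in parent})

    labels = {}
    for v, edges in link_edges.items():
        link = {u for e in edges for u in e}
        lower = {u for u in link if below(u, v)}
        upper = link - lower
        nl, nu = n_components(lower, edges), n_components(upper, edges)
        if nl == 0:
            labels[v] = "minimum"
        elif nu == 0:
            labels[v] = "maximum"
        elif nl == nu == 1:
            labels[v] = "regular"
        else:
            labels[v] = "saddle"     # split/merge point of the level sets
    return labels

# Tiny usage: an octahedron with f = z has one maximum, one minimum,
# and four regular vertices.
tris = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
        (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
z = np.array([0.0, 0.0, 0.0, 0.0, 1.0, -1.0])
print(classify_critical_points(tris, z))
```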


He Y.,Scientific Computing and Imaging Institute | Hussaini M.Y.,Florida State University
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2014

This paper presents an optimal unified combination rule within the framework of the Dempster-Shafer theory of evidence to combine multiple bodies of evidence. It is optimal in the sense that the resulting combined m-function has the least dissimilarity with the individual m-functions and therefore represents the greatest amount of information similar to that represented by the original m-functions. Examples are provided to illustrate the proposed combination rule. © Springer International Publishing Switzerland 2014.
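
For readers new to the framework, the baseline being generalized here is Dempster's classical rule of combination, shown in the hedged sketch below; the paper's optimal unified rule is different (it chooses the combined m-function minimizing dissimilarity to the individual m-functions), so treat this purely as background.

```python
# Classical Dempster rule of combination for two mass functions over a frame of
# discernment (background only; NOT the paper's optimal unified rule).
from itertools import product

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to masses that sum to 1."""
    combined, conflict = {}, 0.0
    for (A, wa), (B, wb) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                 # mass falling on the empty set
    # Normalize away the conflicting mass (assumes the bodies of evidence are
    # not totally conflicting, i.e. conflict < 1).
    return {A: w / (1.0 - conflict) for A, w in combined.items()}

# Two partially conflicting bodies of evidence over the frame {a, b}.
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.7, frozenset({"a", "b"}): 0.3}
print(dempster_combine(m1, m2))   # combined masses for {a}, {b}, and {a, b}
```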


Philip S.,Scientific Computing and Imaging Institute | Summa B.,Scientific Computing and Imaging Institute | Pascucci V.,Scientific Computing and Imaging Institute | Bremer P.-T.,Lawrence Livermore National Laboratory
Proceedings of the International Conference on Parallel and Distributed Systems - ICPADS | Year: 2011

Gradient domain processing is a computationally expensive image processing technique. Its use for processing massive images, giga or terapixels in size, can take several hours with serial techniques. To address this challenge, parallel algorithms are being developed to make this class of techniques applicable to the largest images available with running times that are more acceptable to the users. To this end we target the most ubiquitous form of computing power available today, which is small or medium scale clusters of commodity hardware. Such clusters are continuously increasing in scale, not only in the number of nodes, but also in the amount of parallelism available within each node in the form of multicore CPUs and GPUs. In this paper we present a hybrid parallel implementation of gradient domain processing for seamless stitching of gigapixel panoramas that utilizes MPI, threading and a CUDA based GPU component. We demonstrate the performance and scalability of our implementation by presenting results from two GPU clusters processing two large data sets. © 2011 IEEE.
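
To make "gradient domain processing" concrete: for seamless stitching, the output image is the one whose gradients best match a prescribed (stitched) gradient field, which amounts to solving a Poisson equation. The toy serial sketch below shows only that underlying math with a Jacobi iteration on a single small tile; the paper's contribution is the hybrid MPI/threads/CUDA parallelization of this computation for gigapixel panoramas.

```python
# Toy serial sketch of gradient-domain processing: recover an image whose
# gradient field matches a prescribed target g by iterating on the Poisson
# equation  laplace(u) = div(g)  with Dirichlet boundary values.
import numpy as np

def poisson_solve(div_g, boundary, n_iter=2000):
    """div_g: divergence of the target gradient field (H x W).
    boundary: image supplying the fixed boundary values (H x W)."""
    u = boundary.astype(float).copy()
    for _ in range(n_iter):
        # Jacobi update on interior pixels of the 5-point Laplacian.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] - div_g[1:-1, 1:-1])
    return u

# Sanity check: reconstructing an image from its own gradient field recovers it.
img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
gx, gy = np.gradient(img)
div_g = np.gradient(gx, axis=0) + np.gradient(gy, axis=1)
recon = poisson_solve(div_g, img)
print(float(np.abs(recon - img).max()))   # ~0 for this smooth test image
```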


Dey T.K.,Ohio State University | Levine J.A.,Scientific Computing and Imaging Institute | Slatton A.,Ohio State University
Computer Graphics Forum | Year: 2010

The technique of Delaunay refinement has been recognized as a versatile tool to generate Delaunay meshes of a variety of geometries. Despite its usefulness, it suffers from one lacuna that limits its application: it does not scale well with the mesh size. As the sample point set grows, the Delaunay triangulation starts stressing the available memory space, which ultimately stalls any effective progress. A natural solution to the problem is to maintain the point set in clusters and run the refinement on each individual cluster. However, this needs a careful point insertion strategy and a balanced coordination among the neighboring clusters to ensure consistency across individual meshes. We design an octree-based localized Delaunay refinement method for meshing surfaces in three dimensions which meets these goals. We prove that the algorithm terminates and provide guarantees about structural properties of the output mesh. Experimental results show that the method can avoid memory thrashing while computing large meshes and thus scales much better than the standard Delaunay refinement method. Journal compilation © 2010 The Eurographics Association and Blackwell Publishing Ltd.
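
As background for the scalability discussion above, plain (non-localized) Delaunay refinement repeatedly inserts circumcenters of poor-quality elements until a quality bound holds. The 2D toy sketch below, using SciPy, shows that core loop only; it has none of the paper's octree-based clustering, surface handling, or termination guarantees, and it simply skips circumcenters that fall outside the unit square instead of splitting boundary segments.

```python
# Hedged 2D sketch of plain Delaunay refinement: insert circumcenters of
# triangles whose minimum angle is below a bound, re-triangulating each time.
import numpy as np
from scipy.spatial import Delaunay

def min_angle(p):                   # smallest angle of triangle p (3 x 2), in degrees
    angs = []
    for i in range(3):
        a, b, c = p[i], p[(i + 1) % 3], p[(i + 2) % 3]
        u, v = b - a, c - a
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angs.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return min(angs)

def circumcenter(p):
    (ax, ay), (bx, by), (cx, cy) = p
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

def refine(pts, min_angle_deg=25.0, max_pts=500):
    pts = np.asarray(pts, float)
    while len(pts) < max_pts:
        tri = Delaunay(pts)
        bad = [s for s in tri.simplices if min_angle(pts[s]) < min_angle_deg]
        centers = [circumcenter(pts[s]) for s in bad]
        inside = [c for c in centers if np.all((c > 0.0) & (c < 1.0))]
        if not inside:              # done, or only boundary-encroaching triangles remain
            break
        pts = np.vstack([pts, inside[0]])
    return pts

rng = np.random.default_rng(1)
pts = np.vstack([[[0, 0], [1, 0], [1, 1], [0, 1]], rng.random((20, 2))])
print(len(pts), "->", len(refine(pts)), "points after refinement")
```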


Dannhauer M.,Scientific Computing and Imaging Institute
Conference Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society | Year: 2012

The current work presents a computational pipeline to simulate transcranial direct current stimulation (tDCS) from image-based models of the head with SCIRun [15]. The pipeline contains all the steps necessary to carry out the simulations and is supported by a complete suite of open source software tools: image visualization, segmentation, mesh generation, tDCS electrode generation and efficient tDCS forward simulation.
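
Conceptually, the tDCS forward simulation in such a pipeline solves a quasi-static volume-conduction problem: a conductivity-weighted linear system with current injected and withdrawn at the electrode nodes. The toy sketch below illustrates that with nodal analysis on a five-node conductance chain; the chain and its values are illustrative stand-ins, not the SCIRun FEM pipeline described in the abstract.

```python
# Toy illustration of a tDCS forward solve: nodal analysis on a small
# conductance network standing in for the FEM system assembled from an
# image-based head model. Conductance values are illustrative only.
import numpy as np

# Conductances (siemens) between neighbouring nodes on a chain of "tissues":
# scalp - skull - CSF - grey matter - white matter.
g = np.array([0.43, 0.01, 1.79, 0.33])
n = len(g) + 1

# Assemble the weighted graph Laplacian, the discrete analogue of -div(sigma grad u).
L = np.zeros((n, n))
for i, gi in enumerate(g):
    L[i, i] += gi
    L[i + 1, i + 1] += gi
    L[i, i + 1] -= gi
    L[i + 1, i] -= gi

# Inject 1 mA at the anode (node 0) and withdraw it at the cathode (node n-1).
I = np.zeros(n)
I[0], I[-1] = 1e-3, -1e-3

# Ground the cathode to remove the Laplacian's null space, then solve L u = I.
u = np.zeros(n)
u[:-1] = np.linalg.solve(L[:-1, :-1], I[:-1])
print("node potentials (V):", np.round(u, 4))
```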


Wong E.,Scientific Computing and Imaging Institute | Awate S.P.,Scientific Computing and Imaging Institute | Fletcher P.T.,Scientific Computing and Imaging Institute
30th International Conference on Machine Learning, ICML 2013 | Year: 2013

An effective approach to structure learning and parameter estimation for Gaussian graphical models is to impose a sparsity prior, such as a Laplace prior, on the entries of the precision matrix. Such an approach involves a hyperparameter that must be tuned to control the amount of sparsity. In this paper, we introduce a parameter-free method for estimating a precision matrix with sparsity that adapts to the data automatically. We achieve this by formulating a hierarchical Bayesian model of the precision matrix with a non-informative Jeffreys' hyperprior. We also naturally enforce the symmetry and positive-definiteness constraints on the precision matrix by parameterizing it with the Cholesky decomposition. Experiments on simulated and real (cell signaling) data demonstrate that the proposed approach not only automatically adapts the sparsity of the model, but it also results in improved estimates of the precision matrix compared to the Laplace prior model with sparsity parameter chosen by cross-validation. Copyright 2013 by the author(s).
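
As a rough, generic illustration of the Cholesky trick described above (and only that; this is a penalized-likelihood stand-in, not the paper's hierarchical Bayesian model with a Jeffreys hyperprior), one can estimate a sparse precision matrix while guaranteeing symmetry and positive semi-definiteness by optimizing over a lower-triangular factor:

```python
# Hedged sketch: sparse precision-matrix estimation by maximizing an
# L1-penalized Gaussian log-likelihood over a Cholesky factor L (so that
# Omega = L L^T is symmetric and positive semi-definite by construction).
# Unlike the paper's approach, the sparsity weight `lam` still has to be tuned.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
p, n, lam = 5, 200, 0.1
X = rng.standard_normal((n, p))
S = X.T @ X / n                                    # sample covariance

def unpack(theta):
    L = np.zeros((p, p))
    L[np.tril_indices(p)] = theta                  # lower-triangular factor
    return L

def neg_penalized_loglik(theta):
    Omega = unpack(theta) @ unpack(theta).T        # precision matrix
    sign, logdet = np.linalg.slogdet(Omega)
    if sign <= 0:
        return np.inf
    # Negative log-likelihood (up to constants) plus an L1 penalty on the
    # off-diagonal entries of Omega.
    penalty = lam * np.sum(np.abs(Omega - np.diag(np.diag(Omega))))
    return -(logdet - np.trace(S @ Omega)) + penalty

theta0 = np.eye(p)[np.tril_indices(p)]             # start from the identity
res = minimize(neg_penalized_loglik, theta0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
Omega_hat = unpack(res.x) @ unpack(res.x).T
print(np.round(Omega_hat, 2))
```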


Bronson J.R.,Scientific Computing and Imaging Institute | Levine J.A.,Scientific Computing and Imaging Institute | Whitaker R.T.,Scientific Computing and Imaging Institute
Proceedings of the 21st International Meshing Roundtable, IMR 2012 | Year: 2013

We introduce a new algorithm for generating tetrahedral meshes that conform to physical boundaries in volumetric domains consisting of multiple materials. The proposed method allows for an arbitrary number of materials, produces high-quality tetrahedral meshes with upper and lower bounds on dihedral angles, and guarantees geometric fidelity. Moreover, the method is combinatoric so its implementation enables rapid mesh construction. These meshes are structured in a way that also allows grading, in order to reduce element counts in regions of homogeneity.
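
For a heavily simplified flavour of lattice-based, combinatoric meshing (and only the flavour: the algorithm above additionally cleaves the lattice so that tetrahedra conform to multi-material boundaries with bounded dihedral angles), each cell of a labeled voxel grid can be split into six tetrahedra with a fixed template:

```python
# Hedged sketch: split every cell of a labeled voxel grid into 6 tetrahedra
# using the fixed Kuhn/Freudenthal template and tag each tetrahedron with its
# cell's material label. Adjacent cells share face diagonals consistently, so
# the result is a conforming (but boundary-stairstepped) multi-material mesh.
import numpy as np
from itertools import permutations

UNIT = np.eye(3, dtype=int)
# Each tetrahedron walks from (0,0,0) to (1,1,1) along one ordering of the axes.
CUBE_TETS = [np.array([[0, 0, 0], UNIT[i], UNIT[i] + UNIT[j], [1, 1, 1]])
             for i, j, _ in permutations(range(3))]

def lattice_mesh(labels):
    """labels: (nx, ny, nz) integer array with one material id per cell."""
    verts, vid, tets, mats = [], {}, [], []
    def vertex(ijk):
        key = tuple(int(x) for x in ijk)
        if key not in vid:
            vid[key] = len(verts)
            verts.append(key)
        return vid[key]
    for cell in np.ndindex(labels.shape):
        for tet in CUBE_TETS:
            tets.append([vertex(np.add(cell, corner)) for corner in tet])
            mats.append(labels[cell])
    return np.array(verts, float), np.array(tets), np.array(mats)

# Tiny two-material example on a 2 x 1 x 2 grid of cells.
verts, tets, mats = lattice_mesh(np.array([[[0, 1]], [[1, 1]]]))
print(len(verts), "vertices,", len(tets), "tetrahedra,", len(set(mats)), "materials")
```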
