Salt Lake City, UT, United States


SEATTLE, WA--(Marketwired - Nov 15, 2016) - Qumulo, the leader in data-aware scale-out NAS, showcased continued company momentum, with impressive year-over-year growth, customer traction and continuous product innovation. Today, the company introduced Qumulo Core 2.5, a new version of its award-winning software. In addition, Qumulo announced it has joined the Hewlett Packard Enterprise (HPE) OEM Partner Program and made Qumulo Core software available on HPE Apollo servers.

"New technologies are rendering last-generation infrastructure obsolete," said Peter Godman, co-founder and CEO of Qumulo. "Qumulo's incredible customer traction, rapid pace of innovation and rock-solid product give enterprises a superior option for large-scale unstructured data. Whether on-premises or cloud, customers trust Qumulo with their most valuable asset, which is data."

Qumulo is Recognized for Growth, Customer Traction and Industry Leadership

Since its inception, Qumulo has doubled customer count and bookings year-over-year. Qumulo customers across media and entertainment, life and earth sciences, higher education, telecommunications and other industries are making breakthroughs with data-intensive and mission-critical workloads. Today, more than 100 enterprise customers trust Qumulo with their mission-critical workflows, including Atomic Fiction, Carnegie Institute for Science, Deluxe, Densho, FotoKem, Institute for Health Metrics and Evaluation (IHME) at the University of Washington, Hyundai MOBIS, MSG Networks, Sinclair Oil, Sportvision, TELUS Studios, UConn Health, University of Utah Scientific Computing and Imaging Institute, Vaisala, ZOIC Studios, and many more. Qumulo has amassed more than 10 industry awards for leadership and company culture, commitment to the channel, and product innovation.
These awards include Puget Sound Business Journal Washington's Best Workplaces, CRN Channel Chiefs, TechTarget SearchStorage/Storage Magazine Products of the Year, CRN Emerging Technology Vendors, and Seattle Business Magazine's Tech Impact Award, among others.

Qumulo Continues Rapid Pace of Innovation, Introduces Qumulo Core 2.5 and Availability of Qumulo Core on HPE Apollo Servers

Today, Qumulo is announcing Qumulo Core 2.5 scale-out file and object storage software on HPE Apollo servers; availability of snapshots for extended enterprise data protection; erasure-coding improvements delivering 80% efficiency; and additional data-aware software features with throughput analytics and intelligent caching of metadata on SSD.

Qumulo Core is now available on HPE Apollo servers, offering flexibility for enterprise customers that want next-generation scale-out file and object storage software on-premises or for private-cloud workloads. Built for the highest levels of performance and efficiency, Qumulo Core on HPE Apollo servers is a future-proof solution for storing and managing hundreds of petabytes and tens of billions of files and objects. Now that Qumulo has demonstrated hardware independence, portability of Qumulo Core into private and public clouds will provide enterprises with a complete software-only scale-out NAS solution for storing and managing data anywhere, while gaining complete awareness of their data footprint at incredible scale.

Qumulo Core 2.5 delivers snapshots, providing even greater data protection and allowing end users to recover quickly from mistakes. Customers can create fine-grained policies and take billions of snapshots with easy access via NFS and SMB. Qumulo Core 2.5 also makes it possible for enterprises to understand system throughput through the lens of data.
Customers can easily get a graphical representation of the file system layout with indicators of throughput and IOPS heat, plus an at-a-glance view of capacity and activity, helping system administrators understand how their file storage is being used. Drill-down access to see which paths and clients are "hot" in real time provides easy troubleshooting of performance issues. Qumulo Core Throughput Hotspots is a graphical representation of hot paths in the system, illuminating the load on the file system to help customers better understand system usage and troubleshoot performance issues.

Qumulo Takes the Lead with Torrid Pace of Innovation

Qumulo's embrace of agile development methodology has resulted in 24 rock-solid versions of its software per year. Every software release is the product of more than three hundred builds, one thousand code check-ins and five million automated tests. Qumulo's upgrade methodology allows the company to respond quickly to customer requirements as their needs change and to seamlessly deliver new features to market faster. The company has introduced more than 60 production-ready releases of Qumulo Core scale-out file and object storage software to date.

Supporting Quotes

"Building software that can handle large-scale data storage and management challenges is no easy task, and very few companies operate at that level. Qumulo is truly accomplishing this feat in a world that is drowning in vast amounts of unstructured data. Qumulo benefits from the rich experience and innovation of its founders and has now engineered a brand new state-of-the-art file system that runs on commodity-based x86 hardware and is capable of working at this rarefied level. Along with massively scalable storage and high-performance processing, they included real-time analytic/data-aware functions so enterprises can easily understand and manage the data that is their business's lifeblood."
- Jeff Kato, Senior Analyst & Consultant, Taneja Group

"Managing data with Qumulo is so simple it's hard to describe the impact. It's given us tremendous ROI in terms of time saved and problems eliminated, and having reliable storage we can finally trust makes us eager to use it more broadly throughout the company."

- John Beck, IT Manager, Hyundai MOBIS

Connect with Qumulo at SuperComputing 2016

Today, Qumulo also announced its presence at SuperComputing 2016, taking place November 14-17 in Salt Lake City. Qumulo will be sponsoring, exhibiting, and demonstrating Qumulo Core 2.5 at booth #743. To schedule one-on-one meetings with Qumulo representatives at SuperComputing, fill out this form. Follow the company on Twitter at

About Qumulo

Qumulo, headquartered in Seattle, pioneered data-aware scale-out NAS, enabling enterprises to manage and store enormous numbers of digital assets through real-time analytics built directly into the file system. Qumulo Core is a software-only solution designed to leverage the price/performance of commodity hardware coupled with the modern technologies of flash, virtualization and cloud. Qumulo was founded in 2012 by the inventors of scale-out NAS and has attracted a team of storage innovators from Isilon, Amazon Web Services, Google, and Microsoft. Qumulo has raised $100 million in three rounds of funding from leading investors. For more information, visit
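The 80% erasure-coding efficiency cited above is simply the usable fraction of raw capacity in a stripe. A minimal sketch, assuming a hypothetical 8-data/2-parity layout (the release does not specify the actual block counts):

```python
def ec_efficiency(data_blocks: int, parity_blocks: int) -> float:
    """Usable fraction of raw capacity for an erasure-coded stripe:
    data blocks divided by total (data + parity) blocks."""
    return data_blocks / (data_blocks + parity_blocks)

# A hypothetical 8-data/2-parity stripe yields the 80% figure,
# while tolerating the loss of any 2 blocks in the stripe.
print(f"{ec_efficiency(8, 2):.0%}")  # 80%
```

Any layout with a 4:1 data-to-parity ratio gives the same 80% figure; the trade-off is between stripe width, rebuild cost, and fault tolerance.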

Tierny J.,Telecom ParisTech | Daniels II J.,New York University | Nonato L.G.,University of Sao Paulo | Pascucci V.,Scientific Computing and Imaging Institute | Silva C.T.,New York University
IEEE Transactions on Visualization and Computer Graphics | Year: 2012

Creating high-quality quad meshes from triangulated surfaces is a highly nontrivial task that necessitates consideration of various application-specific quality metrics. In our work, we follow the premise that automatic reconstruction techniques may not generate outputs meeting all the subjective quality expectations of the user. Instead, we put the user at the center of the process by providing a flexible, interactive approach to quadrangulation design. By combining scalar field topology and combinatorial connectivity techniques, we present a new framework, following a coarse-to-fine design philosophy, which allows for explicit control of the subjective quality criteria on the output quad mesh at interactive rates. Our quadrangulation framework uses the new notion of Reeb atlas editing to define, with a small number of interactions, a coarse quadrangulation of the model that captures the main features of the shape, with user-prescribed extraordinary vertices and alignment. Fine-grained tuning is easily achieved with the notion of connectivity texturing, which allows for additional extraordinary vertex specification and explicit feature alignment to capture high-frequency geometries. Experiments demonstrate the interactivity and flexibility of our approach, as well as its ability to generate quad meshes of arbitrary resolution with high-quality statistics, while meeting the user's own subjective requirements. © 1995-2012 IEEE.
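An extraordinary vertex in a quad mesh is one whose valence (number of incident edges) differs from the regular interior value of four. As an illustrative aside, not taken from the paper, a minimal sketch that locates extraordinary vertices in an indexed quad mesh:

```python
from collections import Counter

def extraordinary_vertices(quads):
    """Return {vertex: valence} for vertices whose valence != 4.

    `quads` is a list of 4-tuples of vertex indices; each undirected
    edge is counted once and contributes 1 to the valence of both of
    its endpoints."""
    valence = Counter()
    seen_edges = set()
    for q in quads:
        for i in range(4):
            a, b = q[i], q[(i + 1) % 4]
            edge = (min(a, b), max(a, b))
            if edge not in seen_edges:
                seen_edges.add(edge)
                valence[a] += 1
                valence[b] += 1
    return {v: k for v, k in valence.items() if k != 4}
```

For example, a cube meshed as six quads has eight extraordinary vertices, each of valence 3 rather than 4.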

He Y.,Scientific Computing and Imaging Institute | Hussaini M.Y.,Florida State University
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2014

This paper presents an optimal unified combination rule within the framework of the Dempster-Shafer theory of evidence to combine multiple bodies of evidence. It is optimal in the sense that the resulting combined m-function has the least dissimilarity with the individual m-functions and therefore represents the greatest amount of information similar to that represented by the original m-functions. Examples are provided to illustrate the proposed combination rule. © Springer International Publishing Switzerland 2014.
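The abstract does not reproduce the authors' optimal combination rule; for background only, the classic Dempster rule of combination that it builds on can be sketched as follows (an illustration of the standard rule, not the paper's method):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classic Dempster rule: combine two mass functions, given as dicts
    mapping frozenset hypotheses to masses. Masses of intersecting
    hypotheses multiply; mass assigned to the empty set (conflict) is
    normalized away."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

The normalization step is precisely what motivates alternative rules such as the one proposed here: with highly conflicting evidence, discarding the conflict mass can produce counterintuitive results.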

PubMed | University of Missouri, University of Utah and Scientific Computing and Imaging Institute
Journal: Journal of orthopaedic research : official publication of the Orthopaedic Research Society | Year: 2016

The proximal femur is abnormally shaped in patients with cam-type femoroacetabular impingement (FAI). Impingement may elicit bone remodeling at the proximal femur, causing increases in cortical bone thickness. We used correspondence-based shape modeling to quantify and compare cortical thickness between cam patients and controls for the location of the cam lesion and the proximal femur. Computed tomography images were segmented for 45 controls and 28 cam-type FAI patients. The segmentations were input to a correspondence-based shape model to identify the region of the cam lesion. Median cortical thickness data over the region of the cam lesion and the proximal femur were compared between mixed-gender and gender-specific groups. Median [interquartile range] thickness was significantly greater in FAI patients than controls in the cam lesion (1.47 [0.64] vs. 1.13 [0.22] mm, respectively; p<0.001) and proximal femur (1.28 [0.30] vs. 0.97 [0.22] mm, respectively; p<0.001). Maximum thickness in the region of the cam lesion was more anterior and less lateral (p<0.001) in FAI patients. Male FAI patients had increased thickness compared to male controls in the cam lesion (1.47 [0.72] vs. 1.10 [0.19] mm, respectively; p<0.001) and proximal femur (1.25 [0.29] vs. 0.94 [0.17] mm, respectively; p<0.001). Thickness was not significantly different between male and female controls. Studies of non-pathologic cadavers have provided guidelines regarding safe surgical resection depth for FAI patients. However, our results suggest impingement induces cortical thickening in cam patients, which may strengthen the proximal femur. Thus, these previously established guidelines may be too conservative. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.

Philip S.,Scientific Computing and Imaging Institute | Summa B.,Scientific Computing and Imaging Institute | Pascucci V.,Scientific Computing and Imaging Institute | Bremer P.-T.,Lawrence Livermore National Laboratory
Proceedings of the International Conference on Parallel and Distributed Systems - ICPADS | Year: 2011

Gradient domain processing is a computationally expensive image processing technique. Its use for processing massive images, giga- or terapixels in size, can take several hours with serial techniques. To address this challenge, parallel algorithms are being developed to make this class of techniques applicable to the largest images available, with running times that are more acceptable to users. To this end, we target the most ubiquitous form of computing power available today: small- or medium-scale clusters of commodity hardware. Such clusters are continuously increasing in scale, not only in the number of nodes, but also in the amount of parallelism available within each node in the form of multicore CPUs and GPUs. In this paper we present a hybrid parallel implementation of gradient domain processing for seamless stitching of gigapixel panoramas that utilizes MPI, threading and a CUDA-based GPU component. We demonstrate the performance and scalability of our implementation by presenting results from two GPU clusters processing two large data sets. © 2011 IEEE.
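Gradient domain processing recovers an image from a target gradient field by solving a Poisson equation. The paper's contribution is the hybrid MPI/threading/CUDA parallelization of that solve; as a serial illustration of the underlying equation only, a 1-D Jacobi solve might look like:

```python
def reconstruct_1d(grad, left, right, iters=5000):
    """Recover a 1-D signal from target gradients grad[i] = f[i+1] - f[i]
    and fixed boundary values, by Jacobi iteration on the discrete
    Poisson equation f[i-1] - 2*f[i] + f[i+1] = grad[i] - grad[i-1]."""
    n = len(grad) + 1
    # Initial guess: linear interpolation between the fixed boundaries.
    f = [left + (right - left) * i / (n - 1) for i in range(n)]
    for _ in range(iters):
        new = f[:]
        for i in range(1, n - 1):
            new[i] = 0.5 * (f[i - 1] + f[i + 1] + grad[i - 1] - grad[i])
        f = new
    return f
```

In 2-D the same iteration runs over pixel neighborhoods, and at gigapixel scale the domain is tiled across cluster nodes, which is where the MPI/GPU machinery of the paper comes in.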

Dey T.K.,Ohio State University | Levine J.A.,Scientific Computing and Imaging Institute | Slatton A.,Ohio State University
Computer Graphics Forum | Year: 2010

The technique of Delaunay refinement has been recognized as a versatile tool to generate Delaunay meshes of a variety of geometries. Despite its usefulness, it suffers from one lacuna that limits its application: it does not scale well with the mesh size. As the sample point set grows, the Delaunay triangulation starts stressing the available memory space, which ultimately stalls any effective progress. A natural solution to the problem is to maintain the point set in clusters and run the refinement on each individual cluster. However, this needs a careful point insertion strategy and balanced coordination among neighboring clusters to ensure consistency across individual meshes. We design an octree-based localized Delaunay refinement method for meshing surfaces in three dimensions which meets these goals. We prove that the algorithm terminates and provide guarantees about structural properties of the output mesh. Experimental results show that the method can avoid memory thrashing while computing large meshes and thus scales much better than the standard Delaunay refinement method. Journal compilation © 2010 The Eurographics Association and Blackwell Publishing Ltd.

Dannhauer M.,Scientific Computing and Imaging Institute
Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Conference | Year: 2012

The current work presents a computational pipeline to simulate transcranial direct current stimulation (tDCS) from image-based models of the head with SCIRun [15]. The pipeline contains all the steps necessary to carry out the simulations and is supported by a complete suite of open-source software tools: image visualization, segmentation, mesh generation, tDCS electrode generation and efficient tDCS forward simulation.

Bronson J.R.,Scientific Computing and Imaging Institute | Levine J.A.,Scientific Computing and Imaging Institute | Whitaker R.T.,Scientific Computing and Imaging Institute
Engineering with Computers | Year: 2012

We present a particle-based approach for generating adaptive triangular surface and tetrahedral volume meshes from computer-aided design models. Input shapes are treated as a collection of smooth, parametric surface patches that can meet non-smoothly on boundaries. Our approach uses a hierarchical sampling scheme that places particles on features in order of increasing dimensionality. These particles reach a good distribution by minimizing an energy computed in 3D world space, with movements occurring in the parametric space of each surface patch. Rather than using a pre-computed measure of feature size, our system automatically adapts to both curvature as well as a notion of topological separation. It also enforces a measure of smoothness on these constraints to construct a sizing field that acts as a proxy to piecewise-smooth feature size. We evaluate our technique with comparisons against other popular triangular meshing techniques for this domain. © 2012 Springer-Verlag London Limited.
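As a toy illustration of the energy-minimization idea (not the paper's 3-D world-space energy or its surface-patch parameterization), gradient descent on a simple 1-D repulsive energy spreads interior particles toward uniform spacing between fixed endpoints:

```python
def relax_particles(x, lr=1e-4, iters=50000):
    """Gradient descent on E = sum(1/gap) over consecutive gaps; the
    boundary particles x[0] and x[-1] stay fixed. Minimizing this
    repulsive energy drives interior particles toward uniform spacing."""
    x = list(x)
    for _ in range(iters):
        new = x[:]
        for i in range(1, len(x) - 1):
            # dE/dx_i = 1/right_gap^2 - 1/left_gap^2
            grad = 1.0 / (x[i + 1] - x[i]) ** 2 - 1.0 / (x[i] - x[i - 1]) ** 2
            new[i] = x[i] - lr * grad
        x = new
    return x
```

The paper's system does the analogous thing on curved surface patches, with movements in parametric space and an adaptive sizing field in place of this fixed pairwise energy.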

Wong E.,Scientific Computing and Imaging Institute | Awate S.P.,Scientific Computing and Imaging Institute | Fletcher P.T.,Scientific Computing and Imaging Institute
30th International Conference on Machine Learning, ICML 2013 | Year: 2013

An effective approach to structure learning and parameter estimation for Gaussian graphical models is to impose a sparsity prior, such as a Laplace prior, on the entries of the precision matrix. Such an approach involves a hyperparameter that must be tuned to control the amount of sparsity. In this paper, we introduce a parameter-free method for estimating a precision matrix with sparsity that adapts to the data automatically. We achieve this by formulating a hierarchical Bayesian model of the precision matrix with a non-informative Jeffreys' hyperprior. We also naturally enforce the symmetry and positive-definiteness constraints on the precision matrix by parameterizing it with the Cholesky decomposition. Experiments on simulated and real (cell signaling) data demonstrate that the proposed approach not only automatically adapts the sparsity of the model, but it also results in improved estimates of the precision matrix compared to the Laplace prior model with sparsity parameter chosen by cross-validation. Copyright 2013 by the author(s).
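The Cholesky trick mentioned in the abstract enforces symmetry and positive-definiteness by construction: any lower-triangular factor L with positive diagonal yields a valid precision matrix Lambda = L Lᵀ, and log det(Lambda) falls out of the factor for free. A minimal stdlib sketch with hypothetical values (not the paper's estimator):

```python
import math

def precision_from_cholesky(L):
    """Given lower-triangular L (list of rows) with positive diagonal,
    return Lambda = L @ L.T, which is symmetric positive definite by
    construction."""
    n = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

def log_det(L):
    """log det(Lambda) = 2 * sum(log L[i][i]), read off the factor."""
    return 2.0 * sum(math.log(L[i][i]) for i in range(len(L)))
```

Optimizing over the unconstrained entries of L (with, say, an exponential map on the diagonal) therefore sidesteps any explicit positive-definiteness constraint during estimation.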

Bronson J.R.,Scientific Computing and Imaging Institute | Levine J.A.,Scientific Computing and Imaging Institute | Whitaker R.T.,Scientific Computing and Imaging Institute
Proceedings of the 21st International Meshing Roundtable, IMR 2012 | Year: 2013

We introduce a new algorithm for generating tetrahedral meshes that conform to physical boundaries in volumetric domains consisting of multiple materials. The proposed method allows for an arbitrary number of materials, produces high-quality tetrahedral meshes with upper and lower bounds on dihedral angles, and guarantees geometric fidelity. Moreover, the method is combinatorial, so its implementation enables rapid mesh construction. These meshes are structured in a way that also allows grading, to reduce element counts in regions of homogeneity.
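The quality guarantee above concerns dihedral angles. As a small illustration (not the paper's algorithm), the six dihedral angles of a tetrahedron can be computed from face normals along each edge:

```python
import math
from itertools import combinations

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dihedral_angles(p):
    """Six dihedral angles (degrees) of a tetrahedron given as four 3-D
    points: for each edge, the angle between the two incident faces,
    measured via normals of the half-planes hinged on that edge."""
    angles = []
    for i, j in combinations(range(4), 2):
        k, l = [m for m in range(4) if m not in (i, j)]
        e = _sub(p[j], p[i])
        n1 = _cross(e, _sub(p[k], p[i]))
        n2 = _cross(e, _sub(p[l], p[i]))
        c = _dot(n1, n2) / math.sqrt(_dot(n1, n1) * _dot(n2, n2))
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, c)))))
    return angles
```

A regular tetrahedron gives six identical angles of arccos(1/3), roughly 70.5 degrees; quality-bounded meshing keeps every tet's angles away from the degenerate extremes near 0 and 180 degrees.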
