HP Vertica

Cambridge, MA, United States


Pedersen T.B.,University of Aalborg | Castellanos M.,HP Vertica | Dayal U.,Hitachi Labs
SIGMOD Record | Year: 2014

The article reports on the 7th International Workshop on Business Intelligence for the Real Time Enterprise (BIRTE 2013), co-located with the VLDB 2013 conference. The BIRTE workshop series aims to provide a forum for presenting the latest research results, new technology developments, and new applications in the areas of business intelligence and real-time enterprises. The compelling applications discussed included CRM, brand sentiment, predictive maintenance, network optimization, security, fraud detection, text analytics, and smart content navigation, the last two in an SAP paper. Major issues are discovering trends early and finding outliers. A new class of applications concerns cyber-physical systems that produce huge amounts of data and events; one such emerging system is the smart grid.


Mior M.J.,University of Waterloo | Salem K.,University of Waterloo | Aboulnaga A.,Qatar Computing Research Institute | Liu R.,HP Vertica
2016 IEEE 32nd International Conference on Data Engineering, ICDE 2016 | Year: 2016

Database design is critical for high performance in relational databases, and many tools exist to aid application designers in selecting an appropriate schema. While the problem of schema optimization is also highly relevant for NoSQL databases, existing tools for relational databases are inadequate for this setting. Application designers wishing to use a NoSQL database instead rely on rules of thumb to select an appropriate schema. We present a system for recommending database schemas for NoSQL applications. Our cost-based approach uses a novel binary integer programming formulation to guide the mapping from the application's conceptual data model to a database schema. We implemented a prototype of this approach for the Cassandra extensible record store. Our prototype, the NoSQL Schema Evaluator (NoSE), is able to capture rules of thumb used by expert designers without explicitly encoding the rules. Automating the design process allows NoSE to produce efficient schemas and to examine more alternatives than would be possible with a manual rule-based approach. © 2016 IEEE.
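The abstract's core idea — schema selection as a 0/1 optimization with coverage constraints — can be illustrated with a toy sketch. This is not NoSE's actual formulation; the column families, costs, and queries below are invented, and a brute-force enumeration stands in for a real integer programming solver:

```python
from itertools import product

# Toy sketch (not NoSE's actual model): pick a subset of candidate
# column families (0/1 decision variables) that answers every query
# while minimizing total storage cost, in the spirit of a binary
# integer programming formulation. All names and numbers are illustrative.
candidates = {               # column family -> (storage cost, queries it serves)
    "users_by_id":   (10, {"q1"}),
    "posts_by_user": (25, {"q2", "q3"}),
    "posts_by_tag":  (30, {"q3"}),
}
queries = {"q1", "q2", "q3"}

def best_schema(candidates, queries):
    best = None
    names = list(candidates)
    # Enumerate all 0/1 assignments; a real solver would branch and bound.
    for bits in product([0, 1], repeat=len(names)):
        chosen = [n for n, b in zip(names, bits) if b]
        served = set().union(*(candidates[n][1] for n in chosen)) if chosen else set()
        if not queries <= served:
            continue  # coverage constraint violated: some query unanswered
        cost = sum(candidates[n][0] for n in chosen)
        if best is None or cost < best[0]:
            best = (cost, chosen)
    return best

cost, schema = best_schema(candidates, queries)
print(cost, sorted(schema))  # the cheapest covering subset of column families
```

The coverage constraint plays the role of "every workload query must be answerable from some materialized structure"; the objective trades that off against storage, which is the flavor of trade-off a cost-based designer automates.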


Tran N.,HP Vertica | Lamb A.,Nutonian Inc. | Shrinivas L.,Nutonian Inc. | Bodagala S.,HP Vertica | Dave J.,HP Vertica
Proceedings - International Conference on Data Engineering | Year: 2014

The Vertica SQL Query Optimizer was written from the ground up for the Vertica Analytic Database. Its design and the tradeoffs we encountered during its implementation argue that the full power of novel database systems can only be realized with a carefully crafted custom Query Optimizer written specifically for the system in which it operates. © 2014 IEEE.


Arlitt M.,Hewlett-Packard | Marwah M.,Hewlett-Packard | Bellala G.,Hewlett-Packard | Shah A.,Hewlett-Packard | And 2 more authors.
ICPE 2015 - Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering | Year: 2015

The commoditization of sensors and communication networks is enabling vast quantities of data to be generated by and collected from cyber-physical systems. This "Internet-of-Things" (IoT) makes possible new business opportunities, from usage-based insurance to proactive equipment maintenance. While many technology vendors now offer "Big Data" solutions, a challenge for potential customers is understanding quantitatively how these solutions will work for IoT use cases. This paper describes a benchmark toolkit called IoTAbench for IoT Big Data scenarios. This toolset facilitates repeatable testing that can be easily extended to multiple IoT use cases, including a user's specific needs, interests or dataset. We demonstrate the benchmark via a smart metering use case involving an eight-node cluster running the HP Vertica analytics platform. The use case involves generating, loading, repairing and analyzing synthetic meter readings. The intent of IoTAbench is to provide the means to perform "apples-to-apples" comparisons between different sensor data and analytics platforms. We illustrate the capabilities of IoTAbench via a large experimental study, where we store 22.8 trillion smart meter readings totaling 727 TB of data in our eight-node cluster. Copyright © 2015 ACM.
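The "generating" step of the use case can be pictured with a small sketch. This is not IoTAbench's actual generator; the 15-minute cadence, per-meter baseline, and noise model are assumptions chosen only to show the shape of the data (meter id, timestamp, reading) that a benchmark would bulk-load:

```python
import random
from datetime import datetime, timedelta

# Illustrative sketch (not the IoTAbench generator): produce synthetic
# smart-meter readings as (meter_id, ISO timestamp, kWh) rows, the kind
# of data a load test would bulk-insert into an analytics cluster.
def generate_readings(n_meters, start, intervals, seed=0):
    rng = random.Random(seed)  # seeded so benchmark runs are repeatable
    rows = []
    for meter in range(n_meters):
        base = rng.uniform(0.2, 1.5)                 # per-meter baseline load
        for i in range(intervals):
            ts = start + timedelta(minutes=15 * i)   # assumed 15-minute cadence
            kwh = max(0.0, base + rng.gauss(0, 0.1)) # baseline plus noise
            rows.append((meter, ts.isoformat(), round(kwh, 3)))
    return rows

rows = generate_readings(3, datetime(2015, 1, 1), 4)
print(len(rows))  # 3 meters x 4 intervals = 12 readings
```

Seeding the generator is what makes "repeatable testing" possible: two runs with the same seed produce byte-identical input, so performance differences can be attributed to the platform rather than the data.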


Tran N.,HP Vertica | Bodagala S.,HP Vertica | Dave J.,HP Vertica
Proceedings of the VLDB Endowment | Year: 2013

The Vertica SQL Query Optimizer was written from the ground up for the Vertica Analytic Database. Its design, and the tradeoffs we encountered during implementation, support the case that the full power of novel database systems can be realized only with a custom Query Optimizer, carefully crafted exclusively for the system in which it operates. © 2013 VLDB Endowment.


Prasad S.,HP Vertica | Fard A.,HP Vertica | Gupta V.,HP Vertica | Martine J.,HP Vertica | And 4 more authors.
Proceedings of the ACM SIGMOD International Conference on Management of Data | Year: 2015

A typical predictive analytics workflow will pre-process data in a database, transfer the resulting data to an external statistical tool such as R, create machine learning models in R, and then apply the model on newly arriving data. Today, this workflow is slow and cumbersome. Extracting data from databases using ODBC connectors can take hours on multi-gigabyte datasets. Building models on single-threaded R does not scale. Finally, it is nearly impossible to use R, or other common tools, to apply models on terabytes of newly arriving data. We solve all the above challenges by integrating HP Vertica with Distributed R, a distributed framework for R. This paper presents the design of a high performance data transfer mechanism, new data structures in Distributed R to maintain data locality with database table segments, and extensions to Vertica for saving and deploying R models. Our experiments show that data transfers from Vertica are 6× faster than using ODBC connections. Even complex predictive analysis on 100s of gigabytes of database tables can complete in minutes, and is as fast as in-memory systems like Spark running directly on a distributed file system.


Chen S.,Carnegie Mellon University | Varma R.,Carnegie Mellon University | Sandryhaila A.,HP Vertica | Kovacevic J.,Carnegie Mellon University
IEEE Transactions on Signal Processing | Year: 2015

We propose a sampling theory for signals that are supported on either directed or undirected graphs. The theory follows the same paradigm as classical sampling theory. We show that perfect recovery is possible for graph signals bandlimited under the graph Fourier transform. The sampled signal coefficients form a new graph signal, whose corresponding graph structure preserves the first-order difference of the original graph signal. For general graphs, an optimal sampling operator based on experimentally designed sampling is proposed to guarantee perfect recovery and robustness to noise; for graphs whose graph Fourier transforms are frames with maximal robustness to erasures as well as for Erdős-Rényi graphs, random sampling leads to perfect recovery with high probability. We further establish the connection to the sampling theory of finite discrete-time signal processing and previous work on signal recovery on graphs. To handle full-band graph signals, we propose a graph filter bank based on sampling theory on graphs. Finally, we apply the proposed sampling theory to semi-supervised classification of online blogs and digit images, where we achieve similar or better performance with fewer labeled samples compared to previous work. © 1991-2012 IEEE.
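The central claim — perfect recovery of a bandlimited graph signal from samples on a well-chosen node subset — can be checked numerically on a tiny example. This sketch is not the paper's construction; it uses the path graph on N nodes, whose Laplacian eigenvectors are the standard DCT-II vectors, keeps only the first K graph frequencies, and inverts the sampled basis by hand:

```python
import math

# Minimal numerical sketch of the recovery premise: a K-bandlimited graph
# signal is determined by its values on K nodes whose sampled basis
# submatrix is invertible. Graph: path on N nodes; basis: DCT-II vectors
# (the path graph's Laplacian eigenvectors). Node choices are illustrative.
N, K = 4, 2

def gft_vector(k, n):
    # k-th Laplacian eigenvector of the path graph, evaluated at node n
    c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    return c * math.cos(math.pi * k * (n + 0.5) / N)

# Bandlimited signal: a combination of the first K basis vectors.
coeffs = [1.0, -0.5]
x = [sum(coeffs[k] * gft_vector(k, n) for k in range(K)) for n in range(N)]

# Sample K nodes (here nodes 0 and 2) and invert the 2x2 sampled basis.
S = [0, 2]
a, b = gft_vector(0, S[0]), gft_vector(1, S[0])
c, d = gft_vector(0, S[1]), gft_vector(1, S[1])
det = a * d - b * c                      # nonzero for this node choice
y0, y1 = x[S[0]], x[S[1]]
rec_coeffs = [(d * y0 - b * y1) / det,   # Cramer's rule on the 2x2 system
              (a * y1 - c * y0) / det]

# Re-synthesize the full signal from the recovered spectral coefficients.
x_rec = [sum(rec_coeffs[k] * gft_vector(k, n) for k in range(K)) for n in range(N)]
print(max(abs(u - v) for u, v in zip(x, x_rec)))  # ~0: perfect recovery
```

The paper's "optimal sampling operator" amounts to choosing the sample set S so that the sampled basis is as well-conditioned as possible, which also gives the robustness to noise mentioned above.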


Chen S.,Carnegie Mellon University | Sandryhaila A.,HP Vertica | Moura J.M.F.,Carnegie Mellon University | Kovacevic J.,Carnegie Mellon University
2014 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2014 | Year: 2014

Signal recovery from noisy measurements is an important task that arises in many areas of signal processing. In this paper, we consider this problem for signals represented with graphs using a recently developed framework of discrete signal processing on graphs. We formulate graph signal denoising as an optimization problem and derive an exact closed-form solution expressed by an inverse graph filter, as well as an approximate iterative solution expressed by a standard graph filter. We evaluate the obtained algorithms by applying them to measurement denoising for temperature sensors and opinion combination for multiple experts. © 2014 IEEE.
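A common form of the closed-form denoiser the abstract describes is the minimizer of ||x − y||² + α·xᵀLx, namely x* = (I + αL)⁻¹y, an inverse graph filter. The sketch below works this out on the smallest possible graph (two nodes, one edge) so the inverse can be computed by hand; the graph, α, and measurements are illustrative, not taken from the paper:

```python
# Tiny sketch of a closed-form graph denoiser: minimizing
#   ||x - y||^2 + alpha * x^T L x
# over a graph with Laplacian L gives x* = (I + alpha*L)^(-1) y,
# i.e. an inverse graph filter. Two-node graph so everything is 2x2.
alpha = 1.0
y = [1.0, 3.0]                       # noisy measurements at the two nodes
L = [[1.0, -1.0], [-1.0, 1.0]]       # Laplacian of a single edge

# A = I + alpha * L
A = [[1 + alpha * L[0][0], alpha * L[0][1]],
     [alpha * L[1][0], 1 + alpha * L[1][1]]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# Solve A x = y via Cramer's rule (stand-in for the inverse graph filter).
x = [(A[1][1] * y[0] - A[0][1] * y[1]) / det,
     (A[0][0] * y[1] - A[1][0] * y[0]) / det]
print(x)  # node values pulled toward each other: smoothing along the edge
```

The quadratic penalty xᵀLx charges for differences across edges, so the solution pulls neighboring values together while staying close to the measurements; the paper's iterative variant approximates this inverse with a standard (polynomial) graph filter, which avoids matrix inversion.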


Chen S.,Carnegie Mellon University | Sandryhaila A.,HP Vertica | Kovacevic J.,Carnegie Mellon University
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings | Year: 2015

We present a distributed and decentralized algorithm for graph signal inpainting. Previous work obtained a closed-form solution requiring matrix inversion. In this paper, we ease the computation by using a distributed algorithm, which solves graph signal inpainting while restricting each node to communicate only with its neighboring nodes. We show that the solution of the distributed algorithm converges to the closed-form solution, and we characterize the corresponding convergence speed. Experiments on online blog classification and temperature prediction suggest that the convergence speed of the proposed distributed algorithm is competitive with that of the centralized algorithm, especially when the graph tends to be regular. Since a distributed algorithm does not require collecting data at a central node, it is more practical and efficient. © 2015 IEEE.
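The flavor of "inpainting with only neighbor communication" can be shown with a simplified stand-in for the paper's algorithm: unlabeled nodes repeatedly replace their value with the average of their neighbors' values while labeled nodes stay fixed (harmonic interpolation). The graph and labels below are invented for illustration:

```python
# Simplified sketch in the spirit of distributed graph signal inpainting:
# each unlabeled node iteratively averages its neighbors' values; labeled
# nodes are clamped. Every update uses only local (neighbor) communication.
# This is a stand-in for the paper's algorithm, not its exact iteration.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a 4-node path graph
known = {0: 0.0, 3: 3.0}                        # labeled boundary nodes

x = {n: known.get(n, 0.0) for n in adj}         # unlabeled start at 0
for _ in range(200):                            # iterate to convergence
    new = dict(x)
    for n in adj:
        if n not in known:                      # only unlabeled nodes update
            new[n] = sum(x[m] for m in adj[n]) / len(adj[n])
    x = new

print([round(x[n], 3) for n in sorted(x)])  # linear interpolation on the path
```

On a path graph the fixed point is a linear ramp between the labeled endpoints, which makes the convergence easy to eyeball; the paper's contribution is proving convergence of its distributed iteration to the centralized closed-form solution and quantifying the rate.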


News Article | February 16, 2017
Site: www.24-7pressrelease.com

SAN FRANCISCO, CA, February 16, 2017 /24-7PressRelease/ -- Today, DataVirtuality, a data integration company specialized in building agile data infrastructures, announced the launch of Pipes. Within five minutes, developers can now access data from more than 150 databases and APIs and securely schedule data imports to analytical storages such as Amazon Redshift, Google BigQuery, MySQL, Exasol or HP Vertica. No coding or maintenance of APIs is required.

"After years of helping companies with data integration, Pipes is a big step towards our vision to enable any developer in any small, medium or large business to build a secure, agile and scalable data infrastructure in minutes," said DataVirtuality CEO and founder Nick Golovin. "Starting in the low hundred Euros per month, developers can instantly integrate any data without the need to manually build data pipelines and maintain APIs. Pipes is a huge time and cost saver, helping companies on their way to becoming data champions."

Pipes is not only a quick and easy way to build a data infrastructure; it is also highly scalable. While it offers Business Intelligence starters a powerful solution to consolidate data, it can be upgraded to a fully managed data warehouse solution. Upgraded, users can access data in real time, model data in a virtual data layer and define their own extraction logic. When it comes to data security, Pipes has a unique offering: hosting can be provided locally or in the Amazon Web Services (AWS) cloud, hosted in the USA or Europe (Germany). DataVirtuality does not store customers' data.

Learn more about DataVirtuality Pipes at: http://datavirtuality.com/products/pipes/

ABOUT DATAVIRTUALITY
As the fastest-growing big data company in Germany, DataVirtuality enables companies worldwide to instantly integrate a huge variety of data sources and APIs and analyze data in real time. With 150+ ready-to-use connectors, DataVirtuality maximizes performance with minimal administrative effort. Customers include globally operating corporations and digital businesses. For its innovative business services, DataVirtuality was named a Gartner Cool Vendor.
