San Diego, CA, United States

News Article | June 14, 2017

In the report, Forrester evaluated Teradata among twelve RTIM vendors and had this to say: "We spoke with customer references with tens to hundreds of millions of customer records who have used Teradata to power data-driven CRM and/or RTIM for many years, and they are actively implementing or piloting the new journey capabilities. One financial services reference described Teradata as the 'real-time brain and customer memory' that powers RTIM across digital channels, contact centers, and bank branches."

Forrester believes that the RTIM market is growing because marketing and customer experience professionals "see it as a way to address expectations for personalized customer experiences … Marketers increasingly trust RTIM providers to act as strategic partners, advising them on key enterprise marketing technology (EMT) investments."

"Our customers know that Real Time Interaction Management is the fast lane to business value because it gives marketers immediate visibility into critical moments throughout the shopping experience," said Chris Twogood, senior vice president, marketing, Teradata. "I believe that Teradata's position as a leader in Forrester's 2017 evaluation makes clear that our Customer Journey consultants and technologies are trusted in the analytic science of optimizing customer engagements, relationships, and profitable business growth. That's why Teradata is considered a strategic partner by our customers."

Marketing teams are increasingly relying on predictive analytics, AI, and real-time decisioning to maximize customer satisfaction and engagement, personalize offers, and align shopper behavior with business objectives. They are collaborating with CIO organizations to integrate data, refine processes, exploit the full range of analytics approaches, and even reshape entire business models to enhance customer experience.
This makes Teradata's Customer Journey Solution an ideal fit, as it combines technology with consulting services to provide marketers critical revenue-boosting analytic insights. Teradata continues to enhance the solution, incorporating deep expertise in data integration, advanced multi-genre analytics, and cross-channel orchestration.

About Teradata
Teradata empowers companies to achieve high-impact business outcomes. Our focus on business solutions for analytics, coupled with our industry-leading technology and architecture expertise, can unleash the potential of great companies. Teradata, Aster, and the Teradata logo are registered trademarks of Teradata Corporation and/or its affiliates in the U.S. and worldwide.

News Article | July 13, 2017

ATLANTA, July 13, 2017 /PRNewswire/ -- Teradata Corp. (NYSE: TDC) today announced that it will release its 2017 second quarter financial results before the market opens on Thursday, July 27, 2017. Teradata will host a conference call and live web broadcast at 8:30 a.m. ET the same day to discuss the results. The live web broadcast and replay will be available on the Teradata website at investor.teradata.com.

In addition to technology assets, the acquisition also includes StackIQ's talented team of engineers, who will join Teradata's R&D organization to help accelerate the company's ability to automate software deployment in operations, engineering, and end-user customer ecosystems.

"Teradata prides itself on building and investing in solutions that make life easier for our customers," said Oliver Ratzesberger, Executive Vice President and Chief Product Officer for Teradata. "Only the best, most innovative and applicable technology is added to our ecosystem, and StackIQ delivers with products that excel in their field. Adding StackIQ technology to IntelliFlex, IntelliBase and IntelliCloud will strengthen our capabilities and enable Teradata to redefine how systems are deployed and managed globally."

"Our incredibly high standards also apply to the people we hire," continued Ratzesberger. "As Teradata continues to expand its engineering (R&D) skills to drive ongoing technology innovation, we are seeking qualified, talented individuals to join our team. Once again, StackIQ has set the bar with stellar engineers who we are honored to now call Teradata employees."

Under terms of the deal, Teradata will now own StackIQ's unique IP that automates and accelerates software deployment across large clusters of servers (both physical and virtual/in the cloud). This increase in automation will occur across all Teradata Everywhere deployments, dramatically reducing build and delivery times for complex business analytics solutions and adding the capability to manage software-only "appliances" across hybrid cloud infrastructure. The speed of Teradata's new integrated solution also allows for rapid re-provisioning of internal test or benchmarking hardware, as well as swift redeployment between technologies to match a customer's changing workload requirements.
"Joining Teradata, the market leader in analytic data solutions, truly validates the importance of StackIQ's engineering and the talent we have cultivated over the years," said Tim McIntire, Co-Founder at StackIQ. "We are looking forward to bringing a bit of San Diego's start-up culture to Teradata, and working together to simplify Teradata's customer experience for system software deployment and upgrades." The terms of the acquisition agreement were not disclosed.

About Teradata
Teradata helps companies achieve high-impact business outcomes. With a portfolio of business analytics solutions, architecture consulting, and industry-leading big data and analytics technology, Teradata unleashes the potential of great companies. Teradata and the Teradata logo are trademarks or registered trademarks of Teradata Corporation and/or its affiliates in the U.S. and worldwide.

Xu Y.,Teradata | Kostamaa P.,Teradata | Qi Y.,Teradata | Wen J.,University of California at Riverside | Zhao K.K.,University of California at San Diego
Proceedings of the ACM SIGMOD International Conference on Management of Data | Year: 2011

One critical part of building and running a data warehouse is the ETL (Extraction, Transformation, Loading) process. In fact, the growing ETL tool market is already a multi-billion-dollar market. Getting data into data warehouses has been a factor hindering wider potential database applications such as scientific computing, as discussed in recent panels at various database conferences. One particular problem with current load approaches to data warehouses is that, while data are partitioned and replicated across all nodes in data warehouses powered by parallel DBMSs (PDBMS), load utilities typically reside on a single node, which raises the issues of: i) data loss/availability if that node or its hard drives crash; ii) the file size limit on a single node; iii) load performance. All of these issues are mostly handled manually or only partially mitigated by tools. We notice that one thing Hadoop and the Teradata Enterprise Data Warehouse (EDW) have in common is that data in both systems are partitioned across multiple nodes for parallel computing, which creates parallel loading opportunities not possible for DBMSs running on a single node. In this paper we describe our approach of using Hadoop as a distributed load strategy for Teradata EDW. We use Hadoop as the intermediate load server to store data to be loaded into Teradata EDW. We gain all the benefits of HDFS (Hadoop Distributed File System): i) significantly increased disk space for the file to be loaded; ii) once the data is written to HDFS, the data sources need not keep the data, even before the file is loaded to Teradata EDW; iii) MapReduce programs can be used to transform and add structure to unstructured or semi-structured data; iv) most importantly, since a file is distributed in HDFS, it can be loaded to Teradata EDW more quickly in parallel, which is the main focus of this paper.
When both Hadoop and Teradata EDW coexist on the same hardware platform, as is increasingly required by customers because of reduced hardware and system administration costs, we have a further optimization opportunity: directly loading HDFS data blocks to Teradata parallel units on the same nodes. However, due to the inherently non-uniform data distribution in HDFS, we can rarely avoid transferring some HDFS blocks to remote Teradata nodes. We designed a polynomial-time optimal algorithm and a polynomial-time approximate algorithm to assign HDFS blocks to Teradata parallel units evenly while minimizing network traffic. We performed experiments on synthetic and real data sets to compare the performance of the algorithms. © 2011 ACM.
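The abstract does not give the assignment algorithms themselves, but the block-placement problem it describes can be illustrated with a simple greedy heuristic. The sketch below is hypothetical (the names and logic are illustrative, not the paper's polynomial-time algorithms): prefer a parallel unit on a node that already holds a replica of the block, falling back to a remote unit only when no local one exists, while balancing the number of blocks per unit.

```python
# Hypothetical greedy sketch of HDFS-block-to-parallel-unit assignment.
# Goal mirrors the abstract: assign blocks evenly across units while
# preferring node-local placement to minimize network transfers.

from collections import defaultdict

def assign_blocks(block_replicas, units_per_node, nodes):
    """block_replicas: dict block_id -> set of node ids holding a replica.
    Returns dict unit -> list of block ids, where unit = (node, slot)."""
    units = [(n, s) for n in nodes for s in range(units_per_node)]
    load = {u: 0 for u in units}
    assignment = defaultdict(list)
    # Handle the most constrained blocks (fewest replicas) first.
    for blk, replicas in sorted(block_replicas.items(),
                                key=lambda kv: len(kv[1])):
        local = [u for u in units if u[0] in replicas]
        # Prefer the least-loaded local unit; a non-local fallback is
        # exactly the remote transfer the paper tries to minimize.
        candidates = local if local else units
        best = min(candidates, key=lambda u: load[u])
        assignment[best].append(blk)
        load[best] += 1
    return dict(assignment)
```

A real assignment would also weigh block sizes and AMP-level capacity; this sketch only captures the locality-versus-balance trade-off the abstract describes.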

News Article | November 14, 2016

SAN DIEGO, Nov. 14, 2016 /PRNewswire/ -- Teradata Corp. (NYSE: TDC) ("Teradata") will host its previously announced Analyst Day for financial analysts and institutional investors at its research and development facility in Rancho Bernardo, California, on Thursday, November 17, 2016 from 8...

SAN DIEGO, Feb. 28, 2017 /PRNewswire/ -- Teradata is positioned as a leader in the Gartner, Inc. 2017 Magic Quadrant for Data Management Solutions for Analytics1 (DMSA) issued February 20, 2017, by Gartner analysts Roxane Edjlali, Adam M. Ronthal, Rick Greenwald, Mark A. Beyer, and Donald...

News Article | November 10, 2016

SAN DIEGO, Nov. 10, 2016 /PRNewswire/ -- Teradata (NYSE: TDC), a leading analytics solutions company, has announced that Teradata Everywhere™, introduced in September, is the winner of the 2016 Ventana Research Technology Innovation Award for Information Management. The Ventana awards,...

SAN DIEGO, Nov. 14, 2016 /PRNewswire/ -- Teradata (NYSE: TDC), a leading analytics solutions company, today announced the immediate availability of Teradata Consulting and Managed Services for Amazon Web Services (AWS), increasing the company's ability to accelerate positive business...

Xu Y.,Teradata | Kostamaa P.,Teradata
Proceedings - International Conference on Data Engineering | Year: 2010

Large enterprises have been relying on parallel database management systems (PDBMS) to process their ever-increasing data volumes and complex queries. Business intelligence tools used by enterprises frequently generate a large number of outer joins and require high performance from the underlying database systems. A common type of outer join in business applications is the small-large table outer join studied in this paper, where one table is relatively small and the other is large. We present an efficient and easy-to-implement algorithm called DER (Duplication and Efficient Redistribution) for small-large table outer joins. Our experimental results show that the DER algorithm significantly reduces query elapsed time and scales linearly. © 2010 IEEE.
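The abstract names DER but does not spell out its steps, so the sketch below is only a rough, single-process illustration of the duplication idea behind small-large table outer joins (the function name and details are hypothetical): broadcast the small table to every partition of the large table, join locally, and NULL-pad each small-table row that matched on no partition.

```python
# Illustrative sketch of a small-side outer join over partitioned data.
# The small table is duplicated to every partition (as in DER's
# "Duplication"); only the unmatched small-row ids would need global
# reconciliation, which is cheap because that table is small.

def small_large_outer_join(small, large_partitions,
                           key_small, key_large, large_width):
    """Small-side outer join: every row of `small` is preserved.
    small: list of tuples; large_partitions: list of row lists;
    large_width: number of columns in a large-table row."""
    matched = set()
    out = []
    for part in large_partitions:          # in a PDBMS, one partition per node
        index = {}
        for lrow in part:                  # hash the local large partition
            index.setdefault(lrow[key_large], []).append(lrow)
        for i, srow in enumerate(small):   # duplicated small table
            for lrow in index.get(srow[key_small], []):
                out.append(srow + lrow)
                matched.add(i)
    # Pad small rows that matched in no partition.
    for i, srow in enumerate(small):
        if i not in matched:
            out.append(srow + (None,) * large_width)
    return out
```

The point of the duplication strategy is that the large table never moves: only the small table is broadcast, and outer-join semantics are recovered by reconciling the small set of unmatched row ids.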

Xu Y.,Teradata | Kostamaa P.,Teradata | Gao L.,Teradata
Proceedings of the ACM SIGMOD International Conference on Management of Data | Year: 2010

Teradata's parallel DBMS has been successfully deployed in large data warehouses over the last two decades for large-scale business analysis across various industries, over data sets ranging from a few terabytes to multiple petabytes. However, due to the explosive growth of data volumes in recent years at some customer sites, some data, such as web logs and sensor data, are not managed by the Teradata EDW (Enterprise Data Warehouse), partially because it is very expensive to load such extremely large volumes of data into an RDBMS, especially when those data are not frequently used to support important business decisions. Recently the MapReduce programming paradigm, started by Google and made popular by the open-source Hadoop implementation with major support from Yahoo!, has gained rapid momentum in both academia and industry as another way of performing large-scale data analysis. By now most data warehouse researchers and practitioners agree that both the parallel DBMS and MapReduce paradigms have advantages and disadvantages for various business applications, and thus both paradigms will coexist for a long time [16]. In fact, a large number of Teradata customers, especially those in the e-business and telecom industries, have seen increasing needs to perform BI over both data stored in Hadoop and data in Teradata EDW. One thing Hadoop and Teradata EDW have in common is that data in both systems are partitioned across multiple nodes for parallel computing, which creates integration optimization opportunities not possible for DBMSs running on a single node. In this paper we describe our three efforts towards a tight and efficient integration of Hadoop and Teradata EDW. Copyright 2010 ACM.
