Hortonworks is a business computer software company based in Palo Alto, California. The company focuses on the development and support of Apache Hadoop, a framework that allows for the distributed processing of large data sets across clusters of computers.
News Article | May 8, 2017
Big data promises much in terms of business value, but it can be difficult for businesses to determine how to deploy the architecture and tools needed to take advantage of it. Everything from descriptive statistics to predictive modeling to artificial intelligence is powered by big data, and what an organization wants to accomplish with big data will determine the tools it needs to roll out.

At the 2017 Dell EMC World conference on Monday, Cory Minton, a principal systems engineer for data analytics at Dell EMC, gave a presentation explaining the biggest decisions an organization must make when deploying big data. Here are six questions that every business must ask before getting started in the space.

1. Buy or build? The first question to ask is whether your organization wants to buy a big data system or build one from scratch. Popular products from Teradata, SAS, SAP, and Splunk can be bought and simply implemented, while Hortonworks, Cloudera, Databricks, and Apache Flink can be used to build out a big data system. Buying offers a shorter time to value, Minton said, as well as simplicity and good value for commodity use cases. However, that simplicity usually comes with a higher price, and these tools usually work best with low-diversity data. If your organization has an existing relationship with a vendor, it can be easier to phase in new products and try out big data tools. Many of the popular tools for building a big data system are cheap or free to use, and they make it easier to capitalize on a unique value stream. The building path provides opportunities for massive scale and variety, but these tools can be very complex, and interoperability is often one of the biggest issues faced by admins who go this route.

2. Batch or streaming? Batch products such as Oracle, Hadoop MapReduce, and Apache Spark are descriptive and can handle large volumes of data, Minton said. They can also be scheduled, and are often used to build out a playground of sorts for data scientists to experiment in. Products like Apache Kafka, Splunk, and Flink provide streaming data capabilities that can be captured to create potentially predictive models. With streaming data, speed trumps data fidelity, Minton said, but streaming also offers massive scale and variety, and it is more useful for organizations that subscribe to DevOps culture.

3. Lambda or kappa architecture? Twitter is one example of lambda architecture: data is split into two paths, one of which is fed to a speed layer for quick insights, while the other leads to batch and serving layers. Minton said this model gives an organization access to both batch and streaming insights and balances lossy streams well. The challenge, he said, is that you have to manage two code and application bases. Kappa architecture treats everything as a stream, but one that aims to maintain data fidelity and process in real time: all data is written to an immutable log that changes are checked against. It is hardware-efficient, requires less code, and is the model Minton recommends for an organization that is starting fresh with big data.

4. Public or private cloud? Both require many of the same considerations. For starters, an organization must consider which environment its talent is most comfortable working in. Data provenance, security and compliance needs, and elastic consumption models should also be weighed.

5. Virtual or physical infrastructure? Years ago, this debate was much more heated, Minton said. However, virtualization has grown competitive enough with physical hardware that the two have become similar as far as big data deployments are concerned. It boils down to what your administrators are more comfortable with and what works with your existing infrastructure.

6. DAS or NAS? Direct-attached storage (DAS) used to be the only way to deploy a Hadoop cluster, Minton said. Now that IP networks have increased their bandwidth, network-attached storage (NAS) has become feasible for big data as well. With DAS, it is easy to get started, the model works well with software-defined concepts, it is built to handle linear growth in performance and storage, and it does well with streaming data. NAS handles multi-protocol needs well, provides efficiency at scale, and can address security and compliance needs.
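The lambda and kappa patterns Minton contrasts come down to where recomputation happens. As a hedged, toy sketch (hypothetical class and event names, not Dell EMC's or Twitter's actual code), a lambda split keeps an immutable master log for batch recomputation alongside an incrementally updated speed view:

```python
from collections import defaultdict

class LambdaSketch:
    """Toy illustration of a lambda split: every event is appended to an
    immutable master log (batch path) and simultaneously folded into a
    fast, incrementally maintained speed-layer view."""

    def __init__(self):
        self.master_log = []                 # append-only source of truth (batch layer)
        self.speed_view = defaultdict(int)   # updated in place, potentially lossy

    def ingest(self, user, clicks):
        self.master_log.append((user, clicks))   # batch layer keeps everything
        self.speed_view[user] += clicks          # speed layer answers instantly

    def batch_view(self):
        # Batch layer periodically recomputes the truth from the full log.
        view = defaultdict(int)
        for user, clicks in self.master_log:
            view[user] += clicks
        return dict(view)

events = [("alice", 3), ("bob", 1), ("alice", 2)]
pipeline = LambdaSketch()
for user, clicks in events:
    pipeline.ingest(user, clicks)

print(pipeline.speed_view["alice"])   # 5 — available immediately after each event
print(pipeline.batch_view())          # {'alice': 5, 'bob': 1} — recomputed from the log
```

A kappa design would drop the separate speed view and derive every serving view by replaying the same immutable log through a single stream processor, which is why it needs only one codebase.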
News Article | May 11, 2017
Intel has backed some notable companies over the years - investing in Red Hat and VMware - two firms that helped effect major shifts in the IT industry. The chipmaker is hoping Cloudera will generate similar momentum in the field of big-data analytics, and in doing so open new avenues for growth in a stagnant enterprise IT market. To this end, Intel has invested $740m in Cloudera, giving it an 18 percent stake in the company.

Cloudera builds and supports tools that run on top of Apache Hadoop, the open-source software framework that allows data to be processed by clusters of commodity hardware for data warehousing and big-data analytics. Cloudera's distribution of Hadoop (CDH) and its subscription offering, Cloudera Enterprise, include various integrated tools to help businesses store and analyse data in Hadoop clusters, offering improved security and availability. Cloudera provides software to support real-time SQL and search-engine queries, machine learning, security, and stream and batch data processing, as well as to manage Hadoop clusters.

The firm is one of several competing to offer the Hadoop distribution of choice for businesses. Each of the companies behind major Hadoop distributions - Hortonworks, IBM, MapR and Pivotal - provides different tools to manage, secure and exploit data stored on Hadoop clusters. But usage figures indicate that Cloudera's distribution is the most popular.

Intel had released its own distribution of Hadoop, but this will now be withdrawn. Intel engineers will instead work on Cloudera's distro, which will be enhanced with features from Intel's platform. While analysts estimate that Cloudera's paying user base may be tiny at present - about 350-strong and growing at about 50 new customers per quarter - Intel said it is buying into future potential.

"It's not really a technology play but it really is about overall business value. If you look at Intel's datacentre business over the past few years, the cloud service provider segment, the telecommunications and even the high-performance computing segments have all grown quite handsomely. But the enterprise segment has been a little bit stagnant," said Boyd Davis, general manager of Intel's datacentre software division. "What you see with big data is a different phenomenon occurring. It's injecting more investment into the IT world because there's such huge business value that gets derived from it, and that's the way I expect to see dramatic growth in our business."

But why did Intel decide against exploiting that growth with its own Hadoop distribution, choosing instead to back Cloudera? Davis said Intel wanted to boost Cloudera's already strong standing in the Hadoop market and reassure companies unsure which distribution to deploy that Cloudera will be a good long-term investment. "The Hadoop ecosystem is still relatively nascent, when you compare it with the $100bn data-management market, and it's really important for us to take the risk out for customers," he said. "Enterprises like to know this is the right path, so they don't have to sit on the sidelines and wait to see how the market plays out. That was important for us as well because we want to see this market grow."

Intel is now Cloudera's largest strategic investor, a category Cloudera defines as investors with "alignment between corporate initiatives". The $740m investment by Intel was preceded by a cash injection of $160m into Cloudera by a variety of firms, including Google's investment arm. About 60 percent of the combined $900m investment will end up in Cloudera's pockets, according to Cloudera CEO Tom Reilly, as some of the money will go to existing investors in Cloudera. "We've raised more than half a billion dollars that goes into Cloudera," Reilly said.

Initially, Cloudera will use the funding to help organisations move from Intel's Hadoop distribution to its own.
"We're hiring up engineers on our side to interface and integrate with Intel's engineering team, so we have the staff to support the partnership on the technical side of things," Reilly said. "We're going to be transitioning all Intel customers to our new distribution, which combines the best of our distributions."

Reilly sees the partnership with Intel as a springboard to accelerate Cloudera's ambitions for global expansion. "Intel has a tremendous presence in China and India. The next thing we're going to do is to staff up and build up resources in those geographies to support the customers and continue to grow those big markets."

Both firms plan to increase their contributions to open-source projects related to Hadoop, with Reilly expressing interest in projects focused on in-memory processing, such as Apache Spark, and security. Finally, the company will also use the money to help it acquire companies, "to accelerate our growth", according to Reilly. The company still plans to go public, but Reilly said it is not "setting an expectation" as to when an IPO might occur.

Unsurprisingly, Intel's investment will result in engineers from both companies focusing on optimising Cloudera's toolset, as well as the core open-source Hadoop platform, to run on Intel's 64-bit x86 chip architecture. "Hadoop will continue to work on all platforms, but the optimisations will occur on Intel sooner and faster," Reilly said. "Intel has 94 percent market share in the datacentre. We believe the Intel platform is going to outperform other platforms."

The stance is something of a departure from a public statement made by a co-founder of Cloudera last year, when the company's CTO praised low-power ARM chips for being more efficient than competing silicon from other companies. In a discussion about ARM-based processors at the time, Cloudera co-founder and CTO Amr Awadallah was reported as saying: "Cores from other vendors - without saying their name - consume significantly more power in the idle state, hence we're relieved that ARM is moving into this space."

Intel and Cloudera have a "multi-year roadmap" of features in Intel hardware that will be exploited by Cloudera's distribution of Hadoop, and Intel's Davis said the first fruits of this collaboration are likely to be revealed in the near future. "A really good example of one of the areas where we are collaborating that will show up in Cloudera products very soon is around hardware-accelerated security," he said. "In our own distribution we took advantage of instructions in the Xeon chip that accelerate encryption, so that customers could encrypt the data in a Hadoop environment without necessarily having the performance overhead of many of the solutions out there. We had that intimate knowledge of the instructions that could accelerate the security algorithms. We built that into our distribution and are actively working to get that into Cloudera's product as quickly as we can."

When Intel launched its own Hadoop distribution last year, it promised that extensions to instruction sets in its chips would boost performance in various ways: improving data encryption speed via AES-NI and compression using AVX and SSE 4.2. Various optimisations from Intel's Hadoop distribution will begin to be incorporated into CDH following the release of version 3.1 of the Intel distro, the final outing for the platform. Reilly said the firms' engineering collaboration and the absorption of Intel's distribution into Cloudera's platform will yield enhancements to Cloudera's offering "not just five years from now but in the coming months".
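The hardware-accelerated encryption Davis describes depends on the CPU actually advertising the AES-NI instruction set. As a rough illustration (a sketch, not part of either company's tooling), a Linux host exposes this capability as the `aes` flag in `/proc/cpuinfo`, which a few lines of Python can check:

```python
def cpu_has_aesni(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the (Linux) host CPU advertises the AES-NI
    instruction set ('aes' in the /proc/cpuinfo flags line), the
    feature credited above with low-overhead Hadoop encryption.
    Hypothetical helper; path parameter is for testing only."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "aes" in line.split(":", 1)[1].split()
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

print(cpu_has_aesni())
```

On hosts without the flag (or without `/proc` at all), the check simply reports False, which is the situation where software-only encryption would pay the performance overhead Davis mentions.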
Davis expects the bulk of the collaboration between the companies will be on improvements to the open-source core Hadoop platform, but added they will also work to improve Cloudera's proprietary tools on top of Hadoop. "It's one of our fundamental objectives to maintain an open ecosystem, and Intel's going to continue to do engineering work and contribute to the open-source community," Davis said. "We'll also continue to innovate in some of the areas around Hadoop that are not open source, on things like the management and data governance that are around Hadoop but not in the core platform. A lot of people have unique technologies there, and we will work with Cloudera on those."

On rare occasions there may also be other considerations that prevent the combined engineering teams from open-sourcing technologies, he said. "There are certain cases where open source has some downsides. Security is an example. I don't have a specific example, but sometimes you want to do something to take advantage of security capabilities in the chip that, if you were to make it open source, would actually open up security holes. But the vast majority of the innovations that we drive are going to end up in open source."

Reilly said there was a natural crossover between the capabilities of the Hadoop platform to handle large volumes of data and Intel's investment in the internet of things, which is expected to fuel an explosion in data collection and analytics.
News Article | May 10, 2017
Hortonworks is an industry-leading innovator that creates, distributes and supports enterprise-ready open data platforms and modern data applications that deliver actionable intelligence from all data: data-in-motion and data-at-rest. Hortonworks is focused on driving innovation in open source communities such as Apache Hadoop, Apache NiFi and Apache Spark. Along with its 2,100+ partners, Hortonworks provides the expertise, training and services that allow customers to unlock transformational value for their organizations across any line of business. Hortonworks, Powering the Future of Data, HDP and HDF are registered trademarks or trademarks of Hortonworks, Inc. and its subsidiaries in the United States and other jurisdictions. For more information, please visit www.hortonworks.com. All other trademarks are the property of their respective owners. To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/hortonworks-to-participate-in-upcoming-investor-conferences-300454966.html
News Article | May 3, 2017
The Internet of Things (IoT) and the cloud are impossible to separate. Only about a third of the data collected by the growing army of sensors is analysed at source, but as the IoT grows, that is going to need to change.

"Sensors and the data they create are the next big thing," says Scott Gnau, CTO of data management platform firm Hortonworks. "Compute can now happen at the sensor level, and not solely in a centralised data centre or cloud footprint, and it's sometimes referred to as fog computing."

Fog computing and edge computing are basically the same thing. "Fog computing is the process of computing data that is not in the cloud or at the branch, but at the extreme edge of the network, enabling analytics of that data close to its source," says Sarah Eccleston, director of architecture sales at Cisco UKI. It is Cisco itself that has pushed the term 'fog computing', but others understand it slightly differently.

"We see 'fog' computing as synonymous with 'edge' computing, and the terms are often used interchangeably," says Neil Postlethwaite, director of the IBM Watson IoT Platform and Device Ecosystem, which helps IoT customers transition work from the cloud to the edge. "It's reflective of operations performed in the network layer, i.e. 'at the edge', with provision of compute, data and storage capabilities," he says.

If the IoT is primarily about cost-saving for industry, then fog computing is part of that: it's about conducting analytics in the most efficient way possible. "Businesses must work out the best place in the system to perform the computation that is needed to deliver the required outcome and value," says Graeme Wright, CTO for manufacturing, utilities and services at Fujitsu UK. This is about combining edge, fog and cloud computing.

Wright offers an IoT example. "Edge computing may be used to control the device that is being monitored by a sensor, and only send data back when something changes," he says. "This could then be complemented by fog computing, to alert other sensors or devices of the status change, and take appropriate action." The cloud can then be used to perform analytics on the system as a whole, alerting staff about maintenance issues. "This setup can not only provide real-time analysis of the data, but also lower data storage, and more importantly improve efficiency," observes Wright.

Moving some of the data storage and processing to the edge of a network means using edge gateways. "Ideally, you should be able to seamlessly move computing from cloud to edge as and when workload dictates," says Postlethwaite, who thinks that having intelligence at the edge means decisions can be made closer to the actual IoT sensors and devices. One example is image processing: visual analytics close to a manufacturing line to check quality - so 'at the edge' - saves sending large amounts of data to the cloud for processing.
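Wright's "only send data back when something changes" pattern is often called report by exception. A minimal sketch of the idea, with a hypothetical class and threshold rather than any vendor's API:

```python
class EdgeFilter:
    """Report-by-exception sketch: process readings at the edge and
    forward one upstream only when the value has moved by more than
    `threshold` since the last forwarded reading (hypothetical names)."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.last_sent = None

    def process(self, reading):
        # Always forward the first reading, then only meaningful changes.
        if self.last_sent is None or abs(reading - self.last_sent) > self.threshold:
            self.last_sent = reading
            return reading      # forward to the fog/cloud layer
        return None             # suppressed at the edge

sensor = EdgeFilter(threshold=1.0)
readings = [20.0, 20.3, 20.9, 22.1, 22.2]
sent = [r for r in readings if sensor.process(r) is not None]
print(sent)   # [20.0, 22.1] — only 2 of 5 readings leave the device
```

Suppressing three of the five readings at the edge is exactly the kind of storage and bandwidth saving Wright describes, with the fog and cloud layers seeing only the status changes that matter.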
News Article | May 8, 2017
Brillix brings extensive expertise in NoSQL and relational database platforms, data security and big data environments to more than 250 customers in aerospace and defense, banking, technology, startups and telecommunications. Brillix now offers consulting and professional services for Aerospike's latest release and will help customers accelerate time to value through a new Aerospike competency center featuring on-site training, hands-on seminars and best-practices sessions.

Aerospike powers the world's most demanding customer engagement applications, from personalized marketing offers and dynamic pricing to fraud prevention. Aerospike environments process terabytes of data and deliver immediate response times, a superior approach validated by customer benchmarks and mentioned in numerous industry analyst reports.

"Brillix has an impeccable reputation for its database expertise and quality training and consulting for customers," said Jim LoDestro, Chief Revenue Officer, Aerospike. "Our collaboration is pivotal in expanding Aerospike's footprint in this important region and guiding customer success with Aerospike's breakthrough approach for structuring, accessing and analyzing data."

"We're excited to deliver Aerospike's market-leading NoSQL solution and champion its flexible and cost-effective approach for harnessing mountains of data," said Ami Aharonovich, CEO, Brillix. "We're already hearing enthusiastic feedback from customers who are eager to benefit from Aerospike's outstanding performance, scalability and value."

Brillix's world-recognized data management team comprises Oracle ACEs, Oracle ACE Directors and certified professional DBAs who are active leaders in the Israeli database community. Brillix joins Aerospike's fast-growing community of partners, including Cloudera, DataTorrent, Dell, Hortonworks, Think Big Analytics, Thumbtack Technology, CleverLEAF Technology, and Crestpointe. View the full list at http://bit.ly/2eoNQkm.

Aerospike's enterprise-class NoSQL database solution powers customer analytics, advertising optimization, fraud prevention and other decisioning workloads for the world's leading brands in AdTech, eCommerce, gaming, telecommunications and financial services. Customers include Nielsen Marketing Cloud, AppLovin, InMobi, Kayak and AppNexus. Aerospike delivers predictable performance at scale, superior uptime and high availability - at the lowest total cost of ownership (TCO). Aerospike is a privately held company based in Mountain View, CA, USA and is backed by New Enterprise Associates, Alsop Louie Partners and CNTP. www.aerospike.com @aerospikedb

Founded in 2007 in Tel Aviv, Israel, Brillix plans, develops and deploys best-of-breed innovative technologies and solutions for database platforms, big data technologies and comprehensive data security solutions. Customers of Brillix and its affiliate DBAces, a Brillix company specializing in 24/7 remote database services, include Bank Hapoalim, Union Bank, Bloomberg, Riverbed, Amdocs, Teva, GM and Isracard. http://www.brillix.co.il/

Aerospike is a registered trademark of Aerospike, Inc. Other brands mentioned herein may be trademarks of their respective owners.
To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/aerospike-signs-israeli-data-management-leader-brillix-as-distributor-to-expand-global-reach-300452832.html
News Article | May 4, 2017
In response to this need, the CRN editorial team has identified the IT vendors at the forefront of data management, business analytics and infrastructure technologies and services. The resulting list is a valuable guide for solution providers seeking out key big data technology suppliers.

"Businesses everywhere are faced with managing information streams of unprecedented volume and complexity, requiring more powerful and efficient tools than ever before for capturing, storing, organizing, securing and analyzing data," said Robert Faletra, CEO of The Channel Company. "CRN is pleased to present the 2017 Big Data 100, a list of vendors whose ingenuity and creative problem-solving have introduced remarkable new ways to help solution providers tackle this mammoth task. Congratulations to these big data aces, who have not only kept pace with the rapidly evolving demands of the data management field, but also innovated and challenged the status quo."

Kyvos is a massively scalable, self-service BI-on-Hadoop analytics solution designed to make big data lakes ready for BI analysts. The patent-pending OLAP-on-Hadoop solution includes enterprise-grade functionality to improve analysts' access to the data lake, advanced security support, improved performance and scalability, and integration with additional BI tools. Kyvos allows companies to build a BI consumption layer directly on Hadoop, which enables analysts to transform their existing BI tools. The BI consumption layer also gives analysts instant, interactive access to multi-dimensional analytics at big data scale across the enterprise, with no learning curve or programming required.

"We are honored to be selected for the CRN Big Data 100 list for the second year in a row. The recognition underscores the significance of our unique solution in the big data analytics market, as enterprises increasingly recognize the challenges of extracting value from their data lakes, since existing tools and methods don't scale and are too slow," said Ajay Anand, vice president of products at Kyvos Insights. "With Kyvos, customers can find solutions to problems that were previously unsolvable. The solution gives them instant access to the important insights they need directly on Hadoop for more meaningful customer understanding, targeted marketing, efficient operations and increased profitability."

The 2017 Big Data 100 list is available online at www.crn.com/bigdata100.

About The Channel Company: The Channel Company enables breakthrough IT channel performance with our dominant media, engaging events, expert consulting and education, and innovative marketing services and platforms. As the channel catalyst, we connect and empower technology suppliers, solution providers and end users. Backed by more than 30 years of unequaled channel experience, we draw from our deep knowledge to envision innovative new solutions for ever-evolving challenges in the technology marketplace. www.thechannelco.com

About Kyvos Insights: Kyvos Insights is committed to unlocking the power of big data analytics with its unique "OLAP on Hadoop" technology. Backed by years of analytics expertise and a passion for big data, the company aims to revolutionize big data analytics by providing business users with the ability to visualize, explore and analyze big data interactively, working directly on Hadoop. Headquartered in Los Gatos, California, Kyvos Insights was formed by a team of veterans from Yahoo!, Impetus and Intellicus Technologies. The company has partnered with companies including Cloudera, Hortonworks, MapR and Tableau. For more information, visit www.kyvosinsights.com or connect with us on Twitter @kyvosinsights and LinkedIn at http://linkd.in/1Fg3lNr.

©2017 The Channel Company, LLC. CRN is a registered trademark of The Channel Company, LLC. All rights reserved. To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/kyvos-insights-named-to-crn-big-data-100-list-second-year-in-a-row-300451769.html
News Article | May 4, 2017
A reconciliation of GAAP to non-GAAP financial measures has been provided in the financial statement tables included in this press release. As of May 4, 2017, Hortonworks is providing the following financial outlook for its second quarter and full year 2017.

For the second quarter of 2017, we expect: GAAP operating margin between negative 106 percent and negative 101 percent, which includes stock-based compensation and related expenses and amortization of purchased intangibles of approximately $27 million; and non-GAAP operating margin between negative 57 percent and negative 52 percent, which excludes those items.

For the full year 2017, we expect: GAAP operating margin between negative 85 percent and negative 80 percent, which includes stock-based compensation and related expenses and amortization of purchased intangibles of approximately $105 million; and non-GAAP operating margin between negative 50 percent and negative 45 percent, which excludes those items.

The GAAP operating margin outlook includes estimates of stock-based compensation and related expenses and amortization of purchased intangibles in future periods and assumes, among other things, the occurrence of no additional acquisitions, investments or restructuring and no further revisions to stock-based compensation and related expenses.

Hortonworks will hold a conference call and webcast to discuss the Q1 2017 results, the Q2 and FY 2017 outlook and related matters at 1:30 p.m. Pacific Time (4:30 p.m. Eastern Time) on Thursday, May 4, 2017. Interested parties may access the call by dialing (877) 930-7786 in the U.S. or (253) 336-7423 from international locations.
In addition, a live audio webcast of the conference call will be available on the Hortonworks Investor Relations website at http://investors.hortonworks.com. Shortly after the conclusion of the conference call, a replay of the audio webcast will be available on the same site for approximately seven days.

Statement Regarding Use of Non-GAAP Financial Measures

Hortonworks reports non-GAAP results for gross profit and margins, operating loss and margins, net loss, basic and diluted net loss per share and expenses in addition to, and not as a substitute for, or superior to, financial measures calculated in accordance with GAAP. Hortonworks' financial measures under GAAP include stock-based compensation expense, acquisition-related items, amortization of intangible assets, depreciation expense, and other income/expense, net. Management believes the presentation of operating results that exclude these items provides useful supplemental information to investors and facilitates the analysis of the Company's core operating results and comparison of operating results across reporting periods, and that this supplemental non-GAAP information is therefore useful to investors in analyzing and assessing the Company's past and future operating performance.

Non-GAAP cost of revenue is calculated as GAAP cost of revenue less stock-based compensation expense and amortization of intangibles. Non-GAAP gross profit is calculated as GAAP revenue less non-GAAP cost of revenue, and non-GAAP gross margin as non-GAAP gross profit divided by GAAP revenue. Non-GAAP expenses are calculated as GAAP cost of revenue plus GAAP operating expenses, in each case less stock-based compensation expense and amortization of intangibles. Non-GAAP operating loss is calculated as GAAP operating loss plus the non-GAAP cost of revenue and operating expense adjustments, and non-GAAP operating margin as non-GAAP operating loss divided by GAAP revenue. Non-GAAP net loss is calculated as GAAP net loss plus the non-GAAP cost of revenue and operating expense adjustments, and non-GAAP net loss per basic and diluted share as non-GAAP net loss divided by the weighted-average shares outstanding for the period.

Management believes these non-GAAP measures offer investors useful supplemental information about the performance and cost structure of the business: they exclude the effect of stock-based compensation expense, acquisition-related retention bonuses, amortization of intangibles and other nonrecurring items, giving management and investors greater visibility into the underlying performance of the business operations, helping compare recurring core operating results across periods and across companies, and helping identify trends in the underlying business.
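Taken together, the non-GAAP definitions above reduce to simple arithmetic. A sketch with made-up figures, in $ millions (illustrative only, not Hortonworks' reported numbers):

```python
# Hypothetical inputs — chosen only to make the definitions concrete.
revenue          = 56.0
gaap_cost_of_rev = 20.0
gaap_opex        = 80.0
sbc_amort_in_cost = 2.0    # stock-based comp + intangible amortization within cost of revenue
sbc_amort_in_opex = 25.0   # the same items within operating expenses

# Non-GAAP cost of revenue: GAAP cost of revenue less SBC and amortization.
non_gaap_cost_of_rev = gaap_cost_of_rev - sbc_amort_in_cost            # 18.0

# Non-GAAP gross profit and margin.
non_gaap_gross_profit = revenue - non_gaap_cost_of_rev                 # 38.0
non_gaap_gross_margin = non_gaap_gross_profit / revenue                # ~0.679

# Non-GAAP expenses: adjusted cost of revenue plus adjusted opex.
non_gaap_expenses = non_gaap_cost_of_rev + (gaap_opex - sbc_amort_in_opex)  # 73.0

# Non-GAAP operating loss: GAAP operating loss plus the adjustments.
gaap_operating_loss = revenue - gaap_cost_of_rev - gaap_opex           # -44.0
non_gaap_operating_loss = gaap_operating_loss + sbc_amort_in_cost + sbc_amort_in_opex  # -17.0
non_gaap_operating_margin = non_gaap_operating_loss / revenue          # ~-0.304

print(round(non_gaap_operating_margin, 3))   # -0.304
```

With $27 million of adjustments on second-quarter revenue, the same arithmetic explains why the company's GAAP and non-GAAP operating margin guidance differ by roughly 49 percentage points.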
Management believes non-GAAP net loss per basic and diluted share offers investors useful supplemental information, and will help investors better understand our performance and return to shareholders. This press release contains "forward-looking statements" regarding our performance within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Such statements contain words such as "may," "will," "might," "expect," "believe," "anticipate," "could," "would," "estimate," "continue," "pursue," or the negative thereof or comparable terminology, and may include (without limitation) information regarding our expectations, goals or intentions regarding future performance, expenses or activity in international markets, including the forward-looking statements in the section titled "Financial Outlook." Forward-looking statements are subject to known and unknown risks and uncertainties and are based on potentially inaccurate assumptions that could cause actual results to differ materially from those expected or implied by the forward-looking statements. If any such risks or uncertainties materialize or if any of the assumptions prove incorrect, our results could differ materially from the results expressed or implied by the forward-looking statements we make.
The important factors that could cause actual results to differ materially from those in any forward-looking statements include, but are not limited to, the following: (i) we have a history of losses, and we may not become profitable in the future, (ii) we have a limited operating history, which makes it difficult to predict our future results of operations, and (iii) we do not have an adequate history with our support subscription offerings or pricing models to accurately predict the long-term rate of support subscription customer renewals or adoption, or the impact these renewals and adoption will have on our revenues or results of operations. Further information on these and other factors that could affect our financial results and the forward-looking statements in this press release is included in our Form 10-K filed on March 15, 2017, or in other filings we make with the Securities and Exchange Commission from time to time, particularly under the caption "Risk Factors." All forward-looking statements in this press release are made as of the date hereof, based on information available to us as of the date hereof, and we undertake no obligation, and do not intend, to update these forward-looking statements. Hortonworks is an industry-leading innovator that creates, distributes and supports enterprise-ready open data platforms and modern data applications that deliver actionable intelligence from all data: data-in-motion and data-at-rest. Hortonworks is focused on driving innovation in open source communities such as Apache Hadoop, Apache NiFi and Apache Spark. Along with its 2,100+ partners, Hortonworks provides the expertise, training and services that allow customers to unlock transformational value for their organizations across any line of business. Hortonworks, Powering the Future of Data, HDP and HDF are registered trademarks or trademarks of Hortonworks, Inc. and its subsidiaries in the United States and other jurisdictions.
For more information, please visit www.hortonworks.com. All other trademarks are the property of their respective owners. To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/hortonworks-reports-first-quarter-2017-revenue-of-560-million-up-35-percent-year-over-year-300451855.html
News Article | April 28, 2017
In this report, the global Hadoop Hardware market is valued at USD XX million in 2016 and is expected to reach USD XX million by the end of 2022, growing at a CAGR of XX% between 2016 and 2022. Geographically, this report splits the global market into several key regions, with sales (K Units), revenue (Million USD), market share and growth rate of Hadoop Hardware for these regions, from 2012 to 2022 (forecast), covering the United States, China, Europe, Japan, Southeast Asia and India. For more information or any query, mail at email@example.com. This report covers global Hadoop Hardware market competition by top manufacturers/players, with Hadoop Hardware sales volume, price (USD/Unit), revenue (Million USD) and market share for each manufacturer/player; the top players include Cloudera, Hortonworks, MapR Technologies, Cisco, Datameer, IBM, Microsoft, Oracle, Pivotal and Teradata. On the basis of product, this report displays the sales volume (K Units), revenue (Million USD), product price (USD/Unit), market share and growth rate of each type, primarily split into Server Equipment, Storage Equipment and Network Equipment. On the basis of end users/applications, this report focuses on the status and outlook for major applications/end users, with sales volume, market share and growth rate of Hadoop Hardware for each application, including Healthcare, Banking & Finance, Telecommunication and Other. If you have any special requirements, please let us know and we will offer you the report as you want.
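The CAGR figure quoted in forecasts like this one follows a standard compounding formula; here is a minimal sketch in Python, using hypothetical begin and end values since the report redacts its own numbers:

```python
def cagr(begin_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: (end / begin) ** (1 / years) - 1."""
    return (end_value / begin_value) ** (1.0 / years) - 1.0

# Hypothetical: a market growing from USD 2,000M in 2016 to USD 4,500M in 2022.
growth = cagr(2000.0, 4500.0, 2022 - 2016)
print(f"CAGR: {growth:.1%}")  # about 14.5% per year
```

Compounding at this rate over the six forecast years reproduces the end value, which is why CAGR, rather than simple average growth, is the convention in market forecasts.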
Global Hadoop Hardware Sales Market Report 2017

1 Hadoop Hardware Market Overview
1.1 Product Overview and Scope of Hadoop Hardware
1.2 Classification of Hadoop Hardware by Product Category
1.2.1 Global Hadoop Hardware Market Size (Sales) Comparison by Type (2012-2022)
1.2.2 Global Hadoop Hardware Market Size (Sales) Market Share by Type (Product Category) in 2016
1.2.3 Server Equipment
1.2.4 Storage Equipment
1.2.5 Network Equipment
1.3 Global Hadoop Hardware Market by Application/End Users
1.3.1 Global Hadoop Hardware Sales (Volume) and Market Share Comparison by Application (2012-2022)
1.3.2 Healthcare
1.3.3 Banking & Finance
1.3.4 Telecommunication
1.3.5 Other
1.4 Global Hadoop Hardware Market by Region
1.4.1 Global Hadoop Hardware Market Size (Value) Comparison by Region (2012-2022)
1.4.2 United States Hadoop Hardware Status and Prospect (2012-2022)
1.4.3 China Hadoop Hardware Status and Prospect (2012-2022)
1.4.4 Europe Hadoop Hardware Status and Prospect (2012-2022)
1.4.5 Japan Hadoop Hardware Status and Prospect (2012-2022)
1.4.6 Southeast Asia Hadoop Hardware Status and Prospect (2012-2022)
1.4.7 India Hadoop Hardware Status and Prospect (2012-2022)
1.5 Global Market Size (Value and Volume) of Hadoop Hardware (2012-2022)
1.5.1 Global Hadoop Hardware Sales and Growth Rate (2012-2022)
1.5.2 Global Hadoop Hardware Revenue and Growth Rate (2012-2022)
2 Global Hadoop Hardware Competition by Players/Suppliers, Type and Application
2.1 Global Hadoop Hardware Market Competition by Players/Suppliers
2.1.1 Global Hadoop Hardware Sales and Market Share of Key Players/Suppliers (2012-2017)
2.1.2 Global Hadoop Hardware Revenue and Share by Players/Suppliers (2012-2017)
2.2 Global Hadoop Hardware (Volume and Value) by Type
2.2.1 Global Hadoop Hardware Sales and Market Share by Type (2012-2017)
2.2.2 Global Hadoop Hardware Revenue and Market Share by Type (2012-2017)
2.3 Global Hadoop Hardware (Volume and Value) by Region
2.3.1 Global Hadoop Hardware Sales and Market Share by Region (2012-2017)
2.3.2 Global Hadoop Hardware Revenue and Market Share by Region (2012-2017)
2.4 Global Hadoop Hardware (Volume) by Application
9 Global Hadoop Hardware Players/Suppliers Profiles and Sales Data
9.1 Cloudera
9.1.1 Company Basic Information, Manufacturing Base and Competitors
9.1.2 Hadoop Hardware Product Category, Application and Specification
9.1.2.1 Product A
9.1.2.2 Product B
9.1.3 Cloudera Hadoop Hardware Sales, Revenue, Price and Gross Margin (2012-2017)
9.1.4 Main Business/Business Overview
9.2 Hortonworks
9.2.1 Company Basic Information, Manufacturing Base and Competitors
9.2.2 Hadoop Hardware Product Category, Application and Specification
9.2.2.1 Product A
9.2.2.2 Product B
9.2.3 Hortonworks Hadoop Hardware Sales, Revenue, Price and Gross Margin (2012-2017)
9.2.4 Main Business/Business Overview
9.3 MapR Technologies
9.3.1 Company Basic Information, Manufacturing Base and Competitors
9.3.2 Hadoop Hardware Product Category, Application and Specification
9.3.2.1 Product A
9.3.2.2 Product B
9.3.3 MapR Technologies Hadoop Hardware Sales, Revenue, Price and Gross Margin (2012-2017)
9.3.4 Main Business/Business Overview
9.4 Cisco
9.4.1 Company Basic Information, Manufacturing Base and Competitors
9.4.2 Hadoop Hardware Product Category, Application and Specification
9.4.2.1 Product A
9.4.2.2 Product B
9.4.3 Cisco Hadoop Hardware Sales, Revenue, Price and Gross Margin (2012-2017)
9.4.4 Main Business/Business Overview
9.5 Datameer
9.5.1 Company Basic Information, Manufacturing Base and Competitors
9.5.2 Hadoop Hardware Product Category, Application and Specification
9.5.2.1 Product A
9.5.2.2 Product B
9.5.3 Datameer Hadoop Hardware Sales, Revenue, Price and Gross Margin (2012-2017)
9.5.4 Main Business/Business Overview
9.6 IBM
9.6.1 Company Basic Information, Manufacturing Base and Competitors
9.6.2 Hadoop Hardware Product Category, Application and Specification
9.6.2.1 Product A
9.6.2.2 Product B
9.6.3 IBM Hadoop Hardware Sales, Revenue, Price and Gross Margin (2012-2017)
9.6.4 Main Business/Business Overview
9.7 Microsoft
9.7.1 Company Basic Information, Manufacturing Base and Competitors
9.7.2 Hadoop Hardware Product Category, Application and Specification
9.7.2.1 Product A
9.7.2.2 Product B
9.7.3 Microsoft Hadoop Hardware Sales, Revenue, Price and Gross Margin (2012-2017)
9.7.4 Main Business/Business Overview
9.8 Oracle
9.8.1 Company Basic Information, Manufacturing Base and Competitors
9.8.2 Hadoop Hardware Product Category, Application and Specification

For more information or any query mail at firstname.lastname@example.org

ABOUT US: Wise Guy Reports is part of Wise Guy Consultants Pvt. Ltd. and offers premium progressive statistical surveying, market research reports, analysis & forecast data for industries and governments around the globe. Wise Guy Reports features an exhaustive list of market research reports from hundreds of publishers worldwide. We boast a database spanning virtually every market category and an even more comprehensive collection of market research reports under these categories and sub-categories. For more information, please visit https://www.wiseguyreports.com
News Article | April 18, 2017
It's no secret that machine learning and its kissing cousin AI are all the rage. So much so, in fact, that companies are increasingly dressing up dumb apps as smart, and Cloudera is justifying a hefty IPO valuation in part on its ability to turn a Hadoop past into a machine learning future. A more pertinent question, however, is whether the same cloud companies that are displacing enterprise data centers and taking over big data deployments will be the most likely winners in the machine learning war. Early signs suggest the answer is 'yes.' For those that still think of Cloudera, Hortonworks, and MapR as Hadoop companies, that's old school thinking. As the market has evolved beyond generic "big data," so have they. The most obvious (and potentially fertile) place to evolve, as Cloudera cofounder and chief strategy officer Mike Olson wrote in the company's S-1 filing to go public, is machine learning: "[T]he same system built for managing big data in the cloud also unlocks the power of machine learning for enterprises." SEE: Why AI and machine learning are so hard, Facebook and Google weigh in (TechRepublic) Machine learning has been around for decades, but only recently have we had the software and systems at low enough cost to make machine learning a mass-market enterprise phenomenon. Cloudera, for its part, is all in: Its S-1 filing mentions machine learning 83 times. Hadoop? Just 14. This shift makes sense, given that Cloudera wants to sell business value, not technology. At any rate, it's increasingly pointless to pitch Hadoop anyway, given that Hadoop is no longer Hadoop. Take a look inside Hadoop and you'll find lots of Spark, Kafka, Impala, and other new(ish) components, but no "Hadoop," as Gwen Shapira has highlighted. The real question for Cloudera, Hortonworks, MapR, IBM, and every other would-be machine learning aspirant isn't, as Ovum analyst Tony Baer declared, about "Spark vs. Hadoop," or some other way of asking questions of our data. 
Rather, he said, it's a matter of "cloud vs. Hadoop," or, in the context of machine learning, it's a question of where that data will live, and which vendors are best positioned to deliver. Given data gravity—which is the idea that services and applications will gravitate to where the data is "born"—it's reasonable to assume that the more terrestrial vendors like Cloudera and Hortonworks will have a big part to play in the future of machine learning and AI. Why? Because most enterprise data sits inside corporate data centers, not in the cloud. Not yet, anyway. For data stuck in data centers, AWS offers Snowmobile, an 18-wheeler truck to move 100 petabytes of data at a time. If this seems bizarre (it sort of is), not to worry: Apps increasingly live in the cloud, and data will live there, too. SEE: The cloud war moves to machine learning: Does Google have an edge? (TechRepublic) That's a clear argument for the public cloud vendors to own machine learning long term. Or it would be, except that companies like Cloudera argue that their products were "designed for public cloud infrastructure." In Cloudera's case, 18% of its customers run its software in the public cloud already. At Hortonworks, 20 to 25% of customers run in public or hybrid cloud environments, according to CEO Rob Bearden. There is, however, a difference between what a Cloudera can provide and what AWS offers. The former delivers software that runs in the cloud, but leaves an enterprise's IT department to "actively deploy, patch, and manage the cloud instances just like they do in the data center," as Baer pointed out. For an AWS or Microsoft Azure with "home court advantage," he argued, the machine learning services are "fully managed—eliminating headaches like patching." This means that over time, the public cloud vendors are likely to reap more from machine learning's rise than those that can't match their native, cloud-based services. In this full-cloud world, there's stiff competition.
According to Algorithmia CEO Diego Oppenheimer, "Google has the most credibility based on tools they have; Microsoft is the one that will actually be able to convince the enterprises to do it; and Amazon has the advantage in that most corporate data in the cloud is in AWS. It's anybody's game." "Anybody" also includes Cloudera and Hortonworks, to be sure, but they're likely going to have to find ways to match the native cloud capabilities AWS, Microsoft Azure, and Google Cloud offer, just as MongoDB ultimately decided to offer its own "as a Service" product. This shift to the public cloud will take time (up to two decades, by AWS chief Andy Jassy's reckoning), but the time to act on it is now.