
Platform Computing was a privately held software company primarily known for its job scheduling product, Load Sharing Facility (LSF). It was founded in 1992 in Toronto, Ontario, Canada and headquartered in Markham, Ontario, with 11 branch offices across the United States, Europe and Asia. In January 2012, Platform Computing was acquired by IBM. (Source: Wikipedia)



News Article | April 7, 2016
Site: www.greencarcongress.com

The University of Michigan is collaborating with IBM to develop and deliver “data-centric” supercomputing systems designed to increase the pace of scientific discovery in fields as diverse as aircraft and rocket engine design, cardiovascular disease treatment, materials physics, climate modeling and cosmology. The system is designed to enable high performance computing applications for physics to interact, in real time, with big data in order to improve scientists’ ability to make quantitative predictions. IBM’s systems use a GPU-accelerated, data-centric approach, integrating massive datasets seamlessly with high performance computing power, resulting in new predictive simulation techniques that promise to expand the limits of scientific knowledge.

The collaboration was announced this week in San Jose at the second annual OpenPOWER Summit 2016. The OpenPOWER Foundation, which U-M recently joined, is an open, collaborative, technical community based on IBM’s POWER architecture. Several other Foundation members contributed to the development of this new high performance computing system, which has the potential to reduce computing costs by accelerating statistical inference and machine learning.

Working with IBM, U-M researchers have designed a computing resource called ConFlux to enable high performance computing clusters to communicate directly and at interactive speeds with data-intensive operations. The ConFlux cluster will be built with approximately 43 IBM two-socket POWER8 “Firestone” S822LC compute nodes providing 20 cores each, and 15 two-socket POWER8 “Garrison” compute nodes providing an additional 20 cores each. Each of the Garrison nodes will also host four NVIDIA Pascal GPUs connected via NVIDIA’s NVLink technology to the POWER8 system bus. Each node has local high-speed flash memory for random access, and all compute and storage is connected via a 100 Gb/s InfiniBand fabric. The NVLink connectivity, combined with IBM CAPI technology, will provide the unprecedented data-transfer throughput required for the data-driven computational physics research the team will be conducting.

Hosted at U-M, the project establishes a hardware and software ecosystem to enable large-scale data-driven modeling of complex physical problems, such as the performance of an aircraft engine, which involves trillions of molecular interactions. ConFlux, funded by a grant from the National Science Foundation, aims to advance predictive modeling in several fields of computational science. IBM is providing servers and software solutions.

ConFlux meshes well with IBM’s recent focus on data-centric computing systems. Advanced technologies such as data-centric computing systems are at the forefront of tackling big data challenges and advancing the pace of innovation. By moving computing power to where the data resides, organizations of all sizes can maximize performance and minimize latency in their systems, enabling them to gain deeper insights from research. These data-centric solutions are accelerated through open innovation and IBM’s work with other members of the OpenPOWER Foundation. The incorporation of OpenPOWER technologies into a modular integrated system will enable U-M to configure the systems for their specific needs.
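Taken at face value, the node counts quoted above imply the following aggregate capacity (a back-of-envelope sketch using only the figures reported in this article; the "~43" node count is approximate):

```python
# Rough aggregate capacity of ConFlux from the reported node counts.
firestone_nodes = 43   # "~43" two-socket POWER8 "Firestone" S822LC nodes
garrison_nodes = 15    # 15 two-socket POWER8 "Garrison" nodes
cores_per_node = 20    # 20 POWER8 cores per node, both node types
gpus_per_garrison = 4  # four NVIDIA Pascal GPUs per Garrison node

total_cores = (firestone_nodes + garrison_nodes) * cores_per_node
total_gpus = garrison_nodes * gpus_per_garrison
print(total_cores, total_gpus)  # -> 1160 60
```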
ConFlux incorporates IBM Power Systems LC servers, which were designed based on technologies and development efforts contributed by OpenPOWER Foundation members including Mellanox, NVIDIA and Tyan. It is also powered by the latest additions to the NVIDIA Tesla Accelerated Computing Platform: NVIDIA Tesla P100 GPU accelerators with the NVLink high-speed interconnect technology. Additional data-centric solutions U-M is using include IBM Elastic Storage Server, IBM Spectrum Scale software (scale-out, parallel-access network-attached storage), and IBM Platform Computing software. In an internal comparison test conducted by U-M, the POWER8 system significantly outperformed a competing architecture by providing low-latency networks and a novel architecture that allows for the integrated use of central and graphics processing units.

As one of the first projects U-M will undertake with its advanced supercomputing system, researchers are working with NASA to use cognitive techniques to simulate turbulence around aircraft and rocket engines. They are combining large amounts of data from wind tunnel experiments and simulations to build computing models that are used to predict the aerodynamics around new configurations of an aircraft wing or engine. With ConFlux, U-M can more accurately model and study turbulence, helping to speed development of more efficient airplane designs. It will also improve weather forecasting, climate science and other fields that involve the flow of liquids or gases.

U-M is also studying cardiovascular disease for the National Institutes of Health. By combining noninvasive imaging such as results from MRI and CT scans with a physical model of blood flow, U-M hopes to help doctors estimate artery stiffness within an hour of a scan, serving as an early predictor of diseases such as hypertension. Studies are also planned to better understand climate science, such as how clouds interact with atmospheric circulation, the origins of the universe and stellar evolution, and predictions of the behavior of biologically inspired materials.


News Article | April 6, 2016
Site: www.scientificcomputing.com

San Jose, CA — The University of Michigan (U-M) announced it has selected IBM to develop and deliver “data-centric” supercomputing systems designed to increase the pace of scientific discovery in fields as diverse as aircraft and rocket engine design, cardiovascular disease treatment, materials physics, climate modeling and cosmology.

Traditionally, scientific computations have been performed on high performance computing (HPC) infrastructure, while modern data-parallel architectures have mostly focused on web analytics and business intelligence applications. Systems that enable HPC applications for physics to interact in real time with big data to improve quantitative predictability have not yet been developed. IBM’s systems use a data-centric approach, integrating massive datasets seamlessly with HPC computing power, resulting in new predictive simulation techniques that will expand the limits of scientific knowledge.

Working with IBM, U-M researchers have designed a computing resource, called ConFlux, to enable HPC clusters to communicate seamlessly and at interactive speeds with data-intensive operations. Hosted at U-M, the project establishes a hardware and software ecosystem to enable large-scale data-driven modeling of complex physical problems, such as the performance of an aircraft engine, which involves trillions of molecular interactions. ConFlux will produce advances in predictive modeling in several fields of computational science, and is funded by a grant from the National Science Foundation.

“There is a pressing need for data-driven predictive modeling to help re-envision traditional computing models in our pursuit to bring forth groundbreaking research,” said Karthik Duraisamy, Assistant Professor, Department of Aerospace Engineering and Director, Center for Data-driven Computational Physics, U-M. “The recent acceleration in computational power and measurement resolution has made possible the availability of extreme scale simulations and data sets. ConFlux allows us to bring together large scale scientific computing and machine learning for the first time to accomplish research that was previously impossible.”

ConFlux meshes well with IBM’s focus on data-centric computing systems. “Scientific research is now at the crossroads of big data and high performance computing,” said Sumit Gupta, Vice President, High Performance Computing and Data Analytics, IBM. “The explosion of data requires systems and infrastructures based on POWER8 plus accelerators that can both stream and manage the data and quickly synthesize and make sense of data to enable faster insights.”

“U-M grasped the significance of IBM’s shift to data-centric systems during our first discussion,” said Michael J. Henesey, Vice President Business Development, Data Centric Systems and Innovation Centers. “They were enthusiastic about the application of this architecture to problems that are essential to the University and to the country. We will stay close to U-M to help inform our future system designs.”

Progress in a wide spectrum of fields ranging from medicine to transportation relies critically on the ability to gather, store, search and analyze big data and construct truly predictive models of complex, multi-scale systems. Advanced technologies like data-centric computing systems are at the forefront of tackling these big data challenges and advancing the pace of innovation.
By moving computing power to where the data resides, organizations of all sizes can maximize performance and minimize latency in their systems, enabling them to gain deeper insights from research. These data-centric solutions are accelerated through open innovation and IBM’s work with other members of the OpenPOWER Foundation. The incorporation of OpenPOWER technologies into a modular integrated system will enable U-M to configure the systems for their specific needs. ConFlux incorporates IBM Power Systems LC servers, which were designed based on technologies and development efforts contributed by OpenPOWER Foundation members including Mellanox, NVIDIA, Tyan and Wistron. Additional data-centric solutions U-M is using include IBM Elastic Storage Server, IBM Spectrum Scale software (scale-out, parallel-access network-attached storage), and IBM Platform Computing software. In an internal comparison test conducted by U-M, the POWER8 system significantly outperformed a competing architecture by providing low-latency networks and a novel architecture that allows for the integrated use of central and graphics processing units.

One of the first projects U-M will undertake with its advanced supercomputing system is working with NASA to simulate turbulence around aircraft and rocket engines through cognitive techniques. Large amounts of data from wind tunnel experiments and simulations are combined to build computing models that are used to predict the aerodynamics around new configurations, such as an aircraft wing or engine. With ConFlux, U-M can more accurately model and study turbulence, helping to speed development of more efficient airplane designs. It will also improve weather forecasting, climate science and other fields that involve the flow of liquids or gases.

U-M is also studying cardiovascular disease for the National Institutes of Health. By combining noninvasive imaging such as MRI and CT scan results with a physical model of blood flow, U-M hopes to help doctors estimate artery stiffness within an hour of a scan, serving as an early predictor of diseases such as hypertension. Studies are also planned to better understand climate science, such as how clouds interact with atmospheric circulation, the origins of the universe and stellar evolution, and predictions of the behavior of biologically inspired materials.

“The ConFlux project aligns with the University of Michigan’s comprehensive strategy of investment in research computing and data science across disciplines,” said Eric Michielssen, U-M’s Associate Vice President for Research Computing. “For example, our $100 million Data Science Initiative is advancing faculty-driven research in engineering and the social and health sciences by building connections between the worlds of big data and HPC. ConFlux epitomizes this forward-looking vision.”




News Article | December 1, 2016
Site: www.newsmaker.com.au

According to Stratistics MRC, the Big Data Analytics & Hadoop market accounted for $8.48 billion in 2015 and is expected to reach $99.31 billion by 2022, growing at a CAGR of 42.1% from 2015 to 2022. The rise of big data, the growing need for big data analytics and the rapid growth in consumer data are some of the factors fueling market growth, while a shortage of skilled workers and the lack of security features in the Hadoop framework are restraining it. Venture capital funding is the major opportunity for vendors in the big data analytics and Hadoop market.

The consulting services segment commanded the market due to enterprise-wide implementation of this technology. Application software is the leading segment in the market due to rising deployment by developers building real-time applications, and the storage segment leads the hardware market in terms of revenue. North America dominates the global market, while the Asia-Pacific region is anticipated to see significant growth in the big data analytics and Hadoop market owing to its huge IT services industry.

Some of the key players in the market include Dell Inc., Karmasphere Inc., Talend, Inc., DataDirect Networks, Inc., Amazon Web Services LLC, Hortonworks, Inc., Appistry, Inc., NetApp, Inc., Teradata Corporation, Cloudera Inc., the Hewlett-Packard Company, Greenplum, Inc., Datameer, Inc., Zettaset, Inc., Fujitsu Ltd., Pentaho Corporation, DataStax, Inc., Platform Computing, HStreaming LLC, MapR Technologies, Inc., IBM and Hadapt Inc.

End users covered:
• Retail
• Healthcare & Life Sciences
• Banking, Financial Services & Insurance
• Government & Public Utilities
• Bioinformatics
• Web
• IT & Security
• Manufacturing
• Transportation
• Media & Entertainment
• Gaming
• University Research & Education
• Telecommunication
• Natural Resources
• Other End Users

Regions covered:
• North America
  o US
  o Canada
  o Mexico
• Europe
  o Germany
  o France
  o Italy
  o UK
  o Spain
  o Rest of Europe
• Asia Pacific
  o Japan
  o China
  o India
  o Australia
  o New Zealand
  o Rest of Asia Pacific
• Rest of the World
  o Middle East
  o Brazil
  o Argentina
  o South Africa
  o Egypt

What the report offers:
- Market share assessments for the regional and country-level segments
- Market share analysis of the top industry players
- Strategic recommendations for new entrants
- Market forecasts for a minimum of 7 years for all the mentioned segments, sub-segments and regional markets
- Market trends (drivers, constraints, opportunities, threats, challenges, investment opportunities and recommendations)
- Strategic recommendations in key business segments based on the market estimations
- Competitive landscaping mapping the key common trends
- Company profiling with detailed strategies, financials and recent developments
- Supply chain trends mapping the latest technological advancements

About Us: Wise Guy Reports is part of Wise Guy Consultants Pvt. Ltd. and offers premium progressive statistical surveying, market research reports, and analysis and forecast data for industries and governments around the globe. Wise Guy Reports understands how essential market research information is for an organization, and has partnered with top publishers and research firms, each specialized in specific domains, to ensure clients receive the most reliable and up-to-date research data available.
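As a quick sanity check, the stated 42.1% CAGR is consistent with the 2015 and 2022 market sizes (a minimal sketch; the 7-year compounding window is inferred from the 2015-2022 span):

```python
# Check: $8.48B compounded at 42.1% per year from 2015 to 2022.
start, cagr, years = 8.48, 0.421, 2022 - 2015
forecast = start * (1 + cagr) ** years
print(f"${forecast:.1f}B")  # -> about $99.2B, matching the quoted $99.31B
```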


Grant
Agency: European Commission | Branch: H2020 | Program: CSA | Phase: FETHPC-2-2014 | Award Amount: 2.55M | Year: 2015

The three most significant HPC bodies in Europe, PRACE, ETP4HPC and EESI, have come together within EXDCI to coordinate the strategy of the European HPC ecosystem in order to deliver its objectives. In particular, the project will harmonize the road-mapping and performance-monitoring activities of the ecosystem to produce tools for coherent strategy-making and its implementation, by:
• Producing and aligning roadmaps for HPC technology and HPC applications
• Measuring the implementation of the European HPC strategy
• Building and maintaining relations with other international HPC activities and regions
• Supporting the generation of young talent as a crucial element of the development of European HPC

In this process, EXDCI will complement the Horizon 2020 calls and projects in the achievement of a globally competitive HPC ecosystem in Europe. This ecosystem is based on three pillars: HPC technology provision, HPC infrastructure and HPC application resources. EXDCI will make sure that:
• The three pillars are developed in synergy in order to achieve the strategic goals of the entire ecosystem
• Tools exist for strategy review and definition across all three pillars
• A process is operated for the creation of relevant roadmaps and the review of project results in the context of the entire environment

The project consortium represents the stakeholders and expertise of all three pillars.


According to one aspect of the present disclosure, a method and technique for facilitating the exchange of information between interconnected computing entities is disclosed. The method includes: receiving from a client, by a workload manager, a workload unit of data in need of processing by the client; initiating, by the workload manager, persistent storage of the workload unit of data received from the client; without waiting for the initiated storage of the workload unit of data to complete, sending, by the workload manager, the workload unit of data to a plurality of compute nodes; and responsive to receiving a result of a processing of the workload unit of data by one of the plurality of compute nodes, canceling, by the workload manager, processing of the workload unit of data by the remainder of the plurality of compute nodes.
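As an illustration of the dispatch pattern this abstract describes, here is a minimal Python sketch (function and variable names are hypothetical; the patent does not prescribe an implementation): persistence is initiated asynchronously, the unit is fanned out to all nodes without waiting, and the first result cancels the remainder.

```python
import concurrent.futures
import threading

def handle_workload(unit, nodes, persist):
    """Dispatch one workload unit: start persistence without blocking,
    send the unit to every compute node, and cancel the rest as soon
    as any one node returns a result."""
    # Initiate persistent storage; do not wait for it to complete.
    threading.Thread(target=persist, args=(unit,), daemon=True).start()

    cancel = threading.Event()  # signals the losing nodes to stop work
    with concurrent.futures.ThreadPoolExecutor(len(nodes)) as pool:
        futures = [pool.submit(node, unit, cancel) for node in nodes]
        done, pending = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        cancel.set()        # tell the remaining nodes to abandon the unit
        for f in pending:
            f.cancel()      # drop any that have not started yet
        return next(iter(done)).result()
```

Each node here is any callable that periodically checks the cancel event; in the patented method the workload manager performs this coordination across real compute nodes.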


Patent
Platform Computing | Date: 2010-04-19

Presented herein are systems and methods for checking the integrity of data transmissions between or within one or more digital processing systems by identifying a data characteristic that is likely to change if there is an error in transmission. According to one embodiment, data messages are modified to achieve a selected characteristic according to a predetermined protocol, and changes to the data are recorded in a longitudinal check code (LCC) word, which is used by the receiver to decode the data message and restore the original data.
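The abstract does not spell out the encoding, but it builds on the classical longitudinal check idea: a check word computed across the whole message that changes if any byte is corrupted in transit. A minimal sketch of that baseline follows (the XOR-parity variant; the patented protocol additionally modifies messages toward a selected characteristic and records those changes in the LCC word, which this sketch does not attempt):

```python
def lrc(data: bytes) -> int:
    """Classical longitudinal check: XOR of all bytes in the message.
    An error in any single byte changes the resulting check word."""
    check = 0
    for b in data:
        check ^= b
    return check

msg = b"platform computing"
framed = msg + bytes([lrc(msg)])   # sender appends the check word
assert lrc(framed) == 0            # receiver verifies: XOR over frame is zero
```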


Patent
Platform Computing | Date: 2012-06-20

According to one aspect of the present disclosure, a method and technique for job distribution within a grid environment is disclosed. The method includes: receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters, each execution cluster comprising one or more execution hosts; determining resource attributes corresponding to each execution host of the execution clusters; grouping, for each execution cluster, execution hosts based on the resource attributes of the respective execution hosts; defining, for each grouping of execution hosts, a mega-host for the respective execution cluster, the mega-host for a respective execution cluster defining resource attributes based on the resource attributes of the respective grouped execution hosts; determining resource requirements for the jobs; and identifying candidate mega-hosts for the jobs based on the resource attributes of the respective mega-hosts and the resource requirements of the jobs.
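A minimal sketch of the grouping and matching steps described above (the attribute set, here cores and mem_gb, is an illustrative assumption; the patent leaves the resource attributes open):

```python
from collections import defaultdict

def build_mega_hosts(hosts):
    """Group execution hosts with identical resource attributes into
    one 'mega-host' per (cluster, attributes) combination."""
    groups = defaultdict(list)
    for h in hosts:
        key = (h["cluster"], h["cores"], h["mem_gb"])  # assumed attributes
        groups[key].append(h)
    return [
        {"cluster": c, "cores": cores, "mem_gb": mem, "hosts": members}
        for (c, cores, mem), members in groups.items()
    ]

def candidate_mega_hosts(mega_hosts, job):
    """Return the mega-hosts whose attributes satisfy the job's requirements."""
    return [m for m in mega_hosts
            if m["cores"] >= job["cores"] and m["mem_gb"] >= job["mem_gb"]]
```

Collapsing identical hosts into one mega-host means a job is matched against a few aggregate entries rather than every individual execution host, which appears to be the point of the grouping step.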


Patent
Platform Computing | Date: 2012-06-20

According to one aspect of the present disclosure, a method and technique for job distribution within a grid environment is disclosed. The method includes: receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters, each execution cluster comprising one or more execution hosts; determining resource capacity corresponding to each execution cluster; determining resource requirements for the jobs; dynamically determining a pending job queue length for each execution cluster based on the resource capacity of the respective execution clusters and the resource requirements of the jobs; and forwarding jobs to the respective execution clusters according to the determined pending job queue length for the respective execution cluster.
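A minimal sketch of the queue-sizing idea (the proportionality rule and the factor are illustrative assumptions; the abstract only specifies that the pending queue length is derived dynamically from cluster capacity and job requirements):

```python
def pending_queue_length(capacity, avg_job_demand, factor=2.0):
    """Size a cluster's pending queue in proportion to how many
    average-sized jobs its capacity can absorb. The factor is an
    illustrative assumption, not taken from the patent."""
    slots = capacity // max(avg_job_demand, 1)
    return int(slots * factor)

def forward(jobs, clusters):
    """Forward jobs until each cluster's dynamically sized queue is full."""
    for cluster in clusters:
        limit = pending_queue_length(cluster["capacity"], cluster["avg_demand"])
        while jobs and len(cluster["pending"]) < limit:
            cluster["pending"].append(jobs.pop(0))
```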


According to one aspect of the present disclosure, a method and technique for data processing in a distributed computing system having a service-oriented architecture is disclosed. The method includes: receiving, by a workload input interface, workloads associated with an application from one or more clients for execution on the distributed computing system; identifying, by a resource management interface, available service hosts or service instances for computing the workloads received from the one or more clients; responsive to receiving an allocation request for the one or more hosts or service instances by the workload input interface, providing, by the resource management interface, address information of one or more workload output interfaces; and sending, by the one or more workload output interfaces, workloads received from the workload input interface to the one or more service instances.
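A minimal sketch of the three interfaces the abstract names, with hypothetical Python classes standing in for the real components (the allocation policy and the wiring between components are assumptions for illustration):

```python
class WorkloadOutput:
    """Workload output interface: forwards work to one service instance."""
    def __init__(self, instance):
        self.instance = instance  # a callable standing in for a service

    def send(self, workload):
        return self.instance(workload)

class ResourceManager:
    """Resource management interface: tracks available service instances
    and answers allocation requests with output-interface addresses."""
    def __init__(self, outputs):
        self.outputs = outputs  # address -> WorkloadOutput

    def allocate(self, n):
        return list(self.outputs)[:n]  # naive policy: first n addresses

def submit(workload, manager):
    """Workload input interface: request an allocation, then route the
    workload through the returned output interfaces."""
    addresses = manager.allocate(1)
    return [manager.outputs[a].send(workload) for a in addresses]

# Hypothetical wiring: two service instances behind output interfaces.
outputs = {"out-1": WorkloadOutput(lambda w: w.upper()),
           "out-2": WorkloadOutput(len)}
print(submit("payload", ResourceManager(outputs)))  # -> ['PAYLOAD']
```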
