
Platform Computing was a privately held software company primarily known for its job scheduling product, Load Sharing Facility (LSF). It was founded in 1992 in Toronto, Ontario, Canada, and headquartered in Markham, Ontario, with 11 branch offices across the United States, Europe and Asia. In January 2012, Platform Computing was acquired by IBM. (Wikipedia)


Patent
Platform Computing | Date: 2012-06-20

According to one aspect of the present disclosure, a method and technique for job distribution within a grid environment is disclosed. The method includes: receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters, each execution cluster comprising one or more execution hosts; determining resource attributes corresponding to each execution host of the execution clusters; grouping, for each execution cluster, execution hosts based on the resource attributes of the respective execution hosts; defining, for each grouping of execution hosts, a mega-host for the respective execution cluster, the mega-host for a respective execution cluster defining resource attributes based on the resource attributes of the respective grouped execution hosts; determining resource requirements for the jobs; and identifying candidate mega-hosts for the jobs based on the resource attributes of the respective mega-hosts and the resource requirements of the jobs.
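A minimal Python sketch may help make the grouping step concrete: execution hosts with identical resource attributes within a cluster collapse into a single "mega-host", and jobs are then matched against mega-hosts rather than individual hosts. The names and attribute keys here (`Host`, `MegaHost`, `cpu_type`, `mem_gb`) are illustrative assumptions, not taken from the patent.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Host:
    """An execution host with static resource attributes (illustrative)."""
    name: str
    cluster: str
    cpu_type: str
    mem_gb: int

@dataclass
class MegaHost:
    """Aggregates identically-attributed hosts within one execution cluster."""
    cluster: str
    cpu_type: str
    mem_gb: int
    hosts: list

def build_mega_hosts(hosts):
    """Group hosts per cluster by resource attributes (the 'mega-host' step)."""
    groups = defaultdict(list)
    for h in hosts:
        groups[(h.cluster, h.cpu_type, h.mem_gb)].append(h)
    return [MegaHost(c, cpu, mem, hs) for (c, cpu, mem), hs in groups.items()]

def candidate_mega_hosts(mega_hosts, job_req):
    """Match a job's resource requirements against mega-host attributes."""
    return [m for m in mega_hosts
            if m.cpu_type == job_req["cpu_type"] and m.mem_gb >= job_req["mem_gb"]]

hosts = [
    Host("h1", "clusterA", "x86", 64),
    Host("h2", "clusterA", "x86", 64),   # same attributes -> same mega-host as h1
    Host("h3", "clusterB", "x86", 128),
]
megas = build_mega_hosts(hosts)
print(candidate_mega_hosts(megas, {"cpu_type": "x86", "mem_gb": 96}))
```

The payoff of the grouping is that the scheduler matches a job against a handful of mega-hosts instead of every execution host, shrinking the candidate-search space.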


Patent
Platform Computing | Date: 2012-06-20

According to one aspect of the present disclosure, a method and technique for job distribution within a grid environment is disclosed. The method includes: receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters, each execution cluster comprising one or more execution hosts; determining resource capacity corresponding to each execution cluster; determining resource requirements for the jobs; dynamically determining a pending job queue length for each execution cluster based on the resource capacity of the respective execution clusters and the resource requirements of the jobs; and forwarding jobs to the respective execution clusters according to the determined pending job queue length for the respective execution cluster.
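The abstract does not fix a formula for the queue length, so the following Python sketch assumes a simple proportional rule: each cluster's pending-queue limit scales with its slot capacity divided by the average per-job demand. The `factor` over-subscription knob, `avg_job_slots`, and the headroom-based forwarding policy are all assumptions for illustration.

```python
import math

def pending_queue_lengths(clusters, avg_job_slots, factor=2.0):
    """Derive a pending-job queue length per execution cluster from its
    resource capacity and the jobs' average resource demand.
    'factor' is an assumed over-subscription knob, not from the patent."""
    return {name: max(1, math.ceil(factor * capacity / avg_job_slots))
            for name, capacity in clusters.items()}

def forward_jobs(jobs, clusters, pending, limits):
    """Forward each job to the cluster with the most remaining queue headroom."""
    assignments = {}
    for job in jobs:
        name = max(clusters, key=lambda c: limits[c] - pending[c])
        if limits[name] - pending[name] <= 0:
            break                      # every queue is full; hold remaining jobs
        pending[name] += 1
        assignments[job] = name
    return assignments

clusters = {"clusterA": 200, "clusterB": 50}   # total job slots per cluster
limits = pending_queue_lengths(clusters, avg_job_slots=4)
print(limits)                                  # {'clusterA': 100, 'clusterB': 25}
pending = {"clusterA": 0, "clusterB": 0}
print(forward_jobs([f"job{i}" for i in range(5)], clusters, pending, limits))
```

Because the limits are recomputed from current capacity and demand, a cluster that drains quickly keeps receiving forwarded jobs while a saturated one does not.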


Grant
Agency: Cordis | Branch: H2020 | Program: CSA | Phase: FETHPC-2-2014 | Award Amount: 2.55M | Year: 2015

The three most significant HPC bodies in Europe, PRACE, ETP4HPC and EESI, have come together within EXDCI to coordinate the strategy of the European HPC ecosystem in order to deliver its objectives. In particular, the project will harmonize the road-mapping and performance-monitoring activities of the ecosystem to produce tools for coherent strategy-making and its implementation by:

- Producing and aligning roadmaps for HPC technology and HPC applications
- Measuring the implementation of the European HPC strategy
- Building and maintaining relations with other international HPC activities and regions
- Supporting the generation of young talent as a crucial element of the development of European HPC

In this process, EXDCI will complement the Horizon 2020 calls and projects in the achievement of a globally competitive HPC ecosystem in Europe. This ecosystem is based on three pillars: HPC technology provision, HPC infrastructure, and HPC application resources. EXDCI will make sure that:

- The three pillars are developed in synergy in order to achieve the strategic goals of the entire ecosystem
- Tools exist for strategy review and definition for all three pillars: the project will operate a process for the creation of relevant roadmaps and the review of project results in the context of the entire environment

The project consortium represents the stakeholders and expertise of all three pillars.


Patent

According to one aspect of the present disclosure, a method and technique for facilitating the exchange of information between interconnected computing entities is disclosed. The method includes: receiving from a client, by a workload manager, a workload unit of data in need of processing by the client; initiating by the workload manager a persistent storage of the workload unit of data received from the client; without waiting for the initiated storage of the workload unit of data to complete, sending by the workload manager the workload unit of data to a plurality of compute nodes; and responsive to receiving a result of a processing of the workload unit of data by one of the plurality of compute nodes, canceling processing by the workload manager of the workload unit of data by a remainder of the plurality of compute nodes.
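The control flow described here (initiate persistence asynchronously, fan the unit out without waiting, cancel the losers once the first result arrives) can be sketched with Python's asyncio. `persist` and `compute` below are hypothetical stand-ins for the patent's storage and compute-node operations, with sleeps simulating I/O and processing latency.

```python
import asyncio

async def persist(unit):
    """Stand-in for durable storage of the workload unit (assumed I/O)."""
    await asyncio.sleep(0.05)

async def compute(node, unit):
    """Stand-in for processing on one compute node; latency varies per node."""
    await asyncio.sleep(0.01 * node)
    return f"result({unit}) from node {node}"

async def handle_workload(unit, nodes):
    # Initiate persistent storage, but do not wait for it to finish...
    storage = asyncio.create_task(persist(unit))
    # ...before sending the unit to all compute nodes.
    tasks = [asyncio.create_task(compute(n, unit)) for n in nodes]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:          # first result wins; cancel the remainder
        t.cancel()
    await storage              # storage completes in the background
    return next(iter(done)).result()

print(asyncio.run(handle_workload("unit-42", nodes=[1, 2, 3])))
```

Racing redundant copies of the work and keeping the first answer trades extra compute for lower tail latency, which is why the cancellation step matters.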


The University of Michigan is collaborating with IBM to develop and deliver "data-centric" supercomputing systems designed to increase the pace of scientific discovery in fields as diverse as aircraft and rocket engine design, cardiovascular disease treatment, materials physics, climate modeling and cosmology. The system is designed to enable high performance computing applications for physics to interact, in real time, with big data in order to improve scientists' ability to make quantitative predictions. IBM's systems use a GPU-accelerated, data-centric approach, integrating massive datasets seamlessly with high performance computing power, resulting in new predictive simulation techniques that promise to expand the limits of scientific knowledge.

The collaboration was announced this week in San Jose at the second annual OpenPOWER Summit 2016. The OpenPOWER Foundation, which U-M recently joined, is an open, collaborative, technical community based on IBM's POWER architecture. Several other Foundation members contributed to the development of this new high performance computing system, which has the potential to reduce computing costs by accelerating statistical inference and machine learning.

Working with IBM, U-M researchers have designed a computing resource called ConFlux to enable high performance computing clusters to communicate directly and at interactive speeds with data-intensive operations. The ConFlux cluster will be built with ~43 IBM Power8 CPU two-socket "Firestone" S822LC compute nodes providing 20 cores each, and 15 Power8 CPU two-socket "Garrison" compute nodes providing an additional 20 cores each. Each of the Garrison nodes will also host four NVIDIA Pascal GPUs connected via NVIDIA's NVLink technology to the Power8 system bus. Each node has local high-speed flash memory for random access, and all compute and storage are connected via a 100 Gb/s InfiniBand fabric. The InfiniBand and NVLink connectivity, combined with IBM CAPI technology, will provide the high data transfer throughput required for the data-driven computational physics research that will be conducted.

Hosted at U-M, the project establishes a hardware and software ecosystem to enable large-scale data-driven modeling of complex physical problems, such as the performance of an aircraft engine, which involves trillions of molecular interactions. ConFlux, funded by a grant from the National Science Foundation, aims to advance predictive modeling in several fields of computational science. IBM is providing servers and software solutions.

ConFlux meshes well with IBM's recent focus on data-centric computing systems. Advanced technologies such as data-centric computing systems are at the forefront of tackling big data challenges and advancing the pace of innovation. By moving computing power to where the data resides, organizations of all sizes can maximize performance and minimize latency in their systems, enabling them to gain deeper insights from research. These data-centric solutions are accelerated through open innovation and IBM's work with other members of the OpenPOWER Foundation. The incorporation of OpenPOWER technologies into a modular integrated system will enable U-M to configure the systems for its specific needs.
ConFlux incorporates IBM Power Systems LC servers, which were designed based on technologies and development efforts contributed by OpenPOWER Foundation members including Mellanox, NVIDIA and Tyan. It is also powered by the latest additions to the NVIDIA Tesla Accelerated Computing Platform: NVIDIA Tesla P100 GPU accelerators with the NVLink high-speed interconnect technology. (Earlier post.)

Additional data-centric solutions U-M is using include IBM Elastic Storage Server, IBM Spectrum Scale software (scale-out, parallel-access network-attached storage), and IBM Platform Computing software. In an internal comparison test conducted by U-M, the POWER8 system significantly outperformed a competing architecture by providing low-latency networks and a novel architecture that allows for the integrated use of central and graphics processing units.

As one of the first projects U-M will undertake with its advanced supercomputing system, researchers are working with NASA to use cognitive techniques to simulate turbulence around aircraft and rocket engines. They are combining large amounts of data from wind tunnel experiments and simulations to build computing models that are used to predict the aerodynamics around new configurations of an aircraft wing or engine. With ConFlux, U-M can more accurately model and study turbulence, helping to speed development of more efficient airplane designs. It will also improve weather forecasting, climate science and other fields that involve the flow of liquids or gases.

U-M is also studying cardiovascular disease for the National Institutes of Health. By combining noninvasive imaging, such as results from MRI and CT scans, with a physical model of blood flow, U-M hopes to help doctors estimate artery stiffness within an hour of a scan, serving as an early predictor of diseases such as hypertension. Studies are also planned to better understand how clouds interact with atmospheric circulation, the origins of the universe and stellar evolution, and the behavior of biologically inspired materials.
