U.S. Department of Energy’s Argonne Leadership Computing Facility (ALCF) and HPE Expand High-Performance Computing (HPC) Storage Capacity for Exascale
Hewlett Packard Enterprise (HPE) and the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, today announced that ALCF will deploy the new Cray ClusterStor E1000, a powerful parallel storage solution, as its latest storage system. The collaboration supports ALCF's scientific research in areas such as earthquake seismology, aerospace turbulence and shock waves, physical genomics and more. The deployment expands storage capacity for ALCF's workloads that require converged modeling, simulation, artificial intelligence (AI) and analytics, in preparation for Aurora, ALCF's forthcoming exascale supercomputer, powered by HPE and Intel and the first of its kind expected to be delivered in the U.S. in 2021.
The Cray ClusterStor E1000 system uses purpose-built software and hardware features to meet high-performance storage requirements of any size with significantly fewer drives. Designed to support the Exascale Era, which is characterized by the explosion of data and converged workloads, the Cray ClusterStor E1000 will power ALCF's future Aurora supercomputer to target the multitude of data-intensive workloads required to make breakthrough discoveries at unprecedented speed.
“ALCF is committed to enabling new experiences with Exascale Era technologies by deploying the infrastructure required for converged workloads in modeling, simulation, AI and analytics,” said Peter Ungaro, senior vice president and general manager, HPC and AI, at HPE. “Our introduction of the Cray ClusterStor E1000 delivers ALCF unmatched scalability and performance to meet next-generation HPC storage needs and support emerging, data-intensive workloads. We look forward to continuing our collaboration with ALCF and empowering its research community to unlock new value.”
ALCF’s two new storage systems, which it has named “Grand” and “Eagle,” use the Cray ClusterStor E1000 platform to gain an entirely new, cost-effective high-performance computing (HPC) storage solution that can accurately and efficiently manage growing converged workloads that today’s offerings cannot support.
“When Grand launches, it will benefit ALCF’s legacy petascale machines, providing increased capacity for the Theta compute system and enabling new levels of performance not just for traditional checkpoint-restart workloads, but also for complex workflows and metadata-intensive work,” said Mark Fahey, director of operations, ALCF.
“Eagle will help support the ever-increasing importance of data in the day-to-day activities of science,” said Michael E. Papka, director, ALCF. “By leveraging our experience with our current data-sharing system, Petrel, this new storage will help remove barriers to productivity and improve collaborations throughout the research community.”
The two new systems will provide a total of 200 petabytes (PB) of storage capacity and, through the Cray ClusterStor E1000’s intelligent software and hardware designs, will more accurately align data flows with target workloads. ALCF’s Grand and Eagle systems will help researchers accelerate a range of scientific discoveries across disciplines, and each is assigned to address the following:
- Computational capacity – ALCF’s “Grand” provides 150 PB of center-wide storage and new levels of input/output (I/O) performance to support the massive computational needs of its users.
- Simplified data-sharing – ALCF’s “Eagle” provides a 50 PB community file system to make data-sharing easier than ever among ALCF users, their collaborators and third parties.
ALCF plans to deliver its Grand and Eagle storage systems in early 2020. The systems will initially connect to existing ALCF supercomputers powered by HPE HPC systems: Theta, based on the Cray® XC40-AC™, and Cooley, based on the Cray CS-300. ALCF’s Grand, which is capable of one terabyte per second (TB/s) of bandwidth, will be optimized to support converged simulation science and data-intensive workloads once the Aurora exascale supercomputer is operational.
Source: ANL