Provide centralized high-speed data storage

GLADE-2
This photo shows the new GLADE-2 system, which provides NCAR systems with an additional 20 petabytes of high-performance centralized data storage. NCAR's two GLADE file systems now offer a total of 36.4 petabytes of storage for the Yellowstone and Cheyenne supercomputing environments.

The GLobally Accessible Data Environment (GLADE) provides centralized high-performance file systems spanning supercomputing, data post-processing, data analysis, visualization, and HPC-based data transfer services. It provides computation, analysis, and visualization work spaces common to all CISL HPC resources. Project space is allocated through NCAR’s allocation panels, while scratch and user spaces are available to all users of NCAR HPC resources. GLADE also hosts data from NCAR’s Research Data Archive (RDA), NCAR’s Community Data Portal, and the Earth System Grid, which curates CMIP5/AR5 data.
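On each CISL HPC system, these work spaces appear under a single directory tree, so the same paths resolve on every resource. The layout below is illustrative only; actual mount points, project codes, and quotas are assigned per allocation.

    /glade/p/<project>       project space allocated through NCAR's allocation panels
    /glade/scratch/<user>    scratch space for running jobs and intermediate data
    /glade/u/home/<user>     user home space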

GLADE’s architecture shifts user workflows from a design that centers on serving the supercomputer to a more scientifically efficient design that facilitates the flow of data. Through a globally accessible storage infrastructure, users now arrange their workflows to use stored data directly without first needing to move or copy it. Additional services like high-performance data transfer protocols enhance CISL’s ability to bring computational data from other sites to NCAR for post-processing, analysis, and visualization.
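As a concrete illustration of this data-centric workflow, the short Python sketch below opens a model output file in place from a GLADE project path and writes a reduced product to scratch space, with no staging or copying step in between. The paths, variable names, and use of xarray are assumptions for this sketch, not part of GLADE itself.

    import xarray as xr

    # Hypothetical GLADE paths: the same file system is mounted on the
    # supercomputers and on the analysis/visualization clusters, so the
    # data can be read in place rather than copied to local storage.
    SOURCE = "/glade/p/example_project/output/case01.monthly.nc"
    RESULT = "/glade/scratch/username/case01_tas_mean.nc"

    # Open the dataset directly from the shared file system; xarray reads
    # lazily, so only the slices actually used are pulled from disk.
    ds = xr.open_dataset(SOURCE)

    # Illustrative reduction: an area-mean surface air temperature series.
    # The variable and dimension names ("tas", "lat", "lon") are assumed.
    tas_mean = ds["tas"].mean(dim=("lat", "lon"))
    tas_mean.to_netcdf(RESULT)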

This work supports CISL’s computing imperative for hardware cyberinfrastructure by provisioning storage and networking systems customized to support efficient workflows for the atmospheric and related sciences. Specifically, GLADE facilitates typical user workflows as well as special efforts such as supporting the data flows for CMIP5 and the upcoming CMIP6. GLADE also advances CISL’s computing imperative for facilities by demonstrating high-performance data services that are critical for the supercomputing resources now operating at NWSC and will remain critical as next-generation resources enter production in early CY2017.

The GLADE environment has undergone several upgrades this year in preparation for the installation of NWSC-2 resources. All of the supporting infrastructure was upgraded to accommodate the installation of an additional 20 PB of storage in late 2016. All GLADE systems were upgraded to the current Red Hat OS, and the GPFS file system was upgraded to version 4.2.0 across all HPC resources. A new 40 Gb Ethernet I/O network was installed, and the GLADE-1 system was upgraded to 40 GbE. The new network infrastructure will allow GLADE-1 and GLADE-2 to be fully integrated and accessible to both Yellowstone and the new HPC resource, Cheyenne. Utilization of GLADE remains high, with the file systems reaching capacity several times in FY2016.

GLADE equipment was purchased with NSF Special funds, and it is supported by NSF Core funds including CSL funding.