The HLRN alliance jointly operates a distributed supercomputer system hosted at the sites Georg-August-Universität Göttingen and Zuse-Institut Berlin (ZIB). In September 2018, the HLRN-IV system phase 1 was put into operation. After the successful installation of phase 2, the total HLRN-IV system will hold more than 200,000 cores with a total peak performance of about 16 PFlop/s.
At the ZIB site, the HLRN-IV system phase 1 consists of the HLRN-III system Konrad for the period between September 2018 and September 2019. Once the HLRN-IV system phase 2 goes into operation, this phase 1 system will be switched off.
The following describes the HLRN-IV system phase 1 at the Göttingen site (Georg-August-Universität Göttingen).
Phase 1 is based on Intel Skylake processors and comprises 448 nodes. These nodes are divided into two categories (‘medium’ and ‘large’), which differ only in the amount of memory per node. Additionally, a GPU node was taken into production in September 2019.
The specific hardware of the nodes is as follows:
Three different file systems are available at each of the two sites, Göttingen and Berlin.
The HOME file system uses the DDN GRIDScaler (Spectrum Scale) with DDN SFA7700X block storage. It provides 340 TiB of storage space, of which 280 TiB can be used by our users; the remaining 60 TiB are reserved for snapshots and for the software in the module system. The file system supports disk and inode quotas for groups and users, which are checked on all nodes for every file access. Additionally, all the usual quota tools (e.g. hard and soft limits and grace periods) are available on the user, group, and fileset level. The HOME file system is exported via NFS to the login and compute nodes.
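Because quota limits are enforced on every file access, it can help to check how close you are to your limit before starting a large run. The following is a minimal sketch rather than an official tool: it assumes you have looked up your personal disk quota (in bytes) with the site's quota utilities, since the actual per-user limit is not stated here, and it simply sums up the apparent size of everything below your HOME directory to estimate the remaining headroom.

import os

# Hypothetical value: the real per-user quota must be obtained from the
# site's quota tools; 100 GiB is only an assumed example.
QUOTA_BYTES = 100 * 1024**3
TARGET = os.path.expanduser("~")

def directory_usage(root):
    """Sum the apparent size of all regular files below `root`."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            try:
                st = os.lstat(os.path.join(dirpath, name))
            except OSError:
                continue  # file vanished or is unreadable
            total += st.st_size
    return total

used = directory_usage(TARGET)
print(f"Used {used / 1024**3:.1f} GiB of an assumed "
      f"{QUOTA_BYTES / 1024**3:.0f} GiB quota "
      f"({100 * used / QUOTA_BYTES:.1f} %)")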
The HOME file system is backed up continuously, and daily snapshots are available.
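On Spectrum Scale systems, snapshots are commonly exposed as read-only copies under a hidden .snapshots directory; whether and where such a directory is visible on the HLRN login nodes is an assumption here, not a documented path. The sketch below simply copies a file from one such snapshot back into the live HOME directory.

import shutil
from pathlib import Path

# Assumed layout: snapshots visible under $HOME/.snapshots/<snapshot-name>/...
# The snapshot name and location are hypothetical and site-dependent.
home = Path.home()
snapshot = home / ".snapshots" / "daily-2019-09-01"   # hypothetical snapshot name
lost_file = Path("projects/run.cfg")                   # path relative to $HOME

src = snapshot / lost_file
dst = home / lost_file

if src.exists():
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)   # restore the file, preserving timestamps
    print(f"Restored {dst} from snapshot {snapshot.name}")
else:
    print(f"{src} not found; check the available snapshot names first")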
The WORK file system is a DDN EXAScaler (DDN Lustre) with two DDN ES18K embedded-storage systems per site. These systems have integrated OSS and a scalable MDS unit. This storage system has a net capacity of 8.1 PiB per site and a read/write performance of 81 GiB/s.
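To actually approach an aggregate bandwidth in that range, large files on a Lustre file system are usually striped over several OSTs. The sketch below drives the standard Lustre client tool lfs from Python; it assumes lfs is available on the login and compute nodes, the directory path on WORK is hypothetical, and the stripe count of 8 is only an illustrative value.

import subprocess
from pathlib import Path

# Hypothetical directory on the WORK (Lustre) file system.
workdir = Path("/scratch/myproject/large_output")   # path is an assumption
workdir.mkdir(parents=True, exist_ok=True)

# Stripe new files in this directory over 8 OSTs (illustrative value);
# files created here afterwards inherit this layout.
subprocess.run(["lfs", "setstripe", "-c", "8", str(workdir)], check=True)

# Inspect the resulting striping parameters of the directory.
subprocess.run(["lfs", "getstripe", str(workdir)], check=True)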
The WORK file system does not have automatic backups.
the archiving solution in Göttingen uses a Tape-Library with StorNext. The system can be accessed from the login nodes and consists of two nodes, which have the archive mounted as the home file system. These nodes support 1GiByte/s per data stream and 5GiByte/s overall bandwidth for the connection to the archive.
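Tape-backed archives generally handle a few large files much better than many small ones, so data is typically packed into a single archive file before being copied over. The sketch below is illustrative only: the archive mount point /archive/$USER and the results directory are hypothetical paths, not documented locations on the Göttingen login nodes.

import os
import tarfile
from pathlib import Path

# Hypothetical paths: the real archive mount point on the login nodes is
# site-specific and not given in this description.
archive_root = Path("/archive") / os.environ.get("USER", "unknown")
results_dir = Path.home() / "projects" / "run42"       # data to be archived

# Pack the whole results directory into one compressed tar file directly
# inside the archive mount, so the tape system sees a single large object.
archive_file = archive_root / "run42.tar.gz"
archive_root.mkdir(parents=True, exist_ok=True)
with tarfile.open(archive_file, "w:gz") as tar:
    tar.add(results_dir, arcname=results_dir.name)

print(f"Wrote {archive_file} ({archive_file.stat().st_size / 1024**2:.1f} MiB)")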
The ZIB uses a Hierarchical Storage Management (HSM) system with Sun StorEdge SAM-FS as the management software.