The petascale supercomputer Magnus (Latin for ‘great’) is a latest-generation Cray XC40 system hosting high-end supercomputing projects across the entire range of scientific fields supported by the Pawsey Supercomputing Centre.
Magnus comprises eight compute cabinets, each holding 48 blades, with four nodes per blade and two processors per node. Each compute node hosts two 12-core Intel Xeon E5-2690 v3 “Haswell” processors running at 2.6 GHz, giving a total of 35,712 cores and delivering in excess of 1 PetaFLOP of computing power. The compute nodes communicate amongst themselves over Cray’s high-speed, low-latency Aries interconnect.
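As a rough sanity check on the quoted figure, the theoretical peak can be estimated from the core count and clock speed. The sketch below assumes 16 double-precision FLOPs per core per cycle (AVX2 with two FMA units, typical for Haswell); this assumption and the resulting ~1.49 PFLOPS are back-of-the-envelope values, not official Pawsey numbers.

```python
# Estimate Magnus's theoretical peak from the figures quoted above.
# Assumption: each Haswell core sustains 16 double-precision FLOPs
# per cycle (4-wide DP AVX2 vectors x 2 FMA units x 2 ops per FMA).
cores = 35_712
clock_hz = 2.6e9
flops_per_cycle = 16

peak_pflops = cores * clock_hz * flops_per_cycle / 1e15
print(f"Theoretical peak: {peak_pflops:.2f} PFLOPS")  # ~1.49 PFLOPS
```

The estimate is comfortably above the 1,097 TeraFLOPS LINPACK result reported below, as expected for a measured versus theoretical figure.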
Local storage (also known as the scratch file system) is provided by a three-cabinet Cray Sonexion 1600 Lustre appliance, with a usable capacity of 3 PB and a sustained read/write performance of 70 GB/s.
In the November 2014 Top500 list, Magnus debuted at #41, achieving 1,097 TeraFLOPS (over 1 PetaFLOP). In the most recent June 2015 Top500 list, it was ranked #58. At the time of writing, this makes Magnus the most powerful public research supercomputer in the Southern Hemisphere.
Magnus was also ranked at #77 in the November 2014 Green500 list of the most power efficient supercomputers in the world.
Galaxy is a Cray XC30 system that supports radio-astronomy activities within the Pawsey Supercomputing Centre community.
It fulfils the real-time processing requirements of the Australian Square Kilometre Array Pathfinder (ASKAP) telescope, as well as providing for the reprocessing and research needs of the wider Australian radio-astronomy community, including those of the Murchison Widefield Array (MWA) telescope.
In the context of ASKAP, Galaxy runs the Central Science Processor, allowing pseudo-real-time processing of data delivered, to the Pawsey Supercomputing Centre, from the Murchison Radio-astronomy Observatory (MRO).
Galaxy consists of three cabinets containing 118 compute blades, each of which holds four nodes. Each node supports two 10-core Intel Xeon E5-2690 v2 “Ivy Bridge” processors operating at 3.00 GHz, for a total of 9,440 cores delivering around 200 TeraFLOPS of compute power. Communication between the nodes occurs via Cray’s high-speed, low-latency Aries interconnect.
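The 9,440-core total follows directly from the per-blade figures; a quick check:

```python
# Galaxy core count from the per-blade figures above.
blades = 118
nodes_per_blade = 4
sockets_per_node = 2
cores_per_socket = 10

nodes = blades * nodes_per_blade                        # 472 nodes
total_cores = nodes * sockets_per_node * cores_per_socket
print(nodes, total_cores)  # 472 9440
```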
Galaxy’s local storage is provided by a Cray Sonexion 1600 appliance, offering 1.3 petabytes of capacity via an FDR InfiniBand network.
Zythos, a latest-generation SGI UV2000 system, is a large shared-memory machine targeted towards memory-intensive jobs. Zythos consists of 24 UV blades. Twenty of the blades each contain two hex-core Intel “Sandy Bridge” processors and 256 GB RAM, and the remaining four each contain a single hex-core Intel processor, an NVIDIA Kepler K20 GPU, and 256 GB RAM. Altogether the machine boasts 264 CPU-cores, 4 GPUs, and a total of 6 TB RAM. Zythos is a partition within the Zeus cluster. Time on Zythos is allocated through the Pawsey Supercomputing Centre Director’s Share.
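The aggregate figures quoted for Zythos can be reproduced from the blade inventory:

```python
# Zythos totals from the blade inventory above.
cpu_only_blades = 20   # 2 hex-core CPUs + 256 GB RAM each
gpu_blades = 4         # 1 hex-core CPU + 1 K20 GPU + 256 GB RAM each
cores_per_cpu = 6
ram_per_blade_gb = 256

total_cores = (cpu_only_blades * 2 + gpu_blades * 1) * cores_per_cpu
total_ram_tb = (cpu_only_blades + gpu_blades) * ram_per_blade_gb / 1024
print(total_cores, total_ram_tb)  # 264 6.0
```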
Zeus is an SGI Linux cluster that supports pre- and post-processing of data, large shared memory computations and remote visualisation work. As part of the Pawsey Project portfolio, Zeus, together with other infrastructure in the Pawsey Centre, allows a diverse range of workflows to be undertaken.
Zeus is a heterogeneous cluster of 39 nodes in various configurations. Zythos, the largest node, is an SGI UV2000 system with 6 TB of shared memory, 264 Intel Xeon processor cores and 4 NVIDIA K20 GPUs. Each of Zeus’s other 38 nodes has two Intel Xeon CPUs: 8-core E5-2670 CPUs on some nodes and 10-core E5-2670 v2 CPUs on others. Each of these nodes also carries a GPU card, with a mixture of NVIDIA Quadro K5000, NVIDIA Tesla K20 and NVIDIA Tesla K40c GPUs across the cluster. These 38 nodes are also generously provisioned with memory, ranging from 128 GB through 256 GB to 512 GB.
While time on the Zythos node can be applied for directly, Zeus is provided as a support system for projects allocated time on other Pawsey Supercomputing Centre resources.
Nimbus is an Ocata OpenStack deployment based on Ubuntu 16.04 LTS hypervisors using the Ubuntu Cloud Archive repositories.
- 3x Dell KVM hypervisors, each containing:
  - 1 controller node (16 vCPUs, 128 GB RAM, 70 GB disk)
  - 1 DB monitoring node (MongoDB) (4 vCPUs, 16 GB RAM, 70 GB disk)
  - 8 test VMs for the test and dev environment (6 x 1 vCPU, 4 GB RAM, 70 GB disk, and 2 x 2 vCPUs, 8 GB RAM, 70 GB disk)
Storage is provided via CEPH, while CEPH RBD and the RADOS Gateway will also be available, allowing integration with S3 APIs.
- CEPH storage nodes:
  - 1x RAID1 SSD (2x physical disks):
    - ~70 GB OS partition
    - ~50 GB CEPH journal
  - 12x JBOD 2 TB – CEPH OSDs
- CEPH monitoring nodes:
  - 3x separated on different controller VMs
Athena is a state-of-the-art, next-generation high-performance computing system. It provides Pawsey researchers with access to cutting-edge technologies and facilitates the evaluation of those technologies.
• 80 nodes with 64-core Intel Xeon Phi 7210 processors with a 100 Gb/s Omni-Path interconnect
• 11 nodes with four NVIDIA Tesla P100 SXM2 GPUs with a 100 Gb/s InfiniBand interconnect
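The node counts above imply the following aggregate figures (a simple derivation from the list, not separately published totals):

```python
# Aggregate Athena figures implied by the node counts above.
knl_nodes, cores_per_knl = 80, 64
gpu_nodes, gpus_per_node = 11, 4

total_knl_cores = knl_nodes * cores_per_knl   # 5120 Xeon Phi cores
total_gpus = gpu_nodes * gpus_per_node        # 44 Tesla P100 GPUs
print(total_knl_cores, total_gpus)  # 5120 44
```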