The Pawsey Supercomputing Centre

Systems

Magnus

The petascale supercomputer Magnus, from the Latin for ‘Great’, is a latest-generation Cray XC40 system hosting high-end supercomputing projects across the entire range of scientific fields supported by the Pawsey Supercomputing Centre.

Magnus comprises eight compute cabinets, each holding 48 blades, with four nodes per blade. Each compute node hosts two 12-core Intel Xeon E5-2690 v3 “Haswell” processors running at 2.6 GHz, for a total of 35,712 cores delivering in excess of 1 PetaFLOP of computing power. The compute nodes communicate amongst themselves over Cray’s high-speed, low-latency Aries interconnect.
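
As a sanity check on these figures, here is a back-of-the-envelope sketch of Magnus’s theoretical peak; the assumption of 16 double-precision FLOPs per cycle per core (Haswell AVX2 with FMA) is ours, not a figure quoted by Pawsey:

```python
# Back-of-the-envelope check of Magnus's quoted figures.
# Assumption: 16 double-precision FLOPs/cycle/core (AVX2 with FMA on Haswell).
cores = 35_712          # total cores quoted above
clock_hz = 2.6e9        # 2.6 GHz Haswell
flops_per_cycle = 16    # assumed AVX2 FMA throughput per core

peak_pflops = cores * clock_hz * flops_per_cycle / 1e15
print(f"Theoretical peak: {peak_pflops:.2f} PFLOPS")   # ~1.49 PFLOPS

# The Top500 (Linpack) result quoted below, 1,097 TeraFLOPS, is the measured
# fraction of that theoretical peak.
print(f"Linpack efficiency: {1097e12 / (peak_pflops * 1e15):.0%}")  # ~74%
```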

Local storage (also known as the scratch file system) is provided by a three-cabinet Cray Sonexion 1600 Lustre appliance, with a usable capacity of 3PB and a sustained read/write performance of 70 GB/sec.
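
To put the 70 GB/sec figure in perspective, a simple arithmetic sketch (decimal units assumed) of how long it would take to stream the entire scratch file system at that rate:

```python
# Time to stream the full 3 PB scratch file system at the quoted
# sustained rate of 70 GB/s (decimal units assumed throughout).
capacity_bytes = 3e15          # 3 PB usable capacity
bandwidth_bytes_per_s = 70e9   # 70 GB/s sustained read/write

hours = capacity_bytes / bandwidth_bytes_per_s / 3600
print(f"~{hours:.1f} hours to read or write the whole file system")  # ~11.9 hours
```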

In the November 2014 Top500 list, Magnus debuted at #41, achieving 1,097 TeraFLOPS (1 PetaFLOP+). In the most recent June 2015 Top500 list, Magnus was ranked at #58. At the time of writing, this makes Magnus the most powerful public research supercomputer in the Southern Hemisphere.

Magnus was also ranked at #77 in the November 2014 Green500 list of the most power efficient supercomputers in the world.

Galaxy

Galaxy is a Cray XC30 system that supports radio-astronomy activities within the Pawsey Supercomputing Centre community.

It fulfils the real-time processing requirements of the Australian Square Kilometre Array Pathfinder (ASKAP) telescope, as well as providing for the reprocessing and research needs of the wider Australian radio-astronomy community, including those of the Murchison Widefield Array (MWA) telescope.

In the context of ASKAP, Galaxy runs the Central Science Processor, allowing pseudo-real-time processing of data delivered to the Pawsey Supercomputing Centre from the Murchison Radio-astronomy Observatory (MRO).

Galaxy consists of three cabinets containing 118 compute blades, each of which has four nodes. Each node hosts two 10-core Intel Xeon E5-2690 v2 “Ivy Bridge” processors operating at 3.00 GHz, for a total of 9,440 cores delivering around 200 TeraFLOPS of compute power. Communication between the nodes occurs via Cray’s high-speed, low-latency Aries interconnect.
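
The quoted totals can be reproduced from the blade breakdown; in the sketch below, the 8 double-precision FLOPs per cycle per core (Ivy Bridge AVX, no FMA) is our assumption rather than a published figure:

```python
# Reproducing Galaxy's core count and rough peak from the figures above.
blades, nodes_per_blade, cpus_per_node, cores_per_cpu = 118, 4, 2, 10
cores = blades * nodes_per_blade * cpus_per_node * cores_per_cpu
print(cores)                        # 9440 cores, as quoted

clock_hz = 3.0e9                    # 3.00 GHz Ivy Bridge
flops_per_cycle = 8                 # assumed AVX throughput per core
print(f"~{cores * clock_hz * flops_per_cycle / 1e12:.0f} TFLOPS theoretical peak")
# -> ~227 TFLOPS, consistent with "around 200 TeraFLOPS" of delivered compute
```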

Galaxy’s local storage is provided by a Cray Sonexion 1600 appliance offering 1.3 PB of capacity, connected via an FDR InfiniBand network.

Zythos

Zythos, a latest-generation SGI UV2000 system, is a large shared-memory machine targeted towards memory-intensive jobs. Zythos consists of 24 UV blades. Twenty of the blades each contain two hex-core Intel “Sandy Bridge” processors and 256 GB RAM, and the remaining four each contain a single hex-core Intel processor, an NVIDIA Kepler K20 GPU, and 256 GB RAM. Altogether the machine boasts 264 CPU-cores, 4 GPUs, and a total of 6 TB RAM. Zythos is a partition within the Zeus cluster. Time on Zythos is allocated through the Pawsey Supercomputing Centre Director’s Share.
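
The headline totals follow directly from the blade breakdown; a quick arithmetic sketch (binary GB-to-TB conversion assumed):

```python
# Totals for Zythos from the blade configuration described above.
dual_cpu_blades, gpu_blades = 20, 4
cores_per_cpu = 6                 # hex-core Sandy Bridge
ram_per_blade_gb = 256

cores = dual_cpu_blades * 2 * cores_per_cpu + gpu_blades * 1 * cores_per_cpu
gpus = gpu_blades                 # one NVIDIA Kepler K20 per GPU blade
ram_tb = (dual_cpu_blades + gpu_blades) * ram_per_blade_gb / 1024

print(cores, gpus, ram_tb)        # 264 CPU-cores, 4 GPUs, 6.0 TB RAM
```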

Zeus

Zeus is an SGI Linux cluster that supports pre- and post-processing of data, large shared memory computations and remote visualisation work. As part of the Pawsey Project portfolio, Zeus, together with other infrastructure in the Pawsey Centre, allows a diverse range of workflows to be undertaken.

Zeus is a heterogeneous cluster of 39 nodes in various configurations. The largest node is Zythos, an SGI UV2000 system with 6 TB of shared memory, 264 Intel Xeon processor cores and 4 NVIDIA K20 GPUs. The other 38 nodes each have two Intel Xeon CPUs: 8-core E5-2670 processors on some nodes and 10-core E5-2670 v2 processors on others. Each of these nodes also carries a GPU card, drawn from a mixture of NVIDIA Quadro K5000, NVIDIA Tesla K20 and NVIDIA Tesla K40c GPUs. These 38 nodes are also relatively memory-rich, with 128 GB, 256 GB or 512 GB of RAM depending on the node.

While time on the Zythos node can be applied for directly, Zeus is provided as a support system for projects allocated time on other Pawsey Supercomputing Centre resources.

NeCTAR Research Cloud

The Pawsey Supercomputing Centre is part of the national NeCTAR Research Cloud Federation. The Pawsey Research Cloud node is housed within the state-of-the-art Pawsey Centre in Perth, WA.

The WA NeCTAR Research Cloud features 46 IBM System x3755 M3 servers as compute nodes. Each node has:

  • 64 cores at 2.3GHz
  • 256GB RAM
  • 6x 10Gbps links for storage and external access

This makes a grand total of 2,944 cores and 11.5 TB of memory.

It also includes 31 IBM System x3650 M4 servers as Ceph storage nodes. Each storage node has 24 TB of raw SATA disk, and together they provide 216 TB of volume storage.
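
These totals are easy to cross-check; in the sketch below, the relationship between the 744 TB of raw disk and the 216 TB of volume storage (Ceph replication plus overheads) is our reading, not a published breakdown:

```python
# Cross-checking the WA NeCTAR node totals quoted above.
compute_nodes, cores_per_node, ram_gb_per_node = 46, 64, 256
print(compute_nodes * cores_per_node)             # 2944 cores
print(compute_nodes * ram_gb_per_node / 1024)     # 11.5 TB of memory

storage_nodes, raw_tb_per_node = 31, 24
print(storage_nodes * raw_tb_per_node)  # 744 TB of raw disk; the 216 TB volume
                                        # figure reflects Ceph replication and
                                        # overheads (our assumption).
```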
