Nimbus is Pawsey’s Cloud service and, as a national facility, it is available to any researcher in Australia.
Specifically designed for research applications, Nimbus facilitates large data workflows, computational tasks and even offers data analytics capabilities.
What does Nimbus offer?
- Fast access to the Pawsey data storage facility – Nimbus integrates Pawsey’s compute and storage facilities
- High Performance Computing on the OpenStack system – this is beneficial for smaller projects (while larger projects can continue on Magnus)
- A centralised system and toolset for data science – such as data analytics and machine learning; with data centric workflows and visualisation
- Open access for Australian researchers, including state government researchers
- A new application process that supports scientific research
- A dedicated, expert support team
There are two application options for accessing Nimbus – the standard Nimbus application and the Special Project application. You can apply for either by visiting https://apply.pawsey.org.au
The Nimbus application is designed for researchers who need to use individual or small clusters of Virtual Machines (VMs) from the standard set of VMs. These come in 1, 2, 4, 8 and 16 core sizes. Applicants simply need to be an Australian researcher who can describe their compute requirements and intended research use case in their application.
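For researchers new to OpenStack, launching one of these standard VMs can be sketched with the OpenStack command-line client. The flavor, image, keypair and network names below are illustrative assumptions only – check what is actually available to your project with `openstack flavor list` and `openstack image list`.

```shell
# Authenticate using the OpenRC file downloaded from the Nimbus dashboard
source nimbus-openrc.sh

# List the flavors (1, 2, 4, 8 or 16 core sizes) available to the project
openstack flavor list

# Boot a VM from an Ubuntu image with an SSH keypair attached.
# All names here (m1.small, my-keypair, my-network, my-research-vm)
# are placeholders, not actual Nimbus identifiers.
openstack server create \
  --flavor m1.small \
  --image "Ubuntu 16.04" \
  --key-name my-keypair \
  --network my-network \
  my-research-vm
```

Once the server is active, `openstack server show my-research-vm` reports its IP address for SSH access.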
The Nimbus Special Project application is for users with needs outside the mainstream. These may be researchers who need large clusters (Spark, Hadoop, MPI, etc.), or projects for which 16 cores aren’t enough.
Such users can apply for a special project, which usually has a pre-specified limited duration. These applications require more technical detail and a more rigorous scientific justification.
With over fifty projects currently running, Nimbus has some serious technology on offer!
Nimbus is an OpenStack Ocata deployment running on Ubuntu 16.04 LTS hypervisors, installed from the Ubuntu Cloud Archive repositories.
- 3x Dell KVM hypervisors, each containing:
  - 1 controller node (16 vCPUs, 128 GB RAM, 70 GB disk)
  - 1 DB monitoring node (MongoDB) (4 vCPUs, 16 GB RAM, 70 GB disk)
  - 8 test VMs for the test and development environment (6x 1 vCPU, 4 GB RAM, 70 GB disk; 2x 2 vCPUs, 8 GB RAM, 70 GB disk)
Storage is provided via CEPH, with CEPH RBD block storage and RADOS Gateway object storage also to be made available, the latter integrating with S3 APIs.
- CEPH storage nodes:
  - 1x RAID1 SSD (2x physical disks):
    - ~70 GB OS partition
    - ~50 GB CEPH journal
  - 12x 2 TB JBOD disks – CEPH OSDs
- CEPH monitoring nodes:
  - 3x, separated across different controller VMs
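Once the RADOS Gateway’s S3-compatible interface is available, any standard S3 client should be able to talk to it. The sketch below uses the AWS CLI; the endpoint URL and bucket name are placeholders, not real Nimbus values, and the credential step assumes OpenStack’s EC2-style credentials.

```shell
# Generate EC2-style access/secret keys from your OpenStack account,
# then configure the AWS CLI with them (aws configure).
openstack ec2 credentials create

# Point the S3 client at the RADOS Gateway endpoint (placeholder URL)
aws s3 mb s3://my-research-bucket --endpoint-url https://object.example.org
aws s3 cp results.csv s3://my-research-bucket/ --endpoint-url https://object.example.org
aws s3 ls s3://my-research-bucket --endpoint-url https://object.example.org
```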
In Q4 2017, Pawsey will be deploying GPU nodes on Nimbus. This additional GPU hardware will allow users to create virtual machines with more computational power.
Throughout the year, the Pawsey Supercomputing Centre runs Cloud user training sessions, which form part of the Centre’s National Training initiative.
This training introduces users to Nimbus, including:
- An introduction to creating and setting up virtual machines
- Migrating VMs from other cloud services
- Advanced tools available in Nimbus
For all Pawsey training and events, see the Events calendar.