The Pawsey Supercomputing Centre


EoI for Pawsey Training

Please consider filling in the form below to help Pawsey understand your training needs.

This form is another opportunity to open a communication channel with researchers; this time we aim to gain a better understanding of their training needs.
  • Please choose from the list below the training that you believe will benefit you and your research project the most. A learning-outcomes section for each session follows, to give you a better idea of what to expect.
  • Please specify which other HPC or data training would be relevant to you
  • Please enter the email address at which you wish to be contacted
  • Where would it be most suitable for you to take the selected training? City and/or institution
  • How do you believe this training will impact your research and/or your usage of Pawsey services?

Expected learning outcomes

Introduction to Supercomputing 

  • Receive a brief overview of the computational resources available at the Pawsey Supercomputing Centre
  • Understand parallel computing, its terminology and the various components of a supercomputer
  • Understand resource allocations and their usage, accounting and monitoring

Intermediate Supercomputing

  • Learn about the software environment on supercomputing systems, including software modules and compilation procedures
  • Understand how to construct complex workflows with the job scheduling system SLURM
  • Be able to make effective use of parallel filesystems
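As a taste of the job-scheduling material, a minimal Slurm batch script might look like the following sketch (the partition, account and program names are placeholders; the real values depend on your Pawsey allocation):

```shell
#!/bin/bash -l
#SBATCH --job-name=example        # job name shown in the queue
#SBATCH --nodes=1                 # number of nodes requested
#SBATCH --ntasks=4                # total number of parallel tasks
#SBATCH --time=00:10:00           # walltime limit (hh:mm:ss)
#SBATCH --partition=work          # placeholder partition name
#SBATCH --account=project123      # placeholder project allocation

# Load the software environment via modules
module load gcc

# Launch the parallel program with srun
srun ./my_program
```

The script is submitted with `sbatch`, and more complex workflows chain such jobs together with dependencies.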

Pawsey Data Services

  • Receive a hands-on introduction to Mediaflux, including the Mediaflux Query Language
  • Know how to move data to and from mass storage
  • Understand what the Pawsey Data portal frontend offers

Remote Visualisation

  • Learn about different visualisation options, including X11 forwarding, Strudel, and remote client/server options
  • Gain hands-on experience with the VisIt and ParaView scientific visualisation packages

Shared Memory Programming (OpenMP)

  • A hands-on introduction to the OpenMP API, covering the following topics:
  • Threading concepts
  • OpenMP directives
  • Data scoping
  • Runtime library routines
  • Using environment variables to control the behaviour of an OpenMP program
  • OpenMP SIMD directives for vectorisation on multi- and manycore architectures

Distributed Memory Programming (Message Passing Interface)

  • Basic concepts of distributed memory programming
  • The MPI API
  • Point-to-point, blocking, and non-blocking communication
  • Collective communication
  • Communicators and virtual topologies
  • A brief introduction to advanced topics such as one-sided communication and remote memory access

High-Performance File I/O

  • In-depth topics related to the Lustre filesystem
  • A brief, example-based introduction to parallel file formats, including
  • MPI-IO
  • HDF5

Parallel Visualisation with VisIt

  • How to run VisIt in a remote client/server setting
  • Running VisIt in a parallel, distributed memory setting
  • Hands-on examples showing how to use different data operators in VisIt
  • Movie generation
  • An introduction to VisIt’s Python scripting interface
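A short, hypothetical sketch of the scripting interface; these commands run only inside VisIt's own CLI (`visit -cli`), not a standalone Python interpreter, and the database and variable names are placeholders:

```python
# Run inside VisIt's CLI (visit -cli), where these functions are predefined
OpenDatabase("example.silo")          # placeholder dataset
AddPlot("Pseudocolor", "pressure")    # placeholder variable name
DrawPlots()                           # render the plot
SaveWindow()                          # write the current frame to an image file
```

Looping such a script over time steps and saving each frame is the usual route to movie generation.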