The Pawsey Supercomputing Centre


Training Request form

If you are a principal investigator for a project using Pawsey infrastructure and require training to take full advantage of the services we provide, please consider requesting training by filling in this form
  • Please provide the full name of the principal investigator requesting the training
  • Please enter the email account linked to your Pawsey project
  • Please provide your LDAP project name
  • Please enter the state, institution and city where you and your work are located
  • Please specify whether the session is expected to be closed (only for the group requesting the training) or open to researchers from other groups or institutions
  • Please specify when the training is needed and why
  • Please choose from the list below the training that you are requesting
  • Specify

Expected learning outcomes

    Introduction to Supercomputing

    • Receive a brief overview of the computational resources available at the Pawsey Supercomputing Centre
    • Understand parallel computing, its terminology, and the various components of a supercomputer
    • Understand resource allocations and their usage, accounting and monitoring

    Intermediate Supercomputing

    • Learn about the software environment on supercomputing systems, including software modules and compilation procedures
    • Understand how to construct complex workflows with the job scheduling system SLURM
    • Be able to make effective use of parallel filesystems
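To give a flavour of the workflow material, a minimal SLURM batch script might look like the sketch below. All names here are placeholders (the project name, partition and module names differ on each Pawsey system), so treat this as an illustration of the directive syntax, not a ready-to-submit script.

```bash
#!/bin/bash -l
#SBATCH --job-name=example        # job name shown by squeue
#SBATCH --account=project123      # placeholder: your Pawsey project name
#SBATCH --partition=work          # placeholder: partition names vary by system
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=00:10:00           # walltime limit hh:mm:ss

# Load the software environment via modules (module name is illustrative)
module load gcc

# Launch the program across the allocated tasks
srun ./my_program
```

A second job can be made to wait for this one with `sbatch --dependency=afterok:<jobid> next.sh`, which is one way SLURM supports chaining steps into a larger workflow.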

    Pawsey Data Services

    • Receive a hands-on introduction to Mediaflux, including the Mediaflux Query Language
    • Know how to move data to and from mass storage
    • Understand what the Pawsey Data portal frontend offers

    Remote Visualisation

    • Learn about different visualisation options, including X11 forwarding, Strudel, and remote client/server options
    • Gain hands-on experience with the VisIt and ParaView scientific visualisation packages

    Shared Memory Programming (OpenMP)

    A hands-on introduction to the OpenMP API, covering the following topics:
    • Threading concepts
    • OpenMP directives
    • Data scoping
    • Runtime library routines
    • Using environment variables to control the runtime behaviour of an OpenMP program
    • OpenMP SIMD directives for vectorisation on multi- and manycore architectures

    Distributed Memory Programming (Message Passing Interface)

    • Basic concepts of distributed memory programming
    • The MPI API
    • Point-to-point, blocking, and non-blocking communication
    • Collective communication
    • Communicators and virtual topologies
    • A brief introduction to advanced topics such as one-sided communication and remote memory access
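As a sketch of the point-to-point style covered here, the classic blocking send/receive pair between two ranks looks like this (it requires an MPI library: compile with `mpicc` and launch with at least two tasks via `srun` or `mpirun`):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Blocking send: returns once the send buffer may be reused */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocking receive: waits until the message has arrived */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

The non-blocking counterparts, MPI_Isend and MPI_Irecv, return immediately and are completed later with MPI_Wait, which is what allows communication to overlap with computation.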

    High-Performance File I/O

    • In-depth topics related to the Lustre filesystem
    • A brief, example-based introduction to parallel I/O libraries and file formats, including MPI-IO, HDF5 and ADIOS
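To illustrate the parallel-I/O idea in its simplest form, the following MPI-IO sketch has every rank write its own id into a shared file at a non-overlapping offset (like the MPI example above, it needs an MPI library, `mpicc`, and a parallel launcher):

```c
#include <mpi.h>

int main(int argc, char **argv) {
    int rank;
    MPI_File fh;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* All ranks open the same file together */
    MPI_File_open(MPI_COMM_WORLD, "ranks.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each rank writes at an offset determined by its rank, so no
       two ranks touch the same bytes and no locking is needed */
    MPI_Offset offset = (MPI_Offset)rank * (MPI_Offset)sizeof(int);
    MPI_File_write_at(fh, offset, &rank, 1, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```

Higher-level libraries such as HDF5 and ADIOS build structured, self-describing files on top of this kind of coordinated access.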

    Parallel visualisation with VisIt

    • How to run VisIt in a remote client/server setting
    • Running VisIt in a parallel, distributed memory setting
    • Hands-on examples showing how to use different data operators in VisIt
    • Movie generation
    • An introduction to VisIt’s Python scripting interface