The ARCHER Service is now closed and has been superseded by ARCHER2.

Message Passing Programming with MPI: in collaboration with Women in HPC

This course is being organised in collaboration with Women in HPC, which aims to improve the representation of women in the HPC community. The course tutors from EPCC will be Neelofer Banglawala and Adrian Jackson.

The world's largest supercomputers are used almost exclusively to run applications which are parallelised using Message Passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.

Details

Parallel programming by definition involves co-operation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.
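
To make the model concrete, the following minimal sketch (illustrative only, not part of the course material) shows the basic pattern in C: two processes in MPI_COMM_WORLD exchange a single integer using a blocking point-to-point send and receive.

    /* Minimal illustrative sketch of message passing in C with MPI:
       rank 1 sends an integer to rank 0, which receives and prints it. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, value;

        MPI_Init(&argc, &argv);                  /* start the MPI environment  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's identifier  */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes  */

        if (size >= 2)
        {
            if (rank == 1)
            {
                value = 42;
                /* blocking point-to-point send to rank 0, tag 0 */
                MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }
            else if (rank == 0)
            {
                /* blocking receive of one integer from rank 1 */
                MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("Rank 0 received %d from rank 1\n", value);
            }
        }

        MPI_Finalize();                          /* shut down MPI cleanly      */
        return 0;
    }

Such a program would typically be compiled with an MPI wrapper compiler such as mpicc and launched on several processes with mpirun, or with the equivalent commands provided by the local system.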

The course is normally delivered in an intensive two-day format, or as in this case, over three days. It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This enables lecture material to be supported by the tutored practical sessions in order to reinforce the key concepts.

This course is free to all academics.

Intended learning outcomes

On completion of this course students should be able to:

  • Understand the message-passing model in detail.
  • Implement standard message-passing algorithms in MPI.
  • Debug simple MPI codes.
  • Measure and comment on the performance of MPI codes (a minimal timing sketch follows this list).
  • Design and implement efficient parallel programs to solve regular-grid problems.
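
As an indication of how the performance outcome above is typically approached, here is a minimal, illustrative timing sketch using MPI_Wtime; the course exercises may time different regions or report results differently.

    /* Illustrative sketch: wrap the region of interest in MPI_Wtime()
       calls and let rank 0 report the elapsed wall-clock time. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank;
        double tstart, tstop;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);   /* synchronise before timing       */
        tstart = MPI_Wtime();          /* wall-clock time in seconds      */

        /* ... communication or computation to be measured goes here ... */

        MPI_Barrier(MPI_COMM_WORLD);   /* make sure everyone has finished */
        tstop = MPI_Wtime();

        if (rank == 0)
            printf("Elapsed time: %f seconds\n", tstop - tstart);

        MPI_Finalize();
        return 0;
    }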

Pre-requisites

Programming Languages:

  • Fortran, C or C++.

It is not possible to do the exercises in Java.

Timetable

Day 1

  • 09:30 - 10:15 : Message-Passing Concepts
  • 10:15 - 11:00 : Practical: Parallel Traffic Modelling
  • 11:00 - 11:30 : Break
  • 11:30 - 12:00 : MPI Programs
  • 12:00 - 12:15 : MPI on ARCHER
  • 12:15 - 13:00 : Practical: Hello World
  • 13:00 - 14:00 : Lunch
  • 14:00 - 14:30 : Point-to-Point Communication
  • 14:30 - 15:30 : Practical: Pi
  • 15:30 - 16:00 : Break
  • 16:00 - 16:45 : Communicators, Tags and Modes
  • 16:45 - 17:30 : Practical: Ping-Pong

Day 2

  • 09:30 - 10:00 : Non-Blocking Communication
  • 10:00 - 11:00 : Practical: Message Round a Ring
  • 11:00 - 11:30 : Break
  • 11:30 - 12:00 : Collective Communication
  • 12:00 - 13:00 : Practical: Collective Communication (an illustrative sketch follows the timetable)
  • 13:00 - 14:00 : Lunch
  • 14:00 - 14:30 : Virtual Topologies
  • 14:30 - 15:30 : Practical: Message Round a Ring (cont.)
  • 15:30 - 16:00 : Break
  • 16:00 - 16:45 : Derived Data Types
  • 16:45 - 17:30 : Practical: Message Round a Ring (cont.)

Day 3

  • 09:30 - 10:00 : Introduction to the Case Study
  • 10:00 - 11:00 : Practical: Case Study
  • 11:00 - 11:30 : Break
  • 11:30 - 13:00 : Practical: Case Study (cont.)
  • 13:00 - 14:00 : Lunch
  • 14:15 - 15:00 : Designing MPI Programs
  • 15:00 - 15:30 : Practical: Case Study (cont.)
  • 15:30 - 16:00 : Break
  • 16:00 - 16:30 : Scaling and Performance Analysis
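
To give a flavour of the practical sessions above, here is a minimal, illustrative sketch (not the official exercise sheet) of a pi calculation in C: each process sums part of a midpoint-rule approximation to the integral of 4/(1+x^2) from 0 to 1, and the partial sums are combined with the collective operation MPI_Reduce, as covered on Day 2.

    /* Illustrative sketch only: approximate pi by splitting the sum
       across processes and combining partial results with MPI_Reduce. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 840   /* number of intervals; divides evenly by many small process counts */

    int main(int argc, char *argv[])
    {
        int rank, size, i;
        double partial = 0.0, pi = 0.0, x;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* each process sums a strided subset of the intervals */
        for (i = rank + 1; i <= N; i += size)
        {
            x = (i - 0.5) / N;
            partial += 4.0 / (1.0 + x * x);
        }
        partial /= N;

        /* combine the partial sums onto rank 0 */
        MPI_Reduce(&partial, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Approximation to pi with %d processes: %.10f\n", size, pi);

        MPI_Finalize();
        return 0;
    }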

Course Materials

Links to the slides and exercise material for this course.

Location

The course will be held at University College London.

Registration

Please use the registration page to register for ARCHER courses.

Questions?

If you have any questions please contact the ARCHER Helpdesk.
