
Different Paradigms of Parallel Computing

Jul 16, 2025
2 credits

Spots remaining: 18

Full course description

Term: Summer 2025

Date: July 16th, 2025

Time: 10:00 a.m. to Noon

Location: Online Only

Instructor: Chris Kuhlman

Presented By: Advanced Research Computing (ARC)

Description:

Concurrent programming is a general term meaning that (most often) a single execution of a program performs operations simultaneously (concurrently) by using multiple hardware processors (e.g., cores on a compute node, GPU cores/threads on an accelerator). There are many approaches to implementing concurrency; some of the more common are threading on one compute node (e.g., via OpenMP or Pthreads), interprocess communication across multiple compute nodes (e.g., MPI, ACE, message queues), GPU computing, forking, and embarrassingly parallel workloads. Moreover, each class of concurrency can have several approaches and tools, and different programming languages sometimes have their own implementations of concurrency constructs. There are also concepts, such as critical sections and barriers, that are useful across all of these classes. This workshop presents many of these ideas through working examples that use five programming languages and other tools. The workshop has two thrusts: (1) to convey concepts and ideas, and (2) to provide examples that work on ARC resources.
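For a flavor of the examples, here is a minimal OpenMP sketch in C (a generic illustration, not one of the workshop's own codes) showing a parallel region that uses a critical section to guard a shared update and a barrier to synchronize the threads:

    /* Minimal OpenMP sketch: threads add their ids to a shared sum inside
       a critical section, synchronize at a barrier, then one thread prints
       the result.  Compile with, e.g., gcc -fopenmp openmp_demo.c */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int sum = 0;

        #pragma omp parallel
        {
            int tid = omp_get_thread_num();

            /* Critical section: only one thread at a time may update sum. */
            #pragma omp critical
            sum += tid;

            /* Barrier: no thread continues until every thread has added its id. */
            #pragma omp barrier

            /* A single thread reports the total. */
            #pragma omp single
            printf("sum of thread ids = %d\n", sum);
        }
        return 0;
    }

Pthreads, Java threads, and Python threading express this same protect-then-synchronize pattern with their own constructs.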

Topics covered, with example codes, are: (1) high-level representations of different concurrency mechanisms, (2) srun and job arrays within Slurm, (3) Python threading (NOT … yet), (4) Pthreads and Java threads, (5) OpenMP (for threading), (6) MPI, (7) sockets, and (8) GPUs.
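As an illustration of the interprocess-communication class, here is a minimal MPI sketch in C (again a generic illustration, not a workshop code) in which every rank contributes a value that is combined on rank 0; on a Slurm cluster such a program would typically be launched with srun, though exact module and flag choices vary by system:

    /* Minimal MPI sketch: each rank contributes a value and rank 0
       receives the combined sum.  Compile with mpicc; launch with,
       e.g., srun -n 4 ./a.out (details vary by cluster). */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Interprocess communication: sum each rank's value onto rank 0. */
        int local = rank + 1, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %d\n", size, total);

        MPI_Finalize();
        return 0;
    }

A Slurm job array, by contrast, covers the embarrassingly parallel case: it launches many independent copies of a program rather than communicating ranks.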

You should consider attending if any of these apply: (1) you are a first-year graduate student, (2) you are in your first year of research computing, (3) you want to speed up your computations (where possible), (4) you want to learn which methods are more easily implemented than others, (5) you want to learn about design choices, (6) you want to implement your own parallel codes, or (7) you use software like ANSYS or LAMMPS and want to understand what happens when you select different options for concurrency/parallelism in your runs.

Prerequisites:

A basic understanding of computer programming (in any language); even a beginner level is sufficient.
The main prerequisites are conceptual; understanding these concepts is important: a directory, a file, a programming language, input data, output data, and that there exist ways to run a code other than serially (you don’t have to know what these are; the purpose of this workshop is to explain different approaches to concurrency).
A basic understanding of UNIX/Linux shells (commands like cd, ls, pwd, mkdir) is helpful, but not required.
An account on ARC and an account to run jobs (i.e., to run codes) will eventually be needed, but neither is required for the workshop.
The ability to connect to the ARC clusters (either directly on campus or via VPN from home) is needed to do the exercises yourself, but you do not have to do them during the workshop; the instructor will demonstrate the exercises.
No additional software (e.g., compilers) is needed; all required software is already on the ARC clusters.

Sign up for this course today!

Enroll