Safe Programming Methods

Lecturer: dr hab. inż. Paweł T. Wojciechowski, prof. PP

E-kursy page: https://ekursy.put.poznan.pl/course/view.php?id=37629

Consultations: via e-mail

For decades, Moore’s Law has held true: “the number of transistors in an integrated circuit doubles approximately every two years”. Historically, each new generation of processors made programs run faster. However, this trend is shifting. Future computer systems will feature CPUs with ever more cores, but individual cores will not necessarily be faster than those of previous generations. Thus, to fully utilize the hardware, computations must be executed concurrently.

Unfortunately, writing concurrent programs for modern architectures—which do not guarantee sequential consistency—is significantly more challenging than writing analogous sequential ones. Additionally, testing such programs is difficult, and identifying or resolving all potential bugs is not always feasible.

The goal of these lectures is to explore programming methods for writing correct and efficient concurrent programs. First, we will examine the correct and efficient use of monitors. Second, we will discuss alternative synchronization methods, such as transactional memory. Third, we will present selected automatic verification methods. Fourth, we will address correctness issues in concurrent programs designed for modern architectures. Finally, we will introduce selected methods and languages that simplify the development of parallel and distributed programs.

If time permits, we will also give an introduction to functional programming using OCaml, a standard functional language rooted in 1970s ML. OCaml has inspired many modern languages, including Rust. By exploring the core constructs, we will emphasize expressiveness and code safety. OCaml elegantly combines powerful features such as modules, first-class functions, pattern matching, automatic memory management, and static typing with abstract types, type variables, and immutability, all of which naturally facilitate robust software development.

This course is a continuation of the Concurrent and System Programming course.

Lectures

Below is a list of topics. Each entry gives the date, a synopsis with articles and other materials, and links to slides.

5.03

Discussion of the scope of the course and the form of assessment. Review of selected material from the Concurrent and System Programming course.

slides

12.03, 19.03

Concurrent programming and synchronization, illustrated by monitors in C# or Java:

  • basic monitor operations, correct access to shared data, invariants, correct design patterns, incorrect practices (e.g., broken double-checked locking),

  • deadlock, starvation, performance problems caused by lock contention and priority inversion, advanced synchronization problems and optimizations (e.g., coping with spurious wake-ups and avoiding spurious lock conflicts).
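The monitor rules above can be illustrated with a minimal sketch in Java (my own example, not taken from the lecture slides): a bounded buffer whose condition waits are wrapped in `while` loops, so spurious wake-ups and `notifyAll` races cannot violate the buffer invariant.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A correct monitor sketch: the invariant 0 <= items.size() <= capacity
// holds whenever the lock is free; every wait re-checks its guard in a loop.
public class BoundedBuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) {  // while, not if: re-check after wake-up
            wait();
        }
        items.addLast(item);
        notifyAll();                        // wake consumers waiting on "not empty"
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {           // guards against spurious wake-ups
            wait();
        }
        T item = items.removeFirst();
        notifyAll();                        // wake producers waiting on "not full"
        return item;
    }

    public synchronized int size() { return items.size(); }
}
```

Replacing either `while` with `if` is the classic bug: a spuriously woken thread would proceed with a false guard and break the invariant.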

An Introduction to Programming with C# Threads

slides

26.03

Dynamic detection of race conditions during program execution, illustrated by the Eraser tool: the lockset algorithm and its optimizations (initialization of variables, read-only shared data, read-write locks), correctness and completeness of the algorithm.
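The core of the lockset algorithm can be sketched in a few lines of Java (a toy simplification of the paper's algorithm, deliberately ignoring the initialization and read-only state machine): each shared variable keeps a candidate set C(v), which is intersected with the set of locks the accessing thread holds; an empty C(v) means no single lock consistently protects v.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy lockset refinement: C(v) := C(v) ∩ locks_held on every access to v.
public class LockSet {
    private final Map<String, Set<String>> candidates = new HashMap<>();

    /** Record an access to variable v by a thread holding locksHeld.
     *  Returns false when no common lock remains, i.e. a potential race. */
    public boolean access(String v, Set<String> locksHeld) {
        Set<String> c = candidates.get(v);
        if (c == null) {
            candidates.put(v, new HashSet<>(locksHeld)); // first access: C(v) := locks_held
            return !locksHeld.isEmpty();
        }
        c.retainAll(locksHeld);                          // intersect with held locks
        return !c.isEmpty();                             // empty set => report a warning
    }
}
```

Eraser refines this with per-variable states so that initialization and read-only data do not trigger false alarms.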

Eraser: A Dynamic Data Race Detector for Multithreaded Programs

slides

2.04

Detection of high-level data races by static analysis: the algorithm and its correctness and completeness (false positives: unnecessary warnings; false negatives: missed errors).
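A high-level data race can exist even when every individual access is synchronized. As a hedged illustration (my own example, not from the paper), consider a shared point where each getter holds the lock, yet a compound read acquires the lock twice and can observe an inconsistent view:

```java
// No low-level race: all field accesses are synchronized. Still, readBuggy()
// reads x and y under two separate lock acquisitions, so another thread's
// swap() may interleave between them and an inconsistent pair be observed.
public class SharedPoint {
    private int x = 1, y = 2;

    public synchronized void swap() { int t = x; x = y; y = t; }
    public synchronized int getX() { return x; }
    public synchronized int getY() { return y; }

    /** High-level race: two lock acquisitions, no atomicity of the view. */
    public int[] readBuggy() { return new int[] { getX(), getY() }; }

    /** Consistent view: both fields are read while holding the lock once. */
    public synchronized int[] readAtomic() { return new int[] { x, y }; }
}
```

The static analysis in the paper flags exactly such sets of fields that are accessed together in some critical regions but separately in others.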

High-level Data Races

slides

9.04

Detection of programming errors (division by zero, race conditions, deadlocks) by model checking, illustrated by the Java Pathfinder tool.

Java Pathfinder (JPF)

slides

9.04, 23.04

Conditional critical regions (CCRs) and transactional memory: low-level operations on transactional memory, an implementation of CCRs using these operations, data structures (ownership records and transaction descriptors), and an algorithm for atomic writes to transactional memory.
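A CCR has the form `atomic (condition) { body }`. The paper implements CCRs on top of word-based software transactional memory; as a semantics-only sketch (a lock-based encoding of my own, not the STM algorithm from the paper), a CCR can be expressed with one global monitor:

```java
import java.util.function.BooleanSupplier;

// Reference semantics of a CCR: block until the guard holds, run the body
// atomically, then re-test all guards because the shared state changed.
public class Ccr {
    private final Object monitor = new Object();

    public void atomic(BooleanSupplier guard, Runnable body) throws InterruptedException {
        synchronized (monitor) {
            while (!guard.getAsBoolean()) {   // wait until the condition holds
                monitor.wait();
            }
            body.run();                        // executes atomically w.r.t. this monitor
            monitor.notifyAll();               // state changed: other guards may now hold
        }
    }
}
```

The transactional implementation replaces the single global lock with optimistic reads, ownership records, and an atomic commit, which is what makes CCRs lightweight in practice.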

Language Support for Lightweight Transactions

slides 1 slides 2

30.04, 7.05

Memory models, illustrated by the Java language:

  • a memory model as a specification of the correct semantics of concurrent programs and of legal implementations of compilers and virtual machines,

  • weak vs. strong memory models, contemporary limitations of the classical model of sequential consistency, global analysis and code optimization,

  • the happens-before-based memory model and its weaknesses; a memory model that accounts for circular causality,

  • examples of controversial program code transformations.
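The role of happens-before edges can be shown with the classic safe-publication litmus test (a standard example, sketched here in Java): without an edge between the write of `data` and its read, the reader may see `ready == true` yet still observe the default value of `data`, because plain writes may be reordered by the compiler or the hardware.

```java
// With `ready` volatile, the write of data happens-before the read of data
// in any execution where reader() observes ready == true. Removing the
// volatile modifier turns this into a data race with no such guarantee.
public class Publication {
    private int data = 0;
    private volatile boolean ready = false;   // remove volatile => data race

    public void writer() {
        data = 42;        // ordered before the volatile write below
        ready = true;     // volatile write: publishes everything written so far
    }

    /** Spins until published; with volatile, guaranteed to return 42. */
    public int reader() {
        while (!ready) { Thread.onSpinWait(); }
        return data;      // the volatile read acquired the writer's effects
    }
}
```

Justifying which outcomes are allowed in the racy variant, without permitting out-of-thin-air values, is exactly what makes the causality part of the Java memory model subtle.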

The Java Memory Model

slides

14.05

Invited lecture

21.05

Parallel programming illustrated by the Cilk language: programming constructs, the directed acyclic graph (DAG) as a model of multithreaded computation, performance measures on a multi-core processor, and greedy scheduling with upper bounds on computation time.
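Cilk's spawn/sync constructs map naturally onto Java's fork/join framework; here is the minicourse's canonical example, parallel Fibonacci, sketched in Java rather than Cilk, with `fork()` playing the role of `spawn` and `join()` the role of `sync`. The computation unfolds into a DAG whose work T1 and span T∞ bound the running time on P processors.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Parallel fib(n): each call spawns fib(n-1) and computes fib(n-2) itself,
// then syncs. A greedy scheduler achieves Tp <= T1/P + T∞ on this DAG.
public class Fib extends RecursiveTask<Long> {
    private final int n;
    public Fib(int n) { this.n = n; }

    @Override protected Long compute() {
        if (n < 2) return (long) n;
        Fib left = new Fib(n - 1);
        left.fork();                         // "spawn": may run in parallel
        long right = new Fib(n - 2).compute();
        return right + left.join();          // "sync": wait for the spawned child
    }

    public static long fib(int n) {
        return new ForkJoinPool().invoke(new Fib(n));
    }
}
```

(An efficient implementation would of course cut off to a sequential version for small n; the point here is only the spawn/sync structure.)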

A Minicourse on Multithreaded Programming

Further reading (not obligatory):

OpenCilk: A Modular and Extensible Software Infrastructure for Fast Task-Parallel Code

slides

28.05

Distributed programming with guarantees of eventual consistency, illustrated by cloud types and conflict-free replicated data types (CRDTs).
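The simplest state-based CRDT, the grow-only counter (G-Counter), can be sketched as follows (a textbook construction, here written in Java): each replica increments only its own slot, and merge takes the element-wise maximum, which is commutative, associative, and idempotent, so replicas converge regardless of message ordering or duplication.

```java
// State-based G-Counter: one slot per replica; value() sums the slots;
// merge() joins two states by taking the per-slot maximum.
public class GCounter {
    private final long[] slots;        // one slot per replica
    private final int id;              // this replica's index

    public GCounter(int replicas, int id) {
        this.slots = new long[replicas];
        this.id = id;
    }

    public void increment() { slots[id]++; }

    public long value() {
        long sum = 0;
        for (long s : slots) sum += s;
        return sum;
    }

    /** Join with the state received from another replica. */
    public void merge(GCounter other) {
        for (int i = 0; i < slots.length; i++) {
            slots[i] = Math.max(slots[i], other.slots[i]);
        }
    }
}
```

Because merge is a join in a lattice of states, repeated or reordered synchronization messages never lose increments, which is the essence of strong eventual consistency.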

Cloud Types for Eventual Consistency

Conflict-free Replicated Data Types

Further reading (not obligatory):

A Comprehensive Study of Convergent and Commutative Replicated Data Types

slides

29.05 (Friday), room L123BT

Invited lecture (in place of the 16.04 lecture)

11.06

Colloquium

18.06

Distributed programming in the message-passing model (illustrated by the Erlang language).
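Erlang processes share nothing and communicate only by asynchronous messages to per-process mailboxes. As a hedged sketch of that model in Java (the lecture itself uses Erlang), each "process" is a thread draining its own queue; this echo process replies to the sender's mailbox, the analogue of `Pid ! {self(), Msg}` followed by a `receive`.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Actor-style echo "process": a daemon thread with a private mailbox.
// Messages carry the sender's mailbox so the process can reply.
public class Echo {
    record Msg(BlockingQueue<String> replyTo, String body) {}

    /** Spawns an echo process and returns its mailbox. */
    public static BlockingQueue<Msg> spawn() {
        BlockingQueue<Msg> mailbox = new LinkedBlockingQueue<>();
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    Msg m = mailbox.take();                 // receive
                    m.replyTo().put("echo: " + m.body());   // reply to sender
                }
            } catch (InterruptedException e) {
                // process exits
            }
        });
        t.setDaemon(true);
        t.start();
        return mailbox;
    }
}
```

The point of the model is that all interaction goes through the mailbox: there is no shared mutable state, so there is nothing to lock.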

The development of Erlang

Typed First-Class Communication Channels and Mobility for Concurrent Scripting Languages

Further reading (not obligatory):

Nomadic Pict: programming languages, communication infrastructure overlays, and semantics for mobile computation

Labs

In the labs, students will explore various concurrent programming constructs, with a focus on memory safety on modern architectures that lack sequential consistency. We will also discuss algorithms for synchronization mechanisms and non-blocking data structures. This material complements the lectures.

Below are papers explaining popular algorithms for non-blocking data structures; pseudocode for these algorithms is also presented on the slides:
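One of the classic non-blocking structures in this family is the Treiber stack, sketched here in Java (my own rendition of the well-known algorithm): push and pop retry a compare-and-swap on the head pointer instead of taking a lock, so no thread ever blocks another. This sketch deliberately ignores the ABA problem.

```java
import java.util.concurrent.atomic.AtomicReference;

// Lock-free (Treiber) stack: all updates go through a CAS on `head`.
public class TreiberStack<T> {
    private static final class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T value) {
        Node<T> oldHead, newHead;
        do {
            oldHead = head.get();
            newHead = new Node<>(value, oldHead);
        } while (!head.compareAndSet(oldHead, newHead));  // retry on contention
    }

    /** Pops the top element, or returns null if the stack is empty. */
    public T pop() {
        Node<T> oldHead;
        do {
            oldHead = head.get();
            if (oldHead == null) return null;             // empty stack
        } while (!head.compareAndSet(oldHead, oldHead.next));
        return oldHead.value;
    }
}
```

Some interleaved operations may retry their CAS, but system-wide progress is guaranteed: every failed CAS means some other thread's operation succeeded (lock-freedom).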

Hardware

To measure throughput and response time, use one of the six NUMA servers named pmem1-6. Each is equipped with two Intel® Xeon® Gold 6252N processors (2.3 GHz), 192 GiB of DRAM, and six Intel® Optane™ DCPMM 128 GB persistent-memory modules. Each processor has 24 cores (48 logical threads) and 36 MB of L3 cache. The machines run OpenSUSE Tumbleweed (Linux kernel 5.8). The servers use 1 TB Intel® DC P4510 NVMe SSDs as external storage and are connected to a 10 Gbps Ethernet switch. The installed network cards support RDMA (Remote Direct Memory Access).

Access to these resources is provided through the computing cluster.

This document was generated on Apr 22, 2026.