Checkpoint synchronization

{{task|Concurrency}}[[Category:Classic CS problems and programs]]{{requires|Concurrency}}
Checkpoint synchronization is the problem of synchronizing multiple [[task]]s. Consider a workshop where several workers ([[task]]s) assemble the details of some mechanism. When each of them has completed their piece, the details are put together. There is no store, so a worker who finishes their part first must wait for the others before starting another one. Putting the details together is the ''checkpoint'' at which the [[task]]s synchronize before going their separate ways.
 
 
If you can, implement workers joining and leaving.
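To make the pattern concrete before the individual language entries, here is a minimal illustrative sketch (not tied to any entry below) using Java's <code>java.util.concurrent.CyclicBarrier</code>; the worker count, cycle count and simulated work times are arbitrary choices for the example.

<syntaxhighlight lang="java">import java.util.concurrent.CyclicBarrier;

public class CheckpointSketch {
    public static void main(String[] args) throws InterruptedException {
        final int nWorkers = 4;
        final int nCycles = 3;
        // The barrier is the checkpoint: nobody passes it until all nWorkers have arrived.
        CyclicBarrier checkpoint = new CyclicBarrier(nWorkers,
                () -> System.out.println("-- checkpoint: all parts ready, assembling --"));

        Thread[] threads = new Thread[nWorkers];
        for (int i = 0; i < nWorkers; i++) {
            final int id = i + 1;
            threads[i] = new Thread(() -> {
                try {
                    for (int c = 1; c <= nCycles; c++) {
                        long work = 100 + (long) (Math.random() * 900); // simulated work time
                        System.out.println("worker " + id + " working for " + work + " ms");
                        Thread.sleep(work);
                        System.out.println("worker " + id + " waiting at checkpoint");
                        checkpoint.await(); // block until every worker has finished this cycle
                    }
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
    }
}</syntaxhighlight>

The barrier action runs once per cycle, after the last worker arrives and before any worker is released, which corresponds to the "putting the details together" step described above.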
 
=={{header|Ada}}==
<syntaxhighlight lang="ada">with Ada.Calendar; use Ada.Calendar;
with Ada.Numerics.Float_Random;
with Ada.Text_IO; use Ada.Text_IO;
end Test_Checkpoint;
 
</syntaxhighlight>
Sample output:
<pre style="height: 200px;overflow:scroll">
F ends shift
D ends shift
</pre>
 
=={{header|BBC BASIC}}==
{{works with|BBC BASIC for Windows}}
<syntaxhighlight lang="bbcbasic"> INSTALL @lib$+"TIMERLIB"
nWorkers% = 3
DIM tID%(nWorkers%)
PROC_killtimer(tID%(I%))
NEXT
ENDPROC</syntaxhighlight>
'''Output:'''
<pre>
Worker 2 starting (5 ticks)
</pre>
 
=={{header|C}}==
Using OpenMP. Compiled with <code>gcc -Wall -fopenmp</code>.
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
 
return 0;
}</syntaxhighlight>
=={{header|C sharp|C#}}==
{{works with|C sharp|10}}
<syntaxhighlight lang="csharp">using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
 
namespace Rosetta.CheckPointSync;
 
public class Program
{
public static async Task Main()
{
RobotBuilder robotBuilder = new RobotBuilder();
Task work = robotBuilder.BuildRobots(
"Optimus Prime", "R. Giskard Reventlov", "Data", "Marvin",
"Bender", "Number Six", "C3-PO", "Dolores");
await work;
}
 
public class RobotBuilder
{
static readonly string[] parts = { "Head", "Torso", "Left arm", "Right arm", "Left leg", "Right leg" };
static readonly Random rng = new Random();
static readonly object key = new object();
 
public Task BuildRobots(params string[] robots)
{
int r = 0;
Barrier checkpoint = new Barrier(parts.Length, b => {
Console.WriteLine($"{robots[r]} assembled. Hello, {robots[r]}!");
Console.WriteLine();
r++;
});
var tasks = parts.Select(part => BuildPart(checkpoint, part, robots)).ToArray();
return Task.WhenAll(tasks);
}
 
private static int GetTime()
{
//Random is not threadsafe, so we'll use a lock.
//There are better ways, but that's out of scope for this exercise.
lock (key) {
return rng.Next(100, 1000);
}
}
 
private async Task BuildPart(Barrier barrier, string part, string[] robots)
{
foreach (var robot in robots) {
int time = GetTime();
Console.WriteLine($"Constructing {part} for {robot}. This will take {time}ms.");
await Task.Delay(time);
Console.WriteLine($"{part} for {robot} finished.");
barrier.SignalAndWait();
}
}
 
}
}</syntaxhighlight>
{{out}}
<pre style="height:30ex;overflow:scroll">
Constructing Head for Optimus Prime. This will take 607ms.
Constructing Torso for Optimus Prime. This will take 997ms.
Constructing Left arm for Optimus Prime. This will take 201ms.
Constructing Right arm for Optimus Prime. This will take 993ms.
Constructing Left leg for Optimus Prime. This will take 165ms.
Constructing Right leg for Optimus Prime. This will take 132ms.
Right leg for Optimus Prime finished.
Left leg for Optimus Prime finished.
Left arm for Optimus Prime finished.
Head for Optimus Prime finished.
Right arm for Optimus Prime finished.
Torso for Optimus Prime finished.
Optimus Prime assembled. Hello, Optimus Prime!
 
Constructing Right arm for R. Giskard Reventlov. This will take 772ms.
Constructing Left leg for R. Giskard Reventlov. This will take 722ms.
Constructing Head for R. Giskard Reventlov. This will take 140ms.
Constructing Left arm for R. Giskard Reventlov. This will take 299ms.
Constructing Right leg for R. Giskard Reventlov. This will take 637ms.
Constructing Torso for R. Giskard Reventlov. This will take 249ms.
Head for R. Giskard Reventlov finished.
Torso for R. Giskard Reventlov finished.
Left arm for R. Giskard Reventlov finished.
Right leg for R. Giskard Reventlov finished.
Left leg for R. Giskard Reventlov finished.
Right arm for R. Giskard Reventlov finished.
R. Giskard Reventlov assembled. Hello, R. Giskard Reventlov!
 
//etc
</pre>
=={{header|C++}}==
{{works with|C++11}}
<syntaxhighlight lang="cpp">#include <iostream>
#include <chrono>
#include <atomic>
for(auto& t: threads) t.join();
std::cout << "Assembly is finished";
}</syntaxhighlight>
{{out}}
<pre>
Worker Mary finished work
Assembly is finished
</pre>
 
=={{header|Clojure}}==
With a fixed number of workers, this would be very straightforward in Clojure by using a ''CyclicBarrier'' from ''java.util.concurrent''.
So to make it interesting, this version supports workers dynamically joining and parting, and uses the (2013) ''core.async'' library for Go-like channels.
Also, each worker passes a value to the checkpoint, so that some ''combine'' function could consume them once they're all received.
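For comparison with the fixed-size barrier approach mentioned above, the dynamic joining and parting this entry aims for maps onto <code>java.util.concurrent.Phaser</code> on the JVM. The following is only a hypothetical sketch of that idea (worker names, cycle counts and sleep times are made up); it is not part of the Clojure solution below.

<syntaxhighlight lang="java">import java.util.concurrent.Phaser;

public class PhaserSketch {
    static void startWorker(Phaser checkpoint, String name, int cycles) {
        new Thread(() -> {
            try {
                for (int c = 1; c <= cycles; c++) {
                    Thread.sleep(200 + (long) (Math.random() * 300)); // simulated work on one part
                    System.out.println(name + " made a part (cycle " + c + ")");
                    checkpoint.arriveAndAwaitAdvance(); // checkpoint: wait for all current members
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println(name + " leaves");
            checkpoint.arriveAndDeregister(); // leaving workers no longer block the others
        }).start();
    }

    public static void main(String[] args) throws InterruptedException {
        Phaser checkpoint = new Phaser(3); // three workers are members from the start
        startWorker(checkpoint, "A", 3);
        startWorker(checkpoint, "B", 4);
        startWorker(checkpoint, "C", 5);

        Thread.sleep(300);
        checkpoint.register();             // a fourth worker joins the running group
        startWorker(checkpoint, "D", 2);
    }
}</syntaxhighlight>

Each worker arrives one last time while deregistering, so the remaining members are never left waiting for it.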
<syntaxhighlight lang="clojure">(ns checkpoint.core
(:gen-class)
(:require [clojure.core.async :as async :refer [go <! >! <!! >!! alts! close!]]
(worker ckpt 10 (monitor 2))))
 
</syntaxhighlight>
 
=={{header|D}}==
<syntaxhighlight lang="d">import std.stdio;
import std.parallelism: taskPool, defaultPoolThreads, totalCPUs;
 
buildMechanism(42);
buildMechanism(11);
}</syntaxhighlight>
{{out|Example output}}
<pre>Build detail 0
Checkpoint reached. Assemble details ...
Mechanism with 11 parts finished: 55</pre>
 
=={{header|E}}==
 
That said, here is an implementation of the task as stated. We start by defining a 'flag set' data structure (which is hopefully also useful for other problems), which allows us to express the checkpoint algorithm straightforwardly while being protected against the possibility of a task calling <code>deliver</code> or <code>leave</code> too many times. Note also that each task gets its own reference denoting its membership in the checkpoint group; thus it can only speak for itself and not break any global invariants.
 
<syntaxhighlight lang="e">/** A flagSet solves this problem: There are N things, each in a true or false
* state, and we want to know whether they are all true (or all false), and be
* able to bulk-change all of them, and all this without allowing double-
waits with= makeWorker(piece, checkpoint)
}
interp.waitAtTop(promiseAllFulfilled(waits))</syntaxhighlight>
 
=={{header|Erlang}}==
A team of 5 workers assemble 3 items. The time it takes to assemble 1 item is 0 - 100 milliseconds.
<syntaxhighlight lang="erlang">
-module( checkpoint_synchronization ).
 
end,
worker_loop( Worker, N - 1, Checkpoint ).
</syntaxhighlight>
{{out}}
<pre>
Worker 5 item 1
</pre>
=={{header|FreeBASIC}}==
The library ontimer.bi was taken from the [https://www.freebasic.net/forum/viewtopic.php?f=7&t=23454 FreeBASIC forums].
<syntaxhighlight lang="freebasic">#include "ontimer.bi"
 
Randomize Timer
Dim Shared As Uinteger nWorkers = 3
Dim Shared As Uinteger tID(nWorkers)
Dim Shared As Integer cnt(nWorkers)
Dim Shared As Integer checked = 0
 
Sub checkpoint()
Dim As Boolean sync
If checked = 0 Then sync = False
checked += 1
If (sync = False) And (checked = nWorkers) Then
sync = True
Color 14 : Print "--Sync Point--"
checked = 0
End If
End Sub
 
Sub task(worker As Uinteger)
Redim Preserve cnt(nWorkers)
Select Case cnt(worker)
Case 0
cnt(worker) = Rnd * 3
Color 15 : Print "Worker " & worker & " starting (" & cnt(worker) & " ticks)"
Case -1
Exit Select
Case Else
cnt(worker) -= 1
If cnt(worker) = 0 Then
Color 7 : Print "Worker "; worker; " ready and waiting"
cnt(worker) = -1
checkpoint
cnt(worker) = 0
End If
End Select
End Sub
 
Sub worker1
task(1)
End Sub
Sub worker2
task(2)
End Sub
Sub worker3
task(3)
End Sub
 
Do
OnTimer(500, @worker1, 1)
OnTimer(100, @worker2, 1)
OnTimer(900, @worker3, 1)
Sleep 1000
Loop</syntaxhighlight>
{{out}}
<pre>Worker 1 starting (2 ticks)
Worker 1 ready and waiting
Worker 3 starting (1 ticks)
Worker 3 ready and waiting
--Sync Point--
Worker 3 starting (1 ticks)
Worker 3 ready and waiting
Worker 2 ready and waiting
Worker 1 starting (1 ticks)
Worker 2 starting (0 ticks)
Worker 1 ready and waiting
--Sync Point--
Worker 3 starting (0 ticks)
Worker 2 starting (1 ticks)
Worker 1 starting (1 ticks)
Worker 3 starting (2 ticks)
Worker 2 ready and waiting
Worker 1 ready and waiting
Worker 2 starting (1 ticks)
Worker 1 starting (0 ticks)
Worker 3 ready and waiting
--Sync Point--
Worker 2 ready and waiting
Worker 1 starting (1 ticks)
Worker 3 starting (1 ticks)
Worker 2 starting (3 ticks)
Worker 1 ready and waiting
Worker 3 ready and waiting</pre>
=={{header|Go}}==
'''Solution 1, WaitGroup'''
 
The type sync.WaitGroup in the standard library implements a sort of checkpoint synchronization. It allows one goroutine to wait for a number of other goroutines to indicate something, such as completing some work.
 
This first solution is a simple interpretation of the task, starting a goroutine (worker) for each part, letting the workers run concurrently, and waiting for them to all indicate completion. This is efficient and idiomatic in Go.
 
<syntaxhighlight lang="go">package main
import (
"log"
"math/rand"
"sync"
"time"
)
 
func worker(part string) {
log.Println(part, "worker begins part")
time.Sleep(time.Duration(rand.Int63n(1e6)))
log.Println(part, "worker completes part")
wg.Done()
}
 
var (
partList = []string{"A", "B", "C", "D"}
nAssemblies = 3
wg sync.WaitGroup
)
 
func main() {
rand.Seed(time.Now().UnixNano())
for c := 1; c <= nAssemblies; c++ {
log.Println("begin assembly cycle", c)
wg.Add(len(partList))
for _, part := range partList {
go worker(part)
}
wg.Wait()
log.Println("assemble. cycle", c, "complete")
}
}</syntaxhighlight>
{{out}}
Sample run, with race detector option to show no race conditions detected.
<pre>
$ go run -race r1.go
2018/06/04 15:44:11 begin assembly cycle 1
2018/06/04 15:44:11 A worker begins part
2018/06/04 15:44:11 B worker begins part
2018/06/04 15:44:11 B worker completes part
2018/06/04 15:44:11 D worker begins part
2018/06/04 15:44:11 A worker completes part
2018/06/04 15:44:11 C worker begins part
2018/06/04 15:44:11 D worker completes part
2018/06/04 15:44:11 C worker completes part
2018/06/04 15:44:11 assemble. cycle 1 complete
2018/06/04 15:44:11 begin assembly cycle 2
2018/06/04 15:44:11 A worker begins part
2018/06/04 15:44:11 B worker begins part
2018/06/04 15:44:11 A worker completes part
2018/06/04 15:44:11 C worker begins part
2018/06/04 15:44:11 D worker begins part
2018/06/04 15:44:11 C worker completes part
2018/06/04 15:44:11 B worker completes part
2018/06/04 15:44:11 D worker completes part
2018/06/04 15:44:11 assemble. cycle 2 complete
2018/06/04 15:44:11 begin assembly cycle 3
2018/06/04 15:44:11 A worker begins part
2018/06/04 15:44:11 B worker begins part
2018/06/04 15:44:11 A worker completes part
2018/06/04 15:44:11 C worker begins part
2018/06/04 15:44:11 D worker begins part
2018/06/04 15:44:11 B worker completes part
2018/06/04 15:44:11 C worker completes part
2018/06/04 15:44:11 D worker completes part
2018/06/04 15:44:11 assemble. cycle 3 complete
$
</pre>
 
'''Solution 2, channels'''
 
Channels also synchronize, and in addition can send data. The solution shown here is very similar to the WaitGroup solution above but sends data on a channel to simulate a completed part. The channel operations provide synchronization and a WaitGroup is not needed.
 
<syntaxhighlight lang="go">package main
 
import (
"log"
"math/rand"
"strings"
"time"
)
 
func worker(part string, completed chan string) {
log.Println(part, "worker begins part")
time.Sleep(time.Duration(rand.Int63n(1e6)))
p := strings.ToLower(part)
log.Println(part, "worker completed", p)
completed <- p
}
 
var (
partList = []string{"A", "B", "C", "D"}
nAssemblies = 3
)
 
func main() {
rand.Seed(time.Now().UnixNano())
completed := make([]chan string, len(partList))
for i := range completed {
completed[i] = make(chan string)
}
for c := 1; c <= nAssemblies; c++ {
log.Println("begin assembly cycle", c)
for i, part := range partList {
go worker(part, completed[i])
}
a := ""
for _, c := range completed {
a += <-c
}
log.Println(a, "assembled. cycle", c, "complete")
}
}</syntaxhighlight>
{{out}}
<pre>
$ go run -race r2.go
2018/06/04 15:56:33 begin assembly cycle 1
2018/06/04 15:56:33 A worker begins part
2018/06/04 15:56:33 B worker begins part
2018/06/04 15:56:33 A worker completed a
2018/06/04 15:56:33 D worker begins part
2018/06/04 15:56:33 C worker begins part
2018/06/04 15:56:33 B worker completed b
2018/06/04 15:56:33 C worker completed c
2018/06/04 15:56:33 D worker completed d
2018/06/04 15:56:33 abcd assembled. cycle 1 complete
2018/06/04 15:56:33 begin assembly cycle 2
2018/06/04 15:56:33 A worker begins part
2018/06/04 15:56:33 B worker begins part
2018/06/04 15:56:33 C worker begins part
2018/06/04 15:56:33 D worker begins part
2018/06/04 15:56:33 A worker completed a
2018/06/04 15:56:33 B worker completed b
2018/06/04 15:56:33 D worker completed d
2018/06/04 15:56:33 C worker completed c
2018/06/04 15:56:33 abcd assembled. cycle 2 complete
2018/06/04 15:56:33 begin assembly cycle 3
2018/06/04 15:56:33 A worker begins part
2018/06/04 15:56:33 B worker begins part
2018/06/04 15:56:33 C worker begins part
2018/06/04 15:56:33 D worker begins part
2018/06/04 15:56:33 B worker completed b
2018/06/04 15:56:33 A worker completed a
2018/06/04 15:56:33 D worker completed d
2018/06/04 15:56:33 C worker completed c
2018/06/04 15:56:33 abcd assembled. cycle 3 complete
$
</pre>
 
'''Solution 3, two-phase barrier'''
 
For those that might object to the way the two solutions above start new goroutines in each cycle, here is a technique sometimes called a two-phase barrier, where goroutines loop until being shutdown. In each loop there are two phases, one of making the part, and one of waiting for the completed parts to be assembled. This more literally satisfies the task but in fact is not idiomatic Go. Goroutines are cheap to start up and shut down in Go and the extra complexity of this two-phase barrier technique is
not justified.
 
<syntaxhighlight lang="go">package main
 
import (
"log"
"math/rand"
"strings"
"sync"
"time"
)
 
func worker(part string, completed chan string) {
log.Println(part, "worker running")
for {
select {
case <-start:
log.Println(part, "worker begins part")
time.Sleep(time.Duration(rand.Int63n(1e6)))
p := strings.ToLower(part)
log.Println(part, "worker completed", p)
completed <- p
<-reset
wg.Done()
case <-done:
log.Println(part, "worker stopped")
wg.Done()
return
}
}
}
 
var (
partList = []string{"A", "B", "C", "D"}
nAssemblies = 3
start = make(chan int)
done = make(chan int)
reset chan int
wg sync.WaitGroup
)
 
func main() {
rand.Seed(time.Now().UnixNano())
completed := make([]chan string, len(partList))
for i, part := range partList {
completed[i] = make(chan string)
go worker(part, completed[i])
}
for c := 1; c <= nAssemblies; c++ {
log.Println("begin assembly cycle", c)
reset = make(chan int)
close(start)
a := ""
for _, c := range completed {
a += <-c
}
log.Println(a, "assembled. cycle", c, "complete")
wg.Add(len(partList))
start = make(chan int)
close(reset)
wg.Wait()
}
wg.Add(len(partList))
close(done)
wg.Wait()
}</syntaxhighlight>
{{out}}
<pre>
$ go run -race r3.go
2018/06/04 16:11:54 A worker running
2018/06/04 16:11:54 B worker running
2018/06/04 16:11:54 C worker running
2018/06/04 16:11:54 begin assembly cycle 1
2018/06/04 16:11:54 A worker begins part
2018/06/04 16:11:54 D worker running
2018/06/04 16:11:54 C worker begins part
2018/06/04 16:11:54 B worker begins part
2018/06/04 16:11:54 D worker begins part
2018/06/04 16:11:54 A worker completed a
2018/06/04 16:11:54 C worker completed c
2018/06/04 16:11:54 D worker completed d
2018/06/04 16:11:54 B worker completed b
2018/06/04 16:11:54 abcd assembled. cycle 1 complete
2018/06/04 16:11:54 begin assembly cycle 2
2018/06/04 16:11:54 C worker begins part
2018/06/04 16:11:54 D worker begins part
2018/06/04 16:11:54 B worker begins part
2018/06/04 16:11:54 A worker begins part
2018/06/04 16:11:54 D worker completed d
2018/06/04 16:11:54 A worker completed a
2018/06/04 16:11:54 B worker completed b
2018/06/04 16:11:54 C worker completed c
2018/06/04 16:11:54 abcd assembled. cycle 2 complete
2018/06/04 16:11:54 begin assembly cycle 3
2018/06/04 16:11:54 A worker begins part
2018/06/04 16:11:54 D worker begins part
2018/06/04 16:11:54 C worker begins part
2018/06/04 16:11:54 B worker begins part
2018/06/04 16:11:54 D worker completed d
2018/06/04 16:11:54 A worker completed a
2018/06/04 16:11:54 B worker completed b
2018/06/04 16:11:54 C worker completed c
2018/06/04 16:11:54 abcd assembled. cycle 3 complete
2018/06/04 16:11:54 D worker stopped
2018/06/04 16:11:54 B worker stopped
2018/06/04 16:11:54 C worker stopped
2018/06/04 16:11:54 A worker stopped
</pre>
 
'''Solution 4, workers joining and leaving'''
 
This solution shows workers joining and leaving, although it is a rather different interpretation of the task.
<syntaxhighlight lang="go">package main
 
import (
}
l.Println("worker", id, "leaves shop")
}</syntaxhighlight>
Output:
<pre>worker 1 contracted to assemble 2 details
worker 6 leaves shop
mechanism 5 completed</pre>
 
=={{header|Haskell}}==
<p>Although it is not certain that this is the right approach, this example shows several workers performing a series of tasks simultaneously and synchronizing with one another before starting the next task.</p>
<li>For effectful computations, you should use concurrent threads (forkIO and MVar from the module Control.Concurrent), software transactional memory (STM) or alternatives provided by other modules.</li>
</ul>
<syntaxhighlight lang="haskell">import Control.Parallel
 
data Task a = Idle | Make a
 
main = workshop sum tasks
</syntaxhighlight>
<p>The following version works with the concurrency model provided by the module Control.Concurrent</p>
<p>A workshop is an MVar that holds three values: the number of workers doing something, the number of workers ready for the next task and the total number of workers at the moment.</p>
<p>Other than the parallel version above, this code runs in the IO Monad and makes it possible to perform IO actions such as accessing the hardware. However, all actions must have the return type IO (). If the workers must return some useful values, the MVar should be extended with the necessary fields and the workers should use those fields to store the results they produce.</p>
<p>Note: This code has been tested on GHC 7.6.1 and will most probably not run under other Haskell implementations due to the use of some functions from the module Control.Concurrent. It won't work if compiled with the -O2 compiler switch. Compile with the -threaded compiler switch if you want to run the threads in parallel.</p>
<syntaxhighlight lang="haskell">import Control.Concurrent
import Control.Monad -- needed for "forM", "forM_"
 
-- kill all worker threads before exit, if they're still running
forM_ (pids1 ++ pids2) killThread</syntaxhighlight>
=={{header|Icon and Unicon}}==
The following only works in Unicon:
 
<syntaxhighlight lang="unicon">global nWorkers, workers, cv
 
procedure main(A)
wait(cv)
}
end</syntaxhighlight>
 
Sample run:
<pre>
->
</pre>
 
=={{header|J}}==
 
Now that J has a threading implementation, threads may be assigned tasks, and referencing the values produced by those tasks automatically synchronizes.
 
For example:
 
<syntaxhighlight lang="j"> {{for. y do. 0 T.'' end.}} 0>.4-1 T.'' NB. make sure we have some threads
ts=: 6!:0 NB. timestamp
dl=: 6!:3 NB. delay
{{r=.EMPTY for. i.y do. dl 1[ r=.r,3}.ts'' end. r}} t. ''"0(3 5)
┌────────────┬────────────┐
│12 53 53.569│12 53 53.569│
│12 53 54.578│12 53 54.578│
│12 53 55.587│12 53 55.587│
│ │12 53 56.603│
│ │12 53 57.614│
└────────────┴────────────┘</syntaxhighlight>
 
Here we set up a loop that records the time, waits a second on each pass, and repeats a number of times specified when the task starts. We ran two tasks to demonstrate that they run side by side.
=={{header|Java}}==
<syntaxhighlight lang="java">import java.util.Scanner;
import java.util.Random;
 
public static int nWorkers = 0;
}
}</syntaxhighlight>
Output:
<pre style="height: 200px;overflow:scroll">
</pre>
{{works with|Java|1.5+}}
<syntaxhighlight lang="java5">import java.util.Random;
import java.util.concurrent.CountDownLatch;
 
}
}
}</syntaxhighlight>
Output:
<pre style="height: 200px;overflow:scroll">Starting task 1
Worker 2 is ready
Task 3 complete</pre>
=={{header|Julia}}==
Julia has specific macros for checkpoint-style synchronization. @async starts an asynchronous task, and multiple @async tasks can be synchronized by wrapping them in a @sync block, which acts as a checkpoint for all of the enclosed @async tasks.
<syntaxhighlight lang="julia">
function runsim(numworkers, runs)
for count in 1:runs
@sync begin
for worker in 1:numworkers
@async begin
tasktime = rand()
sleep(tasktime)
println("Worker $worker finished after $tasktime seconds")
end
end
end
println("Checkpoint reached for run $count.")
end
println("Finished all runs.\n")
end
 
const trials = [[3, 2], [4, 1], [2, 5], [7, 6]]
for trial in trials
runsim(trial[1], trial[2])
end</syntaxhighlight>
{{output}}<pre>
Worker 1 finished after 0.2496063425219046 seconds
Worker 3 finished after 0.6437560525692665 seconds
Worker 2 finished after 0.7622150880806831 seconds
Checkpoint reached for run 1.
Worker 2 finished after 0.0745173155757679 seconds
Worker 3 finished after 0.39089824936640993 seconds
Worker 1 finished after 0.5397505221156416 seconds
Checkpoint reached for run 2.
Finished all runs.
 
Worker 4 finished after 0.26840044205839897 seconds
Worker 3 finished after 0.5589553147289623 seconds
Worker 2 finished after 0.8546852411700241 seconds
Worker 1 finished after 0.9300832572304523 seconds
Checkpoint reached for run 1.
Finished all runs.
 
Worker 1 finished after 0.5289138841087624 seconds
Worker 2 finished after 0.7356027970934949 seconds
Checkpoint reached for run 1.
Worker 1 finished after 0.20674100912304416 seconds
Worker 2 finished after 0.6998567540438869 seconds
Checkpoint reached for run 2.
Worker 1 finished after 0.11392579333661912 seconds
Worker 2 finished after 0.4949249386371388 seconds
Checkpoint reached for run 3.
Worker 1 finished after 0.6032150410794788 seconds
Worker 2 finished after 0.8986919181800306 seconds
Checkpoint reached for run 4.
Worker 1 finished after 0.4237385941703915 seconds
Worker 2 finished after 0.5574922259408035 seconds
Checkpoint reached for run 5.
Finished all runs.
 
Worker 7 finished after 0.0396918164082527 seconds
Worker 3 finished after 0.31472648034105966 seconds
Worker 6 finished after 0.32606467253051474 seconds
Worker 5 finished after 0.3690388125862416 seconds
Worker 1 finished after 0.4290499974502766 seconds
Worker 2 finished after 0.48606373107736744 seconds
Worker 4 finished after 0.8723256915201081 seconds
Checkpoint reached for run 1.
Worker 2 finished after 0.10418765463492563 seconds
Worker 3 finished after 0.14023815791725713 seconds
Worker 7 finished after 0.7850239937628409 seconds
Worker 4 finished after 0.8145187186029617 seconds
Worker 6 finished after 0.8446820477646959 seconds
Worker 1 finished after 0.9195642711183825 seconds
Worker 5 finished after 0.9517129615316944 seconds
Checkpoint reached for run 2.
Worker 7 finished after 0.28490757307993486 seconds
Worker 3 finished after 0.4199539978001552 seconds
Worker 2 finished after 0.5509998796559186 seconds
Worker 5 finished after 0.7840588445793306 seconds
Worker 1 finished after 0.8049513381813924 seconds
Worker 6 finished after 0.8848651563027041 seconds
Worker 4 finished after 0.9074862779348334 seconds
Checkpoint reached for run 3.
Worker 5 finished after 0.21855944993484533 seconds
Worker 2 finished after 0.27709606350565275 seconds
Worker 7 finished after 0.28450943951411123 seconds
Worker 4 finished after 0.40871929967426857 seconds
Worker 1 finished after 0.5506243033572837 seconds
Worker 3 finished after 0.9287035426710006 seconds
Worker 6 finished after 0.9624436931735709 seconds
Checkpoint reached for run 4.
Worker 5 finished after 0.04032963358782826 seconds
Worker 6 finished after 0.17464708712852195 seconds
Worker 4 finished after 0.19558842246553398 seconds
Worker 3 finished after 0.2113199231977796 seconds
Worker 7 finished after 0.423009958033447 seconds
Worker 1 finished after 0.7584848109224733 seconds
Worker 2 finished after 0.8116269421151843 seconds
Checkpoint reached for run 5.
Worker 6 finished after 0.12563630313371443 seconds
Worker 4 finished after 0.33588040252159823 seconds
Worker 1 finished after 0.44873857982831256 seconds
Worker 5 finished after 0.536029356963061 seconds
Worker 3 finished after 0.5687590862891123 seconds
Worker 2 finished after 0.6655311849010326 seconds
Worker 7 finished after 0.8454083062748163 seconds
Checkpoint reached for run 6.
Finished all runs.
</pre>
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang="scala">// Version 1.2.41
 
import java.util.Random
 
val rgen = Random()
var nWorkers = 0
var nTasks = 0
 
class Worker(private val threadID: Int) : Runnable {
 
@Synchronized
override fun run() {
try {
val workTime = rgen.nextInt(900) + 100L // 100..999 msec.
println("Worker $threadID will work for $workTime msec.")
Thread.sleep(workTime)
nFinished++
println("Worker $threadID is ready")
}
catch (e: InterruptedException) {
println("Error: thread execution interrupted")
e.printStackTrace()
}
}
 
companion object {
private var nFinished = 0
 
@Synchronized
fun checkPoint() {
while (nFinished != nWorkers) {
try {
Thread.sleep(10)
}
catch (e: InterruptedException) {
println("Error: thread execution interrupted")
e.printStackTrace()
}
}
nFinished = 0 // reset
}
}
}
 
fun runTasks() {
for (i in 1..nTasks) {
println("\nStarting task number $i.")
// Create a thread for each worker and run it.
for (j in 1..nWorkers) Thread(Worker(j)).start()
Worker.checkPoint() // wait for all workers to finish the task
}
}
 
fun main(args: Array<String>) {
print("Enter number of workers to use: ")
nWorkers = readLine()!!.toInt()
print("Enter number of tasks to complete: ")
nTasks = readLine()!!.toInt()
runTasks()
}</syntaxhighlight>
 
{{output}}
Sample session:
<pre style="height: 200px;overflow:scroll">
Enter number of workers to use: 5
Enter number of tasks to complete: 3
 
Starting task number 1.
Worker 1 will work for 894 msec.
Worker 3 will work for 777 msec.
Worker 2 will work for 243 msec.
Worker 4 will work for 938 msec.
Worker 5 will work for 551 msec.
Worker 2 is ready
Worker 5 is ready
Worker 3 is ready
Worker 1 is ready
Worker 4 is ready
 
Starting task number 2.
Worker 2 will work for 952 msec.
Worker 3 will work for 253 msec.
Worker 1 will work for 165 msec.
Worker 4 will work for 995 msec.
Worker 5 will work for 499 msec.
Worker 1 is ready
Worker 3 is ready
Worker 5 is ready
Worker 2 is ready
Worker 4 is ready
 
Starting task number 3.
Worker 1 will work for 622 msec.
Worker 2 will work for 642 msec.
Worker 4 will work for 344 msec.
Worker 3 will work for 191 msec.
Worker 5 will work for 703 msec.
Worker 3 is ready
Worker 4 is ready
Worker 1 is ready
Worker 2 is ready
Worker 5 is ready
</pre>
=={{header|Logtalk}}==
The following example can be found in the Logtalk distribution and is used here with permission. It's based on the Erlang solution for this task. Works when using SWI-Prolog, XSB, or YAP as the backend compiler.
<syntaxhighlight lang="logtalk">
:- object(checkpoint).
 
 
:- end_object.
</syntaxhighlight>
Output:
<syntaxhighlight lang="text">
| ?- checkpoint::run.
Worker 1 item 3
All assemblies done.
yes
</syntaxhighlight>
=={{header|Nim}}==
As in Oforth, the checkpoint is a thread (the main thread) and synchronization is done using channels:
:– a channel per worker to send orders; an order may be a task number (greater than or equal to one) or the stop order (equal to 0);
:– a channel to receive the responses from workers; workers send their identifier (number) via this channel when they have completed a task.
Working on a task is simulated by sleeping for a randomly chosen amount of time.
 
<syntaxhighlight lang="nim">import locks
import os
import random
import strformat
 
const
NWorkers = 3 # Number of workers.
NTasks = 4 # Number of tasks.
StopOrder = 0 # Order 0 is the request to stop.
 
var
randLock: Lock # Lock to access random number generator.
orders: array[1..NWorkers, Channel[int]] # Channel to send orders to workers.
responses: Channel[int] # Channel to receive responses from workers.
working: int # Current number of workers actually working.
threads: array[1..NWorkers, Thread[int]] # Array of running threads.
 
#---------------------------------------------------------------------------------------------------
 
proc worker(num: int) {.thread.} =
## Worker thread.
 
while true:
# Wait for order from main thread (this is the checkpoint).
let order = orders[num].recv
if order == StopOrder: break
# Get a random time to complete the task.
var time: int
withLock(randLock): time = rand(200..1000)
echo fmt"Worker {num}: starting task number {order}"
# Work on task during "time" ms.
sleep(time)
echo fmt"Worker {num}: task number {order} terminated after {time} ms"
# Send message to indicate that the task is terminated.
responses.send(num)
 
#---------------------------------------------------------------------------------------------------
 
# Initializations.
randomize()
randLock.initLock()
for num in 1..NWorkers:
orders[num].open()
responses.open()
 
# Create the worker threads.
for num in 1..NWorkers:
createThread(threads[num], worker, num)
 
# Send orders and wait for responses.
for task in 1..NTasks:
echo fmt"Sending order to start task number {task}"
# Send order (task number) to workers.
for num in 1..NWorkers:
orders[num].send(task)
working = NWorkers # All workers are now working.
# Wait to receive responses from workers.
while working > 0:
discard responses.recv() # Here, we don't care about the message content.
dec working
 
# We have terminated: send stop order to workers.
echo "Sending stop order to workers."
for num in 1..NWorkers:
orders[num].send(StopOrder)
joinThreads(threads)
echo "All workers stopped."
 
# Clean up.
for num in 1..NWorkers:
orders[num].close()
responses.close()
deinitLock(randLock)</syntaxhighlight>
 
{{out}}
<pre>Sending order to start task number 1
Worker 1: starting task number 1
Worker 2: starting task number 1
Worker 3: starting task number 1
Worker 2: task number 1 terminated after 656 ms
Worker 1: task number 1 terminated after 665 ms
Worker 3: task number 1 terminated after 984 ms
Sending order to start task number 2
Worker 2: starting task number 2
Worker 1: starting task number 2
Worker 3: starting task number 2
Worker 1: task number 2 terminated after 480 ms
Worker 3: task number 2 terminated after 583 ms
Worker 2: task number 2 terminated after 778 ms
Sending order to start task number 3
Worker 1: starting task number 3
Worker 2: starting task number 3
Worker 3: starting task number 3
Worker 3: task number 3 terminated after 472 ms
Worker 1: task number 3 terminated after 545 ms
Worker 2: task number 3 terminated after 894 ms
Sending order to start task number 4
Worker 3: starting task number 4
Worker 2: starting task number 4
Worker 1: starting task number 4
Worker 3: task number 4 terminated after 412 ms
Worker 1: task number 4 terminated after 436 ms
Worker 2: task number 4 terminated after 682 ms
Sending stop order to workers.
All workers stopped.</pre>
=={{header|Oforth}}==
Checkpoint is implemented as a task. It:
 
- And waits for $allDone checkpoint return on its personal channel.
 
<syntaxhighlight lang="oforth">func: task(n, jobs, myChannel)
while(true) [
System.Out "TASK " << n << " : Beginning my work..." << cr
jobs send($jobDone) drop
myChannel receive drop
] ;
 
func: checkPoint(n, jobs, channels)
while(true) [
#[ jobs receive drop ] times(n)
"CHECKPOINT : All jobs done, sending done to all tasks" println
channels apply(#[ send($allDone) drop ])
] ;
 
func: testCheckPoint(n)
| jobs channels i |
ListBuffer init(n, #[ drop Channel new ]) dup freeze ->channels
Channel new ->jobs
 
#[ checkPoint(n, jobs, channels) ] &
n loop: i [ #[ task(i, jobs, channels at(i)) ] & ] ;</syntaxhighlight>
 
=={{header|Perl}}==
 
The perlipc man page details several approaches to interprocess communication. Here's one of my favourites: socketpair and fork. I've omitted some error-checking for brevity.
 
<syntaxhighlight lang="perl">#!/usr/bin/perl
use warnings;
use strict;
# workers had terminate, it would need to reap them to avoid zombies:
 
wait; wait;</syntaxhighlight>
 
A sample run:
msl@64Lucid:~/perl$
</pre>
=={{header|Phix}}==
 
Simple multitasking solution: no locking required, no race condition possible, supports workers leaving and joining.
<!--<syntaxhighlight lang="phix">(notonline)-->
<span style="color: #000080;font-style:italic;">-- demo\rosetta\checkpoint_synchronisation.exw</span>
<span style="color: #008080;">without</span> <span style="color: #008080;">js</span> <span style="color: #000080;font-style:italic;">-- task_xxx(), get_key()</span>
<span style="color: #008080;">constant</span> <span style="color: #000000;">NPARTS</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">3</span>
<span style="color: #004080;">integer</span> <span style="color: #000000;">workers</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">0</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">waiters</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{}</span>
<span style="color: #004080;">bool</span> <span style="color: #000000;">terminate</span> <span style="color: #0000FF;">=</span> <span style="color: #004600;">false</span>
<span style="color: #008080;">procedure</span> <span style="color: #000000;">checkpoint</span><span style="color: #0000FF;">(</span><span style="color: #004080;">integer</span> <span style="color: #000000;">task_id</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">if</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">waiters</span><span style="color: #0000FF;">)+</span><span style="color: #000000;">1</span><span style="color: #0000FF;">=</span><span style="color: #000000;">NPARTS</span> <span style="color: #008080;">or</span> <span style="color: #000000;">terminate</span> <span style="color: #008080;">then</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"checkpoint\n"</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">for</span> <span style="color: #000000;">i</span><span style="color: #0000FF;">=</span><span style="color: #000000;">1</span> <span style="color: #008080;">to</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">waiters</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">do</span>
<span style="color: #000000;">task_schedule</span><span style="color: #0000FF;">(</span><span style="color: #000000;">waiters</span><span style="color: #0000FF;">[</span><span style="color: #000000;">i</span><span style="color: #0000FF;">],</span><span style="color: #000000;">1</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">for</span>
<span style="color: #000000;">waiters</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{}</span>
<span style="color: #008080;">else</span>
<span style="color: #000000;">waiters</span> <span style="color: #0000FF;">&=</span> <span style="color: #000000;">task_id</span>
<span style="color: #000000;">task_suspend</span><span style="color: #0000FF;">(</span><span style="color: #000000;">task_id</span><span style="color: #0000FF;">)</span>
<span style="color: #000000;">task_yield</span><span style="color: #0000FF;">()</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">procedure</span>
<span style="color: #008080;">procedure</span> <span style="color: #000000;">worker</span><span style="color: #0000FF;">(</span><span style="color: #004080;">string</span> <span style="color: #000000;">name</span><span style="color: #0000FF;">)</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"worker %s running\n"</span><span style="color: #0000FF;">,{</span><span style="color: #000000;">name</span><span style="color: #0000FF;">})</span>
<span style="color: #008080;">while</span> <span style="color: #008080;">not</span> <span style="color: #000000;">terminate</span> <span style="color: #008080;">do</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"worker %s begins part\n"</span><span style="color: #0000FF;">,{</span><span style="color: #000000;">name</span><span style="color: #0000FF;">})</span>
<span style="color: #000000;">task_delay</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">rnd</span><span style="color: #0000FF;">())</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"worker %s completes part\n"</span><span style="color: #0000FF;">,{</span><span style="color: #000000;">name</span><span style="color: #0000FF;">})</span>
<span style="color: #000000;">checkpoint</span><span style="color: #0000FF;">(</span><span style="color: #000000;">task_self</span><span style="color: #0000FF;">())</span>
<span style="color: #008080;">if</span> <span style="color: #7060A8;">find</span><span style="color: #0000FF;">(</span><span style="color: #000000;">task_self</span><span style="color: #0000FF;">(),</span><span style="color: #000000;">waiters</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">then</span> <span style="color: #0000FF;">?</span><span style="color: #000000;">9</span><span style="color: #0000FF;">/</span><span style="color: #000000;">0</span> <span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">terminate</span> <span style="color: #008080;">or</span> <span style="color: #7060A8;">rnd</span><span style="color: #0000FF;">()></span><span style="color: #000000;">0.95</span> <span style="color: #008080;">then</span> <span style="color: #008080;">exit</span> <span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #000000;">task_delay</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">rnd</span><span style="color: #0000FF;">())</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">while</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"worker %s leaves\n"</span><span style="color: #0000FF;">,{</span><span style="color: #000000;">name</span><span style="color: #0000FF;">})</span>
<span style="color: #000000;">workers</span> <span style="color: #0000FF;">-=</span> <span style="color: #000000;">1</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">procedure</span>
<span style="color: #004080;">string</span> <span style="color: #000000;">name</span> <span style="color: #0000FF;">=</span> <span style="color: #008000;">"A"</span>
<span style="color: #008080;">while</span> <span style="color: #7060A8;">get_key</span><span style="color: #0000FF;">()!=</span><span style="color: #000000;">#1B</span> <span style="color: #008080;">do</span> <span style="color: #000080;font-style:italic;">-- (key escape to shut down)</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">workers</span><span style="color: #0000FF;"><</span><span style="color: #000000;">NPARTS</span> <span style="color: #008080;">then</span>
<span style="color: #004080;">integer</span> <span style="color: #000000;">task_id</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">task_create</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">routine_id</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"worker"</span><span style="color: #0000FF;">),{</span><span style="color: #000000;">name</span><span style="color: #0000FF;">})</span>
<span style="color: #000000;">task_schedule</span><span style="color: #0000FF;">(</span><span style="color: #000000;">task_id</span><span style="color: #0000FF;">,</span><span style="color: #000000;">1</span><span style="color: #0000FF;">)</span>
<span style="color: #000000;">name</span><span style="color: #0000FF;">[</span><span style="color: #000000;">1</span><span style="color: #0000FF;">]</span> <span style="color: #0000FF;">+=</span> <span style="color: #000000;">1</span>
<span style="color: #000000;">workers</span> <span style="color: #0000FF;">+=</span> <span style="color: #000000;">1</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #000000;">task_yield</span><span style="color: #0000FF;">()</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">while</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"escape keyed\n"</span><span style="color: #0000FF;">)</span>
<span style="color: #000000;">terminate</span> <span style="color: #0000FF;">=</span> <span style="color: #004600;">true</span>
<span style="color: #008080;">while</span> <span style="color: #000000;">workers</span><span style="color: #0000FF;">></span><span style="color: #000000;">0</span> <span style="color: #008080;">do</span>
<span style="color: #000000;">task_yield</span><span style="color: #0000FF;">()</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">while</span>
<span style="color: #0000FF;">{}</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">wait_key</span><span style="color: #0000FF;">()</span>
<!--</syntaxhighlight>-->
{{out}}
<pre style="height: 200px;overflow:scroll">
worker A running
worker A begins part
worker B running
worker B begins part
worker C running
worker C begins part
worker B completes part
worker C completes part
worker A completes part
checkpoint
worker B begins part
worker C begins part
worker A begins part
worker B completes part
worker A completes part
worker C completes part
checkpoint
worker B begins part
worker C begins part
worker A begins part
worker B completes part
worker C completes part
worker A completes part
checkpoint
worker B begins part
worker B completes part
worker C begins part
worker C completes part
worker A begins part
worker A completes part
checkpoint
worker A leaves
worker C begins part
worker B begins part
worker D running
worker D begins part
worker D completes part
worker C completes part
worker B completes part
checkpoint
worker B begins part
worker D begins part
worker B completes part
worker D completes part
worker C begins part
worker C completes part
checkpoint
worker B begins part
worker D begins part
worker C begins part
worker B completes part
worker C completes part
worker D completes part
checkpoint
worker B begins part
worker D begins part
worker C begins part
worker D completes part
worker B completes part
worker C completes part
checkpoint
worker C begins part
worker D begins part
worker B begins part
worker C completes part
worker B completes part
worker D completes part
checkpoint
escape keyed
worker C leaves
worker D leaves
worker B leaves
</pre>
=={{header|PicoLisp}}==
The following solution implements each worker as a coroutine. Therefore, it
'worker' takes a number of steps to perform. It "works" by printing each step,
and returning NIL when done.
<syntaxhighlight lang="picolisp">(de checkpoints (Projects Workers)
(for P Projects
(prinl "Starting project number " P ":")
(yield ID)
(prinl "Worker " ID " step " N) )
NIL ) )</syntaxhighlight>
Output:
<pre>: (checkpoints 2 3) # Start two projects with 3 workers
Worker 1 step 4
Project number 2 is done.</pre>
 
=={{header|PureBasic}}==
 
PureBasic normally uses Semaphores and Mutexes to synchronize parallel systems. This solution relies only on semaphores between each thread and the controller (the CheckPoint procedure). For exchanging data, a Mutex-based message stack could easily be added, either synchronized according to this specific task or non-blocking if each worker is allowed that freedom.
<syntaxhighlight lang="purebasic">#MaxWorktime=8000 ; "Workday" in msec
 
; Structure that each thread uses
CheckPoint()
Print("Press ENTER to exit"): Input()
EndIf</syntaxhighlight>
<pre style="height: 200px;overflow:scroll">Enter number of workers to use [2-2000]: 5
Work started, 5 workers has been called.
Thread #1 is done.
Press ENTER to exit</pre>
=={{header|Python}}==
<syntaxhighlight lang="python">
"""
 
Based on https://pymotw.com/3/threading/
 
"""
 
import threading
import time
import random
 
 
def worker(workernum, barrier):
# task 1
sleeptime = random.random()
print('Starting worker '+str(workernum)+" task 1, sleeptime="+str(sleeptime))
time.sleep(sleeptime)
print('Exiting worker'+str(workernum))
barrier.wait()
# task 2
sleeptime = random.random()
print('Starting worker '+str(workernum)+" task 2, sleeptime="+str(sleeptime))
time.sleep(sleeptime)
print('Exiting worker'+str(workernum))
 
barrier = threading.Barrier(3)
 
w1 = threading.Thread(target=worker, args=(1, barrier))
w2 = threading.Thread(target=worker, args=(2, barrier))
w3 = threading.Thread(target=worker, args=(3, barrier))
 
w1.start()
w2.start()
w3.start()
</syntaxhighlight>
Output:
<pre>
Starting worker 1 task 1, sleeptime=0.26685336937182835
Starting worker 2 task 1, sleeptime=0.947511912308323
Starting worker 3 task 1, sleeptime=0.6495569605252262
Exiting worker1
Exiting worker3
Exiting worker2
Starting worker 2 task 2, sleeptime=0.5585479798026259
Starting worker 3 task 2, sleeptime=0.4104925281220747
Starting worker 1 task 2, sleeptime=0.15963562165203105
Exiting worker1
Exiting worker3
Exiting worker2
</pre>
=={{header|Racket}}==
This solution uses a double barrier to synchronize the five threads.
The method can be found on page 41 of the delightful book
[http://greenteapress.com/semaphores/downey08semaphores.pdf "The Little Book of Semaphores"] by Allen B. Downey.
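For readers without the book at hand, the double (two-phase) barrier can be built from a counter protected by a mutex plus two turnstile semaphores. Below is a minimal Java sketch of that construction, given here only to illustrate the technique referenced above; it is not a translation of the Racket code that follows.

<syntaxhighlight lang="java">import java.util.concurrent.Semaphore;

class DoubleBarrier {
    private final int n;
    private int count = 0;
    private final Semaphore mutex = new Semaphore(1);
    private final Semaphore turnstile1 = new Semaphore(0);
    private final Semaphore turnstile2 = new Semaphore(0);

    DoubleBarrier(int n) { this.n = n; }

    // Every thread calls this once per cycle; nobody returns until all n threads have
    // called it, and the barrier is then safe to reuse for the next cycle.
    void await() throws InterruptedException {
        mutex.acquire();
        if (++count == n) turnstile1.release(n); // last to arrive opens the first turnstile
        mutex.release();
        turnstile1.acquire();                    // phase 1: wait for everyone to arrive

        mutex.acquire();
        if (--count == 0) turnstile2.release(n); // last to leave opens the second turnstile
        mutex.release();
        turnstile2.acquire();                    // phase 2: wait for everyone to pass phase 1
    }
}

public class DoubleBarrierDemo {
    public static void main(String[] args) {
        final int n = 5;
        DoubleBarrier barrier = new DoubleBarrier(n);
        for (int i = 0; i < n; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    for (int cycle = 0; cycle < 3; cycle++) {
                        System.out.println("worker " + id + " reached checkpoint of cycle " + cycle);
                        barrier.await();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}</syntaxhighlight>

The second turnstile prevents a fast thread from racing around the loop and re-entering phase 1 before slower threads have left it, which is what makes the barrier reusable.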
<syntaxhighlight lang="racket">
#lang racket
(define t 5) ; total number of threads
(displayln (for/list ([_ t]) (channel-get ch)))
(loop))
</syntaxhighlight>
Output:
<syntaxhighlight lang="racket">
(1 4 2 0 3)
(6 9 7 8 5)
(97 98 99 95 96)
...
</syntaxhighlight>
=={{header|Raku}}==
(formerly Perl 6)
<syntaxhighlight lang="raku" line>my $TotalWorkers = 3;
my $BatchToRun = 3;
my @TimeTaken = (5..15); # in seconds
 
my $batch_progress = 0;
my @batch_lock = map { Semaphore.new(1) } , ^$TotalWorkers;
my $lock = Lock.new;
 
sub assembly_line ($ID) {
my $wait;
for ^$BatchToRun -> $j {
$wait = @TimeTaken.roll;
say "Worker ",$ID," at batch $j will work for ",$wait," seconds ..";
sleep($wait);
$lock.protect: {
my $k = ++$batch_progress;
print "Worker ",$ID," is done and update batch $j complete counter ";
say "to $k of $TotalWorkers";
if ($batch_progress == $TotalWorkers) {
say ">>>>> batch $j completed.";
$batch_progress = 0; # reset for next batch
for @batch_lock { .release }; # and ready for next batch
};
};
 
@batch_lock[$ID].acquire; # for next batch
}
}
 
for ^$TotalWorkers -> $i {
Thread.start(
sub {
@batch_lock[$i].acquire;
assembly_line($i);
}
);
}</syntaxhighlight>
{{out}}
<pre>Worker 1 at batch 0 will work for 6 seconds ..
Worker 2 at batch 0 will work for 32 seconds ..
Worker 0 at batch 0 will work for 13 seconds ..
Worker 1 is done and update batch 0 complete counter to 1 of 3
Worker 0 is done and update batch 0 complete counter to 2 of 3
Worker 2 is done and update batch 0 complete counter to 3 of 3
>>>>> batch 0 completed.
Worker 2 at batch 1 will work for 27 seconds ..
Worker 0 at batch 1 will work for 18 seconds ..
Worker 1 at batch 1 will work for 13 seconds ..
Worker 1 is done and update batch 1 complete counter to 1 of 3
Worker 0 is done and update batch 1 complete counter to 2 of 3
Worker 2 is done and update batch 1 complete counter to 3 of 3
>>>>> batch 1 completed.
Worker 2 at batch 2 will work for 5 seconds ..
Worker 0 at batch 2 will work for 28 seconds ..
Worker 1 at batch 2 will work for 33 seconds ..
Worker 2 is done and update batch 2 complete counter to 1 of 3
Worker 0 is done and update batch 2 complete counter to 2 of 3
Worker 1 is done and update batch 2 complete counter to 3 of 3
>>>>> batch 2 completed.
</pre>
=={{header|Ruby}}==
{{needs-review|Ruby|This code might or might not do the correct task. See comment at [[Talk:{{PAGENAME}}]].}}
 
<syntaxhighlight lang="ruby">require 'socket'
 
# A Workshop runs all of its workers, then collects their results. Use
# Remove all workers.
wids.each { |wid| shop.remove wid }
pp shop.work(6)</syntaxhighlight>
 
Example of output: <pre>{23187=>[0, 1346269],
4494=>[5, 4, 1166220]}
{}</pre>
=={{header|Rust}}==
<syntaxhighlight lang="rust">
//! We implement this task using Rust's Barriers. Barriers are simply thread synchronization
//! points--if a task waits at a barrier, it will not continue until the number of tasks for which
//! the variable was initialized are also waiting at the barrier, at which point all of them will
//! stop waiting. This can be used to allow threads to do asynchronous work and guarantee
//! properties at checkpoints.
 
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::channel;
use std::sync::{Arc, Barrier};
use std::thread::spawn;
 
use array_init::array_init;
 
pub fn checkpoint() {
const NUM_TASKS: usize = 10;
const NUM_ITERATIONS: u8 = 10;
 
let barrier = Barrier::new(NUM_TASKS);
let events: [AtomicBool; NUM_TASKS] = array_init(|_| AtomicBool::new(false));
 
// Arc for sharing between tasks
let arc = Arc::new((barrier, events));
// Channel for communicating when tasks are done
let (tx, rx) = channel();
for i in 0..NUM_TASKS {
let arc = Arc::clone(&arc);
let tx = tx.clone();
// Spawn a new worker
spawn(move || {
let (ref barrier, ref events) = *arc;
// Assign an event to this task
let event = &events[i];
// Start processing events
for _ in 0..NUM_ITERATIONS {
// Between checkpoints 4 and 1, turn this task's event on.
event.store(true, Ordering::Release);
// Checkpoint 1
barrier.wait();
// Between checkpoints 1 and 2, all events are on.
assert!(events.iter().all(|e| e.load(Ordering::Acquire)));
// Checkpoint 2
barrier.wait();
// Between checkpoints 2 and 3, turn this task's event off.
event.store(false, Ordering::Release);
// Checkpoint 3
barrier.wait();
// Between checkpoints 3 and 4, all events are off.
assert!(events.iter().all(|e| !e.load(Ordering::Acquire)));
// Checkpoint 4
barrier.wait();
}
// Finish processing events.
tx.send(()).unwrap();
});
}
drop(tx);
// The main thread will not exit until all tasks have exited.
for _ in 0..NUM_TASKS {
rx.recv().unwrap();
}
}
 
fn main() {
checkpoint();
}
</syntaxhighlight>
=={{header|Scala}}==
<syntaxhighlight lang="scala">import java.util.{Random, Scanner}
 
object CheckpointSync extends App {
val in = new Scanner(System.in)
 
/*
* Informs that workers started working on the task and
* starts running threads. Prior to proceeding with next
* task syncs using static Worker.checkpoint() method.
*/
private def runTasks(nTasks: Int): Unit = {
 
for (i <- 0 until nTasks) {
println("Starting task number " + (i + 1) + ".")
runThreads()
Worker.checkpoint()
}
}
 
/*
* Creates a thread for each worker and runs it.
*/
private def runThreads(): Unit =
for (i <- 0 until Worker.nWorkers) new Thread(new Worker(i + 1)).start()
 
class Worker(/* inner class instance variables */ var threadID: Int)
extends Runnable {
override def run(): Unit = {
work()
}
 
/*
* Notifies that thread started running for 100 to 1000 msec.
* Once finished increments static counter 'nFinished'
* that counts number of workers finished their work.
*/
private def work(): Unit = {
try {
val workTime = Worker.rgen.nextInt(900) + 100
println("Worker " + threadID + " will work for " + workTime + " msec.")
Thread.sleep(workTime) //work for 'workTime'
 
Worker.nFinished += 1 //increases work finished counter
 
println("Worker " + threadID + " is ready")
} catch {
case e: InterruptedException =>
System.err.println("Error: thread execution interrupted")
e.printStackTrace()
}
}
}
 
/*
* Worker inner static class.
*/
object Worker {
private val rgen = new Random
var nWorkers = 0
private var nFinished = 0
 
/*
* Used to synchronize Worker threads using 'nFinished' static integer.
* Waits (with step of 10 msec) until 'nFinished' equals to 'nWorkers'.
* Once they are equal resets 'nFinished' counter.
*/
def checkpoint(): Unit = {
while (nFinished != nWorkers)
try Thread.sleep(10)
catch {
case e: InterruptedException =>
System.err.println("Error: thread execution interrupted")
e.printStackTrace()
}
nFinished = 0
}
}
 
print("Enter number of workers to use: ")
Worker.nWorkers = in.nextInt
print("Enter number of tasks to complete:")
runTasks(in.nextInt)
 
}</syntaxhighlight>
=={{header|Tcl}}==
This implementation works by having a separate thread handle the synchronization (inter-thread message delivery already being serialized). The alternative, using a read-write mutex, is more complex and more likely to run into trouble with multi-core machines.
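The coordinator design described above can also be sketched outside Tcl: workers report to a coordinator thread over a queue and are released again one message per worker. The Java sketch below is a hypothetical illustration of that idea only, not a translation of the Tcl code that follows.

<syntaxhighlight lang="java">import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

public class CoordinatorSketch {
    public static void main(String[] args) {
        final int nWorkers = 3;
        final int nCycles = 2;
        // Each "I am ready" message carries the latch that will release its sender.
        LinkedBlockingQueue<CountDownLatch> mailbox = new LinkedBlockingQueue<>();

        // Coordinator: collect one message per worker, then release them all at once.
        new Thread(() -> {
            try {
                for (int c = 0; c < nCycles; c++) {
                    List<CountDownLatch> waiting = new ArrayList<>();
                    for (int i = 0; i < nWorkers; i++) waiting.add(mailbox.take());
                    System.out.println("-- checkpoint: all workers ready --");
                    for (CountDownLatch latch : waiting) latch.countDown();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        for (int i = 1; i <= nWorkers; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    for (int c = 1; c <= nCycles; c++) {
                        Thread.sleep((long) (Math.random() * 400)); // simulated work
                        System.out.println("worker " + id + " ready (cycle " + c + ")");
                        CountDownLatch release = new CountDownLatch(1);
                        mailbox.put(release); // tell the coordinator this worker is ready
                        release.await();      // and wait until the coordinator replies
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}</syntaxhighlight>

Because only the coordinator ever counts arrivals, there is no shared mutable state to lock, which mirrors the serialized message delivery the Tcl version relies on.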
<syntaxhighlight lang="tcl">package require Tcl 8.5
package require Thread
 
expr {[llength $members] > 0}
}
}</syntaxhighlight>
Demonstration of how this works.
{{trans|Ada}}
<syntaxhighlight lang="tcl"># Build the workers
foreach worker {A B C D} {
dict set ids $worker [checkpoint makeThread {
break
}
}</syntaxhighlight>
Output:
<pre>
A is ready
B is ready
D is ready</pre>
=={{header|Wren}}==
{{trans|Kotlin}}
{{libheader|Wren-ioutil}}
<syntaxhighlight lang="wren">import "random" for Random
import "scheduler" for Scheduler
import "timer" for Timer
import "./ioutil" for Input
 
var rgen = Random.new()
var nWorkers = 0
var nTasks = 0
var nFinished = 0
 
var worker = Fn.new { |id|
var workTime = rgen.int(100, 1000) // 100..999 msec.
System.print("Worker %(id) will work for %(workTime) msec.")
Timer.sleep(workTime)
nFinished = nFinished + 1
System.print("Worker %(id) is ready.")
}
 
var checkPoint = Fn.new {
while (nFinished != nWorkers) {
Timer.sleep(10)
}
nFinished = 0 // reset
}
 
var runTasks = Fn.new {
for (i in 1..nTasks) {
System.print("\nStarting task number %(i).")
var first = rgen.int(1, nWorkers + 1) // randomize first worker to start
// schedule other workers to start while another fiber is sleeping
for (j in 1..nWorkers) {
if (j != first) Scheduler.add { worker.call(j) }
}
worker.call(first) // start first worker
checkPoint.call() // start checkPoint
}
}
 
nWorkers = Input.integer("Enter number of workers to use: ", 1)
nTasks = Input.integer("Enter number of tasks to complete: ", 1)
runTasks.call()</syntaxhighlight>
 
{{out}}
Sample run:
<pre>
Enter number of workers to use: 5
Enter number of tasks to complete: 3
 
Starting task number 1.
Worker 3 will work for 822 msec.
Worker 1 will work for 127 msec.
Worker 2 will work for 618 msec.
Worker 4 will work for 175 msec.
Worker 5 will work for 402 msec.
Worker 1 is ready.
Worker 4 is ready.
Worker 5 is ready.
Worker 2 is ready.
Worker 3 is ready.
 
Starting task number 2.
Worker 2 will work for 537 msec.
Worker 1 will work for 408 msec.
Worker 3 will work for 878 msec.
Worker 4 will work for 101 msec.
Worker 5 will work for 822 msec.
Worker 4 is ready.
Worker 1 is ready.
Worker 2 is ready.
Worker 5 is ready.
Worker 3 is ready.
 
Starting task number 3.
Worker 4 will work for 568 msec.
Worker 1 will work for 603 msec.
Worker 2 will work for 341 msec.
Worker 3 will work for 250 msec.
Worker 5 will work for 837 msec.
Worker 3 is ready.
Worker 2 is ready.
Worker 4 is ready.
Worker 1 is ready.
Worker 5 is ready.
</pre>
 
=={{header|zkl}}==
The consumer requests a part it doesn't have, waits for a part, puts the received part (which might not be the requested one, if the code is buggy) in a bin, and assembles the parts into a product.
Repeat until all requested products are made.
<syntaxhighlight lang="zkl">const NUM_PARTS=5; // number of parts used to make the product
var requested=Atomic.Int(-1); // the id of the part the consumer needs
var pipe=Thread.Pipe(); // "conveyor belt" of parts to consumer
foreach n in (NUM_PARTS){ product[n]-=1 } // remove parts from bin
}
println("Done"); // but workers are still waiting</syntaxhighlight>
An AtomicInt is an integer that does its operations in an atomic fashion. It is used to serialize the producers and consumer.
 
{{out}}
<pre>
Done
</pre>
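The counter-as-checkpoint idea the zkl entry relies on can be expressed in Java with an <code>AtomicInteger</code>; the short sketch below is hypothetical and only illustrates that technique, it is not a translation of the zkl code.

<syntaxhighlight lang="java">import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCheckpoint {
    static final int N_WORKERS = 4;
    static final AtomicInteger finished = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 1; i <= N_WORKERS; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    Thread.sleep((long) (Math.random() * 500)); // simulated work on one part
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("worker " + id + " done");
                finished.incrementAndGet(); // atomic update, no explicit lock needed
            }).start();
        }
        // Checkpoint: poll the atomic counter until every worker has reported in.
        while (finished.get() < N_WORKERS) Thread.sleep(10);
        System.out.println("checkpoint reached: assembling");
    }
}</syntaxhighlight>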
{{omit from|Axe}}
{{omit from|Maxima}}
{{omit from|ML/I}}