Checkpoint synchronization: Difference between revisions

{{task|Concurrency}}{{requires|Concurrency}}
Checkpoint synchronization is a problem of synchronizing multiple [[task]]s. Consider a workshop where several workers ([[task]]s) assemble details of some mechanism. When each of them completes its work, they put the details together. There is no store, so a worker who finishes its part first must wait for the others before starting another one. Putting the details together is the ''checkpoint'' at which [[task]]s synchronize themselves before going their separate ways.
 
Line 11 ⟶ 12:
 
If you can, implement workers joining and leaving.
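As an illustration of the checkpoint idea, independent of the language solutions below, here is a minimal sketch using a barrier (Python's <code>threading.Barrier</code>; worker names, counts, and timings are arbitrary):

```python
import random
import threading
import time

NUM_WORKERS, CYCLES = 4, 3
checkpoint = threading.Barrier(NUM_WORKERS)   # trips once all workers arrive
log = []                                      # (cycle, worker) completion records
log_lock = threading.Lock()

def worker(name):
    for cycle in range(1, CYCLES + 1):
        time.sleep(random.uniform(0.01, 0.05))    # simulate making a detail
        with log_lock:
            log.append((cycle, name))
        print(f"{name} finished its detail for assembly {cycle}")
        checkpoint.wait()                         # the checkpoint: wait for the others

threads = [threading.Thread(target=worker, args=(f"worker-{i}",))
           for i in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all assemblies complete")
```

Because the barrier releases everyone only when all workers have arrived, no worker can begin assembly ''k''+1 before every worker has finished assembly ''k''.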
 
=={{header|Ada}}==
<syntaxhighlight lang="ada">with Ada.Calendar; use Ada.Calendar;
with Ada.Numerics.Float_Random;
with Ada.Text_IO; use Ada.Text_IO;
Line 115:
end Test_Checkpoint;
 
</syntaxhighlight>
Sample output:
<pre style="height: 200px;overflow:scroll">
Line 187:
D ends shift
</pre>
 
=={{header|BBC BASIC}}==
{{works with|BBC BASIC for Windows}}
<syntaxhighlight lang="bbcbasic"> INSTALL @lib$+"TIMERLIB"
nWorkers% = 3
DIM tID%(nWorkers%)
Line 248 ⟶ 247:
PROC_killtimer(tID%(I%))
NEXT
ENDPROC</syntaxhighlight>
'''Output:'''
<pre>
Line 269 ⟶ 268:
Worker 2 starting (5 ticks)
</pre>
 
=={{header|C}}==
Using OpenMP. Compiled with <code>gcc -Wall -fopenmp</code>.
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
Line 302 ⟶ 300:
 
return 0;
}</syntaxhighlight>
=={{header|C sharp|C#}}==
{{works with|C sharp|10}}
<syntaxhighlight lang="csharp">using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
 
namespace Rosetta.CheckPointSync;
 
public class Program
{
public async Task Main()
{
RobotBuilder robotBuilder = new RobotBuilder();
Task work = robotBuilder.BuildRobots(
"Optimus Prime", "R. Giskard Reventlov", "Data", "Marvin",
"Bender", "Number Six", "C3-PO", "Dolores");
await work;
}
 
public class RobotBuilder
{
static readonly string[] parts = { "Head", "Torso", "Left arm", "Right arm", "Left leg", "Right leg" };
static readonly Random rng = new Random();
static readonly object key = new object();
 
public Task BuildRobots(params string[] robots)
{
int r = 0;
Barrier checkpoint = new Barrier(parts.Length, b => {
Console.WriteLine($"{robots[r]} assembled. Hello, {robots[r]}!");
Console.WriteLine();
r++;
});
var tasks = parts.Select(part => BuildPart(checkpoint, part, robots)).ToArray();
return Task.WhenAll(tasks);
}
 
private static int GetTime()
{
//Random is not threadsafe, so we'll use a lock.
//There are better ways, but that's out of scope for this exercise.
lock (key) {
return rng.Next(100, 1000);
}
}
 
private async Task BuildPart(Barrier barrier, string part, string[] robots)
{
foreach (var robot in robots) {
int time = GetTime();
Console.WriteLine($"Constructing {part} for {robot}. This will take {time}ms.");
await Task.Delay(time);
Console.WriteLine($"{part} for {robot} finished.");
barrier.SignalAndWait();
}
}
 
}
}</syntaxhighlight>
{{out}}
<pre style="height:30ex;overflow:scroll">
Constructing Head for Optimus Prime. This will take 607ms.
Constructing Torso for Optimus Prime. This will take 997ms.
Constructing Left arm for Optimus Prime. This will take 201ms.
Constructing Right arm for Optimus Prime. This will take 993ms.
Constructing Left leg for Optimus Prime. This will take 165ms.
Constructing Right leg for Optimus Prime. This will take 132ms.
Right leg for Optimus Prime finished.
Left leg for Optimus Prime finished.
Left arm for Optimus Prime finished.
Head for Optimus Prime finished.
Right arm for Optimus Prime finished.
Torso for Optimus Prime finished.
Optimus Prime assembled. Hello, Optimus Prime!
 
Constructing Right arm for R. Giskard Reventlov. This will take 772ms.
Constructing Left leg for R. Giskard Reventlov. This will take 722ms.
Constructing Head for R. Giskard Reventlov. This will take 140ms.
Constructing Left arm for R. Giskard Reventlov. This will take 299ms.
Constructing Right leg for R. Giskard Reventlov. This will take 637ms.
Constructing Torso for R. Giskard Reventlov. This will take 249ms.
Head for R. Giskard Reventlov finished.
Torso for R. Giskard Reventlov finished.
Left arm for R. Giskard Reventlov finished.
Right leg for R. Giskard Reventlov finished.
Left leg for R. Giskard Reventlov finished.
Right arm for R. Giskard Reventlov finished.
R. Giskard Reventlov assembled. Hello, R. Giskard Reventlov!
 
//etc
</pre>
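The post-phase action used by the C# <code>Barrier</code> above has a direct Python analogue: <code>threading.Barrier</code> accepts an <code>action</code> callable that runs in exactly one thread each time the barrier trips. A minimal sketch (part and robot names taken from the example above, structure simplified):

```python
import threading

parts = ["Head", "Torso", "Left arm", "Right arm", "Left leg", "Right leg"]
robots = ["Optimus Prime", "Marvin", "Bender"]
assembled = []   # filled by the post-phase action

def announce():
    # runs in exactly one thread each time the barrier trips
    robot = robots[len(assembled)]
    assembled.append(robot)
    print(f"{robot} assembled. Hello, {robot}!")

barrier = threading.Barrier(len(parts), action=announce)

def build(part):
    for robot in robots:
        print(f"{part} for {robot} finished.")
        barrier.wait()   # blocks until all six parts are done, then announce() fires

threads = [threading.Thread(target=build, args=(p,)) for p in parts]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The action fires between phases, before any waiting thread is released, so the announcement for one robot always precedes work on the next.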
=={{header|C++}}==
{{works with|C++11}}
<syntaxhighlight lang="cpp">#include <iostream>
#include <chrono>
#include <atomic>
Line 356 ⟶ 446:
for(auto& t: threads) t.join();
std::cout << "Assembly is finished";
}</syntaxhighlight>
{{out}}
<pre>
Line 371 ⟶ 461:
Assembly is finished
</pre>
 
=={{header|Clojure}}==
With a fixed number of workers, this would be very straightforward in Clojure by using a ''CyclicBarrier'' from ''java.util.concurrent''.
So to make it interesting, this version supports workers dynamically joining and parting, and uses the ''core.async'' library (new in 2013) for Go-like channels.
Also, each worker passes a value to the checkpoint, so that some ''combine'' function could consume them once they're all received.
<syntaxhighlight lang="clojure">(ns checkpoint.core
(:gen-class)
(:require [clojure.core.async :as async :refer [go <! >! <!! >!! alts! close!]]
Line 447 ⟶ 536:
(worker ckpt 10 (monitor 2))))
 
</syntaxhighlight>
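A dynamic-membership checkpoint of the kind described above can be sketched in Python with a condition variable. The class and method names here are my own; a generation counter keeps consecutive release waves from mixing:

```python
import threading

class DynamicCheckpoint:
    """Checkpoint whose set of workers may grow or shrink between syncs."""

    def __init__(self):
        self.cv = threading.Condition()
        self.members = 0   # workers currently enrolled
        self.arrived = 0   # workers already waiting at the checkpoint
        self.phase = 0     # generation counter, so releases don't mix

    def join(self):
        with self.cv:
            self.members += 1

    def leave(self):
        # a worker may only leave between checkpoints, not while waiting
        with self.cv:
            self.members -= 1
            if self.members and self.arrived == self.members:
                self._release()

    def deliver(self):
        with self.cv:
            self.arrived += 1
            if self.arrived == self.members:
                self._release()
            else:
                current = self.phase
                self.cv.wait_for(lambda: self.phase != current)

    def _release(self):   # caller must hold self.cv
        self.arrived = 0
        self.phase += 1
        self.cv.notify_all()
```

Workers call <code>join</code> once, <code>deliver</code> at each checkpoint, and <code>leave</code> when departing; <code>leave</code> re-checks the count so a departure cannot strand the remaining workers.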
 
=={{header|D}}==
<syntaxhighlight lang="d">import std.stdio;
import std.parallelism: taskPool, defaultPoolThreads, totalCPUs;
 
Line 473 ⟶ 561:
buildMechanism(42);
buildMechanism(11);
}</syntaxhighlight>
{{out|Example output}}
<pre>Build detail 0
Line 488 ⟶ 576:
Checkpoint reached. Assemble details ...
Mechanism with 11 parts finished: 55</pre>
 
=={{header|E}}==
 
Line 495 ⟶ 582:
That said, here is an implementation of the task as stated. We start by defining a 'flag set' data structure (which is hopefully also useful for other problems), which allows us to express the checkpoint algorithm straightforwardly while being protected against the possibility of a task calling <code>deliver</code> or <code>leave</code> too many times. Note also that each task gets its own reference denoting its membership in the checkpoint group; thus it can only speak for itself and not break any global invariants.
 
<syntaxhighlight lang="e">/** A flagSet solves this problem: There are N things, each in a true or false
* state, and we want to know whether they are all true (or all false), and be
* able to bulk-change all of them, and all this without allowing double-
Line 608 ⟶ 695:
waits with= makeWorker(piece, checkpoint)
}
interp.waitAtTop(promiseAllFulfilled(waits))</syntaxhighlight>
 
=={{header|Erlang}}==
A team of 5 workers assemble 3 items. The time it takes to assemble 1 item is 0 - 100 milliseconds.
<syntaxhighlight lang="erlang">
-module( checkpoint_synchronization ).
 
Line 648 ⟶ 734:
end,
worker_loop( Worker, N - 1, Checkpoint ).
</syntaxhighlight>
{{out}}
<pre>
Line 668 ⟶ 754:
Worker 5 item 1
</pre>
=={{header|FreeBASIC}}==
The library ontimer.bi is taken from the [https://www.freebasic.net/forum/viewtopic.php?f=7&t=23454 FreeBASIC forums].
<syntaxhighlight lang="freebasic">#include "ontimer.bi"
 
Randomize Timer
Dim Shared As Uinteger nWorkers = 3
Dim Shared As Uinteger tID(nWorkers)
Dim Shared As Integer cnt(nWorkers)
Dim Shared As Integer checked = 0
 
Sub checkpoint()
Dim As Boolean sync
If checked = 0 Then sync = False
checked += 1
If (sync = False) And (checked = nWorkers) Then
sync = True
Color 14 : Print "--Sync Point--"
checked = 0
End If
End Sub
 
Sub task(worker As Uinteger)
Redim Preserve cnt(nWorkers)
Select Case cnt(worker)
Case 0
cnt(worker) = Rnd * 3
Color 15 : Print "Worker " & worker & " starting (" & cnt(worker) & " ticks)"
Case -1
Exit Select
Case Else
cnt(worker) -= 1
If cnt(worker) = 0 Then
Color 7 : Print "Worker "; worker; " ready and waiting"
cnt(worker) = -1
checkpoint
cnt(worker) = 0
End If
End Select
End Sub
 
Sub worker1
task(1)
End Sub
Sub worker2
task(2)
End Sub
Sub worker3
task(3)
End Sub
 
Do
OnTimer(500, @worker1, 1)
OnTimer(100, @worker2, 1)
OnTimer(900, @worker3, 1)
Sleep 1000
Loop</syntaxhighlight>
{{out}}
<pre>Worker 1 starting (2 ticks)
Worker 1 ready and waiting
Worker 3 starting (1 ticks)
Worker 3 ready and waiting
--Sync Point--
Worker 3 starting (1 ticks)
Worker 3 ready and waiting
Worker 2 ready and waiting
Worker 1 starting (1 ticks)
Worker 2 starting (0 ticks)
Worker 1 ready and waiting
--Sync Point--
Worker 3 starting (0 ticks)
Worker 2 starting (1 ticks)
Worker 1 starting (1 ticks)
Worker 3 starting (2 ticks)
Worker 2 ready and waiting
Worker 1 ready and waiting
Worker 2 starting (1 ticks)
Worker 1 starting (0 ticks)
Worker 3 ready and waiting
--Sync Point--
Worker 2 ready and waiting
Worker 1 starting (1 ticks)
Worker 3 starting (1 ticks)
Worker 2 starting (3 ticks)
Worker 1 ready and waiting
Worker 3 ready and waiting</pre>
=={{header|Go}}==
'''Solution 1, WaitGroup'''
Line 676 ⟶ 848:
This first solution is a simple interpretation of the task, starting a goroutine (worker) for each part, letting the workers run concurrently, and waiting for them to all indicate completion. This is efficient and idiomatic in Go.
 
<syntaxhighlight lang="go">package main
import (
Line 709 ⟶ 881:
log.Println("assemble. cycle", c, "complete")
}
}</syntaxhighlight>
{{out}}
Sample run, with race detector option to show no race conditions detected.
Line 751 ⟶ 923:
Channels also synchronize, and in addition can send data. The solution shown here is very similar to the WaitGroup solution above but sends data on a channel to simulate a completed part. The channel operations provide synchronization and a WaitGroup is not needed.
 
<syntaxhighlight lang="go">package main
 
import (
Line 790 ⟶ 962:
log.Println(a, "assembled. cycle", c, "complete")
}
}</syntaxhighlight>
{{out}}
<pre>
Line 832 ⟶ 1,004:
not justified.
 
<syntaxhighlight lang="go">package main
 
import (
Line 895 ⟶ 1,067:
close(done)
wg.Wait()
}</syntaxhighlight>
{{out}}
<pre>
Line 942 ⟶ 1,114:
 
This solution shows workers joining and leaving, although it is a rather different interpretation of the task.
<syntaxhighlight lang="go">package main
 
import (
Line 1,009 ⟶ 1,181:
}
l.Println("worker", id, "leaves shop")
}</syntaxhighlight>
Output:
<pre>worker 1 contracted to assemble 2 details
Line 1,047 ⟶ 1,219:
worker 6 leaves shop
mechanism 5 completed</pre>
 
=={{header|Haskell}}==
<p>Although I am not sure this approach is right, this example shows several workers performing a series of tasks simultaneously and synchronizing themselves before starting the next task.</p>
Line 1,063 ⟶ 1,234:
<li>For effectful computations, you should use concurrent threads (forkIO and MVar from the module Control.Concurrent), software transactional memory (STM) or alternatives provided by other modules.</li>
</ul>
<syntaxhighlight lang="haskell">import Control.Parallel
 
data Task a = Idle | Make a
Line 1,145 ⟶ 1,316:
 
main = workshop sum tasks
</syntaxhighlight>
<p>The following version works with the concurrency model provided by the module Control.Concurrent</p>
<p>A workshop is an MVar that holds three values: the number of workers doing something, the number of workers ready for the next task and the total number of workers at the moment.</p>
Line 1,155 ⟶ 1,326:
<p>Other than the parallel version above, this code runs in the IO Monad and makes it possible to perform IO actions such as accessing the hardware. However, all actions must have the return type IO (). If the workers must return some useful values, the MVar should be extended with the necessary fields and the workers should use those fields to store the results they produce.</p>
<p>Note: This code has been tested on GHC 7.6.1 and will most probably not run under other Haskell implementations due to the use of some functions from the module Control.Concurrent. It won't work if compiled with the -O2 compiler switch. Compile with the -threaded compiler switch if you want to run the threads in parallel.</p>
<syntaxhighlight lang="haskell">import Control.Concurrent
import Control.Monad -- needed for "forM", "forM_"
 
Line 1,266 ⟶ 1,437:
-- kill all worker threads before exit, if they're still running
forM_ (pids1 ++ pids2) killThread</syntaxhighlight>
'''Output:'''
<pre style="height: 200px;overflow:scroll">
Line 1,361 ⟶ 1,532:
The following only works in Unicon:
 
<syntaxhighlight lang="unicon">global nWorkers, workers, cv
 
procedure main(A)
Line 1,388 ⟶ 1,559:
wait(cv)
}
end</syntaxhighlight>
 
Sample run:
Line 1,416 ⟶ 1,587:
->
</pre>
 
=={{header|J}}==
 
Now that J has a threading implementation: threads may be assigned tasks, and referencing the values produced by the tasks automatically synchronizes.
 
For example:
 
<syntaxhighlight lang="j"> {{for. y do. 0 T.'' end.}} 0>.4-1 T.'' NB. make sure we have some threads
ts=: 6!:0 NB. timestamp
dl=: 6!:3 NB. delay
{{r=.EMPTY for. i.y do. dl 1[ r=.r,3}.ts'' end. r}} t. ''"0(3 5)
┌────────────┬────────────┐
│12 53 53.569│12 53 53.569│
│12 53 54.578│12 53 54.578│
│12 53 55.587│12 53 55.587│
│ │12 53 56.603│
│ │12 53 57.614│
└────────────┴────────────┘</syntaxhighlight>
 
Here, we set up a loop which records the time and then waits a second on each pass, repeating a number of times specified at task startup. We ran two tasks to demonstrate that they run side by side.
=={{header|Java}}==
<syntaxhighlight lang="java">import java.util.Scanner;
import java.util.Random;
 
Line 1,512 ⟶ 1,696:
public static int nWorkers = 0;
}
}</syntaxhighlight>
Output:
<pre style="height: 200px;overflow:scroll">
Line 1,552 ⟶ 1,736:
</pre>
{{works with|Java|1.5+}}
<syntaxhighlight lang="java">import java.util.Random;
import java.util.concurrent.CountDownLatch;
 
Line 1,599 ⟶ 1,783:
}
}
}</syntaxhighlight>
Output:
<pre style="height: 200px;overflow:scroll">Starting task 1
Line 1,643 ⟶ 1,827:
Worker 2 is ready
Task 3 complete</pre>
 
=={{header|Julia}}==
Julia has specific macros for checkpoint-style synchronization. @async starts an asynchronous task, and multiple @async tasks can be synchronized by wrapping them in a @sync block, which acts as a checkpoint for all the enclosed @async tasks.
<syntaxhighlight lang="julia">
function runsim(numworkers, runs)
for count in 1:runs
Line 1,666 ⟶ 1,849:
for trial in trials
runsim(trial[1], trial[2])
end</syntaxhighlight>
{{output}}<pre>
Worker 1 finished after 0.2496063425219046 seconds
Line 1,752 ⟶ 1,935:
Finished all runs.
</pre>
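The @sync/@async pattern has a close Python analogue in <code>asyncio.gather</code>, which acts as the checkpoint for a batch of coroutines. A minimal sketch (worker counts and timings are arbitrary, not taken from the Julia run above):

```python
import asyncio
import random

completed = []   # records each finished (run, worker) pair

async def worker(run, i):
    t = random.uniform(0.01, 0.05)
    await asyncio.sleep(t)              # simulate doing the work
    completed.append((run, i))
    print(f"Worker {i} finished after {t:.3f} seconds")

async def runsim(numworkers, runs):
    for run in range(1, runs + 1):
        # gather() returns only when every worker coroutine is done:
        await asyncio.gather(*(worker(run, i) for i in range(1, numworkers + 1)))
        print(f"Tasks finished for run {run}.")
    print("Finished all runs.")

asyncio.run(runsim(3, 2))
```

<code>gather</code> awaits all of its arguments, so no run starts before every worker of the previous run has finished.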
 
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang="scala">// Version 1.2.41
 
import java.util.Random
Line 1,814 ⟶ 1,996:
nTasks = readLine()!!.toInt()
runTasks()
}</syntaxhighlight>
 
{{output}}
Line 1,858 ⟶ 2,040:
Worker 5 is ready
</pre>
 
=={{header|Logtalk}}==
The following example can be found in the Logtalk distribution and is used here with permission. It's based on the Erlang solution for this task. Works when using SWI-Prolog, XSB, or YAP as the backend compiler.
<syntaxhighlight lang="logtalk">
:- object(checkpoint).
 
Line 1,927 ⟶ 2,108:
 
:- end_object.
</syntaxhighlight>
Output:
<syntaxhighlight lang="text">
| ?- checkpoint::run.
Worker 1 item 3
Line 1,951 ⟶ 2,132:
All assemblies done.
yes
</syntaxhighlight>
 
=={{header|Nim}}==
As in Oforth, the checkpoint is a thread (the main thread) and synchronization is done using channels:
Line 1,959 ⟶ 2,139:
Working on a task is simulated by sleeping for a random amount of time.
 
<syntaxhighlight lang="nim">import locks
import os
import random
Line 2,031 ⟶ 2,211:
orders[num].close()
responses.close()
deinitLock(randLock)</syntaxhighlight>
 
{{out}}
Line 2,064 ⟶ 2,244:
Sending stop order to workers.
All workers stopped.</pre>
 
=={{header|Oforth}}==
The checkpoint is implemented as a task. It:
Line 2,080 ⟶ 2,259:
- And waits for $allDone checkpoint return on its personal channel.
 
<syntaxhighlight lang="oforth">: task(n, jobs, myChannel)
while(true) [
System.Out "TASK " << n << " : Beginning my work..." << cr
Line 2,102 ⟶ 2,281:
 
#[ checkPoint(n, jobs, channels) ] &
n loop: i [ #[ task(i, jobs, channels at(i)) ] & ] ;</syntaxhighlight>
 
=={{header|Perl}}==
 
The perlipc man page details several approaches to interprocess communication. Here's one of my favourites: socketpair and fork. I've omitted some error-checking for brevity.
 
<syntaxhighlight lang="perl">#!/usr/bin/perl
use warnings;
use strict;
Line 2,176 ⟶ 2,354:
# workers had terminate, it would need to reap them to avoid zombies:
 
wait; wait;</syntaxhighlight>
 
A sample run:
Line 2,186 ⟶ 2,364:
msl@64Lucid:~/perl$
</pre>
 
=={{header|Phix}}==
Simple multitasking solution: no locking required, no race condition possible, supports workers leaving and joining.
<!--<syntaxhighlight lang="phix">(notonline)-->
<span style="color: #000080;font-style:italic;">-- demo\rosetta\checkpoint_synchronisation.exw</span>
<span style="color: #008080;">without</span> <span style="color: #008080;">js</span> <span style="color: #000080;font-style:italic;">-- task_xxx(), get_key()</span>
Line 2,244 ⟶ 2,421:
<span style="color: #0000FF;">{}</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">wait_key</span><span style="color: #0000FF;">()</span>
<!--</syntaxhighlight>-->
{{out}}
<pre style="height: 200px;overflow:scroll">
Line 2,320 ⟶ 2,497:
worker B leaves
</pre>
 
=={{header|PicoLisp}}==
The following solution implements each worker as a coroutine. Therefore, it
Line 2,332 ⟶ 2,508:
'worker' takes a number of steps to perform. It "works" by printing each step,
and returning NIL when done.
<syntaxhighlight lang="picolisp">(de checkpoints (Projects Workers)
(for P Projects
(prinl "Starting project number " P ":")
Line 2,350 ⟶ 2,526:
(yield ID)
(prinl "Worker " ID " step " N) )
NIL ) )</syntaxhighlight>
Output:
<pre>: (checkpoints 2 3) # Start two projects with 3 workers
Line 2,382 ⟶ 2,558:
Worker 1 step 4
Project number 2 is done.</pre>
 
=={{header|PureBasic}}==
 
PureBasic normally uses semaphores and mutexes to synchronize parallel systems. This solution relies only on semaphores between each thread and the controller (the CheckPoint procedure). For exchanging data, a mutex-based message stack could easily be added, either synchronized according to this specific task or non-blocking if each worker can be allowed that freedom.
<syntaxhighlight lang="purebasic">#MaxWorktime=8000 ; "Workday" in msec
 
; Structure that each thread uses
Line 2,472 ⟶ 2,647:
CheckPoint()
Print("Press ENTER to exit"): Input()
EndIf</syntaxhighlight>
<pre style="height: 200px;overflow:scroll">Enter number of workers to use [2-2000]: 5
Work started, 5 workers has been called.
Line 2,595 ⟶ 2,770:
Thread #1 is done.
Press ENTER to exit</pre>
 
=={{header|Python}}==
<syntaxhighlight lang="python">
"""
 
Line 2,631 ⟶ 2,805:
w2.start()
w3.start()
</syntaxhighlight>
Output:
<pre>
Line 2,647 ⟶ 2,821:
Exiting worker2
</pre>
 
=={{header|Racket}}==
This solution uses a double barrier to synchronize the five threads.
The method can be found on page 41 of the delightful book
[http://greenteapress.com/semaphores/downey08semaphores.pdf "The Little Book of Semaphores"] by Allen B. Downey.
<syntaxhighlight lang="racket">
#lang racket
(define t 5) ; total number of threads
Line 2,697 ⟶ 2,870:
(displayln (for/list ([_ t]) (channel-get ch)))
(loop))
</syntaxhighlight>
Output:
<syntaxhighlight lang="racket">
(1 4 2 0 3)
(6 9 7 8 5)
Line 2,721 ⟶ 2,894:
(97 98 99 95 96)
...
</syntaxhighlight>
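The double barrier from Downey's book can be sketched in Python directly from its semaphore description: two turnstiles guarded by a mutex, so the barrier is safely reusable across phases. The class name and demo parameters are my own:

```python
import threading

class DoubleBarrier:
    """Reusable barrier built from two turnstiles (after Downey's
    'Little Book of Semaphores' two-turnstile solution)."""

    def __init__(self, n):
        self.n = n
        self.count = 0
        self.mutex = threading.Semaphore(1)
        self.turnstile1 = threading.Semaphore(0)
        self.turnstile2 = threading.Semaphore(0)

    def wait(self):
        with self.mutex:
            self.count += 1
            if self.count == self.n:          # last to arrive opens turnstile 1
                for _ in range(self.n):
                    self.turnstile1.release()
        self.turnstile1.acquire()
        with self.mutex:
            self.count -= 1
            if self.count == 0:               # last to leave opens turnstile 2
                for _ in range(self.n):
                    self.turnstile2.release()
        self.turnstile2.acquire()

# demo: four threads, three synchronized phases
log, lock, barrier = [], threading.Lock(), DoubleBarrier(4)

def worker():
    for phase in range(3):
        with lock:
            log.append(phase)    # the "work" for this phase
        barrier.wait()           # no thread enters phase+1 early

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)
```

The second turnstile prevents a fast thread from lapping the barrier and re-entering before a slow thread has left, which is what makes the barrier reusable.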
 
=={{header|Raku}}==
(formerly Perl 6)
<syntaxhighlight lang="raku">my $TotalWorkers = 3;
my $BatchToRun = 3;
my @TimeTaken = (5..15); # in seconds
Line 2,761 ⟶ 2,933:
}
);
}</syntaxhighlight>
{{out}}
<pre>Worker 1 at batch 0 will work for 6 seconds ..
Line 2,785 ⟶ 2,957:
>>>>> batch 2 completed.
</pre>
 
=={{header|Ruby}}==
{{needs-review|Ruby|This code might or might not do the correct task. See comment at [[Talk:{{PAGENAME}}]].}}
 
<syntaxhighlight lang="ruby">require 'socket'
 
# A Workshop runs all of its workers, then collects their results. Use
Line 2,946 ⟶ 3,117:
# Remove all workers.
wids.each { |wid| shop.remove wid }
pp shop.work(6)</syntaxhighlight>
 
Example of output: <pre>{23187=>[0, 1346269],
Line 2,962 ⟶ 3,133:
4494=>[5, 4, 1166220]}
{}</pre>
 
=={{header|Rust}}==
<syntaxhighlight lang="rust">
//! We implement this task using Rust's Barriers. Barriers are simply thread synchronization
//! points--if a task waits at a barrier, it will not continue until the number of tasks for which
Line 3,030 ⟶ 3,200:
checkpoint();
}
</syntaxhighlight>
 
 
=={{header|Scala}}==
<syntaxhighlight lang="scala">import java.util.{Random, Scanner}
 
object CheckpointSync extends App {
Line 3,117 ⟶ 3,285:
runTasks(in.nextInt)
 
}</syntaxhighlight>
 
=={{header|Tcl}}==
This implementation works by having a separate thread handle the synchronization (inter-thread message delivery already being serialized). The alternative, using a read-write mutex, is more complex and more likely to run into trouble with multi-core machines.
<syntaxhighlight lang="tcl">package require Tcl 8.5
package require Thread
 
Line 3,217 ⟶ 3,384:
expr {[llength $members] > 0}
}
}</syntaxhighlight>
Demonstration of how this works.
{{trans|Ada}}
<syntaxhighlight lang="tcl"># Build the workers
foreach worker {A B C D} {
dict set ids $worker [checkpoint makeThread {
Line 3,249 ⟶ 3,416:
break
}
}</syntaxhighlight>
Output:
<pre>
Line 3,292 ⟶ 3,459:
B is ready
D is ready</pre>
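The coordinator-thread approach used above — serializing the checkpoint through message delivery instead of a shared mutex — can be sketched in Python with a queue and per-worker events (all names and counts here are my own):

```python
import queue
import threading

N_WORKERS, CYCLES = 3, 2
ready_q = queue.Queue()                                   # workers report here
go = [threading.Event() for _ in range(N_WORKERS)]        # coordinator releases here
synced = []                                               # completed checkpoints

def coordinator():
    for cycle in range(CYCLES):
        waiting = sorted(ready_q.get() for _ in range(N_WORKERS))
        synced.append(waiting)
        print(f"cycle {cycle}: checkpoint reached by workers {waiting}")
        for w in waiting:
            go[w].set()           # release everyone at once

def worker(wid):
    for _ in range(CYCLES):
        ready_q.put(wid)          # report: my detail for this cycle is done
        go[wid].wait()            # block until the coordinator releases us
        go[wid].clear()

threads = [threading.Thread(target=coordinator)]
threads += [threading.Thread(target=worker, args=(w,)) for w in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all cycles complete")
```

Only the coordinator touches the membership list, so no worker-side locking is needed; a worker clears its own event before reporting again, and the coordinator sets it only after that report, so wakeups cannot be lost.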
 
=={{header|Wren}}==
{{trans|Kotlin}}
{{libheader|Wren-ioutil}}
<syntaxhighlight lang="wren">import "random" for Random
import "scheduler" for Scheduler
import "timer" for Timer
import "./ioutil" for Input
 
var rgen = Random.new()
Line 3,336 ⟶ 3,502:
nWorkers = Input.integer("Enter number of workers to use: ", 1)
nTasks = Input.integer("Enter number of tasks to complete: ", 1)
runTasks.call()</syntaxhighlight>
 
{{out}}
Line 3,385 ⟶ 3,551:
The consumer requests a part it doesn't have, waits for a part, puts the received part (which might not be the requested one, if the code is buggy) in a bin, and assembles the parts into a product.
This repeats until all requested products are made.
<syntaxhighlight lang="zkl">const NUM_PARTS=5; // number of parts used to make the product
var requested=Atomic.Int(-1); // the id of the part the consumer needs
var pipe=Thread.Pipe(); // "conveyor belt" of parts to consumer
Line 3,410 ⟶ 3,576:
foreach n in (NUM_PARTS){ product[n]-=1 } // remove parts from bin
}
println("Done"); // but workers are still waiting</syntaxhighlight>
An AtomicInt is an integer that does its operations in an atomic fashion. It is used to serialize the producers and consumer.
 
Line 3,428 ⟶ 3,594:
Done
</pre>
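The conveyor-belt design above can be sketched in Python with a thread-safe queue standing in for the pipe. This simplified sketch omits the requested-part signalling (the consumer just takes whatever arrives) and uses smaller counts; names are my own:

```python
import queue
import threading

NUM_PARTS, PRODUCTS = 3, 2
belt = queue.Queue()                 # the "conveyor belt" of finished parts
finished = []                        # assembled product numbers

def producer(part_id):
    for _ in range(PRODUCTS):        # make exactly enough of this part
        belt.put(part_id)

def consumer():
    bin_ = {p: 0 for p in range(NUM_PARTS)}      # parts on hand
    for n in range(PRODUCTS):
        while not all(bin_[p] >= 1 for p in range(NUM_PARTS)):
            bin_[belt.get()] += 1                # take whatever arrives next
        for p in range(NUM_PARTS):
            bin_[p] -= 1                         # remove one of each part from the bin
        finished.append(n)
        print(f"product {n} assembled")

threads = [threading.Thread(target=producer, args=(p,)) for p in range(NUM_PARTS)]
threads.append(threading.Thread(target=consumer))
for t in threads:
    t.start()
for t in threads:
    t.join()
print("Done")
```

Keeping the bin across products (rather than emptying it) mirrors the zkl code's <code>product[n]-=1</code> step and ensures surplus parts pulled early are not lost.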
{{omit from|Axe}}
{{omit from|Maxima}}
{{omit from|ML/I}}