Checkpoint synchronization

=={{header|Ada}}==
<syntaxhighlight lang=Ada>with Ada.Calendar; use Ada.Calendar;
with Ada.Numerics.Float_Random;
with Ada.Text_IO; use Ada.Text_IO;
-- ...
end Test_Checkpoint;

</syntaxhighlight>
Sample output:
<pre style="height: 200px;overflow:scroll">
...
</pre>
=={{header|BBC BASIC}}==
{{works with|BBC BASIC for Windows}}
<syntaxhighlight lang=bbcbasic> INSTALL @lib$+"TIMERLIB"
nWorkers% = 3
DIM tID%(nWorkers%)
REM ...
PROC_killtimer(tID%(I%))
NEXT
ENDPROC</syntaxhighlight>
'''Output:'''
<pre>
...
</pre>
=={{header|C}}==
Using OpenMP. Compiled with <code>gcc -Wall -fopenmp</code>.
<syntaxhighlight lang=C>#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
/* ... */

return 0;
}</syntaxhighlight>


=={{header|C++}}==
{{works with|C++11}}
<syntaxhighlight lang=cpp>#include <iostream>
#include <chrono>
#include <atomic>
// ...
for(auto& t: threads) t.join();
std::cout << "Assembly is finished";
}</syntaxhighlight>
{{out}}
<pre>
...
</pre>
=={{header|C sharp|C#}}==
{{works with|C sharp|10}}
<syntaxhighlight lang=csharp>using System;
using System.Linq;
using System.Threading;
// ...
}
}</syntaxhighlight>
{{out}}
<pre style="height:30ex;overflow:scroll">
...
</pre>
So to make it interesting, this version supports workers dynamically joining and parting, and uses the (2013) ''core.async'' library for Go-like channels.
Also, each worker passes a value to the checkpoint, so that some ''combine'' function could consume them once they're all received.
<syntaxhighlight lang=clojure>(ns checkpoint.core
(:gen-class)
(:require [clojure.core.async :as async :refer [go <! >! <!! >!! alts! close!]]
;; ...
(worker ckpt 10 (monitor 2))))

</syntaxhighlight>


=={{header|D}}==
<syntaxhighlight lang=d>import std.stdio;
import std.parallelism: taskPool, defaultPoolThreads, totalCPUs;

// ...
buildMechanism(42);
buildMechanism(11);
}</syntaxhighlight>
{{out|Example output}}
<pre>Build detail 0
...
</pre>
That said, here is an implementation of the task as stated. We start by defining a 'flag set' data structure (which is hopefully also useful for other problems), which allows us to express the checkpoint algorithm straightforwardly while being protected against the possibility of a task calling <code>deliver</code> or <code>leave</code> too many times. Note also that each task gets its own reference denoting its membership in the checkpoint group; thus it can only speak for itself and not break any global invariants.


<syntaxhighlight lang=e>/** A flagSet solves this problem: There are N things, each in a true or false
 * state, and we want to know whether they are all true (or all false), and be
 * able to bulk-change all of them, and all this without allowing double-
# ...
waits with= makeWorker(piece, checkpoint)
}
interp.waitAtTop(promiseAllFulfilled(waits))</syntaxhighlight>


=={{header|Erlang}}==
A team of 5 workers assemble 3 items. The time it takes to assemble 1 item is 0 - 100 milliseconds.
<syntaxhighlight lang=Erlang>
-module( checkpoint_synchronization ).

% ...
end,
worker_loop( Worker, N - 1, Checkpoint ).
</syntaxhighlight>
{{out}}
<pre>
...
</pre>
=={{header|FreeBASIC}}==
The library ontimer.bi is taken from the [https://www.freebasic.net/forum/viewtopic.php?f=7&t=23454 FreeBASIC forums].
<syntaxhighlight lang=freebasic>#include "ontimer.bi"

Randomize Timer
' ...
OnTimer(900, @worker3, 1)
Sleep 1000
Loop</syntaxhighlight>
{{out}}
<pre>Worker 1 starting (2 ticks)
...
</pre>
This first solution is a simple interpretation of the task, starting a goroutine (worker) for each part, letting the workers run concurrently, and waiting for them to all indicate completion. This is efficient and idiomatic in Go.

<syntaxhighlight lang=go>package main
import (
// ...
log.Println("assemble. cycle", c, "complete")
}
}</syntaxhighlight>
{{out}}
Sample run, with the race detector option enabled to show that no race conditions were detected.
...
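The heart of the solution above is a sync.WaitGroup used as the checkpoint. The following is a minimal, hedged sketch that isolates just that idea; the sizes and log messages are illustrative and are not taken from the program above.
<syntaxhighlight lang=go>package main

import (
	"fmt"
	"sync"
)

func main() {
	const nWorkers, nCycles = 3, 2 // illustrative sizes, not from the full solution
	for c := 1; c <= nCycles; c++ {
		var wg sync.WaitGroup
		wg.Add(nWorkers)
		for w := 1; w <= nWorkers; w++ {
			go func(w, c int) {
				defer wg.Done() // this worker has delivered its part
				fmt.Println("worker", w, "done in cycle", c)
			}(w, c)
		}
		wg.Wait() // checkpoint: wait for every worker before assembling
		fmt.Println("cycle", c, "complete")
	}
}</syntaxhighlight>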
Channels also synchronize, and in addition can send data. The solution shown here is very similar to the WaitGroup solution above but sends data on a channel to simulate a completed part. The channel operations provide synchronization and a WaitGroup is not needed.
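As a quick, hedged illustration of that hand-off before the full program below: each worker sends one value on a shared channel, and the assembler's receive loop is the checkpoint. The names and counts here are made up for the sketch.
<syntaxhighlight lang=go>package main

import "fmt"

func main() {
	const nWorkers = 3 // illustrative
	parts := make(chan int)
	for w := 1; w <= nWorkers; w++ {
		go func(w int) { parts <- w }(w) // worker w hands over its finished part
	}
	for i := 0; i < nWorkers; i++ { // checkpoint: one part from every worker
		fmt.Println("received part from worker", <-parts)
	}
	fmt.Println("assembly complete")
}</syntaxhighlight>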


<syntaxhighlight lang=go>package main

import (
// ...
log.Println(a, "assembled. cycle", c, "complete")
}
}</syntaxhighlight>
{{out}}
<pre>
...
</pre>
not justified.

<syntaxhighlight lang=go>package main

import (
// ...
close(done)
wg.Wait()
}</syntaxhighlight>
{{out}}
<pre>
...
</pre>


This solution shows workers joining and leaving, although it is a rather different interpretation of the task.
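The full program below manages membership with its own bookkeeping; as a rough, hedged sketch of the general idea only, the set of workers taking part in the checkpoint can simply differ from one cycle to the next.
<syntaxhighlight lang=go>package main

import (
	"fmt"
	"sync"
)

// cycle runs one checkpoint for whichever workers are currently in the shop.
func cycle(workers []string) {
	var wg sync.WaitGroup
	for _, w := range workers {
		wg.Add(1)
		go func(w string) {
			defer wg.Done()
			fmt.Println("worker", w, "assembles its detail")
		}(w)
	}
	wg.Wait() // checkpoint for the current membership only
	fmt.Println("cycle complete with", len(workers), "workers")
}

func main() {
	cycle([]string{"1", "2"})      // two workers present
	cycle([]string{"1", "2", "3"}) // worker 3 joins
	cycle([]string{"1", "3"})      // worker 2 leaves
}</syntaxhighlight>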
<syntaxhighlight lang=go>package main

import (
// ...
}
l.Println("worker", id, "leaves shop")
}</syntaxhighlight>
Output:
<pre>worker 1 contracted to assemble 2 details
...
</pre>
<li>For effectful computations, you should use concurrent threads (forkIO and MVar from the module Control.Concurrent), software transactional memory (STM) or alternatives provided by other modules.</li>
</ul>
<syntaxhighlight lang=Haskell>import Control.Parallel

data Task a = Idle | Make a
-- ...

main = workshop sum tasks
</syntaxhighlight>
<p>The following version works with the concurrency model provided by the module Control.Concurrent</p>
<p>A workshop is an MVar that holds three values: the number of workers doing something, the number of workers ready for the next task and the total number of workers at the moment.</p>
...
<p>Other than the parallel version above, this code runs in the IO Monad and makes it possible to perform IO actions such as accessing the hardware. However, all actions must have the return type IO (). If the workers must return some useful values, the MVar should be extended with the necessary fields and the workers should use those fields to store the results they produce.</p>
<p>Note: This code has been tested on GHC 7.6.1 and will most probably not run under other Haskell implementations due to the use of some functions from the module Control.Concurrent. It won't work if compiled with the -O2 compiler switch. Compile with the -threaded compiler switch if you want to run the threads in parallel.</p>
<syntaxhighlight lang=Haskell>import Control.Concurrent
import Control.Monad -- needed for "forM", "forM_"

-- ...
-- kill all worker threads before exit, if they're still running
forM_ (pids1 ++ pids2) killThread</syntaxhighlight>
'''Output:'''
<pre style="height: 200px;overflow:scroll">
...
</pre>
The following only works in Unicon:

<syntaxhighlight lang=unicon>global nWorkers, workers, cv

procedure main(A)
# ...
wait(cv)
}
end</syntaxhighlight>

Sample run:
...
For example:

<syntaxhighlight lang=J> {{for. y do. 0 T.'' end.}} 0>.4-1 T.'' NB. make sure we have some threads
ts=: 6!:0 NB. timestamp
dl=: 6!:3 NB. delay
NB. ...
│            │12 53 56.603│
│            │12 53 57.614│
└────────────┴────────────┘</syntaxhighlight>

Here we set up a loop that periodically records the time, waits a second on each pass, and repeats a number of times specified at task startup. We ran two tasks to demonstrate that they run side by side.


=={{header|Java}}==
<syntaxhighlight lang=Java>import java.util.Scanner;
import java.util.Random;

// ...
public static int nWorkers = 0;
}
}</syntaxhighlight>
Output:
<pre style="height: 200px;overflow:scroll">
...
</pre>
{{works with|Java|1.5+}}
<syntaxhighlight lang=java5>import java.util.Random;
import java.util.concurrent.CountDownLatch;

// ...
}
}
}</syntaxhighlight>
Output:
<pre style="height: 200px;overflow:scroll">Starting task 1
...
</pre>
=={{header|Julia}}==
Julia has specific macros for checkpoint-type synchronization: @async starts an asynchronous task, and multiple @async tasks can be synchronized by wrapping them in a @sync block, which creates a checkpoint for all of the enclosed @async tasks.
<syntaxhighlight lang=julia>
function runsim(numworkers, runs)
for count in 1:runs
# ...
for trial in trials
runsim(trial[1], trial[2])
end</syntaxhighlight>
{{output}}<pre>
Worker 1 finished after 0.2496063425219046 seconds
...
</pre>
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang=scala>// Version 1.2.41

import java.util.Random
// ...
nTasks = readLine()!!.toInt()
runTasks()
}</syntaxhighlight>

{{output}}
...
=={{header|Logtalk}}==
The following example can be found in the Logtalk distribution and is used here with permission. It's based on the Erlang solution for this task. Works when using SWI-Prolog, XSB, or YAP as the backend compiler.
<syntaxhighlight lang=logtalk>
:- object(checkpoint).

% ...

:- end_object.
</syntaxhighlight>
Output:
<syntaxhighlight lang=text>
| ?- checkpoint::run.
Worker 1 item 3
...
All assemblies done.
yes
</syntaxhighlight>


=={{header|Nim}}==
=={{header|Nim}}==
Line 2,158: Line 2,158:
Working on a task is simulated by sleeping during some time (randomly chosen).
Working on a task is simulated by sleeping during some time (randomly chosen).


<lang Nim>import locks
<syntaxhighlight lang=Nim>import locks
import os
import os
import random
import random
Line 2,230: Line 2,230:
orders[num].close()
orders[num].close()
responses.close()
responses.close()
deinitLock(randLock)</lang>
deinitLock(randLock)</syntaxhighlight>


{{out}}
{{out}}
Line 2,279: Line 2,279:
- And waits for $allDone checkpoint return on its personal channel.

<syntaxhighlight lang=Oforth>: task(n, jobs, myChannel)
while(true) [
System.Out "TASK " << n << " : Beginning my work..." << cr
// ...

#[ checkPoint(n, jobs, channels) ] &
n loop: i [ #[ task(i, jobs, channels at(i)) ] & ] ;</syntaxhighlight>


=={{header|Perl}}==
=={{header|Perl}}==
Line 2,307: Line 2,307:
The perlipc man page details several approaches to interprocess communication. Here's one of my favourites: socketpair and fork. I've omitted some error-checking for brevity.

<syntaxhighlight lang=perl>#!/usr/bin/perl
use warnings;
use strict;
# ...
# workers had terminate, it would need to reap them to avoid zombies:

wait; wait;</syntaxhighlight>


A sample run:
...
=={{header|Phix}}==
Simple multitasking solution: no locking required, no race condition possible, supports workers leaving and joining.
<!--<syntaxhighlight lang=Phix>(notonline)-->
<span style="color: #000080;font-style:italic;">-- demo\rosetta\checkpoint_synchronisation.exw</span>
<span style="color: #008080;">without</span> <span style="color: #008080;">js</span> <span style="color: #000080;font-style:italic;">-- task_xxx(), get_key()</span>
<span style="color: #000080;font-style:italic;">-- ...</span>
<span style="color: #0000FF;">{}</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">wait_key</span><span style="color: #0000FF;">()</span>
<!--</syntaxhighlight>-->
{{out}}
<pre style="height: 200px;overflow:scroll">
...
</pre>
'worker' takes a number of steps to perform. It "works" by printing each step,
and returning NIL when done.
<syntaxhighlight lang=PicoLisp>(de checkpoints (Projects Workers)
(for P Projects
(prinl "Starting project number " P ":")
# ...
(yield ID)
(prinl "Worker " ID " step " N) )
NIL ) )</syntaxhighlight>
Output:
<pre>: (checkpoints 2 3) # Start two projects with 3 workers
...
</pre>


PureBasic normally uses semaphores and mutexes to synchronize parallel systems. This system relies only on semaphores between each thread and the controller (the CheckPoint procedure). For exchanging data, a mutex-based message stack could easily be added, either synchronized according to this specific task or non-blocking if each worker is allowed that freedom.
<syntaxhighlight lang=PureBasic>#MaxWorktime=8000 ; "Workday" in msec

; Structure that each thread uses
; ...
CheckPoint()
Print("Press ENTER to exit"): Input()
EndIf</syntaxhighlight>
<pre style="height: 200px;overflow:scroll">Enter number of workers to use [2-2000]: 5
Work started, 5 workers has been called.
...
</pre>


=={{header|Python}}==
<syntaxhighlight lang=Python>
"""

# ...
w2.start()
w3.start()
</syntaxhighlight>
Output:
<pre>
...
</pre>
The method can be found on page 41 of the delightful book
[http://greenteapress.com/semaphores/downey08semaphores.pdf "The Little Book of Semaphores"] by Allen B. Downey.
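The book's reusable two-phase barrier is language-agnostic; the Racket program below follows. As a hedged aside for concreteness, here is a sketch of the same kind of reusable checkpoint in Go, built on a condition variable rather than the book's pair of semaphores, so it is an analogue rather than a translation.
<syntaxhighlight lang=go>package main

import (
	"fmt"
	"sync"
)

// Barrier is a reusable checkpoint for a fixed number of parties.
type Barrier struct {
	mu      sync.Mutex
	cond    *sync.Cond
	parties int
	waiting int
	phase   int
}

func NewBarrier(parties int) *Barrier {
	b := &Barrier{parties: parties}
	b.cond = sync.NewCond(&b.mu)
	return b
}

// Await blocks until all parties have called it, then releases them together.
func (b *Barrier) Await() {
	b.mu.Lock()
	defer b.mu.Unlock()
	phase := b.phase
	b.waiting++
	if b.waiting == b.parties { // last arrival opens the barrier
		b.waiting = 0
		b.phase++
		b.cond.Broadcast()
		return
	}
	for phase == b.phase { // earlier arrivals sleep until the phase changes
		b.cond.Wait()
	}
}

func main() {
	const t = 5 // total number of threads, as in the Racket example
	b := NewBarrier(t)
	var wg sync.WaitGroup
	for id := 0; id < t; id++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for cycle := 0; cycle < 3; cycle++ {
				fmt.Println("worker", id, "reached checkpoint", cycle)
				b.Await() // nobody proceeds until all t workers arrive
			}
		}(id)
	}
	wg.Wait()
}</syntaxhighlight>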
<syntaxhighlight lang=racket>
#lang racket
(define t 5) ; total number of threads
; ...
(displayln (for/list ([_ t]) (channel-get ch)))
(loop))
</syntaxhighlight>
Output:
<syntaxhighlight lang=racket>
(1 4 2 0 3)
(6 9 7 8 5)
...
(97 98 99 95 96)
...
</syntaxhighlight>


=={{header|Raku}}==
(formerly Perl 6)
<syntaxhighlight lang=raku line>my $TotalWorkers = 3;
my $BatchToRun = 3;
my @TimeTaken = (5..15); # in seconds
# ...
}
);
}</syntaxhighlight>
{{out}}
<pre>Worker 1 at batch 0 will work for 6 seconds ..
...
</pre>
{{needs-review|Ruby|This code might or might not do the correct task. See comment at [[Talk:{{PAGENAME}}]].}}

<syntaxhighlight lang=ruby>require 'socket'

# A Workshop runs all of its workers, then collects their results. Use
# ...
# Remove all workers.
wids.each { |wid| shop.remove wid }
pp shop.work(6)</syntaxhighlight>

Example of output: <pre>{23187=>[0, 1346269],
...
</pre>


=={{header|Rust}}==
<syntaxhighlight lang=rust>
//! We implement this task using Rust's Barriers. Barriers are simply thread synchronization
//! points--if a task waits at a barrier, it will not continue until the number of tasks for which
// ...
checkpoint();
}
</syntaxhighlight>




=={{header|Scala}}==
<syntaxhighlight lang=Scala>import java.util.{Random, Scanner}

object CheckpointSync extends App {
// ...
runTasks(in.nextInt)

}</syntaxhighlight>


=={{header|Tcl}}==
This implementation works by having a separate thread handle the synchronization (inter-thread message delivery already being serialized). The alternative, using a read-write mutex, is more complex and more likely to run into trouble with multi-core machines.
<syntaxhighlight lang=tcl>package require Tcl 8.5
package require Thread

# ...
expr {[llength $members] > 0}
}
}</syntaxhighlight>
Demonstration of how this works.
{{trans|Ada}}
<syntaxhighlight lang=tcl># Build the workers
foreach worker {A B C D} {
dict set ids $worker [checkpoint makeThread {
# ...
break
}
}</syntaxhighlight>
Output:
<pre>
...
</pre>
{{trans|Kotlin}}
{{libheader|Wren-ioutil}}
<syntaxhighlight lang=ecmascript>import "random" for Random
import "scheduler" for Scheduler
import "timer" for Timer
// ...
nWorkers = Input.integer("Enter number of workers to use: ", 1)
nTasks = Input.integer("Enter number of tasks to complete: ", 1)
runTasks.call()</syntaxhighlight>

{{out}}
...
The consumer requests a part it doesn't have, waits for a part, puts the received part (which, if the code is buggy, might not be the requested one) in a bin, and assembles the parts into a product.
Repeat until all requested products are made.
<syntaxhighlight lang=zkl>const NUM_PARTS=5; // number of parts used to make the product
var requested=Atomic.Int(-1); // the id of the part the consumer needs
var pipe=Thread.Pipe(); // "conveyor belt" of parts to consumer
// ...
foreach n in (NUM_PARTS){ product[n]-=1 } // remove parts from bin
}
println("Done"); // but workers are still waiting</syntaxhighlight>
An AtomicInt is an integer that does its operations in an atomic fashion. It is used to serialize the producers and consumer.
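For readers unfamiliar with the concept, here is a minimal, hedged sketch of the same idea in Go, where sync/atomic plays the role of zkl's Atomic.Int; the names and counts are illustrative only.
<syntaxhighlight lang=go>package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var made atomic.Int64 // count of finished parts, updated atomically
	var wg sync.WaitGroup
	for p := 0; p < 5; p++ { // one producer per part type
		wg.Add(1)
		go func() {
			defer wg.Done()
			made.Add(1) // atomic increment: safe without any lock
		}()
	}
	wg.Wait()
	fmt.Println("parts made:", made.Load()) // always 5; updates are never lost
}</syntaxhighlight>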