Mutex
You are encouraged to solve this task according to the task description, using any language you may know.
A mutex (from "mutual exclusion") is a synchronization object, a variant of a semaphore with k=1. A mutex is said to be seized by a task when the task decreases k; it is released when the task restores k. Mutexes are typically used to protect a shared resource from concurrent access: a task seizes (or acquires) the mutex, accesses the resource, and then releases the mutex.
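For instance, the seize/access/release pattern looks like this in C++ (a minimal sketch, not part of the original task text, using the standard std::mutex):
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;            // protects shared_counter
int shared_counter = 0;  // the shared resource

void work() {
    m.lock();            // seize (acquire) the mutex
    ++shared_counter;    // access the shared resource
    m.unlock();          // release the mutex
}

int main() {
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << shared_counter << '\n';  // always 2
}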
A mutex is a low-level synchronization primitive that is prone to deadlock. A deadlock can occur with just two tasks and two mutexes (if each task attempts to acquire both mutexes, but in the opposite order). Entering the deadlock usually depends on a race condition, which leads to sporadic hang-ups that are very difficult to track down.
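A minimal C++ sketch of that two-task, two-mutex scenario (illustrative only; whether a run actually hangs depends on timing):
#include <mutex>
#include <thread>

std::mutex a, b;

void task1() {
    std::lock_guard<std::mutex> hold_a(a); // takes a first ...
    std::lock_guard<std::mutex> hold_b(b); // ... then b
}

void task2() {
    std::lock_guard<std::mutex> hold_b(b); // takes b first ...
    std::lock_guard<std::mutex> hold_a(a); // ... then a: the opposite order
}

int main() {
    std::thread t1(task1), t2(task2);
    t1.join();
    t2.join(); // with unlucky interleaving, both threads wait forever
}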
Variants of mutexes
Global and local mutexes
Usually the OS provides various implementations of mutexes corresponding to the variants of tasks available in the OS. For example, system-wide (global) mutexes can be used by processes, while local mutexes can be used only by the threads of a single process. This distinction is maintained because, depending on the hardware, seizing a global mutex might be a thousand times slower than seizing a local one.
Reentrant mutex
A reentrant mutex can be seized by the same task multiple times. Each seizure of the mutex must be matched by a release before another task can seize it.
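For illustration, a sketch using C++'s std::recursive_mutex (this choice of primitive is an assumption of the example, not something the task prescribes):
#include <iostream>
#include <mutex>

std::recursive_mutex m;

// The same task may seize the mutex again while already holding it;
// it becomes available to other tasks only after the matching releases.
void countdown(int n) {
    std::lock_guard<std::recursive_mutex> guard(m);
    if (n > 0) {
        std::cout << n << '\n';
        countdown(n - 1); // re-seizes m without deadlocking
    }
}

int main() {
    countdown(3);
}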
Read write mutex
A read write mutex can be seized at two levels, for read and for write. The mutex can be seized for read by any number of tasks; only one task may seize it for write. Read write mutexes are usually used to protect resources which can be accessed in mutable and immutable ways. Immutable (read) access is granted concurrently to many tasks because they do not change the resource state. Read write mutexes can be reentrant, global or local. Further, promotion operations may be provided: a task that has seized the mutex for write releases it while keeping it seized for read. Note that the reverse operation is potentially deadlocking and requires some additional access policy control.
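As an illustration, a sketch assuming C++17's std::shared_mutex (which, note, provides no promotion or demotion operations):
#include <mutex>
#include <shared_mutex>
#include <string>

std::shared_mutex rw;
std::string shared_data;

std::string read_it() {
    std::shared_lock<std::shared_mutex> lock(rw); // seized for read: many readers may hold this at once
    return shared_data;
}

void write_it(const std::string& s) {
    std::unique_lock<std::shared_mutex> lock(rw); // seized for write: exclusive
    shared_data = s;
}

int main() {
    write_it("hello");
    return read_it().size() == 5 ? 0 : 1;
}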
Deadlock prevention
There exists a simple technique of deadlock prevention: mutexes are always seized in the same fixed order, as sketched below. This is discussed in depth in the Dining philosophers problem.
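A sketch of the fixed-order technique in C++: both tasks always seize mutex a before mutex b, so the circular wait from the earlier example cannot form. (C++17's std::scoped_lock achieves a similar guarantee with a built-in avoidance algorithm.)
#include <mutex>
#include <thread>

std::mutex a, b; // convention: a is always seized before b

void task1() {
    std::lock_guard<std::mutex> hold_a(a);
    std::lock_guard<std::mutex> hold_b(b);
    // ... use both protected resources ...
}

void task2() {
    std::lock_guard<std::mutex> hold_a(a); // same order as task1
    std::lock_guard<std::mutex> hold_b(b);
    // ...
}

int main() {
    std::thread t1(task1), t2(task2);
    t1.join();
    t2.join();
}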
Sample implementations / APIs
6502 Assembly
There isn't any hardware support for mutexes, but a simple flag in memory will do. This implementation is more akin to a "starting pistol" for some time-critical process such as the Nintendo Entertainment System's vBlank NMI (Non-Maskable Interrupt) which is typically used to update video memory. The function's parameters are pre-loaded in global memory, but the function that uses them won't be called unless the lock is released. Once all the parameters are ready, the main procedure can wait for the interrupt, after which it releases the lock and waits for the interrupt again. This time, the interrupt routine that needs those parameters is run, and when it's finished, the flag is locked again. For simplicity, most of the hardware-specific routines are omitted (and in reality would require additional mutexes since it usually takes more than one frame to do something like print a long string to the screen.)
Side note: During a 6502's NMI, other interrupts cannot occur, not even another NMI. Therefore there is no chance that an IRQ will happen while executing NMI code.
;assume that the NES's screen is active and NMI occurs at the end of every frame.
mutex equ $01 ;these addresses in zero page memory will serve as the global variables.
vblankflag equ $02
main:
;if your time-sensitive function has parameters, pre-load them into global memory here.
;The only thing the NMI should have to do is write the data to the hardware registers.
jsr waitframe
LDA #$01 ;there's not enough time for a second vblank to occur between these two calls to waitframe().
STA mutex ;release the mutex. the next NMI will service the function we just unlocked.
jsr waitframe
halt:
jmp halt ;we're done - trap the cpu. NMI will still occur but nothing of interest happens since the mutex is locked.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
nmi: ;every 1/60th of a second the CPU jumps here automatically.
pha
txa
pha
tya
pha ;pushAll
;for simplicity's sake the needs of the hardware are going to be omitted. A real NES game would perform sprite DMA here.
LDA mutex
BEQ exit_nmi
; whatever you wanted to gatekeep behind your mutex goes here.
; typically it would be something like a text box printer, etc.
; Something that needs to update the video RAM and do so ASAP.
LDA #$00
STA mutex ;lock the mutex again.
exit_nmi:
LDA #$01
STA vblankflag ;allow waitframe() to exit
pla
tay
pla
tax
pla ;popAll
rti
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
waitframe:
pha
LDA #0
sta vblankflag
.again:
LDA vblankflag
BEQ .again ;this would loop infinitely if it weren't for vblankflag being set during NMI
pla
rts
8086 Assembly
LOCK is a prefix that can be added to instructions that read a value and then write it back, such as INC and DEC. This prefix "locks" the memory bus, preventing other CPUs (if any) from accessing the same memory location at the same time as the CPU executing the "locked" instruction. The lock lasts until the locked instruction is complete, at which point it is released. This isn't used much on the original 8086, and there are a few limitations to the usage of the LOCK prefix:
- You cannot LOCK register operands, only memory operands (those in brackets). This makes sense - each processor has its own registers and they can't access each other's registers anyway.
- LOCK can't be placed in front of every instruction on the 8086, only ones where it actually applies.
lock inc word ptr [ds:TestData] ;increment the word at TestData. Only this CPU can access it right now.
lock dec byte ptr [es:di]
Ada
Ada provides higher-level concurrency primitives, which are complete in the sense that they also allow implementations of the lower-level ones, like mutexes. Here is an implementation of a plain non-reentrant mutex based on protected objects.
The mutex interface:
protected type Mutex is
   entry Seize;
   procedure Release;
private
   Owned : Boolean := False;
end Mutex;
The implementation:
protected body Mutex is
   entry Seize when not Owned is
   begin
      Owned := True;
   end Seize;

   procedure Release is
   begin
      Owned := False;
   end Release;
end Mutex;
Here the entry Seize has a queue of tasks waiting for the mutex. The entry's barrier is closed when Owned is true, so any task calling the entry will be queued. When the barrier is open, the first task from the queue executes the entry and Owned becomes true, closing the barrier again. The procedure Release simply sets Owned to false. Both Seize and Release are protected actions whose execution causes reevaluation of all barriers, in this case the barrier of Seize.
Use:
declare
   M : Mutex;
begin
   M.Seize;    -- Wait infinitely for the mutex to be free
   ...         -- Critical code
   M.Release;  -- Release the mutex
   ...
   select
      M.Seize; -- Wait no longer than 0.5s
   or
      delay 0.5;
      raise Timed_Out;
   end select;
   ...         -- Critical code
   M.Release;  -- Release the mutex
end;
It is also possible to implement a mutex as a monitor task.
BBC BASIC
REM Create mutex:
SYS "CreateMutex", 0, 0, 0 TO hMutex%
REM Wait to acquire mutex:
REPEAT
SYS "WaitForSingleObject", hMutex%, 1 TO res%
UNTIL res% = 0
REM Release mutex:
SYS "ReleaseMutex", hMutex%
REM Free mutex:
SYS "CloseHandle", hMutex%
C
Win32
To create a mutex operating system "object":
HANDLE hMutex = CreateMutex(NULL, FALSE, NULL);
To lock the mutex:
WaitForSingleObject(hMutex, INFINITE);
To unlock the mutex
ReleaseMutex(hMutex);
When the program is finished with the mutex:
CloseHandle(hMutex);
POSIX
Creating a mutex:
#include <pthread.h>
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
Or:
pthread_mutex_t mutex;
pthread_mutex_init(&mutex, NULL);
Locking:
int error = pthread_mutex_lock(&mutex);
Unlocking:
int error = pthread_mutex_unlock(&mutex);
Trying to lock (but do not wait if it can't)
int error = pthread_mutex_trylock(&mutex);
C++
Win32
POSIX
C++11
C++11 reference for mutex-related functionality in the standard library
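Beyond the reference link, a minimal sketch of the C++11 facilities (std::mutex guarded by an RAII std::lock_guard, so the mutex is released even if the protected code throws):
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
int counter = 0;

void increment() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> guard(m); // locked here, unlocked when guard goes out of scope
        ++counter;
    }
}

int main() {
    std::thread t1(increment), t2(increment);
    t1.join();
    t2.join();
    std::cout << counter << '\n'; // 200000: no increments were lost
}
std::recursive_mutex and std::timed_mutex (both C++11) and std::shared_mutex (C++17) cover the reentrant, timed and read-write variants described above.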
D
class Synced
{
public:
    synchronized int func (int input)
    {
        num += input;
        return num;
    }

private:
    static int num = 0;
}
Keep in mind that synchronized used as above works on a per-class-instance basis.
The following example tries to illustrate the problem:
import tango.core.Thread, tango.io.Stdout, tango.util.log.Trace;
class Synced {
public synchronized int func (int input) {
Trace.formatln("in {} at func enter: {}", input, foo);
// stupid loop to consume some time
int arg;
for (int i = 0; i < 1000*input; ++i) {
for (int j = 0; j < 10_000; ++j) arg += j;
}
foo += input;
Trace.formatln("in {} at func exit: {}", input, foo);
return arg;
}
private static int foo;
}
void main(char[][] args) {
SimpleThread[] ht;
Stdout.print( "Starting application..." ).newline;
for (int i=0; i < 3; i++) {
Stdout.print( "Starting thread for: " )(i).newline;
ht ~= new SimpleThread(i+1);
ht[i].start();
}
// wait for all threads
foreach( s; ht )
s.join();
}
class SimpleThread : Thread
{
private int d_id;
this (int id) {
super (&run);
d_id = id;
}
void run() {
auto tested = new Synced;
Trace.formatln ("in run() {}", d_id);
tested.func(d_id);
}
}
Every thread creates its own Synced object, and because the monitor created by the synchronized statement is per object, each thread can enter the func() method at the same time.
To resolve that, either func() could be made static (static member functions are synchronized on a per-class basis), or a synchronized block should be used, as here:
class Synced {
    public int func (int input) {
        int arg;
        synchronized(Synced.classinfo) {
            // ...
            foo += input;
            // ...
        }
        return arg;
    }
    private static int foo;
}
Delphi
unit main;
interface
uses
Winapi.Windows, System.SysUtils, System.Classes, Vcl.Controls, Vcl.Forms,
System.SyncObjs, Vcl.StdCtrls;
type
TForm1 = class(TForm)
mmo1: TMemo;
btn1: TButton;
procedure FormCreate(Sender: TObject);
procedure FormDestroy(Sender: TObject);
procedure btn1Click(Sender: TObject);
private
{ Private declarations }
public
{ Public declarations }
end;
var
Form1: TForm1;
FMutex: TMutex;
implementation
{$R *.dfm}
procedure TForm1.FormCreate(Sender: TObject);
begin
FMutex := TMutex.Create();
end;
procedure TForm1.FormDestroy(Sender: TObject);
begin
FMutex.Free;
end;
// http://edgarpavao.com/2017/08/07/multithreading-e-processamento-paralelo-no-delphi-ppl/
procedure TForm1.btn1Click(Sender: TObject);
begin
//Thread 1
TThread.CreateAnonymousThread(
procedure
begin
FMutex.Acquire;
try
TThread.Sleep(5000);
TThread.Synchronize(TThread.CurrentThread,
procedure
begin
mmo1.Lines.Add('Thread 1');
end);
finally
FMutex.Release;
end;
end).Start;
//Thread 2
TThread.CreateAnonymousThread(
procedure
begin
FMutex.Acquire;
try
TThread.Sleep(1000);
TThread.Synchronize(TThread.CurrentThread,
procedure
begin
mmo1.Lines.Add('Thread 2');
end);
finally
FMutex.Release;
end;
end).Start;
//Thread 3
TThread.CreateAnonymousThread(
procedure
begin
FMutex.Acquire;
try
TThread.Sleep(3000);
TThread.Synchronize(TThread.CurrentThread,
procedure
begin
mmo1.Lines.Add('Thread 3');
end);
finally
FMutex.Release;
end;
end).Start;
end;
end.
- Output:
Thread 1
Thread 2
Thread 3
E
E's approach to concurrency is to never block, in favor of message passing/event queues/callbacks. Therefore, it is unidiomatic to use a mutex at all, and incorrect, or rather unsafe, to use a mutex which blocks the calling thread. That said, here is a mutex written in E.
def makeMutex() {
# The mutex is available (released) if available is resolved, otherwise it
# has been seized/locked. The specific value of available is irrelevant.
var available := null
# The interface to the mutex is a function, taking a function (action)
# to be executed.
def mutex(action) {
# By assigning available to our promise here, the mutex remains
# unavailable to the /next/ caller until /this/ action has gotten
# its turn /and/ resolved its returned value.
available := Ref.whenResolved(available, fn _ { action <- () })
}
return mutex
}
This implementation of a mutex is designed to have a very short implementation as well as usage in E. The mutex object is a function which takes a function action to be executed once the mutex is available. The mutex is unavailable until the return value of action resolves. This interface has been chosen over lock and unlock operations to reduce the hazard of unbalanced lock/unlock pairs, and because it naturally fits into E code.
Usage example:
Creating the mutex:
? def mutex := makeMutex()
# value: <mutex>
Creating the shared resource:
? var value := 0
# value: 0
Manipulating the shared resource non-atomically so as to show a problem:
? for _ in 0..1 {
> when (def v := (&value) <- get()) -> {
> (&value) <- put(v + 1)
> }
> }
? value
# value: 1
The value has been incremented twice, but non-atomically, and so is 1 rather
than the intended 2.
? value := 0
# value: 0
This time, we use the mutex to protect the action.
? for _ in 0..1 {
> mutex(fn {
> when (def v := (&value) <- get()) -> {
> (&value) <- put(v + 1)
> }
> })
> }
? value
# value: 2
when blocks and Ref.whenResolved return a promise for the result of the deferred action, so the mutex here waits for the gratuitously complicated increment to complete before becoming available for the next action.
Erlang
Erlang has no built-in mutexes, so here is a very simple one, hand-built to let each of three slowly printing processes finish printing before the next one starts.
-module( mutex ).
-export( [task/0] ).
task() ->
Mutex = erlang:spawn( fun() -> loop() end ),
[erlang:spawn(fun() -> random:seed( X, 0, 0 ), print(Mutex, X, 3) end) || X <- lists:seq(1, 3)].
loop() ->
receive
{acquire, Pid} ->
Pid ! {access, erlang:self()},
receive
{release, Pid} -> loop()
end
end.
mutex_acquire( Pid ) ->
Pid ! {acquire, erlang:self()},
receive
{access, Pid} -> ok
end.
mutex_release( Pid ) -> Pid ! {release, erlang:self()}.
print( _Mutex, _N, 0 ) -> ok;
print( Mutex, N, M ) ->
timer:sleep( random:uniform(100) ),
mutex_acquire( Mutex ),
io:fwrite( "Print ~p: ", [N] ),
[print_slow(X) || X <- lists:seq(1, 3)],
io:nl(),
mutex_release( Mutex ),
print( Mutex, N, M - 1 ).
print_slow( X ) ->
io:fwrite( " ~p", [X] ),
timer:sleep( 100 ).
- Output:
27> mutex:task().
Print 2:  1 2 3
Print 1:  1 2 3
Print 3:  1 2 3
Print 2:  1 2 3
Print 1:  1 2 3
Print 3:  1 2 3
Print 2:  1 2 3
Print 1:  1 2 3
Print 3:  1 2 3
FreeBASIC
Extracted from FreeBASIC help. FreeBASIC has the following Mutex functions: MutexCreate, MutexLock, MutexUnlock, MutexDestroy and ThreadCreate.
' Threading synchronization using Mutexes
' If you comment out the lines containing "MutexLock" and "MutexUnlock", the
' threads will not be in sync and some of the data may be printed out of place.
Const max_hilos = 10
Dim Shared As Any Ptr bloqueo_tty
' Teletipo unfurls some text across the screen at a given location
Sub Teletipo(Byref texto As String, Byval x As Integer, Byval y As Integer)
'
' This MutexLock makes simultaneously running threads wait for each
' other, so only one at a time can continue and print output.
' Otherwise, their Locates would interfere, since there is only one cursor.
'
' It's impossible to predict the order in which threads will arrive here and
' which one will be the first to acquire the lock thus causing the rest to wait.
Mutexlock bloqueo_tty
For i As Integer = 0 To (Len(texto) - 1)
Locate x, y + i : Print Chr(texto[i])
Sleep 25, 1
Next i
' MutexUnlock releases the lock and lets other threads acquire it.
Mutexunlock bloqueo_tty
End Sub
Sub Hilo(Byval datos_usuario As Any Ptr)
Dim As Integer id = Cint(datos_usuario)
Teletipo "Hilo (" & id & ").........", 1 + id, 1
End Sub
' Create a mutex to synchronize the threads
bloqueo_tty = Mutexcreate()
' Create child threads
Dim As Any Ptr sucesos(0 To max_hilos - 1)
For i As Integer = 0 To max_hilos - 1
sucesos(i) = Threadcreate(@Hilo, Cptr(Any Ptr, i))
If sucesos(i) = 0 Then
Print "Error al crear el hilo:"; i
Exit For
End If
Next i
' This is the main thread. Now wait until all child threads have finished.
For i As Integer = 0 To max_hilos - 1
If sucesos(i) <> 0 Then Threadwait(sucesos(i))
Next i
' Clean up when finished
Mutexdestroy(bloqueo_tty)
Sleep
Go
sync.Mutex
Go has mutexes, and here is an example use of a mutex, somewhat following the example of E. This code defines a slow incrementer that reads a variable and then, a significant amount of time later, writes an incremented value back to it. Two incrementers are started concurrently. Without the mutex, one would overwrite the other and the result would be 1. Using a mutex, as shown here, one waits for the other and the result is 2.
package main
import (
"fmt"
"sync"
"time"
)
var value int
var m sync.Mutex
var wg sync.WaitGroup
func slowInc() {
m.Lock()
v := value
time.Sleep(1e8)
value = v+1
m.Unlock()
wg.Done()
}
func main() {
wg.Add(2)
go slowInc()
go slowInc()
wg.Wait()
fmt.Println(value)
}
- Output:
2
Read-write mutex is provided by the sync.RWMutex type. For a code example using a RWMutex, see Atomic updates#RWMutex.
Channels
If a mutex is exactly what you need, sync.Mutex is there. As soon as things start getting complicated though, Go channels offer a much clearer alternative. As a gateway from mutexes to channels, here is the above program implemented with channels:
package main
import (
"fmt"
"time"
)
var value int
func slowInc(ch, done chan bool) {
// channel receive, used here to implement mutex lock.
// it will block until a value is available on the channel
<-ch
// same as above
v := value
time.Sleep(1e8)
value = v + 1
// channel send, equivalent to mutex unlock.
// makes a value available on channel
ch <- true
// channels can be used to signal completion too
done <- true
}
func main() {
ch := make(chan bool, 1) // ch used as a mutex
done := make(chan bool) // another channel used to signal completion
go slowInc(ch, done)
go slowInc(ch, done)
// a freshly created sync.Mutex starts out unlocked, but a freshly created
// channel is empty, which for us represents "locked." sending a value on
// the channel puts the value up for grabs, thus representing "unlocked."
ch <- true
<-done
<-done
fmt.Println(value)
}
The value passed on the channel is not accessed here, just as the internal state of a mutex is not accessed. Rather, it is only the effect of the value being available that is important. (Of course if you wanted to send something meaningful on the channel, a reference to the shared resource would be a good start...)
Haskell
Haskell has a slight variation on the mutex, namely the MVar. MVars, unlike mutexes, are containers. However, they are similar enough that MVar () is essentially a mutex. An MVar can be in one of two states, empty or full, storing a value only when full. There are four main functions for dealing with MVars:
takeMVar :: MVar a -> IO a
putMVar :: MVar a -> a -> IO ()
tryTakeMVar :: MVar a -> IO (Maybe a)
tryPutMVar :: MVar a -> a -> IO Bool
takeMVar will attempt to fetch a value from the MVar, and will block while the MVar is empty. After using this, the MVar will be left empty. putMVar will attempt to put a value in a MVar, and will block while there already is a value in the MVar. This will leave the MVar full. The last two functions are non-blocking versions of takeMVar and putMVar, returning Nothing and False, respectively, if their blocking counterpart would have blocked.
For more information see the documentation.
Icon and Unicon
The following code uses features exclusive to Unicon.
x := mutex() # create and return a mutex handle for sharing between threads needing to synchronize with each other
lock(x) # lock mutex x
trylock(x) # non-blocking lock, succeeds only if no other thread is already in the critical region
unlock(x) # unlock mutex x
J
J904 introduces mutexes.
Note: currently J mutexes do not have a meaningful display representation.
name=. 10 T. 0 NB. create an exclusive mutex
name=. 10 T. 1 NB. create a shared (aka "recursive" or "reentrant") mutex
failed=. 11 T. mutex NB. take an exclusive lock on a mutex (waiting forever if necessary)
failed=. 11 T. mutex;seconds NB. try to take an exclusive lock on a mutex but may time out
NB. failed is 0 if lock was taken, 1 if lock was not taken
13 T. mutex NB. release lock on mutex
Recursive mutexes may be locked multiple times -- successive locks increase a counter. When unlocked as many times as previously locked, the mutex is released.
Exclusive mutexes will suspend (waiting "forever" if necessary) if the lock was already taken.
Java
Java 5 added a Semaphore class, which can act as a mutex (as stated above, a mutex is "a variant of semaphore with k=1").
import java.util.concurrent.Semaphore;
public class VolatileClass{
public Semaphore mutex = new Semaphore(1); //also a "fair" boolean may be passed which,
//when true, queues requests for the lock
public void needsToBeSynched(){
//...
}
//delegate methods could be added for acquiring and releasing the mutex
}
Using the mutex:
public class TestVolatileClass {
public static void main(String[] args) throws Exception {
VolatileClass vc = new VolatileClass();
vc.mutex.acquire(); //will wait automatically if another class has the mutex
//can be interrupted similarly to a Thread
//use acquireUninterruptibly() to avoid that
vc.needsToBeSynched();
vc.mutex.release();
}
}
Java also has the synchronized keyword, which allows almost any object to be used to enforce mutual exclusion.
public class Main {
static Object mutex = new Object();
static int i = 0;
public void addAndPrint()
{
System.out.print("" + i + " + 1 = ");
i++;
System.out.println("" + i);
}
public void subAndPrint()
{
System.out.print("" + i + " - 1 = ");
i--;
System.out.println("" + i);
}
public static void main(String[] args){
final Main m = new Main();
new Thread() {
public void run()
{
while (true) { synchronized(m.mutex) { m.addAndPrint(); } }
}
}.start();
new Thread() {
public void run()
{
while (true) { synchronized(m.mutex) { m.subAndPrint(); } }
}
}.start();
}
}
The "synchronized" keyword actually is a form of monitor, which was a later-proposed solution to the same problems that mutexes and semaphores were designed to solve. More about synchronization may be found on Sun's website - http://java.sun.com/docs/books/tutorial/essential/concurrency/sync.html , and more about monitors may be found in any decent operating systems textbook.
Julia
From the Julia documentation:
SpinLock()
Create a non-reentrant lock. Recursive use will result in a deadlock. Each lock must be matched with an unlock.
lock(lock)
Acquire the lock when it becomes available. If the lock is already locked by a different task/thread, wait for it to become available.
Each lock must be matched by an unlock.
unlock(lock)
Releases ownership of the lock.
If this is a recursive lock which has been acquired before, decrement an internal counter and return immediately.
trylock(lock)
Acquire the lock if it is available, and return true if successful. If the lock is already locked by a different task/thread, return false.
Each successful trylock must be matched by an unlock.
islocked(lock)
Check whether the lock is held by any task/thread. This should not be used for synchronization (see instead trylock).
ReentrantLock()
Creates a re-entrant lock for synchronizing Tasks. The same task can acquire the lock as many times as required. Each lock must be matched with an unlock.
Logtalk
Logtalk provides a synchronized/0 directive for synchronizing all object (or category) predicates using the same implicit mutex, and a synchronized/1 directive for synchronizing a set of predicates using the same implicit mutex. A usage example of the synchronized/1 directive follows (inspired by the Erlang example). It works when using SWI-Prolog, XSB, or YAP as the backend compiler.
:- object(slow_print).
:- threaded.
:- public(start/0).
:- private([slow_print_abc/0, slow_print_123/0]).
:- synchronized([slow_print_abc/0, slow_print_123/0]).
start :-
% launch two threads, running never ending goals
threaded((
repeat_abc,
repeat_123
)).
repeat_abc :-
repeat, slow_print_abc, fail.
repeat_123 :-
repeat, slow_print_123, fail.
slow_print_abc :-
write(a), thread_sleep(0.2),
write(b), thread_sleep(0.2),
write(c), nl.
slow_print_123 :-
write(1), thread_sleep(0.2),
write(2), thread_sleep(0.2),
write(3), nl.
:- end_object.
- Output:
?- slow_print::start.
abc
123
abc
123
abc
123
abc
123
abc
...
M2000 Interpreter
We can simulate a mutex. Threads can be scheduled with Thread.Plan Concurrent or Thread.Plan Sequential. Using Concurrent (at the interpreter level), the running thread may change after the execution of each statement. Using Sequential, each thread block runs all of its statements until the end, or leaves early if a Continue takes place. Even in concurrent mode, a call to a module or the execution of a block of code happens without interruption, in one thread.
Form 80, 50
Module CheckIt {
Thread.Plan Concurrent
Class mutex {
mylock as boolean=True
Function Lock {
if not .mylock then exit
.mylock<=False
=True
}
Module Unlock {
.mylock<=True
}
}
Group PhoneBooth {
NowUser$
module UseIt (a$, x){
.NowUser$<=a$
Print a$+" phone home ",Int(x*100);"%"
}
module leave {
.NowUser$<=""
}
}
m=mutex()
Flush
Data "Bob", "John","Tom"
For i=1 to 3 {
Thread {
\\ we use N$, C and Max as stack variables for each thread
\\ all other variables are shared for module
If C=0 Then if not m.lock() then Print N$+" waiting...................................":Refresh 20: Continue
C++
if c=1 then thread this interval 20
PhoneBooth.UseIt N$,C/Max
iF C<Max Then Continue
PhoneBooth.leave
m.Unlock
Thread This Erase
} as K
Read M$
Thread K Execute Static N$=M$, C=0, Max=RANDOM(5,8)
Thread K interval Random(300, 2000)
}
\\ Start we lock Phone Booth for service
Service=m.lock()
Main.Task 50 {
If Service Then if Keypress(32) then m.unlock: Service=false: Continue
If not Service then if Keypress(32) Then if m.lock() then Service=true : Continue
if PhoneBooth.NowUser$<>"" Then {
Print "Phone:";PhoneBooth.NowUser$: Refresh
} Else.if Service then Print "Service Time": Refresh
}
}
CheckIt
Nim
For mutexes (called locks in Nim), thread support is required, so compile using nim --threads:on c mutex
Creating a mutex:
import locks
var mutex: Lock
initLock mutex
Locking:
acquire mutex
Unlocking:
release mutex
Trying to lock (but do not wait if it can't):
let success = tryAcquire mutex
Objeck
Objeck provides a simple way to lock a section of code. Please refer to the programmer's guide for additional information.
m := ThreadMutex->New("lock a");
# section locked
critical(m) {
...
}
# section unlocked
Objective-C
NSLock *m = [[NSLock alloc] init];
[m lock]; // locks in blocking mode
if ([m tryLock]) { // acquire a lock -- does not block if not acquired
// lock acquired
} else {
// already locked, does not block
}
[m unlock];
Reentrant mutex is provided by the NSRecursiveLock class.
Objective-C also has @synchronized() blocks, like Java.
OCaml
OCaml provides a built-in Mutex module.
It is very simple; there are four functions:
let m = Mutex.create() in
Mutex.lock m; (* locks in blocking mode *)
if (Mutex.try_lock m)
then ... (* did the lock *)
else ... (* already locked, do not block *)
Mutex.unlock m;
Oforth
Oforth has no mutex. A mutex can be simulated using a channel initialized with one object. A task can receive the object from the channel (get the mutex) and send it back to the channel when the job is done. If the channel is empty, a task will wait until an object is available in the channel.
import: parallel
: job(mut)
mut receive drop
"I get the mutex !" .
2000 sleep
"Now I release the mutex" println
1 mut send drop ;
: mymutex
| mut |
Channel new dup send(1) drop ->mut
10 #[ #[ mut job ] & ] times ;
Oz
Oz has "locks" which are local, reentrant mutexes.
Creating a mutex:
declare L = {Lock.new}
The only way to acquire a mutex is to use the lock syntax. This ensures that releasing a lock can never be forgotten. Even if an exception occurs, the lock will be released.
lock L then
{System.show exclusive}
end
To make it easier to work with objects, classes can be marked with the property locking. Instances of such classes have their own internal lock and can use a variant of the lock syntax:
class Test
prop locking
meth test
lock
{Show exclusive}
end
end
end
Perl
Code demonstrating shared resources and simple locking. Resource1 and Resource2 represent some limited resources that must be exclusively used and released by each thread. Each thread reports how many of each is available; if it goes below zero, something is wrong. Try commenting out either of the "lock $lock*" lines to see what happens without locking.
use Thread qw'async';
use threads::shared;
my ($lock1, $lock2, $resource1, $resource2) :shared = (0) x 4;
sub use_resource {
{ # curly provides lexical scope, exiting which causes lock to release
lock $lock1;
$resource1 --; # acquire resource
sleep(int rand 3); # artificial delay to simulate real work
$resource1 ++; # release resource
print "In thread ", threads->tid(), ": ";
print "Resource1 is $resource1\n";
}
{
lock $lock2;
$resource2 --;
sleep(int rand 3);
$resource2 ++;
print "In thread ", threads->tid(), ": ";
print "Resource2 is $resource2\n";
}
}
# create 9 threads and clean up each after they are done.
for ( map async{ use_resource }, 1 .. 9) {
$_->join
}
Phix
local mutexes
Exclusive-only and non-reentrant.
without js -- (critical sections)
integer cs = init_cs()  -- Create a new critical section
...
enter_cs(cs)            -- Begin mutually exclusive execution
bool b = try_cs(cs)     -- As enter_cs, but yields false (0) if the lock cannot be obtained instantly
leave_cs(cs)            -- End mutually exclusive execution
...
delete_cs(cs)           -- Delete a critical section that you have no further use for
global mutexes
Using file locking. Only shared locks are reentrant. Every call needs its own bespoke retry logic. There is no built-in promotion operation.
without js -- (file i/o)
integer fn = open("log.txt","u"),
...
integer count = 0
while not lock_file(fn,LOCK_SHARED,{}) do
--while not lock_file(fn,LOCK_EXCLUSIVE,{}) do
    sleep(1)
    count += 1
    if count>5 then
        -- message/abort/retry/...
    end if
end while
...
unlock_file(fn,{})
...
close(fn)
PicoLisp
PicoLisp uses several mechanisms of interprocess communication, mainly within the same process family (children of the same parent process), for database synchronization (e.g. 'lock', 'sync' or 'tell').
For a simple synchronization of unrelated PicoLisp processes the 'acquire' / 'release' function pair can be used.
Prolog
SWI-Prolog implements mutexes, but other Prolog versions vary, so I'll use the SWI-Prolog implementation here.
To create a global mutex:
mutex_create(Mutex, [alias(my_mutex)]).
The recommended way to use the mutex is by wrapping code in a with_mutex/2 call, eg:
synchronized_goal(G) :- with_mutex(my_mutex, call(G)).
This will wrap some code in a mutex to ensure exclusive access and release the mutex on completion (regardless of the result).
There are more options to lock, try_lock, unlock etc here, if needed: https://www.swi-prolog.org/pldoc/man?section=threadsync
PureBasic
PureBasic has the following Mutex functions:
MyMutex=CreateMutex()
Result = TryLockMutex(MyMutex)
LockMutex(MyMutex)
UnlockMutex(MyMutex)
FreeMutex(MyMutex)
Example
Declare ThreadedTask(*MyArgument)
Define Mutex
If OpenConsole()
Define thread1, thread2, thread3
Mutex = CreateMutex()
thread1 = CreateThread(@ThreadedTask(), 1): Delay(5)
thread2 = CreateThread(@ThreadedTask(), 2): Delay(5)
thread3 = CreateThread(@ThreadedTask(), 3)
WaitThread(thread1)
WaitThread(thread2)
WaitThread(thread3)
PrintN(#CRLF$+"Press ENTER to exit"): Input()
FreeMutex(Mutex)
CloseConsole()
EndIf
Procedure ThreadedTask(*MyArgument)
Shared Mutex
Protected a, b
For a = 1 To 3
LockMutex(Mutex)
; Without Lock-/UnLockMutex() here the output from the parallel threads would be all mixed.
; Reading/writing shared memory resources is a common use for mutexes in PureBasic
PrintN("Thread "+Str(*MyArgument)+": Print 3 numbers in a row:")
For b = 1 To 3
Delay(75)
PrintN("Thread "+Str(*MyArgument)+" : "+Str(b))
Next
UnlockMutex(Mutex)
Next
EndProcedure
Python
Demonstrating semaphores. Note that a semaphore can be considered a generalization of a mutex: while a mutex grants exclusive access to code or resources to a single thread, a semaphore grants access to a number of threads up to a certain value.
import threading
from time import sleep
# res: max number of resources. If changed to 1, it functions
# identically to a mutex/lock object
res = 2
sema = threading.Semaphore(res)
class res_thread(threading.Thread):
def run(self):
global res
n = self.getName()
for i in range(1, 4):
# acquire a resource if available and work hard
# for 2 seconds. if all res are occupied, block
# and wait
sema.acquire()
res = res - 1
print n, "+ res count", res
sleep(2)
# after done with resource, return it to pool and flag so
res = res + 1
print n, "- res count", res
sema.release()
# create 4 threads; each acquires a resource and works
for i in range(1, 5):
t = res_thread()
t.start()
Racket
Racket has semaphores which can be used as mutexes in the usual way. With other language features this can be used to implement new features -- for example, here is how we would implement a protected-by-a-mutex function:
(define foo
(let ([sema (make-semaphore 1)])
(lambda (x)
(dynamic-wind (λ() (semaphore-wait sema))
(λ() (... do something ...))
(λ() (semaphore-post sema))))))
and it is now easy to turn this into a macro for definitions of such functions:
(define-syntax-rule (define/atomic (name arg ...) E ...)
(define name
(let ([sema (make-semaphore 1)])
(lambda (arg ...)
(dynamic-wind (λ() (semaphore-wait sema))
(λ() E ...)
(λ() (semaphore-post sema)))))))
;; this does the same as the above now:
(define/atomic (foo x)
(... do something ...))
But more than just linguistic features, Racket has many additional synchronization tools in its VM. Some notable examples: OS semaphore for use with OS threads, green threads, lightweight OS threads, and heavyweight OS threads, synchronization channels, thread mailboxes, CML-style event handling, generic synchronizeable event objects, non-blocking IO, etc, etc.
Raku
(formerly Perl 6)
my $lock = Lock.new;
$lock.protect: { your-ad-here() }
Locks are reentrant. You may explicitly lock and unlock them, but the syntax above guarantees the lock will be unlocked on scope exit, even if by thrown exception or other exotic control flow. That being said, direct use of locks is discouraged in Raku in favor of promises, channels, and supplies, which offer better composable semantics.
Ruby
Ruby's standard library includes a mutex_m module that can be mixed-in to a class.
require 'mutex_m'
class SomethingWithMutex
include Mutex_m
...
end
Individual objects can be extended with the module too
an_object = Object.new
an_object.extend(Mutex_m)
An object with mutex powers can then:
# acquire a lock -- block execution until it becomes free
an_object.mu_lock
# acquire a lock -- return immediately even if not acquired
got_lock = an_object.mu_try_lock
# have a lock?
if an_object.mu_locked? then ...
# release the lock
an_object.mu_unlock
# wrap a lock around a block of code -- block execution until it becomes free
an_object.mu_synchronize do
  # do critical stuff
end
Rust
Rust's standard library provides std::sync::Mutex. Locking the mutex returns a guard that allows accessing the shared data exclusively. When the guard goes out of its scope (and is dropped), the mutex gets unlocked again.
The following small program demonstrates using the mutex with two threads that append to a shared string.
use std::{
sync::{Arc, Mutex},
thread,
time::Duration,
};
fn main() {
let shared = Arc::new(Mutex::new(String::new()));
let handle1 = {
let value = shared.clone();
thread::spawn(move || {
for _ in 0..20 {
thread::sleep(Duration::from_millis(200));
// The guard is valid until the end of the block
let mut guard = value.lock().unwrap();
guard.push_str("A");
println!("{}", guard);
}
})
};
let handle2 = {
let value = shared.clone();
thread::spawn(move || {
for _ in 0..20 {
thread::sleep(Duration::from_millis(300));
{
// Making the guard scope explicit here
let mut guard = value.lock().unwrap();
guard.push_str("B");
println!("{}", guard);
}
}
})
};
handle1.join().ok();
handle2.join().ok();
shared.lock().ok().map_or((), |it| println!("Done: {}", it));
}
Shale
Shale includes a library that provides POSIX threads, semaphores and mutexes. Below is a really simple example using threads and one mutex. There's a more complete example that includes semaphores available with the Shale source code.
#!/usr/local/bin/shale
thread library // POSIX threads, mutexes and semaphores
time library // We use its sleep function here.
// The thread code, which will lock the mutex, print a message,
// then unlock the mutex.
threadCode dup var {
arg dup var swap =
stop lock thread::()
arg "Thread %d has the mutex\n" printf
stop unlock thread::()
} =
stop mutex thread::() // Create the mutex.
stop lock thread::() // Lock it until we've started the threads.
// Now create a few threads that will also try to lock the mutex.
1 threadCode create thread::()
2 threadCode create thread::()
3 threadCode create thread::()
4 threadCode create thread::()
// The threads are all waiting to acquire the mutex.
"Main thread unlocking the mutex now..." println
stop unlock thread::()
// Wait a bit to let the threads do their stuff.
1000 sleep time::() // milliseconds
- Output:
Main thread unlocking the mutex now...
Thread 4 has the mutex
Thread 2 has the mutex
Thread 1 has the mutex
Thread 3 has the mutex
Tcl
Tcl's mutexes have four functions.
package require Thread
# How to create a mutex
set m [thread::mutex create]
# This will block if the lock is already held unless the mutex is made recursive
thread::mutex lock $m
# Now locked...
thread::mutex unlock $m
# Unlocked again
# Dispose of the mutex
thread::mutex destroy $m
There are also read-write mutexes available.
set rw [thread::rwmutex create]
# Get and drop a reader lock
thread::rwmutex rlock $rw
thread::rwmutex unlock $rw
# Get and drop a writer lock
thread::rwmutex wlock $rw
thread::rwmutex unlock $rw
thread::rwmutex destroy $rw
Wren
As Wren's VM is effectively single threaded (only one fiber can run at a time), mutexes are only relevant for embedded scripts where two or more VMs are being run in parallel by the host and the VMs need shared access to some resource.
In such a case the host (see here for those currently available) would almost certainly deal directly with synchronization using whatever mechanisms were available to it and access to the resource would therefore be transparent as far as the Wren scripts were concerned.
However, to avoid excessive latency, a VM that had finished with the resource would need to signal to the host that it no longer needed it, so that the lock could be released and the resource made available to the other VM(s). Typically, a shared resource might need to be represented something like this in a Wren script:
foreign class Resource {
// obtain a pointer to the resource when available
construct new() {}
// method for using the resource
foreign doSomething()
// signal to the host that the resource is no longer needed
foreign release()
}
var res = Resource.new() // wait for and obtain a lock on the resource
res.doSomething() // use it
res.release() // release the lock
zkl
zkl has two mutex objects: Lock (a plain mutex) and WriteLock (a mutex that allows multiple readers but only one writer). The critical keyword fences code to ensure the lock is released when the code is done.
var lock=Atomic.Lock(); lock.acquire(); doSomething(); lock.release();
critical(lock){ doSomething(); }
var lock=Atomic.WriteLock();
lock.acquireForReading(); doSomeReading(); lock.readerRelease();
critical(lock,acquireForReading,readerRelease){ ... }
lock.acquireForWriting(); write(); lock.writerRelease();