Concurrent computing
=={{header|Ada}}==
<syntaxhighlight lang="ada">
procedure Concurrent_Hello is
   -- ...
begin
   null; -- the "environment task" doesn't need to do anything
end Concurrent_Hello;</syntaxhighlight>
Note that the random generator object is local to each task: it cannot be accessed concurrently without mutual exclusion. To give the local generators different initial states, Reset is called (see [http://www.adaic.org/resources/add_content/standards/05rm/html/RM-A-5-2.html ARM A.5.2]).
=={{header|ALGOL 68}}==
<syntaxhighlight lang="algol68">
PROC echo = (STRING string)VOID:
    printf(($gl$,string));
# ... #
    echo("Code")
  )
)</syntaxhighlight>
=={{header|APL}}==
Dyalog APL supports the <code>&</code> operator, which runs a function on its own thread.
<syntaxhighlight lang="apl">{⎕←⍵}&¨'Enjoy' 'Rosetta' 'Code'</syntaxhighlight>
{{out}}
(Example)
=={{header|Astro}}==
<syntaxhighlight lang="astro">
for word in words:
    (word) |> async (w) =>
        sleep(random())
        print(w)</syntaxhighlight>
=={{header|BASIC}}==
==={{header|BaCon}}===
{{libheader|gomp}}
{{works with|OpenMP}}
BaCon is a BASIC-to-C compiler; this demonstration assumes the GCC compiler and is based on the C OpenMP source.
<syntaxhighlight lang="qbasic">
' Specify compiler flag
' ...
    PRINT str$[i]
NEXT
</syntaxhighlight>
{{out}}
<pre>
...
Rosetta</pre>
==={{header|BBC BASIC}}===
{{works with|BBC BASIC for Windows}}
The BBC BASIC interpreter is single-threaded so the only way of achieving 'concurrency' (short of using assembler code) is to use timer events:
<syntaxhighlight lang="bbcbasic">
tID1% = FN_ontimer(100, PROCtask1, 1)
REM ...
PROC_killtimer(tID2%)
PROC_killtimer(tID3%)
ENDPROC</syntaxhighlight>
=={{header|C}}==
{{libheader|pthread}}
<syntaxhighlight lang="c">
#include <unistd.h>
#include <pthread.h>
/* ... */
    pthread_join(a[i], NULL);
  }
}</syntaxhighlight>
'''Note''': since the threads are created one after another, their code is likely to execute in creation order. To make this less evident, the ''bang'' idea was added using a condition variable: each thread runs its code only once the starting-gun bang is heard. Nonetheless the output still follows creation order (Enjoy, Rosetta, Code), perhaps because of the order in which the locks are acquired. The only way to obtain real randomness seems to be to add a random wait in each thread (or to wait for a special CPU load condition).
===OpenMP===
Compile with <code>gcc -std=c99 -fopenmp</code>:
<syntaxhighlight lang="c">
#include <omp.h>
/* ... */
        printf("%s\n", str[i]);
    return 0;
}</syntaxhighlight>
=={{header|C sharp|C#}}==
===With Threads===
<syntaxhighlight lang="csharp">
static Random tRand = new Random();
// ...
        Console.WriteLine(p);
    }
</syntaxhighlight>
An example result:
===With Tasks===
{{works with|C sharp|7.1}}
<syntaxhighlight lang="csharp">
using System.Threading.Tasks;
// ...
        await Task.WhenAll(t1, t2, t3);
    }
}</syntaxhighlight>
===With a parallel loop===
<syntaxhighlight lang="csharp">
using System.Threading.Tasks;
// ...
{
    static void Main() => Parallel.ForEach(new[] {"Enjoy", "Rosetta", "Code"}, s => Console.WriteLine(s));
}</syntaxhighlight>
=={{header|C++}}==
<code>g++ -std=c++11 -D_GLIBCXX_USE_NANOSLEEP -o concomp concomp.cpp</code>
<syntaxhighlight lang="cpp">
#include <iostream>
#include <vector>
// ...
    return 0;
}</syntaxhighlight>
Output:
{{libheader|Microsoft Parallel Patterns Library (PPL)}}
<syntaxhighlight lang="cpp">
#include <ppl.h> // MSVC++
// ...
    );
    return 0;
}</syntaxhighlight>
Output:
<pre>
...
</pre>
=={{header|Cind}}==
<syntaxhighlight lang="cind">
execute() {
    {# host.println("Enjoy");
     // ...
     # host.println("Code"); }
}
</syntaxhighlight>
=={{header|Clojure}}==
A simple way to obtain concurrency is using the ''future'' function, which evaluates its body on a separate thread.
<syntaxhighlight lang="clojure">(doseq [text ["Enjoy" "Rosetta" "Code"]]
  (future (println text)))</syntaxhighlight>
Using the new (2013) ''core.async'' library, "go blocks" can execute asynchronously,
sharing threads from a pool. This works even in ClojureScript (the JavaScript target of Clojure)
on a single thread. The ''timeout'' call is there just to shuffle things up: note this delay doesn't block a thread.
<syntaxhighlight lang="clojure">
(doseq [text ["Enjoy" "Rosetta" "Code"]]
  (go
    (<! (timeout (rand-int 1000))) ; wait a random fraction of a second,
    (println text)))</syntaxhighlight>
=={{header|CoffeeScript}}==
JavaScript, which CoffeeScript compiles to, is single-threaded. This approach launches multiple process to achieve concurrency on [http://nodejs.org Node.js]:
<syntaxhighlight lang="coffeescript">{ exec } = require 'child_process'

for word in [ 'Enjoy', 'Rosetta', 'Code' ]
  exec "echo #{word}", (err, stdout) ->
    console.log stdout</syntaxhighlight>
===Using Node.js===
As stated above, CoffeeScript is single-threaded. This approach launches multiple [http://nodejs.org Node.js] processes to achieve concurrency.
<syntaxhighlight lang="coffeescript">
{ fork } = require 'child_process'
# ...
words = [ 'Enjoy', 'Rosetta', 'Code' ]
fork child_name, [ word ] for word in words</syntaxhighlight>
<syntaxhighlight lang="coffeescript">
console.log process.argv[ 2 ]</syntaxhighlight>
=={{header|Common Lisp}}==
Concurrency and threads are not part of the Common Lisp standard. However, most implementations provide some interface for concurrency. [http://common-lisp.net/project/bordeaux-threads/ Bordeaux Threads], used here, provides a compatibility layer for many implementations. (Binding <var>out</var> to <code>*standard-output*</code> before threads are created is needed as each thread gets its own binding for <code>*standard-output*</code>.)
<syntaxhighlight lang="lisp">
(let ((lock (bordeaux-threads:make-lock)))
  (flet ((writer (string)
          ;; ...
    (bordeaux-threads:make-thread (writer "Enjoy"))
    (bordeaux-threads:make-thread (writer "Rosetta"))
    (bordeaux-threads:make-thread (writer "Code")))))</syntaxhighlight>
=={{header|Crystal}}==
Crystal requires the use of channels to ensure that the main fiber doesn't exit before any of the new fibers are done, since each fiber sleeping could return control to the main fiber.
<syntaxhighlight lang="crystal">
require "fiber"
require "random"
# ...
3.times do
  done.receive
end</syntaxhighlight>
=={{header|D}}==
<syntaxhighlight lang="d">
void main() {
    // ...
        s.writeln;
    }
}</syntaxhighlight>
===Alternative version===
{{libheader|Tango}}
<syntaxhighlight lang="d">
import tango.io.Console;
import tango.math.Random;
// ...
    (new Thread( { Thread.sleep(Random.shared.next(1000) / 1000.0); Cout("Rosetta").newline; } )).start;
    (new Thread( { Thread.sleep(Random.shared.next(1000) / 1000.0); Cout("Code").newline; } )).start;
}</syntaxhighlight>
=={{header|Dart}}==
===Future===
Using Futures (called Promises in JavaScript):
<syntaxhighlight lang="dart">
main(){
// ...
  code() => Future.delayed( Duration( milliseconds: rng.nextInt( 10 ) ), () => "Code");
</syntaxhighlight>
===Isolate===
Using Isolates, which are similar to threads except that each has its own memory, making them more like Rust threads than C++ threads:
<syntaxhighlight lang="dart">
import 'dart:io' show exit, sleep;
import 'dart:math' show Random;
// ...
}
</syntaxhighlight>
=={{header|Delphi}}==
<syntaxhighlight lang="delphi">
{$APPTYPE CONSOLE}
// ...
  WaitForMultipleObjects(Length(lThreadArray), @lThreadArray, True, INFINITE);
end.</syntaxhighlight>
=={{header|dodo0}}==
<syntaxhighlight lang="dodo0">
(
   fork() -> return, throw
...
parprint("Code") ->
exit()</syntaxhighlight>
=={{header|E}}==
<syntaxhighlight lang="e">
for string in ["Enjoy", "Rosetta", "Code"] {
    timer <- whenPast(base + entropy.nextInt(1000), fn { println(string) })
}</syntaxhighlight>
Nondeterminism from preemptive concurrency rather than a random number generator:
<syntaxhighlight lang="e">
for string in ["Enjoy", "Rosetta", "Code"] {
    seedVat <- (`
# ...
    }
    `) <- get(0) <- (string)
}</syntaxhighlight>
=={{header|EchoLisp}}==
<syntaxhighlight lang="scheme">
(lib 'tasks) ;; use the tasks library
;; ...
#task:id:67:running code
#task:id:65:running Enjoy
</syntaxhighlight>
=={{header|Egel}}==
<syntaxhighlight lang="egel">
import "prelude.eg"
import "io.ego"
[_ -> print "rosetta\n"])
[_ -> print "code\n"] in nop
</syntaxhighlight>
=={{header|Elixir}}==
<syntaxhighlight lang="elixir">defmodule Concurrent do
  def computing(xs) do
    Enum.each(xs, fn x ->
      # ...
end

Concurrent.computing ["Enjoy", "Rosetta", "Code"]</syntaxhighlight>
{{out}}
=={{header|Erlang}}==
hw.erl
<syntaxhighlight lang="erlang">-module(hw).
-export([start/0]).
% ...
      _N -> wait(N-1)
   end
end.</syntaxhighlight>
running it
<syntaxhighlight lang="shell">erl -run hw start -run init stop -noshell</syntaxhighlight>
=={{header|Euphoria}}==
<syntaxhighlight lang="euphoria">
    puts(1,s)
    puts(1,'\n')
-- ...
task_schedule(task3,1)
task_yield()</syntaxhighlight>
Output:
=={{header|F_Sharp|F#}}==
We define a parallel version of <code>Seq.iter</code> by using asynchronous workflows:
<syntaxhighlight lang="fsharp">
let piter f xs =
    seq { for x in xs -> async { f x } }
// ...
    ["Enjoy"; "Rosetta"; "Code";]
main()</syntaxhighlight>
With version 4 of the .NET framework and F# PowerPack 2.0 installed, it is possible to use the predefined <code>PSeq.iter</code> instead.
=={{header|Factor}}==
<syntaxhighlight lang="factor">{ "Enjoy" "Rosetta" "Code" } [ print ] parallel-each</syntaxhighlight>
=={{header|Forth}}==
Many Forth implementations come with a simple cooperative task scheduler. Typically each task blocks on I/O or explicit use of the '''pause''' word. There is also a class of variables called "user" variables which contain task-specific data, such as the current base and stack pointers.
<syntaxhighlight lang="forth">
require random.fs
\ ...
s" Code" task
begin pause single-tasking? until ;
main</syntaxhighlight>
=={{header|Fortran}}==
Fortran doesn't have threads, but several compilers support OpenMP, e.g. gfortran and Intel. The following code has been tested with the Intel 11.1 compiler on Windows XP.
<syntaxhighlight lang="fortran">program concurrency
  implicit none
  character(len=*), parameter :: str1 = 'Enjoy'
! ...
!$omp end parallel do
end program concurrency</syntaxhighlight>
=={{header|FreeBASIC}}==
<syntaxhighlight lang="freebasic">
' Compiled with -mt switch (to use threadsafe runtime)
' The 'ThreadCall' functionality in FB is based internally on LibFFI (see [https://github.com/libffi/libffi/blob/master/LICENSE] for license)
' ...
    Print
    Sleep
Loop While Inkey <> Chr(27)</syntaxhighlight>
Sample output
<pre>
...
Code
</pre>
=={{header|FutureBasic}}==
<syntaxhighlight lang="futurebasic">
include "NSLog.incl"
long priority(2)
priority(0) = _dispatchPriorityDefault
priority(1) = _dispatchPriorityHigh
priority(2) = _dispatchPriorityLow
dispatchglobal , priority(rnd(3)-1)
NSLog(@"Enjoy")
dispatchend
dispatchglobal , priority(rnd(3)-1)
NSLog(@"Rosetta")
dispatchend
dispatchglobal , priority(rnd(3)-1)
NSLog(@"Code")
dispatchend
HandleEvents
</syntaxhighlight>
=={{header|Go}}==
This solution also shows a good practice for generating random numbers in concurrent goroutines. While certainly not needed for this RC task, in the more general case where you have a number of goroutines concurrently needing random numbers, the goroutines can suffer congestion if they compete heavily for the sole default library source. This can be relieved by having each goroutine create its own non-sharable source. Also particularly in cases where there might be a large number of concurrent goroutines, the source provided in subrepository rand package (exp/rand) can be a better choice than the standard library generator. The subrepo generator requires much less memory for "state" and is much faster to seed.
<syntaxhighlight lang="go">package main

import (
// ...
		fmt.Println(<-q)
	}
}</syntaxhighlight>
===Afterfunc===
time.Afterfunc combines the sleep and the goroutine start. log.Println serializes output in the case goroutines attempt to print concurrently. sync.WaitGroup is used directly as a checkpoint.
<syntaxhighlight lang="go">package main

import (
// ...
	}
	q.Wait()
}</syntaxhighlight>
===Select===
This solution might stretch the intent of the task a bit. It is concurrent but not parallel. It also doesn't sleep and doesn't call the random number generator explicitly. It works because the select statement is specified to make a "pseudo-random fair choice" among multiple channel operations.
<syntaxhighlight lang="go">package main

import "fmt"
// ...
		}
	}
}</syntaxhighlight>
Output:
<pre>
...
</pre>
=={{header|Groovy}}==
<syntaxhighlight lang="groovy">['Enjoy', 'Rosetta', 'Code'].collect { w ->
    Thread.start {
        Thread.sleep(1000 * Math.random() as int)
        println w
    }
}.each { it.join() }</syntaxhighlight>
=={{header|Haskell}}==
Note how the map treats the list of processes just like any other data.
<syntaxhighlight lang="haskell">import Control.Concurrent

main = mapM_ forkIO [process1, process2, process3] where
 process1 = putStrLn "Enjoy"
 process2 = putStrLn "Rosetta"
 process3 = putStrLn "Code"</syntaxhighlight>
A more elaborated example using MVars and a random running time per thread.
<syntaxhighlight lang="haskell">
import System.Random
-- ...
                      -- until we write another value to it
  putMVar v (s : val) -- append a text string to the MVar and block other
                      -- threads from writing to it unless it is read first</syntaxhighlight>
==Icon and {{header|Unicon}}==
The following code uses features exclusive to Unicon
<syntaxhighlight lang="unicon">procedure main()
   L:=[ thread write("Enjoy"), thread write("Rosetta"), thread write("Code") ]
   every wait(!L)
end</syntaxhighlight>
=={{header|J}}==
Using J's new threading primitives (in place of some sort of thread emulation):
<syntaxhighlight lang=J>reqthreads=: {{ 0&T.@''^:(0>.y-1 T.'')0 }}
dispatchwith=: (t.'')every
newmutex=: 10&T.
lock=: 11&T.
unlock=: 13&T.
synced=: {{
lock n
r=. u y
unlock n
r
}}
register=: {{ out=: out, y }} synced (newmutex 0)
task=: {{
reqthreads 3 NB. at least 3 worker threads
out=: EMPTY
#@> register dispatchwith ;:'Enjoy Rosetta Code'
out
}}</syntaxhighlight>
Sample use:
<syntaxhighlight lang=J> task''
Enjoy
Rosetta
Code
task''
Enjoy
Code
Rosetta</syntaxhighlight>
=={{header|Java}}==
Create a new <code>Thread</code> array, shuffle the array, start each thread.
<syntaxhighlight lang="java">
Thread[] threads = new Thread[3];
threads[0] = new Thread(() -> System.out.println("enjoy"));
threads[1] = new Thread(() -> System.out.println("rosetta"));
threads[2] = new Thread(() -> System.out.println("code"));
Collections.shuffle(Arrays.asList(threads));
for (Thread thread : threads)
thread.start();
</syntaxhighlight>
<br />
An alternate demonstration
{{works with|Java|1.5+}}
Uses CyclicBarrier to force all threads to wait until they're at the same point before executing the println, increasing the odds they'll print in a different order (otherwise, while they may be executing in parallel, the threads are started sequentially and, with such a short run-time, will usually output sequentially as well).
<syntaxhighlight lang="java">import java.util.concurrent.CyclicBarrier;

public class Threads
// ...
    new Thread(new DelayedMessagePrinter(barrier, "Code")).start();
  }
}</syntaxhighlight>
=={{header|JavaScript}}==
JavaScript now enjoys access to a concurrency library thanks to [http://en.wikipedia.org/wiki/Web_worker Web Workers]. The Web Workers specification defines an API for spawning background scripts. This first code is the background script and should be in the concurrent_worker.js file.
<syntaxhighlight lang="javascript">self.addEventListener('message', function (event) {
  self.postMessage(event.data);
  self.close();
}, false);</syntaxhighlight>
This second block creates the workers, sends them a message and creates an event listener to handle the response.
<syntaxhighlight lang="javascript">
var workers = [];
// ...
  }, false);
  workers[i].postMessage(words[i]);
}</syntaxhighlight>
=={{header|Julia}}==
{{works with|Julia|0.6}}
<syntaxhighlight lang="julia">
function sleepprint(s)
# ...
@sync for word in words
    @async sleepprint(word)
end</syntaxhighlight>
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang="kotlin">
import java.util.concurrent.CyclicBarrier
// ...
    val barrier = CyclicBarrier(msgs.size)
    for (msg in msgs) Thread(DelayedMessagePrinter(barrier, msg)).start()
}</syntaxhighlight>
{{out}}
=={{header|LFE}}==
<syntaxhighlight lang="lisp">
;;;
;;; This is a straight port of the Erlang version.
;; ...
    (0 0)
    (_n (wait (- n 1)))))))
</syntaxhighlight>
=={{header|Logtalk}}==
Works when using SWI-Prolog, XSB, or YAP as the backend compiler.
<syntaxhighlight lang="logtalk">
:- initialization(output).
% ...
    )).

:- end_object.</syntaxhighlight>
=={{header|Lua}}==
<syntaxhighlight lang="lua">co = {}
co[1] = coroutine.create( function() print "Enjoy" end )
co[2] = coroutine.create( function() print "Rosetta" end )
-- ...
    i = i + 1
  end
until i == 3</syntaxhighlight>
=={{header|M2000 Interpreter}}==
Threads actually run in a Wait loop. We can use Main.Task as a loop, which is also a thread. Threads can run while we wait for input in the M2000 console, or for events from M2000 GUI forms. Events always run sequentially.
<syntaxhighlight lang="m2000 interpreter">
Thread.Plan Concurrent
Module CheckIt {
' ...
CheckIt
</syntaxhighlight>
=={{header|Mathematica}} / {{header|Wolfram Language}}==
Parallelization requires Mathematica 7 or later
<
Pause[RandomReal[]];
Print[s],
{s, {"Enjoy", "Rosetta", "Code"}}
]</
=={{header|Mercury}}==
<syntaxhighlight lang="text">:- module concurrent_computing.
:- interface.
% ...
spawn(io.print_cc("Enjoy\n"), !IO),
spawn(io.print_cc("Rosetta\n"), !IO),
spawn(io.print_cc("Code\n"), !IO).</syntaxhighlight>
=={{header|Neko}}==
<syntaxhighlight lang="actionscript">/**
Concurrent computing, in Neko
*/
// ...
/* Let the threads complete */
sys_sleep(4);</syntaxhighlight>
{{out}}
=={{header|Nim}}==
Compile with <code>nim --threads:on c concurrent</code>:
<syntaxhighlight lang="nim">
var thr: array[3, Thread[int32]]
# ...
for i in 0..thr.high:
  createThread(thr[i], f, int32(i))
joinThreads(thr)</syntaxhighlight>
===OpenMP===
Compile with <code>nim --passC:"-fopenmp" --passL:"-fopenmp" c concurrent</code>:
<syntaxhighlight lang="nim">const str = ["Enjoy", "Rosetta", "Code"]

for i in 0||2:
  echo str[i]</syntaxhighlight>
===Thread Pools===
Compile with <code>nim --threads:on c concurrent</code>:
<syntaxhighlight lang="nim">
const str = ["Enjoy", "Rosetta", "Code"]
# ...
for i in 0..str.high:
  spawn f(i)
sync()</syntaxhighlight>
=={{header|Objeck}}==
<syntaxhighlight lang="objeck">
bundle Default {
  class MyThread from Thread {
    # ...
}
}
</syntaxhighlight>
=={{header|OCaml}}==
<syntaxhighlight lang="ocaml">
#load "unix.cma"
#load "threads.cma"
(* ... *)
let () =
  Random.self_init ();
  List.iter (Thread.join) threads</syntaxhighlight>
=={{header|Oforth}}==
Oforth uses tasks to implement concurrent computing. A task is scheduled using #& on a function, method, block, ...
<syntaxhighlight lang="oforth">#[ "Enjoy" println ] &
#[ "Rosetta" println ] &
#[ "Code" println ] &</syntaxhighlight>
mapParallel method can be used to map a runnable on each element of a collection and returns a collection of results. Here, we println the string and return string size.
=={{header|Ol}}==
<syntaxhighlight lang="scheme">
(import (otus random!))
(for-each (lambda (str)
(define timeout (rand! 999))
(async (lambda ()
(sleep timeout)
(print str))))
'("Enjoy" "Rosetta" "Code"))
</syntaxhighlight>
{{Out}}
<pre>Code
Enjoy
Rosetta
</pre>
=={{header|ooRexx}}==
<syntaxhighlight lang="oorexx">
-- this will launch 3 threads, with each thread given a message to print out.
-- I've added a stoplight to make each thread wait until given a go signal,
-- ...
call syssleep .5 -- add another sleep here
say text
</syntaxhighlight>
=={{header|Oz}}==
The randomness comes from the unpredictability of thread scheduling (this is how I understand this exercise).
<syntaxhighlight lang="oz">for Msg in ["Enjoy" "Rosetta" "Code"] do
thread
{System.showInfo Msg}
end
end
</syntaxhighlight>
=={{header|PARI/GP}}==
Here is a GP implementation using the [http://pari.math.u-bordeaux.fr/cgi-bin/gitweb.cgi?p=pari.git;a=tree;h=refs/heads/bill-mt;hb=refs/heads/bill-mt bill-mt] branch:
<syntaxhighlight lang="parigp">
func(n)=print(["Enjoy","Rosetta","Code"][n]);
parapply(func,[1..3]);</syntaxhighlight>
This is a PARI implementation which uses <code>fork()</code> internally. Note that the [[#C|C]] solutions can be used instead if desired; this program demonstrates the native PARI capabilities instead.
For serious concurrency, see Appendix B of the User's Guide to the PARI Library which discusses a solution using [[wp:Thread-local storage|tls]] on [[wp:POSIX Threads|pthreads]]. (There are nontrivial issues with using PARI in this environment, do not attempt to blindly implement a [[#C|C]] solution.)
<syntaxhighlight lang="c">
foo()
{
/* ... */
    pari_printf("Rosetta\n");
  }
}</syntaxhighlight>
See also [http://pari.math.u-bordeaux1.fr/Events/PARI2012/talks/pareval.pdf Bill Allombert's slides on parallel programming in GP].
=={{header|Pascal}}==
Output of the difference between requested sleep time and true sleep time (running with 0..1999 threads you see a 1 once in a while):
<syntaxhighlight lang="pascal">
{$IFdef FPC}
  {$MODE DELPHI}
// ...
  while gblRunThdCnt > 0 do
    sleep(125);
end.</syntaxhighlight>
{{out}}
<pre>
...
</pre>
=={{header|Perl}}==
<syntaxhighlight lang="perl">use threads;
use Time::HiRes qw(sleep);
# ...
    print shift, "\n";
  }, $_)
} qw(Enjoy Rosetta Code);</syntaxhighlight>
Or using coroutines provided by {{libheader|Coro}}
<syntaxhighlight lang="perl">
use Coro;
use Coro::Timer qw( sleep );
# ...
    } $_
} qw( Enjoy Rosetta Code );
</syntaxhighlight>
=={{header|Phix}}==
Without the sleep it is almost always Enjoy Rosetta Code, because create_thread() is more costly than echo(), as the former has to create a new call stack etc.<br>
The lock prevents the displays from mangling each other.
<!--<syntaxhighlight lang="phix">(notonline)-->
<span style="color: #008080;">without</span> <span style="color: #008080;">js</span> <span style="color: #000080;font-style:italic;">-- (threads)</span>
<span style="color: #008080;">procedure</span> <span style="color: #000000;">echo</span><span style="color: #0000FF;">(</span><span style="color: #004080;">string</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">)</span>
<span style="color: #7060A8;">sleep</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">rand</span><span style="color: #0000FF;">(</span><span style="color: #000000;">100</span><span style="color: #0000FF;">)/</span><span style="color: #000000;">100</span><span style="color: #0000FF;">)</span>
<span style="color: #7060A8;">enter_cs</span><span style="color: #0000FF;">()</span>
<span style="color: #7060A8;">puts</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #000000;">s</span><span style="color: #0000FF;">)</span>
<span style="color: #7060A8;">puts</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">'\n'</span><span style="color: #0000FF;">)</span>
<span style="color: #7060A8;">leave_cs</span><span style="color: #0000FF;">()</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">procedure</span>
<span style="color: #008080;">constant</span> <span style="color: #000000;">threads</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{</span><span style="color: #000000;">create_thread</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">routine_id</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"echo"</span><span style="color: #0000FF;">),{</span><span style="color: #008000;">"Enjoy"</span><span style="color: #0000FF;">}),</span>
<span style="color: #000000;">create_thread</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">routine_id</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"echo"</span><span style="color: #0000FF;">),{</span><span style="color: #008000;">"Rosetta"</span><span style="color: #0000FF;">}),</span>
<span style="color: #000000;">create_thread</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">routine_id</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"echo"</span><span style="color: #0000FF;">),{</span><span style="color: #008000;">"Code"</span><span style="color: #0000FF;">})}</span>
<span style="color: #000000;">wait_thread</span><span style="color: #0000FF;">(</span><span style="color: #000000;">threads</span><span style="color: #0000FF;">)</span>
<span style="color: #7060A8;">puts</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"done"</span><span style="color: #0000FF;">)</span>
<span style="color: #0000FF;">{}</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">wait_key</span><span style="color: #0000FF;">()</span>
<!--</syntaxhighlight>-->
=={{header|PicoLisp}}==
===Using background tasks===
<syntaxhighlight lang="picolisp">(for (N . Str) '("Enjoy" "Rosetta" "Code")
   (task (- N) (rand 1000 4000)  # Random start time 1 .. 4 sec
      Str Str                    # Closure with string value
      (println Str)              # Task body: Print the string
      (task @) ) )               # and stop the task</syntaxhighlight>
===Using child processes===
<syntaxhighlight lang="picolisp">(for Str '("Enjoy" "Rosetta" "Code")
   (let N (rand 1000 4000)  # Randomize
      (unless (fork)        # Create child process
         (wait N)           # Wait 1 .. 4 sec
         (println Str)      # Print string
         (bye) ) ) )        # Terminate child process</syntaxhighlight>
=={{header|Pike}}==
Using POSIX threads:
<syntaxhighlight lang="pike">int main() {
    // Start threads and wait for them to finish
    ({
// ...
    // Exit program
    exit(0);
}</syntaxhighlight>
Output:
Enjoy
Using Pike's backend:
<syntaxhighlight lang="pike">int main()
{
    call_out(write, random(1.0), "Enjoy\n");
// ...
    call_out(exit, 1, 0);
    return -1; // return -1 starts the backend which makes Pike run until exit() is called.
}</syntaxhighlight>
Output:
Rosetta
=={{header|PowerShell}}==
Using Background Jobs:
<syntaxhighlight lang="powershell">
$SB = {param($String)Write-Output $String}
# ...
Get-Job | Wait-Job | Receive-Job
Get-Job | Remove-Job</syntaxhighlight>
Using .NET Runspaces:
<syntaxhighlight lang="powershell">
$SB = {param($String)Write-Output $String}
# ...
    $Pipeline.Dispose()
}
$Pool.Close()</syntaxhighlight>
=={{header|Prolog}}==
Create a separate thread for each word. Join the threads to make sure they complete before the program exits.
<syntaxhighlight lang="prolog">
    thread_create(say("Enjoy"),A,[]),
    thread_create(say("Rosetta"),B,[]),
% ...
    Delay is random_float,
    sleep(Delay),
    writeln(Message).</syntaxhighlight>
=={{header|PureBasic}}==
<syntaxhighlight lang="purebasic">
Procedure Printer(*str)
; ...
EndIf
FreeMutex(mutex)</syntaxhighlight>
=={{header|Python}}==
{{works with|Python|3.7}}
Using asyncio module (I know almost nothing about it, so feel free to improve it :-)):
<syntaxhighlight lang="python">
import asyncio
# ...
if __name__ == '__main__':
    asyncio.run(main())</syntaxhighlight>
{{works with|Python|3.2}}
Using the concurrent.futures library (new in Python 3.2, [http://docs.python.org/release/3.2/library/concurrent.futures.html concurrent.futures library]) and choosing to use processes over threads; the example will use up to as many processes as your machine has cores. This doesn't, however, guarantee an order of sub-process results.
<syntaxhighlight lang="python">
Type "help", "copyright", "credits" or "license" for more information.
>>> from concurrent import futures
# ...
Rosetta
Code
>>></syntaxhighlight>
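The same pool can be used from a standalone script rather than an interactive session. A minimal sketch (the helper name <code>shout</code> is illustrative, not part of the library):
<syntaxhighlight lang="python">from concurrent import futures

def shout(word):
    # trivial "work" performed in a separate process
    return word.upper()

if __name__ == '__main__':
    with futures.ProcessPoolExecutor() as executor:
        # map dispatches each word to a worker process;
        # results come back in submission order
        for result in executor.map(shout, ['Enjoy', 'Rosetta', 'Code']):
            print(result)</syntaxhighlight>
Note that <code>Executor.map</code> yields results in submission order even though the calls run concurrently.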
{{works with|Python|2.5}}
<syntaxhighlight lang="python">
import threading
import random
# ...
threading.Timer(random.random(), echo, ("Enjoy",)).start()
threading.Timer(random.random(), echo, ("Rosetta",)).start()
threading.Timer(random.random(), echo, ("Code",)).start()</syntaxhighlight>
Or, by using a for loop to start one thread per list entry, where our list is our set of source strings:
<syntaxhighlight lang="python">
import threading
import random
# ...
for text in ["Enjoy", "Rosetta", "Code"]:
    threading.Timer(random.random(), echo, (text,)).start()</syntaxhighlight>
=== threading.Thread ===
<syntaxhighlight lang="python">
import threading
# ...
for line in 'Enjoy Rosetta Code'.split():
    threading.Thread(target=echo, args=(line,)).start()</syntaxhighlight>
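A variation mirroring the CyclicBarrier approach in the Java entry: <code>threading.Barrier</code> (Python 3.2+) makes every thread wait at a common point before printing, so the scheduler alone decides the output order. A minimal sketch:
<syntaxhighlight lang="python">import threading

words = ['Enjoy', 'Rosetta', 'Code']
barrier = threading.Barrier(len(words))

def echo(word):
    barrier.wait()  # rendezvous: nobody prints until all threads arrive
    print(word)

threads = [threading.Thread(target=echo, args=(w,)) for w in words]
for t in threads:
    t.start()
for t in threads:
    t.join()</syntaxhighlight>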
=== multiprocessing ===
{{works with|Python|2.6}}
<syntaxhighlight lang="python">
from multiprocessing import Pool
# ...
if __name__=="__main__":
    main()</syntaxhighlight>
=== twisted ===
<syntaxhighlight lang="python">
from twisted.internet import reactor, task, defer
from twisted.python.util import println
# ...
               for line in 'Enjoy Rosetta Code'.split()])
d.addBoth(lambda _: reactor.stop())
reactor.run()</syntaxhighlight>
=== gevent ===
<syntaxhighlight lang="python">
import random
import gevent
# ...
delay = lambda: 1e-4*random.random()
gevent.joinall([gevent.spawn_later(delay(), print, line)
                for line in 'Enjoy Rosetta Code'.split()])</syntaxhighlight>
=={{header|Racket}}==
Threads provide a simple API for concurrent programming.
<syntaxhighlight lang="racket">
#lang racket
(for ([str '("Enjoy" "Rosetta" "Code")])
(thread (λ () (displayln str))))
</syntaxhighlight>
In addition to "thread" which is implemented as green threads (useful for IO etc), Racket has "futures" and "places" which are similar tools for using multiple OS cores.
(formerly Perl 6)
{{works with|Rakudo|2018.9}}
<syntaxhighlight lang="raku">my @words = <Enjoy Rosetta Code>;
@words.race(:batch(1)).map: { sleep rand; say $_ };</syntaxhighlight>
{{out}}
<pre>Code
...
</pre>
=={{header|Raven}}==
<syntaxhighlight lang="raven">
thread talker
# ...
talker as a
talker as b
talker as c</syntaxhighlight>
=={{header|Rhope}}==
{{works with|Rhope|alpha 1}}
<syntaxhighlight lang="rhope">
|:
Print["Enjoy"]
Print["Rosetta"]
Print["Code"]
:|</syntaxhighlight>
In Rhope, expressions with no shared dependencies run in parallel by default.
=={{header|Ruby}}==
<syntaxhighlight lang="ruby">%w(Enjoy Rosetta Code).map do |x|
  Thread.new do
    sleep rand
    # ...
end.each do |t|
  t.join
end</syntaxhighlight>
=={{header|Rust}}==
{{libheader|rand}}
<syntaxhighlight lang="rust">
use std::thread;
use rand::thread_rng;
// ...
    }
    thread::sleep_ms(1000);
}</syntaxhighlight>
=={{header|Scala}}==
<syntaxhighlight lang="scala">
List("Enjoy", "Rosetta", "Code").map { x =>
  Futures.future {
    // ...
    println(x)
  }
}.foreach(_())</syntaxhighlight>
=={{header|Scheme}}==
<syntaxhighlight lang="scheme">(parallel-execute (lambda () (print "Enjoy"))
                  (lambda () (print "Rosetta"))
                  (lambda () (print "Code")))</syntaxhighlight>
If your implementation doesn't provide parallel-execute, it can be implemented with [https://srfi.schemers.org/srfi-18/srfi-18.html SRFI-18].
<syntaxhighlight lang="scheme">
(define (parallel-execute . thunks)
  (let ((threads (map make-thread thunks)))
    (for-each thread-start! threads)
    (for-each thread-join! threads)))</syntaxhighlight>
=={{header|Sidef}}==
A very basic threading support is provided by the '''Block.fork()''' method:
<syntaxhighlight lang="ruby">var a = ["Enjoy", "Rosetta", "Code"]

a.map{|str|
    # ...
        say str
    }.fork
}.map{|thr| thr.wait }</syntaxhighlight>
{{out}}
<pre>
...
Rosetta
</pre>
=={{header|Slope}}==
<syntaxhighlight lang="slope">(coeval
(display "Enjoy")
(display "Rosetta")
(display "Code"))</syntaxhighlight>
=={{header|Swift}}==
Using Grand Central Dispatch with concurrent queues.
<syntaxhighlight lang="swift">
let myList = ["Enjoy", "Rosetta", "Code"]
// ...
}
dispatch_main()</syntaxhighlight>
{{out}}
<pre>
...
</pre>
=={{header|Standard ML}}==
Works with PolyML
<syntaxhighlight lang="sml">
structure TTm = Thread.Mutex ;
(* ... *)
end ;
</syntaxhighlight>
call
threadedStringList [ "Enjoy","Rosetta","Code" ];
Assuming that "random" means that we really want the words to appear in random (rather then "undefined" or "arbitrary") order:
<syntaxhighlight lang="tcl">after [expr int(1000*rand())] {puts "Enjoy"}
after [expr int(1000*rand())] {puts "Rosetta"}
after [expr int(1000*rand())] {puts "Code"}</syntaxhighlight>
will execute each line after a randomly chosen number (0...1000) of milliseconds.
A step towards "undefined" would be to use <tt>after idle</tt>, which is Tcl for "do this whenever you get around to it". Thus:
<syntaxhighlight lang="tcl">after idle {puts "Enjoy"}
after idle {puts "Rosetta"}
after idle {puts "Code"}</syntaxhighlight>
(While no particular order is guaranteed by the Tcl spec, the current implementations will all execute these in the order in which they were added to the idle queue).
It's also possible to use threads for this. Here we do this with the built-in thread-pool support:
<syntaxhighlight lang="tcl">
set pool [tpool::create -initcmd {
    proc delayPrint msg {
# ...
tpool::release $pool
after 1200  ;# Give threads time to do their work
exit</syntaxhighlight>
=={{header|UnixPipes}}==
<syntaxhighlight lang="bash">(echo "Enjoy" & echo "Rosetta" & echo "Code" & wait)</syntaxhighlight>
=={{header|VBA}}==
Three tasks scheduled for the same time with OnTime. The last scheduled task gets executed first.
<syntaxhighlight lang="vba">Sub Enjoy()
    Debug.Print "Enjoy"
End Sub
' ...
    Application.OnTime when, "Rosetta"
    Application.OnTime when, "Code"
End Sub</syntaxhighlight>
=={{header|Visual Basic .NET}}==
<syntaxhighlight lang="vbnet">
Module Module1
' ...
    End Sub
End Module</syntaxhighlight>
===Alternative version===
[https://tio.run/##TY9PC8IwDMXv@xRhpw60oODFm@gEQUWs4Llbg6t0zWjrn3362bmBvssjCfnl5VlMS3LYdbu6IRc8iNYHrPmlciiVtrfkQOphEAabJRC10TU4q2Dl4YgvOEurqGbZdyYeBRyktmPZ6ySdNAYN35LLZVmxNLd3auFMHkOQsCaFKReN0YGlkGaTHsL8D9BrCMSFQWxYPM6P@A5svsgyWEaC9WSQX50OuNcW/7fzmDQCh8ZYJL0PL3XdBw Try It Online!]
<syntaxhighlight lang="vbnet">
Module Module1
    Dim rnd As New Random()
' ...
        End Sub)
    End Sub
End Module</syntaxhighlight>
{{out}}
<pre>Rosetta
Enjoy
Code</pre>
=={{header|V (Vlang)}}==
===Porting of Go code===
<syntaxhighlight lang="v">
import rand
import rand.pcg32
// ...
		println(<-q)
	}
}</syntaxhighlight>
===Vlang Idiomatic version===
<syntaxhighlight lang="v">
import rand
import rand.pcg32
// ...
	}
	threads.wait() // join the thread waiting. wait() is defined for threads and arrays of threads
}</syntaxhighlight>
{{out}}<pre>Code
Rosetta
Enjoy</pre>
=={{header|Wren}}==
<syntaxhighlight lang="wren">
var words = ["Enjoy", "Rosetta", "Code"]
// ...
    }
    System.print()
}</syntaxhighlight>
{{out}}
<pre>
...
Enjoy
Code
</pre>
=={{header|XPL0}}==
Works on Raspberry Pi using XPL0 version 3.2. Processes actually execute
simultaneously, one per CPU core (beyond single-core RPi-1). Lock is
necessary to enable one line to finish printing before another line starts.
<syntaxhighlight lang="xpl0">int Key, Process;
[Key:= SharedMem(4); \allocate 4 bytes of memory common to all processes
Process:= Fork(2); \start 2 child processes
case Process of
0: [Lock(Key); Text(0, "Enjoy"); CrLf(0); Unlock(Key)]; \parent process
1: [Lock(Key); Text(0, "Rosetta"); CrLf(0); Unlock(Key)]; \child process
2: [Lock(Key); Text(0, "Code"); CrLf(0); Unlock(Key)] \child process
other [Lock(Key); Text(0, "Error"); CrLf(0); Unlock(Key)];
Join(Process); \wait for all child processes to finish
]</syntaxhighlight>
{{out}}
<pre>
Code
Enjoy
Rosetta
</pre>
=={{header|zkl}}==
<syntaxhighlight lang="zkl">fcn{println("Enjoy")}.launch();   // thread
fcn{println("Rosetta")}.strand(); // co-op thread
fcn{println("Code")}.future();    // another thread type</syntaxhighlight>
{{out}}
<pre>
...
</pre>