Concurrent computing: Difference between revisions
{{task|Concurrency}}
[[Category:Basic language learning]]
;Task:
Using either native language concurrency syntax or freely available libraries, write a program to display the strings "Enjoy" "Rosetta" "Code", one string per line, in random order.
Concurrency syntax must use [[thread|threads]], tasks, co-routines, or whatever concurrency is called in your language.
<br><br>
=={{header|Ada}}==
<lang ada>with Ada.Text_IO, Ada.Numerics.Float_Random;

procedure Concurrent_Hello is

   task Enjoy;
   task Rosetta;
   task Code;

   task body Enjoy is
      G : Ada.Numerics.Float_Random.Generator;
   begin
      Ada.Numerics.Float_Random.Reset (G);
      delay Duration (Ada.Numerics.Float_Random.Random (G));
      Ada.Text_IO.Put_Line ("Enjoy");
   end Enjoy;

   task body Rosetta is
      G : Ada.Numerics.Float_Random.Generator;
   begin
      Ada.Numerics.Float_Random.Reset (G);
      delay Duration (Ada.Numerics.Float_Random.Random (G));
      Ada.Text_IO.Put_Line ("Rosetta");
   end Rosetta;

   task body Code is
      G : Ada.Numerics.Float_Random.Generator;
   begin
      Ada.Numerics.Float_Random.Reset (G);
      delay Duration (Ada.Numerics.Float_Random.Random (G));
      Ada.Text_IO.Put_Line ("Code");
   end Code;

begin
   null;  -- the three tasks run and finish in random order
end Concurrent_Hello;</lang>

=={{header|BaCon}}==
{{libheader|gomp}}
{{works with|OpenMP}}
BaCon is a BASIC-to-C compiler. This demonstration assumes the GCC compiler and is based on the C OpenMP source.
<lang freebasic>' Concurrent computing using the OpenMP extension in GCC. Requires BaCon 3.6 or higher.
' Specify compiler flag
PRAGMA OPTIONS -fopenmp
' Specify linker flag
PRAGMA LDFLAGS -lgomp
' Declare array with text
DECLARE str$[] = { "Enjoy", "Rosetta", "Code" }
' Indicate MP optimization for FOR loop
PRAGMA omp parallel for num_threads(3)
' The actual FOR loop
FOR i = 0 TO 2
PRINT str$[i]
NEXT
</lang>
{{out}}
<pre>prompt$ bacon concurrent-computing
Converting 'concurrent-computing.bac'... done, 11 lines were processed in 0.002 seconds.
Compiling 'concurrent-computing.bac'... cc -fopenmp -c concurrent-computing.bac.c
cc -o concurrent-computing concurrent-computing.bac.o -lbacon -lm -lgomp
Done, program 'concurrent-computing' ready.
prompt$ ./concurrent-computing
Code
Enjoy
Rosetta</pre>
=={{header|BBC BASIC}}==
=={{header|C}}==
{{works with|POSIX}}
{{libheader|pthread}}
<lang c>#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
pthread_mutex_t condm = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int bang = 0;
#define WAITBANG() do { \
 pthread_mutex_lock(&condm); \
 while( bang == 0 ) \
 { \
   pthread_cond_wait(&cond, &condm); \
 } \
 pthread_mutex_unlock(&condm); } while(0)
void *t_enjoy(void *p)
{
  WAITBANG();
  printf("Enjoy\n");
  pthread_exit(0);
}
void *t_rosetta(void *p)
{
  WAITBANG();
  printf("Rosetta\n");
  pthread_exit(0);
}
void *t_code(void *p)
{
  WAITBANG();
  printf("Code\n");
  pthread_exit(0);
}
typedef void *(*threadfunc)(void *);

int main()
{
  int i;
  pthread_t a[3];
  threadfunc p[3] = {t_enjoy, t_rosetta, t_code};

  for(i=0;i<3;i++)
  {
    pthread_create(&a[i], NULL, p[i], NULL);
  }
sleep(1);
  bang = 1;
pthread_cond_broadcast(&cond);
for(i=0;i<3;i++)
{
pthread_join(a[i], NULL);
}
}</lang>
'''Note''': since the threads are created one after another, execution tends to follow the order of creation. To make this less evident, I've added the ''bang'' idea using a condition variable: each thread only executes its code once the starting-gun ''bang'' is heard. Nonetheless, the output still tends to follow the creation order (Enjoy, Rosetta, Code), perhaps because of the order in which the locks are acquired. The only way to obtain real randomness seems to be adding a random wait in each thread (or waiting for a special CPU-load condition).
===OpenMP===
Compile with <code>gcc -std=c99 -fopenmp</code>:
<lang C>#include <stdio.h>
#include <omp.h>
int main()
{
const char *str[] = { "Enjoy", "Rosetta", "Code" };
#pragma omp parallel for num_threads(3)
for (int i = 0; i < 3; i++)
printf("%s\n", str[i]);
return 0;
}</lang>
=={{header|c sharp|C#}}==
===With Threads===
<lang csharp>using System;
using System.Threading;

class Program
{
    static Random tRand = new Random();

    static void Main(string[] args)
    {
        Thread t = new Thread(new ParameterizedThreadStart(WriteText));
        t.Start("Enjoy");

        t = new Thread(new ParameterizedThreadStart(WriteText));
        t.Start("Rosetta");

        t = new Thread(new ParameterizedThreadStart(WriteText));
        t.Start("Code");

        Console.ReadLine();
    }

    private static void WriteText(object p)
    {
        Thread.Sleep(tRand.Next(1000, 4000));
        Console.WriteLine(p);
    }
}
</lang>
An example result:
<pre>
Enjoy
Code
Rosetta
</pre>
===With Tasks===
{{works with|C sharp|7.1}}
<lang csharp>using System;
using System.Threading.Tasks;
public class Program
{
static async Task Main() {
Task t1 = Task.Run(() => Console.WriteLine("Enjoy"));
Task t2 = Task.Run(() => Console.WriteLine("Rosetta"));
Task t3 = Task.Run(() => Console.WriteLine("Code"));
await Task.WhenAll(t1, t2, t3);
}
}</lang>

===With Parallel.ForEach===
<lang csharp>using System;
using System.Threading.Tasks;
public class Program
{
static void Main() => Parallel.ForEach(new[] {"Enjoy", "Rosetta", "Code"}, s => Console.WriteLine(s));
}</lang>
=={{header|C++}}==
{{works with|C++11}}
The following example compiles with GCC 4.7.
<code>g++ -std=c++11 -D_GLIBCXX_USE_NANOSLEEP -o concomp concomp.cpp</code>
<lang cpp>#include <thread>
#include <iostream>
#include <vector>
#include <random>
#include <chrono>
int main()
{
std::random_device rd;
std::mt19937 eng(rd()); // mt19937 generator with a hardware random seed.
std::uniform_int_distribution<> dist(1,1000);
std::vector<std::thread> threads;
for(const auto& str: {"Enjoy\n", "Rosetta\n", "Code\n"}) {
        // wait between 1 and 1000 ms before printing
        std::chrono::milliseconds duration(dist(eng));
        threads.emplace_back([str, duration](){
            std::this_thread::sleep_for(duration);
            std::cout << str;
        });
}
for(auto& t: threads) t.join();
return 0;
}</lang>
Output:
<pre>Enjoy
Code
Rosetta</pre>
{{libheader|Microsoft Parallel Patterns Library (PPL)}}
<lang cpp>#include <iostream>
#include <ppl.h> // MSVC++
void a(void) { std::cout << "Eat\n"; }
void b(void) { std::cout << "At\n"; }
void c(void) { std::cout << "Joe's\n"; }
int main()
{
// function pointers
Concurrency::parallel_invoke(&a, &b, &c);
// C++11 lambda functions
Concurrency::parallel_invoke(
[]{ std::cout << "Enjoy\n"; },
[]{ std::cout << "Rosetta\n"; },
[]{ std::cout << "Code\n"; }
);
return 0;
}</lang>
Output:
<pre>
Joe's
Eat
At
Enjoy
Code
Rosetta
</pre>
=={{header|Cind}}==
=={{header|Elixir}}==
<lang Elixir>defmodule Concurrent do
  def computing(xs) do
    Enum.each(xs, fn x -> spawn(fn -> IO.puts(x) end) end)
    Process.sleep(1000)
  end
end
Concurrent.computing ["Enjoy", "Rosetta", "Code"]</lang>
{{out}}
<pre>
Rosetta
Code
Enjoy
</pre>
=={{header|Erlang}}==
=={{header|Euphoria}}==
<lang euphoria>procedure echo(sequence s)
    puts(1,s)
    puts(1,'\n')
end procedure
atom task1,task2,task3
task1 = task_create(routine_id("echo"),{"Enjoy"})
task_schedule(task1,1)
task2 = task_create(routine_id("echo"),{"Rosetta"})
task_schedule(task2,1)
task3 = task_create(routine_id("echo"),{"Code"})
task_schedule(task3,1)
task_yield()</lang>
Output:
Code
Rosetta
Enjoy
=={{header|F_Sharp|F#}}==
=={{header|FreeBASIC}}==
<lang freebasic>
' Compiled with -mt switch (to use the thread-safe runtime)
' The 'ThreadCall' functionality in FB is based internally on LibFFI (see [https://github.com/libffi/libffi/blob/master/LICENSE] for license)

Sub thread1()
  Print "Enjoy"
End Sub

Sub thread2()
  Print "Rosetta"
End Sub

Sub thread3()
  Print "Code"
End Sub

Print "Press any key to print next batch of 3 strings or ESC to quit"
Print

Do
  Dim t1 As Any Ptr = ThreadCall thread1()
  Dim t2 As Any Ptr = ThreadCall thread2()
  Dim t3 As Any Ptr = ThreadCall thread3()
  ThreadWait t1
  ThreadWait t2
  ThreadWait t3
  Print
  Sleep
Loop While Inkey <> Chr(27)</lang>
{{out}}
<pre>
Press any key to print next batch of 3 strings or ESC to quit
Enjoy
Code
Rosetta
Enjoy
Rosetta
Code
</pre>
=={{header|Go}}==
This example uses a select statement, which chooses at random among the ready cases when a goroutine waits on
multiple channel operations.
<lang go>package main
import "fmt"
func main() {
	for i := 0; i < 3; i++ {
		w1 := make(chan bool, 1)
		w2 := make(chan bool, 1)
		w3 := make(chan bool, 1)
		w1 <- true
		w2 <- true
		w3 <- true
		for j := 0; j < 3; j++ {
			// select picks uniformly at random among the ready channels
			select {
			case <-w1:
				fmt.Println("Enjoy")
			case <-w2:
				fmt.Println("Rosetta")
			case <-w3:
				fmt.Println("Code")
			}
		}
	}
}</lang>
Output:
<pre>
Code
Rosetta
Enjoy
Enjoy
Rosetta
Code
Rosetta
Enjoy
Code
</pre>
=={{header|Groovy}}==
=={{header|Kotlin}}==
{{trans|Java}}
<lang scala>
import java.util.concurrent.CyclicBarrier

class DelayedMessagePrinter(val barrier: CyclicBarrier, val msg: String) : Runnable {
    override fun run() {
        barrier.await()
        println(msg)
    }
}

fun main(args: Array<String>) {
    val msgs = listOf("Enjoy", "Rosetta", "Code")
    val barrier = CyclicBarrier(msgs.size)
    for (msg in msgs) Thread(DelayedMessagePrinter(barrier, msg)).start()
}</lang>
{{out}}
<pre>
Code
Rosetta
Enjoy
</pre>
=={{header|LFE}}==
=={{header|Neko}}==
<lang ActionScript>/*
 Concurrent computing, in Neko
*/
var thread_create = $loader.loadprim("std@thread_create", 2);

var subtask = function(message) {
    $print(message, "\n");
}
/* The thread functions happen so fast as to look sequential */
thread_create(subtask, "Enjoy");
thread_create(subtask, "Rosetta");
thread_create(subtask, "Code");
/* slow things down */
var sys_sleep = $loader.loadprim("std@sys_sleep", 1);
var random_new = $loader.loadprim("std@random_new", 0);
var random_int = $loader.loadprim("std@random_int", 2);
var randomsleep = function(message) {
    sys_sleep(random_int(random_new(), 3));
    $print(message, "\n");
}
$print("\nWith random delays\n");
thread_create(randomsleep, "Enjoy");
thread_create(randomsleep, "Rosetta");
thread_create(randomsleep, "Code");
/* Let the threads complete */
sys_sleep(4);</lang>
{{out}}
<pre>prompt$ nekoc threading.neko
prompt$ neko threading
Enjoy
Rosetta
Code
With random delays
Rosetta
Enjoy
Code</pre>
=={{header|Nim}}==
=={{header|PARI/GP}}==
See also [http://pari.math.u-bordeaux1.fr/Events/PARI2012/talks/pareval.pdf Bill Allombert's slides on parallel programming in GP].
=={{header|Perl}}==
{{libheader|Time::HiRes}}
=={{header|Pike}}==
Using POSIX threads:
<lang Pike>int main() {
    // Start threads and wait for them to finish
    ({
        Thread.Thread(write, "Enjoy\n"),
        Thread.Thread(write, "Rosetta\n"),
        Thread.Thread(write, "Code\n")
    })->wait();

    // Exit program
    exit(0);
}</lang>
Output:
Enjoy
Rosetta
Code
Using Pike's backend:
<lang Pike>int main(int argc, array argv)
{
    call_out(write, random(1.0), "Enjoy\n");
    call_out(write, random(1.0), "Rosetta\n");
call_out(write, random(1.0), "Code\n");
call_out(exit, 1, 0);
return -1; // return -1 starts the backend which makes Pike run until exit() is called.
}</lang>
Output:
Rosetta
Code
Enjoy
=={{header|PowerShell}}==
=={{header|Python}}==
{{works with|Python|3.7}}
Using the asyncio module:
<lang python>import asyncio
async def print_(string: str) -> None:
print(string)
async def main():
strings = ['Enjoy', 'Rosetta', 'Code']
coroutines = map(print_, strings)
await asyncio.gather(*coroutines)
if __name__ == '__main__':
asyncio.run(main())</lang>
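Note that the coroutines above never suspend, so <code>asyncio.gather</code> tends to complete them in submission order. A minimal sketch (an illustration, with an arbitrarily chosen delay bound) that awaits a random <code>asyncio.sleep</code> so the completion order is genuinely random:

```python
import asyncio
import random

printed = []  # records the order the strings were produced


async def print_delayed(string: str) -> None:
    # suspending on a random delay lets the coroutines finish in any order
    await asyncio.sleep(random.random() / 10)
    printed.append(string)
    print(string)


async def main() -> None:
    strings = ['Enjoy', 'Rosetta', 'Code']
    await asyncio.gather(*(print_delayed(s) for s in strings))


asyncio.run(main())
```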
{{works with|Python|3.2}}
Using the [http://docs.python.org/release/3.2/library/concurrent.futures.html concurrent.futures library] (new in Python 3.2) and choosing to use processes over threads; the example will use up to as many processes as your machine has cores. Note that this doesn't guarantee the order of the sub-process results.
<lang python>Python 3.2 (r32:88445, Feb 20 2011, 21:30:00) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from concurrent import futures
>>> with futures.ProcessPoolExecutor() as executor:
... _ = list(executor.map(print, 'Enjoy Rosetta Code'.split()))
...
Enjoy
Rosetta
Code
>>></lang>
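The same library also provides a thread pool with an identical interface; a short sketch of the thread-based equivalent (a comparison added here, not a separate page entry) as a standalone script:

```python
from concurrent import futures

# ThreadPoolExecutor exposes the same map() interface as ProcessPoolExecutor,
# but runs the calls in threads and so avoids process-pool pickling constraints.
with futures.ThreadPoolExecutor(max_workers=3) as executor:
    results = list(executor.map(print, 'Enjoy Rosetta Code'.split()))
```

As with the process version, `map` returns results (here, `print`'s `None`s) in argument order even though the calls run concurrently.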
{{works with|Python|2.5}}
<lang python>import threading
import random
def echo(text):
print(text)
threading.Timer(random.random(), echo, ("Enjoy",)).start()
threading.Timer(random.random(), echo, ("Rosetta",)).start()
threading.Timer(random.random(), echo, ("Code",)).start()</lang>
Or, by using a for loop to start one thread per list entry, where our list is our set of source strings:
<lang python>import threading
import random
def echo(text):
print(text)
for text in ["Enjoy", "Rosetta", "Code"]:
threading.Timer(random.random(), echo, (text,)).start()</lang>
=== threading.Thread ===
<lang python>import random, sys, time
import threading
lock = threading.Lock()
def echo(s):
time.sleep(1e-2*random.random())
    with lock:
        # hold the lock so the string and its newline print as one line
        sys.stdout.write(s)
        sys.stdout.write('\n')
for line in 'Enjoy Rosetta Code'.split():
threading.Thread(target=echo, args=(line,)).start()</lang>
=== multiprocessing ===
{{works with|Python|2.6}}
<lang python>from __future__ import print_function
from multiprocessing import Pool
def main():
p = Pool()
p.map(print, 'Enjoy Rosetta Code'.split())
if __name__=="__main__":
main()</lang>
=== twisted ===
<lang python>import random
from twisted.internet import reactor, task, defer
from twisted.python.util import println
delay = lambda: 1e-4*random.random()
d = defer.DeferredList([task.deferLater(reactor, delay(), println, line)
for line in 'Enjoy Rosetta Code'.split()])
d.addBoth(lambda _: reactor.stop())
reactor.run()</lang>
=== gevent ===
<lang python>from __future__ import print_function
import random
import gevent
delay = lambda: 1e-4*random.random()
gevent.joinall([gevent.spawn_later(delay(), print, line)
for line in 'Enjoy Rosetta Code'.split()])</lang>
=={{header|Racket}}==
=={{header|Raku}}==
(formerly Perl 6)
{{works with|Rakudo|2018.9}}
<lang perl6>my @words = <Enjoy Rosetta Code>;
@words.race(:batch(1)).map: { sleep rand; say $_ };</lang>
{{out}}
<pre>Code
Rosetta
Enjoy</pre>
=={{header|Raven}}==
Line 1,583 ⟶ 1,820:
=={{header|Sidef}}==
A very basic threading support is provided by the '''Block.fork()''' method:
<lang ruby>var a = <Enjoy Rosetta Code>
a.map{|str|
{ Sys.sleep(1.rand)
say str
}.fork
}.map{|thr| thr.wait }</lang>
{{out}}
<pre>
Enjoy
Code
Rosetta
</pre>
=={{header|Swift}}==
Using Grand Central Dispatch with concurrent queues.
<lang Swift>import Foundation
let myList = ["Enjoy", "Rosetta", "Code"]
for word in myList {
    dispatch_async(dispatch_get_global_queue(0, 0)) {
        NSLog("%@", word)
    }
}
dispatch_main()</lang>
{{out}}
<pre>
2015-02-05 10:15:01.831 rosettaconcurrency[1917:37905] Code
2015-02-05 10:15:01.831 rosettaconcurrency[1917:37902] Enjoy
2015-02-05 10:15:01.831 rosettaconcurrency[1917:37904] Rosetta
</pre>
=={{header|Standard ML}}==
=={{header|Visual Basic .NET}}==
===Alternative version===
[https://tio.run/##TY9PC8IwDMXv@xRhpw60oODFm@gEQUWs4Llbg6t0zWjrn3362bmBvssjCfnl5VlMS3LYdbu6IRc8iNYHrPmlciiVtrfkQOphEAabJRC10TU4q2Dl4YgvOEurqGbZdyYeBRyktmPZ6ySdNAYN35LLZVmxNLd3auFMHkOQsCaFKReN0YGlkGaTHsL8D9BrCMSFQWxYPM6P@A5svsgyWEaC9WSQX50OuNcW/7fzmDQCh8ZYJL0PL3XdBw Try It Online!]
<lang vbnet>Imports System.Threading
Module Module1
Line 1,723 ⟶ 1,979:
End Sub
End Module</lang>
{{out}}
<pre>Rosetta
Enjoy
Code</pre>
=={{header|Wren}}==
<lang ecmascript>import "random" for Random

var words = ["Enjoy", "Rosetta", "Code"]
var rand = Random.new()

// Wren is single-threaded, so we create one fiber per word
// and run the fibers in a random order, three times over.
for (i in 1..3) {
    var fibers = words.map { |w| Fiber.new { System.print(w) } }.toList
    while (fibers.count > 0) {
        fibers.removeAt(rand.int(fibers.count)).call()
    }
    System.print()
}</lang>
{{out}}
Sample run:
<pre>
Enjoy
Code
Rosetta
Code
Enjoy
Rosetta
Rosetta
Enjoy
Code
</pre>
=={{header|zkl}}==
<lang zkl>fcn{println("Enjoy")}.launch(); // thread
fcn{println("Rosetta")}.strand(); // co-op thread
fcn{println("Code")}.future(); // another thread type</lang>
{{out}}
<pre>
Rosetta
Code
Enjoy
</pre>
{{omit from|AWK}}
{{omit from|bc}}
{{omit from|Brlcad}}
{{omit from|dc}}
{{omit from|GUISS}}
{{omit from|Lilypond}}
{{omit from|Maxima}}
{{Omit From|Metafont}}
{{omit from|Openscad}}
{{omit from|TI-83 BASIC|Does not have concurrency or background processes.}}
{{omit from|TI-89 BASIC|Does not have concurrency or background processes.}}
{{omit from|TPP}}
{{omit from|Vim Script}}
{{omit from|ZX Spectrum Basic}}
{{omit from|Axe}}