Stream merge: Difference between revisions

m (→‎{{header|Phix}}: added syntax colouring, marked p2js incompatible)
m (syntax highlighting fixup automation)
Line 14:
=={{header|360 Assembly}}==
No usage of tricks such as forbidden records in the streams.
<syntaxhighlight lang="360asm">* Stream Merge 07/02/2017
STRMERGE CSECT
USING STRMERGE,R13 base register
Line 130:
PG DS CL64
YREGS
END STRMERGE</syntaxhighlight>
{{in}}
<pre style="height:20ex">
Line 167:

=={{header|Ada}}==
<syntaxhighlight lang="ada">with Ada.Text_Io;
with Ada.Command_Line;
with Ada.Containers.Indefinite_Holders;
Line 238:
end loop;

end Stream_Merge;</syntaxhighlight>

=={{header|ALGOL 68}}==
NB: all the files (including the output files) must exist before running this. The output files are overwritten with the merged records.
<syntaxhighlight lang="algol68"># merge a number of input files to an output file #
PROC mergenf = ( []REF FILE inf, REF FILE out )VOID:
BEGIN
Line 344:
# test the file merge #
merge2( "in1.txt", "in2.txt", "out2.txt" );
mergen( ( "in1.txt", "in2.txt", "in3.txt", "in4.txt" ), "outn.txt" )</syntaxhighlight>
{{out}}
<pre>
Line 350:

=={{header|ATS}}==
<syntaxhighlight lang="ats">
(* ****** ****** *)
//
Line 539:
//
} (* end of [main0] *)
</syntaxhighlight>

=={{header|AWK}}==
<syntaxhighlight lang="awk">
# syntax: GAWK -f STREAM_MERGE.AWK filename(s) >output
# handles 1 .. N files
Line 608:
errors++
}
</syntaxhighlight>

=={{header|C}}==
<syntaxhighlight lang="c">/*
* Rosetta Code - stream merge in C.
*
Line 654:
return EXIT_SUCCESS;
}
</syntaxhighlight>

=={{header|C sharp|C#}}==
<syntaxhighlight lang="csharp">
using System;
using System.Collections.Generic;
Line 711:
}
}
}</syntaxhighlight>
{{out}}
<pre>1 2 4 5 7 8 10 11
Line 718:
=={{header|C++}}==
{{trans|C#}}
<syntaxhighlight lang="cpp">//#include <functional>
#include <iostream>
#include <vector>
Line 813:
mergeN(display, { v3, v2, v1 });
std::cout << '\n';
}</syntaxhighlight>
{{out}}
<pre>0 1 3 4 6 7
Line 820:

=={{header|D}}==
<syntaxhighlight lang="d">import std.range.primitives;
import std.stdio;

Line 892:
}
} while (!done);
}</syntaxhighlight>

{{out}}
Line 902:

=={{header|Elixir}}==
<syntaxhighlight lang="elixir">defmodule StreamMerge do
def merge2(file1, file2), do: mergeN([file1, file2])
Line 930:
StreamMerge.merge2("temp1.dat", "temp2.dat")
IO.puts "\nN-stream merge:"
StreamMerge.mergeN(filenames)</syntaxhighlight>

{{out}}
Line 980:

=={{header|Fortran}}==
This is a classic problem, but even so, Fortran does not supply a library routine for this. So...<syntaxhighlight lang="fortran"> SUBROUTINE FILEMERGE(N,INF,OUTF) !Merge multiple inputs into one output.
INTEGER N !The number of input files.
INTEGER INF(*) !Their unit numbers.
Line 1,047:
CALL FILEMERGE(MANY,FI,F) !E pluribus unum.

END !That was easy.</syntaxhighlight>
Obviously, there would be variations according to the nature of the data streams being merged and whatever sort key was involved. For this example, input from disc files will do, and the sort key is the entire record's text. This means there is no need to worry over the case where, having written a record from stream S and obtained the next record from stream S, it proves to have equal precedence with the waiting record from some other stream. Which should now take precedence? With entirely-equal records it obviously doesn't matter, but if the sort key is only partial then different record content could be deemed equal, and then the choice has an effect.
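The tie-breaking point can be illustrated outside Fortran. The sketch below (Python, with invented record data) shows why comparing with "less than or equal" rather than "less than" keeps the merge stable, always preferring the earlier stream when keys are equal:

```python
# Stable two-way merge of (key, payload) records: on equal keys,
# the record from the first stream is written first ('<=', not '<').
def merge2(a, b, key=lambda r: r[0]):
    a, b = iter(a), iter(b)
    ra, rb = next(a, None), next(b, None)
    out = []
    while ra is not None and rb is not None:
        if key(ra) <= key(rb):
            out.append(ra)
            ra = next(a, None)
        else:
            out.append(rb)
            rb = next(b, None)
    while ra is not None:           # drain whichever stream remains
        out.append(ra)
        ra = next(a, None)
    while rb is not None:
        out.append(rb)
        rb = next(b, None)
    return out

print(merge2([("k1", "a"), ("k2", "a")], [("k1", "b"), ("k3", "b")]))
# → [('k1', 'a'), ('k1', 'b'), ('k2', 'a'), ('k3', 'b')]
```

With `<` instead of `<=`, the two `"k1"` records would come out in the opposite order: same keys, different payload order.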


Line 1,060:
=={{header|Go}}==
'''Using standard library binary heap for mergeN:'''
<syntaxhighlight lang="go">package main

import (
Line 1,154:
}
}
}</syntaxhighlight>
{{out}}
<pre>
Line 1,161:
</pre>
'''MergeN using package from [[Fibonacci heap]] task:'''
<syntaxhighlight lang="go">package main

import (
Line 1,220:
}
}
}</syntaxhighlight>
{{out}}
<pre>
Line 1,232:
=== conduit ===

<syntaxhighlight lang="haskell">-- stack runhaskell --package=conduit-extra --package=conduit-merge

import Control.Monad.Trans.Resource (runResourceT)
Line 1,250:
runResourceT $ mergeSources inputs $$ sinkStdoutLn
where
sinkStdoutLn = Conduit.map (`BS.snoc` '\n') =$= sinkHandle stdout</syntaxhighlight>

See the implementation at https://github.com/cblp/conduit-merge/blob/master/src/Data/Conduit/Merge.hs
Line 1,256:
=== pipes ===

<syntaxhighlight lang="haskell">-- stack runhaskell --package=pipes-safe --package=pipes-interleave

import Pipes (runEffect, (>->))
Line 1,270:
sourceFileNames <- getArgs
let sources = map readFile sourceFileNames
runSafeT . runEffect $ interleave compare sources >-> stdoutLn</syntaxhighlight>

See the implementation at https://github.com/bgamari/pipes-interleave/blob/master/Pipes/Interleave.hs

=={{header|Java}}==
<syntaxhighlight lang="java">import java.util.Iterator;
import java.util.List;
import java.util.Objects;
Line 1,374:
System.out.flush();
}
}</syntaxhighlight>
{{out}}
<pre>1245781011
Line 1,382:
{{trans|C}}
The IOStream type in Julia encompasses any data stream, including file I/O and TCP/IP. The IOBuffer used here maps a stream to a buffer in memory, and so allows an easy simulation of two streams without opening files.
<syntaxhighlight lang="julia">
function merge(stream1, stream2, T=Char)
if !eof(stream1) && !eof(stream2)
Line 1,421:
println("\nDone.")

</syntaxhighlight>{{output}}<pre>
abcdefghijklmnopqrstuvwyxz
Done.
Line 1,428:
=={{header|Kotlin}}==
Uses the same data as the REXX entry. As Kotlin lacks a Heap class, when merging N files we use a nullable MutableList instead. All comparisons are text-based, even when the files contain nothing but numbers.
<syntaxhighlight lang="scala">// version 1.2.21

import java.io.File
Line 1,487:
println(File("merged2.txt").readText())
println(File("mergedN.txt").readText())
}</syntaxhighlight>

{{out}}
Line 1,514:
Optimized for clarity and simplicity, not performance. Assumes two files containing sorted integers separated by newlines.
<syntaxhighlight lang="nim">import streams,strutils
let
stream1 = newFileStream("file1")
Line 1,524:
echo line
for line in stream2.lines:
echo line</syntaxhighlight>


===Merge N streams===
Line 1,530:
Of course, as Phix and Nim are very different languages, the code is quite different, but as in Phix we use a priority queue (provided by the standard module <code>heapqueue</code>). We work with files built from the “Data” constant, but delete them after use. We have also put the whole merging code in a procedure.

<syntaxhighlight lang="nim">import heapqueue, os, sequtils, streams

type
Line 1,586:
# Clean-up: delete the files.
for name in Filenames:
removeFile(name)</syntaxhighlight>

{{out}}
Line 1,604:
=={{header|Perl}}==
We make use of an iterator interface which String::Tokenizer provides. Credit: we obtained all the sample text from http://www.lipsum.com/.
<syntaxhighlight lang="perl">use strict;
use warnings;
use English;
Line 1,729:
# At this point every iterator has been exhausted.
return;
}</syntaxhighlight>
{{out}}
<pre>Merge of 2 streams:
Line 1,739:
=={{header|Phix}}==
Using a priority queue
<!--<syntaxhighlight lang="phix">(notonline)-->
<span style="color: #008080;">without</span> <span style="color: #008080;">js</span> <span style="color: #000080;font-style:italic;">-- file i/o</span>
<span style="color: #008080;">include</span> <span style="color: #000000;">builtins</span><span style="color: #0000FF;">/</span><span style="color: #000000;">pqueue</span><span style="color: #0000FF;">.</span><span style="color: #000000;">e</span>
Line 1,787:
<span style="color: #0000FF;">{}</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">delete_file</span><span style="color: #0000FF;">(</span><span style="color: #000000;">filenames</span><span style="color: #0000FF;">[</span><span style="color: #000000;">i</span><span style="color: #0000FF;">])</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">for</span>
<!--</syntaxhighlight>-->
{{out}}
<pre>
Line 1,805:

=={{header|PicoLisp}}==
<syntaxhighlight lang="picolisp">(de streamMerge @
(let Heap
(make
Line 1,818:
(if (in (cdar Heap) (read))
(set (car Heap) @)
(close (cdr (pop 'Heap))) ) ) ) ) )</syntaxhighlight>
<pre>$ cat a
3 14 15
Line 1,830:
2 3 5 7</pre>
Test:
<syntaxhighlight lang="picolisp">(test (2 3 14 15 17 18)
(streamMerge
(open "a")
Line 1,840:
(open "b")
(open "c")
(open "d") ) )</syntaxhighlight>
'streamMerge' works with non-numeric data as well, and also - instead of calling
'open' on a file or named pipe - with the results of 'connect' or 'listen' (i.e.
Line 1,851:
There exists a standard library function <code>heapq.merge</code> that takes any number of sorted stream iterators and merges them into one sorted iterator, using a [[heap]].

<syntaxhighlight lang="python">import heapq
import sys

sources = sys.argv[1:]
for item in heapq.merge(*(open(source) for source in sources)):
    print(item, end="")</syntaxhighlight>
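As a quick self-contained illustration of <code>heapq.merge</code> on in-memory sequences (sample data invented here):

```python
import heapq

a = [1, 2, 4, 7]
b = [3, 5, 6, 8]
c = [0, 9]

# heapq.merge is lazy: it repeatedly yields the smallest head
# among the input streams, so the inputs may even be unbounded.
print(list(heapq.merge(a, b, c)))  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```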


=={{header|Racket}}==

<syntaxhighlight lang="racket">;; This module produces a sequence that merges streams in order (by <)
#lang racket/base
(require racket/stream)
Line 1,932:
'(1 2 3 4 5 6 7 8 9 10))
(check-equal? (for/list ((i (merge-sequences/< '(2 4 6 7 8 9 10) '(1 3 5)))) i)
'(1 2 3 4 5 6 7 8 9 10)))</syntaxhighlight>

{{out}}
Line 1,948:
=={{header|REXX}}==
===version 1===
<syntaxhighlight lang="rexx">/* REXX ***************************************************************
* Merge 1.txt ... n.txt into all.txt
* 1.txt 2.txt 3.txt 4.txt
Line 2,027:
Return

o: Return lineout(oid,arg(1))</syntaxhighlight>
{{out}}
<pre>1
Line 2,050:


No &nbsp; ''heap'' &nbsp; is needed to keep track of which record was written or which stream needs replenishing from its input file.
<syntaxhighlight lang="rexx">/*REXX pgm reads sorted files (1.TXT, 2.TXT, ···), and writes sorted data ───► ALL.TXT */
@.=copies('ff'x, 1e4); call lineout 'ALL.TXT',,1 /*no value should be larger than this. */
do n=1 until @.n==@.; call rdr n; end /*read any number of appropriate files.*/
Line 2,063:
end /*forever*/ /*keep reading/merging until exhausted.*/
/*──────────────────────────────────────────────────────────────────────────────────────*/
rdr: arg z; @.z= @.; f= z'.TXT'; if lines(f)\==0 then @.z= linein(f); return</syntaxhighlight>
{{out|output|text=&nbsp; is the same as the 1<sup>st</sup> REXX version when using identical input files, &nbsp; except the output file is named &nbsp; '''ALL.TXT'''}} <br><br>
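The sentinel technique used by this REXX version — padding each exhausted stream with a value that sorts after any real record, so that no heap bookkeeping is needed — can be sketched in Python (an illustration only, not the REXX code; the sentinel value and sample data are invented):

```python
SENTINEL = "\uffff" * 100  # assumed to sort after any real record

def merge_with_sentinels(streams):
    """Merge lists of sorted strings; an exhausted stream's head becomes SENTINEL."""
    its = [iter(s) for s in streams]
    heads = [next(it, SENTINEL) for it in its]
    merged = []
    while True:
        # pick the stream holding the smallest head (min is stable: ties
        # go to the lowest-numbered stream, matching the REXX scan order)
        i = min(range(len(heads)), key=lambda k: heads[k])
        if heads[i] == SENTINEL:   # every stream is exhausted
            return merged
        merged.append(heads[i])
        heads[i] = next(its[i], SENTINEL)

print(merge_with_sentinels([["a", "c"], ["b", "d"], []]))  # → ['a', 'b', 'c', 'd']
```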


Line 2,070:
{{works with|Rakudo|2018.02}}

<syntaxhighlight lang="raku" line>sub merge_streams ( @streams ) {
my @s = @streams.map({ hash( STREAM => $_, HEAD => .get ) })\
.grep({ .<HEAD>.defined });
Line 2,082:
}

say merge_streams([ @*ARGS».&open ]);</syntaxhighlight>

=={{header|Ruby}}==
<syntaxhighlight lang="ruby">def stream_merge(*files)
fio = files.map{|fname| open(fname)}
merge(fio.map{|io| [io, io.gets]})
Line 2,109:
puts "#{fname}: #{data}"
end
stream_merge(*files)</syntaxhighlight>

{{out}}
Line 2,139:


=={{header|Scala}}==
<syntaxhighlight lang="scala">def mergeN[A : Ordering](is: Iterator[A]*): Iterator[A] = is.reduce((a, b) => merge2(a, b))

def merge2[A : Ordering](i1: Iterator[A], i2: Iterator[A]): Iterator[A] = {
Line 2,158:
nextHead ++ merge2Buffered(i1, i2)
}
}</syntaxhighlight>

Example usage, demonstrating laziness:

<syntaxhighlight lang="scala">val i1 = Iterator.tabulate(5) { i =>
val x = i * 3
println(s"generating $x")
Line 2,185:
val x = merged.next
println(s"output: $x")
}</syntaxhighlight>

{{out}}
Line 2,221:
=={{header|Sidef}}==
{{trans|Raku}}
<syntaxhighlight lang="ruby">func merge_streams(streams) {
var s = streams.map { |stream|
Pair(stream, stream.readline)
Line 2,235:
}

say merge_streams(ARGV.map {|f| File(f).open_r }).join("\n")</syntaxhighlight>

=={{header|Tcl}}==
Line 2,242:
A careful reader will notice that '''$peeks''' is treated alternately as a dictionary ('''dict set''', '''dict get''') and as a list ('''lsort''', '''lassign'''), exploiting the fact that dictionaries are simply lists of even length. For large dictionaries this would not be recommended, as it causes [https://wiki.tcl.tk/3033 "shimmering"], but in this example the impact is too small to matter.

<syntaxhighlight lang="tcl">#!/usr/bin/env tclsh
proc merge {args} {
set peeks {}
Line 2,262:


merge {*}[lmap f $::argv {open $f r}]
</syntaxhighlight>

=={{header|UNIX Shell}}==
Line 2,274:
{{libheader|Wren-seq}}
No Heap class, so we use a List. Comparisons are text-based even for numbers.
<syntaxhighlight lang="ecmascript">import "io" for File
import "/ioutil" for FileUtil
import "/str" for Str
Line 2,325:
// check it worked
System.print(File.read("merged2.txt"))
System.print(File.read("mergedN.txt"))</syntaxhighlight>

{{out}}
Line 2,351:
=={{header|zkl}}==
This solution uses iterators, doesn't care where the streams originate, and only keeps the head of each stream on hand.
<syntaxhighlight lang="zkl">fcn mergeStreams(s1,s2,etc){ //-->Walker
streams:=vm.arglist.pump(List(),fcn(s){ // prime and prune
if( (w:=s.walker())._next() ) return(w);
Line 2,364:
v
}.fp(streams));
}</syntaxhighlight>
Using infinite streams:
<syntaxhighlight lang="zkl">w:=mergeStreams([0..],[2..*,2],[3..*,3],T(5));
w.walk(20).println();</syntaxhighlight>
{{out}}
<pre>
Line 2,373:
</pre>
Using files:
<syntaxhighlight lang="zkl">w:=mergeStreams(File("unixdict.txt"),File("2hkprimes.txt"),File("/dev/null"));
do(10){ w.read().print() }</syntaxhighlight>
{{out}}
<pre>
Line 2,389:
</pre>
Using the above example to squirt the merged stream to a file:
<syntaxhighlight lang="zkl">mergeStreams(File("unixdict.txt"),File("2hkprimes.txt"),File("/dev/null"))
.pump(File("foo.txt","w"));</syntaxhighlight>
{{out}}
<pre>