Compare sorting algorithms' performance: Difference between revisions
Revision as of 16:00, 19 May 2009
<pre>for el in *.dat ; do fitdata <$el >${el%.dat}.fd ; done</pre>
===Plot timings and Figures===
Once we have all the ".dat" files and the associated ".fd" files, we can use
<pre>
gnuplot> f(x) = C0 + C1*x
</pre>
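The surviving snippet only defines the fit function; a hedged sketch of how such a linear fit (of the log of the timings) is typically driven in gnuplot follows. The data file name is a placeholder, not necessarily one the task actually produces:

```gnuplot
# hypothetical file name; the task emits one "length time" pair per line
f(x) = C0 + C1*x
fit f(x) "insertion_sort_u.dat" using 1:(log($2)) via C0, C1
set logscale y
plot "insertion_sort_u.dat" using 1:2, exp(f(x))
```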
(The <tt>_u.dat</tt> files are produced by a modified version of the code that writes timings in microseconds instead of seconds.)
We can easily write another shell script or one-liner to produce a single ''driver'' file for gnuplot.
These graphs show that the linear (in log scale) fit does not always match the data. I haven't repeated the tests; the problems appear when the sequence length becomes huge. For algorithms that use extra memory (such as this implementation of merge sort), this could depend on the allocation of the needed memory. Another ''extraneous'' factor could be system load: the CLOCK_MONOTONIC clock used by the timing function is system-wide rather than per-process, so it also counts time spent in other processes.
The "most stable" algorithms seem to be quick sort (but not <tt>qsort</tt>, which is just the libc quick sort and is not plotted here) and shell sort (except on the reversed range).
* [http://i43.tinypic.com/2nqrzfs.png reversed range]
* [http://i41.tinypic.com/24q8hmg.png shuffled range]
=={{header|Python}}==