Compare sorting algorithms' performance
<pre># run fitdata over every ".dat" timing file, writing the fit results to a matching ".fd" file
for el in *.dat ; do fitdata <$el >${el%.dat}.fd ; done</pre>
===Plot timings and Figures===
Once we have all the ".dat" files and the associated ".fd" files, we can use gnuplot to draw our '''data''' and think about conclusions (we could also use the idea in [[Plot x, y arrays]], but it would need too many enhancements to be usable for this analysis). Here is an example of such a plot for a single file (using gnuplot):
<pre>
gnuplot> f(x) = C0 + C1*x
</pre>
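A full fit-and-plot session along these lines might look like the following sketch; the file name <tt>merge_shuffled_u.dat</tt> is only a placeholder for one of the generated timing files, and the fit here is done in log-log space so that the model appears as a straight line on log-scaled axes:
<pre>
gnuplot> f(x) = C0 + C1*x        # straight line in log-log space
gnuplot> fit f(x) 'merge_shuffled_u.dat' using (log($1)):(log($2)) via C0, C1
gnuplot> set logscale xy
gnuplot> plot 'merge_shuffled_u.dat' title 'timings', exp(C0)*x**C1 title 'fit'
</pre>
With this kind of fit, <tt>C1</tt> estimates the exponent of the growth law, which is what separates the O(n<sup>2</sup>) algorithms from the O(n log n) ones.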
(The <tt>_u.dat</tt> files are produced by a modified version of the code that writes timings in microseconds instead of seconds.)
We can easily write another shell script or one-liner that produces a single ''driver'' file for gnuplot, generating all the graphs we may be interested in; a sketch follows.
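This is one possible shape for such a generator, assuming one <tt>.dat</tt> file per algorithm/sequence pair as above; the terminal, output names and plot style are illustrative choices:
<pre># write a gnuplot driver that renders every .dat file to its own PNG (sketch)
{
  echo "set terminal png"
  echo "set logscale xy"
  for el in *.dat ; do
    echo "set output '${el%.dat}.png'"
    echo "plot '$el' with linespoints title '${el%.dat}'"
  done
} > driver.gp
gnuplot driver.gp</pre>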
These graphs show that the linear (in log scale) fit does not always fit the data. I haven't repeated the tests; the problems appear when the sequence length becomes huge. For algorithms that use extra memory (like this implementation of merge sort), this could depend on the allocation of the needed memory. Another ''extraneous'' factor could be system load (the CLOCK_MONOTONIC clock used by the timing function is system-wide rather than per-process, so it may count time spent in other processes too).
The "most stable" algorithms seem to be quick sort (but not qsort, which indeed is just the libc quick sort, here not plotted!) and shell sort (except for reversed range).
* [http://i43.tinypic.com/2nqrzfs.png reversed range]
* [http://i41.tinypic.com/24q8hmg.png shuffled range]
 
 
=={{header|Python}}==