Time a function: Difference between revisions

From Rosetta Code

Revision as of 07:53, 2 August 2009

Task
Time a function
You are encouraged to solve this task according to the task description, using any language you may know.

Write a program which uses a timer (with the least granularity available on your system) to time how long a function takes to execute.

Whenever possible, use methods which measure only the processing time used by the current process, rather than the difference in system time between start and finish, which could include time used by other processes on the computer.

This task is intended as a subtask for Measure relative performance of sorting algorithms implementations.
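In outline, most solutions below follow the same pattern: read a process-scoped clock, call the function, read the clock again, and report the difference. A minimal sketch in Python (the function names here are illustrative; the standard `time.process_time` counts only CPU time of the current process, matching the task's preference):

```python
import time

def time_it(action, *args):
    """Return the CPU seconds consumed by one call of action(*args).

    time.process_time() measures CPU time of the current process only,
    so time spent sleeping or in other processes is excluded.
    """
    start = time.process_time()
    action(*args)
    return time.process_time() - start

def sum_to(n):
    total = 0
    for i in range(n):
        total += i
    return total

elapsed = time_it(sum_to, 1_000_000)
print(f"sum_to(1_000_000) took {elapsed:.6f} CPU seconds")
```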

Ada

<lang ada> with Ada.Calendar; use Ada.Calendar;

with Ada.Text_Io; use Ada.Text_Io;

procedure Query_Performance is
   type Proc_Access is access procedure(X : in out Integer);
   function Time_It(Action : Proc_Access; Arg : Integer) return Duration is
      Start_Time : Time := Clock;
      Finis_Time : Time;
      Func_Arg : Integer := Arg;
   begin
      Action(Func_Arg);
      Finis_Time := Clock;
      return Finis_Time - Start_Time;
   end Time_It;
   procedure Identity(X : in out Integer) is
   begin
      X := X;
   end Identity;
   procedure Sum (Num : in out Integer) is
   begin
      for I in 1..1000 loop
         Num := Num + I;
      end loop;
   end Sum;
   Id_Access : Proc_Access := Identity'access;
   Sum_Access : Proc_Access := Sum'access;
   
begin
   Put_Line("Identity(4) takes" & Duration'Image(Time_It(Id_Access, 4)) & " seconds.");
   Put_Line("Sum(4) takes:" & Duration'Image(Time_It(Sum_Access, 4)) & " seconds.");
end Query_Performance;</lang>

Example

Identity(4) takes 0.000001117 seconds.
Sum(4) takes: 0.000003632 seconds.

AutoHotkey

System time

Uses system time, not process time.

<lang AutoHotkey>MsgBox % time("fx")
Return

fx() {

 Sleep, 1000

}

time(function, parameter=0) {

 SetBatchLines -1  ; don't sleep for other green threads
 StartTime := A_TickCount
 %function%(parameter)
 Return ElapsedTime := A_TickCount - StartTime . " milliseconds"

}</lang>

C

Works with: POSIX.1-2001

On some systems (like GNU/Linux), to use the clock_gettime function you must link with the rt (realtime) library.

<lang c>#include <stdio.h>
#include <time.h>

int identity(int x) { return x; }

int sum(int s) {
  int i;
  for (i = 0; i < 1000000; i++) s += i;
  return s;
}

/* CLOCK_MONOTONIC should be appropriate to avoid errors on multiprocessor systems */
#define CLOCKTYPE CLOCK_MONOTONIC

double time_it(int (*action)(int), int arg) {

 struct timespec tsi, tsf;
 clock_gettime(CLOCKTYPE, &tsi);
 action(arg);
 clock_gettime(CLOCKTYPE, &tsf);
 double elaps_s = difftime(tsf.tv_sec, tsi.tv_sec);
 long elaps_ns = tsf.tv_nsec - tsi.tv_nsec;
 return elaps_s + ((double)elaps_ns) / 1.0e9;

}

int main() {

 printf("identity (4) takes %lf s\n", time_it(identity, 4));
 printf("sum      (4) takes %lf s\n", time_it(sum, 4));
 return 0;

}</lang>

C++

<lang cpp>#include <ctime>
#include <iostream>

using namespace std;

int identity(int x) { return x; }

int sum(int num) {

 for (int i = 0; i < 1000000; i++)
   num += i;
 return num;

}

double time_it(int (*action)(int), int arg) {

 clock_t start_time = clock();
 action(arg);
 clock_t finis_time = clock();
 return ((double) (finis_time - start_time)) / CLOCKS_PER_SEC;

}

int main() {

 cout << "Identity(4) takes " << time_it(identity, 4) << " seconds." << endl;
 cout << "Sum(4) takes " << time_it(sum, 4) << " seconds." << endl;
 return 0;

}</lang>

Example

Identity(4) takes 0 seconds.
Sum(4) takes 0.01 seconds.

C#

Using Stopwatch.

<lang csharp>using System;
using System.Linq;
using System.Threading;
using System.Diagnostics;

class Program {

   static void Main(string[] args) {
       Stopwatch sw = new Stopwatch();
       sw.Start();
       DoSomething();
       sw.Stop();
       Console.WriteLine("DoSomething() took {0}ms.", sw.Elapsed.TotalMilliseconds);
   }
   static void DoSomething() {
       Thread.Sleep(1000);
       Enumerable.Range(1, 10000).Where(x => x % 2 == 0).Sum();  // Sum even numbers from 1 to 10000
   }

}</lang>

Using DateTime.

<lang csharp>using System;
using System.Linq;
using System.Threading;

class Program {

   static void Main(string[] args) {
       DateTime start, end;
       start = DateTime.Now;
       DoSomething();
       end = DateTime.Now;
       Console.WriteLine("DoSomething() took " + (end - start).TotalMilliseconds + "ms");
   }    
   static void DoSomething() {
       Thread.Sleep(1000);
       Enumerable.Range(1, 10000).Where(x => x % 2 == 0).Sum();  // Sum even numbers from 1 to 10000
   }

}</lang>

Output:

DoSomething() took 1071,5408ms

Common Lisp

Common Lisp provides a standard utility for performance measurement, time:

CL-USER> (time (reduce #'+ (make-list 100000 :initial-element 1)))
Evaluation took:
  0.151 seconds of real time
  0.019035 seconds of user run time
  0.01807 seconds of system run time
  0 calls to %EVAL
  0 page faults and
  2,400,256 bytes consed.

(The example output here is from SBCL.)

However, it merely prints textual information to trace output, so the information is not readily available for further processing (except by parsing it in a CL-implementation-specific manner).

The functions get-internal-run-time and get-internal-real-time may be used to get time information programmatically, with granularity of at least one second (and usually much finer). Here is a function which uses them to measure the time taken for one execution of a provided function:

(defun timings (function)
  (let ((real-base (get-internal-real-time))
        (run-base (get-internal-run-time)))
    (funcall function)
    (values (/ (- (get-internal-real-time) real-base) internal-time-units-per-second)
            (/ (- (get-internal-run-time) run-base) internal-time-units-per-second))))
CL-USER> (timings (lambda () (reduce #'+ (make-list 100000 :initial-element 1))))
17/500
7/250

E

Translation of: Java

E has no standardized facility for CPU time measurement.

Works with: E-on-Java

<lang e>def countTo(x) {
    println("Counting...")
    for _ in 1..x {}
    println("Done!")
}

def MX := <unsafe:java.lang.management.makeManagementFactory>
def threadMX := MX.getThreadMXBean()
require(threadMX.isCurrentThreadCpuTimeSupported())
threadMX.setThreadCpuTimeEnabled(true)

for count in [10000, 100000] {
    def start := threadMX.getCurrentThreadCpuTime()
    countTo(count)
    def finish := threadMX.getCurrentThreadCpuTime()
    println(`Counting to $count takes ${(finish-start)//1000000}ms`)
}</lang>

Forth

Works with: GNU Forth
: time: ( "word" -- )
  utime 2>R ' EXECUTE
  utime 2R> D-
  <# # # # # # # [CHAR] . HOLD #S #> TYPE ."  seconds" ;
1000 time: MS  \ 1.000081 seconds ok

Haskell

import System.CPUTime

-- We assume the function we are timing is an IO monad computation
timeIt :: (Fractional c) => (a -> IO b) -> a -> IO c
timeIt action arg =
  do startTime <- getCPUTime
     action arg
     finishTime <- getCPUTime
     return $ fromIntegral (finishTime - startTime) / 1000000000000

-- Version for use with evaluating regular non-monadic functions
timeIt' :: (Fractional c) => (a -> b) -> a -> IO c
timeIt' f = timeIt (\x -> f x `seq` return ())

Example

*Main> :m + Text.Printf Data.List
*Main Data.List Text.Printf> timeIt' id 4 >>= printf "Identity(4) takes %f seconds.\n"
Identity(4) takes 0.0 seconds.
*Main Data.List Text.Printf> timeIt' (\x -> foldl' (+) x [1..1000000]) 4 >>= printf "Sum(4) takes %f seconds.\n"
Sum(4) takes 0.248015 seconds.

J

Time and space requirements are tested using verbs obtained through the Foreign conjunction (!:). 6!:2 returns time required for execution, in floating-point measurement of seconds. 7!:2 returns a measurement of space required to execute. Both receive as input a sentence for execution.
When the Memoize feature or similar techniques are used, execution time and space can both be affected by prior calculations.

Example

   (6!:2,7!:2) '|: 50 50 50 $ i. 50^3'
0.00387912 1.57414e6

Java

Works with: Java version 1.5+

<lang java>import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class TimeIt {
    public static void main(String[] args) {
        final ThreadMXBean threadMX = ManagementFactory.getThreadMXBean();
        assert threadMX.isCurrentThreadCpuTimeSupported();
        threadMX.setThreadCpuTimeEnabled(true);

        long start, end;
        start = threadMX.getCurrentThreadCpuTime();
        countTo(100000000);
        end = threadMX.getCurrentThreadCpuTime();
        System.out.println("Counting to 100000000 takes "+(end-start)/1000000+"ms");
        start = threadMX.getCurrentThreadCpuTime();
        countTo(1000000000L);
        end = threadMX.getCurrentThreadCpuTime();
        System.out.println("Counting to 1000000000 takes "+(end-start)/1000000+"ms");
    }

    public static void countTo(long x){
        System.out.println("Counting...");
        for(long i=0;i<x;i++);
        System.out.println("Done!");
    }
}</lang>

Measures real time rather than CPU time:

Works with: Java version (all versions)

<lang java>public static void main(String[] args){
    long start, end;
    start = System.currentTimeMillis();
    countTo(100000000);
    end = System.currentTimeMillis();
    System.out.println("Counting to 100000000 takes "+(end-start)+"ms");
    start = System.currentTimeMillis();
    countTo(1000000000L);
    end = System.currentTimeMillis();
    System.out.println("Counting to 1000000000 takes "+(end-start)+"ms");
}</lang>

Output:

Counting...
Done!
Counting to 100000000 takes 370ms
Counting...
Done!
Counting to 1000000000 takes 3391ms

Mathematica

<lang Mathematica>AbsoluteTiming[x]</lang>

where x is an operation. Example, calculating a million digits of Sqrt[3]:

<lang Mathematica>AbsoluteTiming[N[Sqrt[3], 10^6]]</lang>

gives:

<lang Mathematica>{0.000657, 1.7320508075688772935274463......}</lang>

The first element is the time in seconds, the second is the result of the operation (truncated here).

OCaml

<lang ocaml>let time_it action arg =

 let start_time = Sys.time () in
 ignore (action arg);
 let finish_time = Sys.time () in
 finish_time -. start_time</lang>

Example

# Printf.printf "Identity(4) takes %f seconds.\n" (time_it (fun x -> x) 4);;
Identity(4) takes 0.000000 seconds.
- : unit = ()
# let sum x = let num = ref x in for i = 0 to 999999 do num := !num + i done; !num;;
val sum : int -> int = <fun>
# Printf.printf "Sum(4) takes %f seconds.\n" (time_it sum 4);;
Sum(4) takes 0.084005 seconds.
- : unit = ()

Perl

<lang perl>sub cpu_time {

 my ($user,$system,$cuser,$csystem) = times;
 $user + $system

}

sub time_it {

 my $action = shift;
 my $startTime = cpu_time();
 $action->(@_);
 my $finishTime = cpu_time();
 $finishTime - $startTime

}

printf "Identity(4) takes %f seconds.\n", time_it(sub {shift}, 4);

# outputs "Identity(4) takes 0.000000 seconds."

sub sum {

 my $x = shift;
 foreach (0 .. 999999) {
   $x += $_;
 }
 $x

}

printf "Sum(4) takes %f seconds.\n", time_it(\&sum, 4);

# outputs "Sum(4) takes 0.280000 seconds."</lang>

Python

Given function and arguments return a time (in microseconds) it takes to make the call.

Note: There is an overhead in executing a function that does nothing.

<lang python>import sys, timeit

def usec(function, arguments):

   modname, funcname = __name__, function.__name__
   timer = timeit.Timer(stmt='%(funcname)s(*args)' % vars(),
                        setup='from %(modname)s import %(funcname)s; args=%(arguments)r' % vars())
   try:
       t, N = 0, 1
       while t < 0.2:            
           t = min(timer.repeat(repeat=3, number=N))            
           N *= 10
       microseconds = round(1000000 * t / N, 1) # per loop
       return microseconds 
   except:
       timer.print_exc(file=sys.stderr)
       raise
def nothing(): pass
def identity(x): return x</lang>

Example

>>> print usec(nothing, [])
1.7
>>> print usec(identity, [1])
2.2
>>> print usec(pow, (2, 100))
3.3
>>> print map(lambda n: str(usec(qsort, (range(n),))), range(10))
['2.7', '2.8', '31.4', '38.1', '58.0', '76.2', '100.5', '130.0', '149.3', '180.0']

using qsort() from Quicksort. Timings show that this implementation of qsort() has quadratic dependence on sequence length N for already-sorted sequences (instead of the O(N*log(N)) average).
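The quadratic behaviour described above can be reproduced without the original qsort(): the sketch below times a naive first-element-pivot quicksort (a hypothetical stand-in, not the Quicksort task's code) on already-sorted input, where every partition is maximally unbalanced.

```python
import timeit

def qsort(lst):
    # Naive quicksort using the first element as pivot: on sorted
    # input each partition strips off one element, giving O(N^2).
    if len(lst) <= 1:
        return lst
    pivot, rest = lst[0], lst[1:]
    return (qsort([x for x in rest if x < pivot])
            + [pivot]
            + qsort([x for x in rest if x >= pivot]))

for n in (100, 200, 400):
    t = timeit.timeit(lambda: qsort(list(range(n))), number=20)
    print(n, round(t, 4))  # time grows roughly fourfold as n doubles
```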

Ruby

Ruby's Benchmark module provides a way to generate nice reports (numbers are in seconds):

<lang ruby>require 'benchmark'

Benchmark.bm(8) do |x|

 x.report("nothing:")  {  }
 x.report("sum:")  { (1..1_000_000).inject(4) {|sum, x| sum + x} }

end</lang>

Output:

              user     system      total        real
nothing:  0.000000   0.000000   0.000000 (  0.000014)
sum:      2.700000   0.400000   3.100000 (  3.258348)

You can get the total time as a number for later processing like this: <lang ruby>Benchmark.measure { whatever }.total</lang>

Standard ML

<lang sml>fun time_it (action, arg) = let
  val timer = Timer.startCPUTimer ()
  val _ = action arg
  val times = Timer.checkCPUTimer timer
in
  Time.+ (#usr times, #sys times)
end</lang>

Example

- print ("Identity(4) takes " ^ Time.toString (time_it (fn x => x, 4)) ^ " seconds.\n");
Identity(4) takes 0.000 seconds.
val it = () : unit
- fun sum (x:IntInf.int) = let
    fun loop (i, sum) =
      if i >= 1000000 then sum
      else loop (i + 1, sum + i)
  in loop (0, x)
  end;
val sum = fn : IntInf.int -> IntInf.int
- print ("Sum(4) takes " ^ Time.toString (time_it (sum, 4)) ^ " seconds.\n");
Sum(4) takes 0.220 seconds.
val it = () : unit

Tcl

The Tcl time command returns the real time elapsed, averaged over a number of iterations.

<lang tcl>proc sum_n {n} {

   for {set i 1; set sum 0.0} {$i <= $n} {incr i} {set sum [expr {$sum + $i}]}
   return [expr {wide($sum)}]

}

puts [time {sum_n 1e6} 100]
puts [time {} 100]</lang>

Results in

163551.0 microseconds per iteration
0.2 microseconds per iteration

UNIX Shell

$ time sleep 1

real    0m1.074s
user    0m0.001s
sys     0m0.006s