Gradient descent: Difference between revisions

m (→‎{{header|Phix}}: added syntax colouring, made p2js compatible, lowered/improved tolerance, added iteration check/limit)
m (syntax highlighting fixup automation)
Line 18:
{{Trans|Fortran}}
The results agree with the Fortran sample and the Julia sample to 6 places.
<syntaxhighlight lang="algol68">PROC steepest descent = ( REF[]LONG REAL x, LONG REAL alphap, tolerance )VOID:
BEGIN
LONG REAL alpha := alphap;
Line 89:
print( ( "Testing steepest descent method:", newline ) );
print( ( "The minimum is at x[0] = ", fixed( x[ 0 ], -10, 6 ), ", x[1] = ", fixed( x[ 1 ], -10, 6 ), newline ) )
END</syntaxhighlight>
{{out}}
<pre>
Line 99:
{{Trans|ALGOL 68}}, which is itself a {{Trans|Go}} with the gradient function from {{Trans|Fortran}}.
The results agree (to 6 places) with the Fortran and Julia samples.
<syntaxhighlight lang="algolw">begin
procedure steepestDescent ( long real array x ( * ); long real value alphap, tolerance ) ;
begin
Line 163:
write( "The minimum is at x(0) = ", x( 0 ), ", x(1) = ", x( 1 ) )
end
end.</syntaxhighlight>
{{out}}
<pre>
Line 193:
(%o7) done
The optimization subroutine GD sets the reverse communication variable IFLAG. This allows the evaluation of the gradient to be done separately.
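To illustrate the reverse-communication idea (this is a hypothetical Python sketch I've added, not the article's GD routine): the optimizer never calls the objective or gradient itself; it returns a flag telling the caller to evaluate them at the current point and call back, so the caller owns the evaluation loop.

```python
import math

def evalfg(x):
    # f(x, y) = (x-1)^2 e^(-y^2) + y(y+2) e^(-2x^2) and its gradient,
    # matching the task's test function.
    a, b = x
    f = (a - 1)**2 * math.exp(-b**2) + b*(b + 2)*math.exp(-2*a**2)
    g = [2*(a - 1)*math.exp(-b**2) - 4*a*b*(b + 2)*math.exp(-2*a**2),
         -2*b*(a - 1)**2*math.exp(-b**2) + (2*b + 2)*math.exp(-2*a**2)]
    return f, g

def gd_step(state, x, f, g, alpha=0.1, tol=1e-12):
    """One reverse-communication step. Returns (iflag, x):
    iflag = 1 -> caller must evaluate f and g at x, then call again;
    iflag = 0 -> converged, x holds the minimum."""
    if state["f_old"] is not None and abs(state["f_old"] - f) < tol:
        return 0, x
    state["f_old"] = f
    return 1, [xi - alpha * gi for xi, gi in zip(x, g)]

x, state = [0.1, -1.0], {"f_old": None}
for _ in range(100_000):        # iteration limit as a safety net
    f, g = evalfg(x)            # evaluation happens in the caller
    iflag, x = gd_step(state, x, f, g)
    if iflag == 0:
        break
print(x, f)
```

With the initial guess (0.1, -1.0) this converges to roughly the same minimum the samples report, around (0.1076, -1.2233).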
<syntaxhighlight lang="fortran"> SUBROUTINE EVALFG (N, X, F, G)
IMPLICIT NONE
INTEGER N
Line 274:
STOP 'program complete'
END
</syntaxhighlight>
{{out}}
<pre>After 31 steps, found minimum at x= 0.107626843548372 y= -1.223259663839920 of f= -0.750063420551493
Line 284:
 
However, since it was originally written, I've substituted the Fortran sample's gradient function for the original one (see Talk page); the results now agree (to 6 decimal places) with those of the Fortran, Julia, ALGOL 68 and ALGOL W solutions. As a number of other solutions are based on this one, I suggest their authors update them accordingly.
<syntaxhighlight lang="go">package main
 
import (
Line 361:
fmt.Println("Testing steepest descent method:")
fmt.Printf("The minimum is at x = %f, y = %f for which f(x, y) = %f.\n", x[0], x[1], g(x))
}</syntaxhighlight>
 
{{out}}
Line 370:
 
=={{header|Julia}}==
<syntaxhighlight lang="julia">using Optim, Base.MathConstants
 
f(x) = (x[1] - 1) * (x[1] - 1) * e^(-x[2]^2) + x[2] * (x[2] + 2) * e^(-2 * x[1]^2)
 
println(optimize(f, [0.1, -1.0], GradientDescent()))
</syntaxhighlight>{{out}}
<pre>
Results of Optimization Algorithm
Line 400:
The gradient function has been rewritten to remove a term in the partial derivative with respect to y (two terms instead of three). This doesn’t change the result, which agrees with those of Go, Fortran, Julia, etc.
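As a quick sanity check (a sketch I've added, not part of the Nim entry), the two-term partial derivative with respect to y can be compared against a central finite difference of f:

```python
import math

def f(x, y):
    # The task's test function.
    return (x - 1)**2 * math.exp(-y**2) + y*(y + 2)*math.exp(-2*x**2)

def dfdy(x, y):
    # Two terms: d/dy of each summand of f.
    return -2*y*(x - 1)**2*math.exp(-y**2) + (2*y + 2)*math.exp(-2*x**2)

x, y, h = 0.1, -1.0, 1e-6
numeric = (f(x, y + h) - f(x, y - h)) / (2 * h)
print(abs(dfdy(x, y) - numeric))  # prints a tiny finite-difference error
```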
 
<syntaxhighlight lang="nim">import math, strformat
 
 
Line 466:
steepDescent(g, gradG, x, Alpha, Tolerance)
echo "Testing steepest descent method:"
echo &"The minimum is at x = {x[0]:.12f}, y = {x[1]:.12f} for which f(x, y) = {g(x):.12f}"</syntaxhighlight>
 
{{out}}
Line 475:
Calculate with <code>bignum</code> for numerical stability.
{{trans|Raku}}
<syntaxhighlight lang="perl">use strict;
use warnings;
use bignum;
Line 528:
my @x = <0.1 -1>; # Initial guess of location of minimum.
 
printf "The minimum is at x[0] = %.6f, x[1] = %.6f", steepestDescent($alpha, $tolerance, @x);</syntaxhighlight>
{{out}}
<pre>The minimum is at x[0] = 0.107653, x[1] = -1.223370</pre>
Line 534:
=={{header|Phix}}==
{{trans|Go}}
<!--<syntaxhighlight lang="phix">(phixonline)-->
<span style="color: #008080;">with</span> <span style="color: #008080;">javascript_semantics</span>
<span style="color: #000080;font-style:italic;">-- Function for which minimum is to be found.</span>
Line 582:
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"Testing steepest descent method:\n"</span><span style="color: #0000FF;">)</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"The minimum is at x = %.13f, y = %.13f for which f(x, y) = %.15f\n"</span><span style="color: #0000FF;">,</span> <span style="color: #0000FF;">{</span><span style="color: #000000;">x</span><span style="color: #0000FF;">[</span><span style="color: #000000;">1</span><span style="color: #0000FF;">],</span> <span style="color: #000000;">x</span><span style="color: #0000FF;">[</span><span style="color: #000000;">2</span><span style="color: #0000FF;">],</span> <span style="color: #000000;">g</span><span style="color: #0000FF;">(</span><span style="color: #000000;">x</span><span style="color: #0000FF;">)})</span>
<!--</syntaxhighlight>-->
{{out}}
Results match Fortran's; most other samples agree to 6 or 7 decimal places.<br>
Line 600:
I could have used ∇ and Δ in the variable names, but it looked too confusing, so I've gone with <var>grad-</var> and <var>del-</var>
 
<syntaxhighlight lang="racket">#lang racket
 
(define (apply-vector f v)
Line 647:
(module+ main
(Gradient-descent))
</syntaxhighlight>
 
{{out}}
Line 655:
(formerly Perl 6)
{{trans|Go}}
# 20200904 Updated Raku programming solution
<syntaxhighlight lang="raku" line># 20200904 Updated Raku programming solution
 
sub steepestDescent(@x, $alpha is copy, $h) {
Line 694:
say "Testing steepest descent method:";
say "The minimum is at x[0] = ", @x[0], ", x[1] = ", @x[1];
</syntaxhighlight>
{{out}}
<pre>Testing steepest descent method:
Line 702:
=={{header|REXX}}==
The &nbsp; ''tolerance'' &nbsp; can be much smaller; &nbsp; a tolerance of &nbsp; '''1e-200''' &nbsp; was tested. &nbsp; It works, though the program runs a bit more slowly; execution time is still sub-second.
<syntaxhighlight lang="rexx">/*REXX pgm searches for minimum values of the bi─variate function (AKA steepest descent)*/
numeric digits (length( e() ) - length(.) ) % 2 /*use half of number decimal digs in E.*/
tolerance= 1e-30 /*use a much smaller tolerance for REXX*/
Line 743:
numeric form; m.=9; parse value format(x,2,1,,0) 'E0' with g "E" _ .; g=g *.5'e'_ %2
do j=0 while h>9; m.j=h; h= h % 2 + 1; end /*j*/
do k=j+5 to 0 by -1; numeric digits m.k; g=(g+x/g)*.5; end /*k*/; return g</syntaxhighlight>
{{out|output|text=&nbsp; when using the internal default inputs:}}
<pre>
Line 752:
=={{header|Scala}}==
{{trans|Go}}
<syntaxhighlight lang="scala">object GradientDescent {
 
/** Steepest descent method modifying input values*/
Line 826:
}
}
</syntaxhighlight>
{{out}}
<pre>
Line 838:
<br><br>
 
<syntaxhighlight lang="typescript">
// Using the steepest-descent method to search
// for minimum values of a multi-variable function
Line 935:
gradientDescentMain();
 
</syntaxhighlight>
{{out}}
<pre>
Line 946:
* &nbsp; [Linear Regression using Gradient Descent by Adarsh Menon]
<br><br>
<syntaxhighlight lang="typescript">
let data: number[][] =
[[32.5023452694530, 31.70700584656990],
Line 1,094:
 
gradientDescentMain();
</syntaxhighlight>
 
=={{header|Wren}}==
{{trans|Go}}
{{libheader|Wren-fmt}}
<syntaxhighlight lang="ecmascript">import "/fmt" for Fmt
 
// Function for which minimum is to be found.
Line 1,161:
steepestDescent.call(x, alpha, tolerance)
System.print("Testing steepest descent method:")
Fmt.print("The minimum is at x = $f, y = $f for which f(x, y) = $f.", x[0], x[1], g.call(x))</syntaxhighlight>
 
{{out}}
Line 1,171:
=={{header|zkl}}==
{{trans|Go}} with tweaked gradG
<syntaxhighlight lang="zkl">fcn steepestDescent(f, x,y, alpha, h){
g0:=f(x,y); # Initial estimate of result.
fix,fiy := gradG(f,x,y,h); # Calculate initial gradient
Line 1,190:
g0:=f(x,y);
return((f(x + h, y) - g0)/h, (f(x, y + h) - g0)/h)
}</syntaxhighlight>
<syntaxhighlight lang="zkl">fcn f(x,y){ # Function for which minimum is to be found.
(x - 1).pow(2)*(-y.pow(2)).exp() +
y*(y + 2)*(-2.0*x.pow(2)).exp()
Line 1,202:
println("Testing steepest descent method:");
println("The minimum is at (x,y) = (%f,%f). f(x,y) = %f".fmt(x,y,f(x,y)));</syntaxhighlight>
{{out}}
<pre>