Gradient descent

[https://books.google.co.uk/books?id=dFHvBQAAQBAJ&pg=PA543&lpg=PA543&dq=c%23+steepest+descent+method+to+find+minima+of+two+variable+function&source=bl&ots=TCyD-ts9ui&sig=ACfU3U306Og2fOhTjRv2Ms-BW00IhomoBg&hl=en&sa=X&ved=2ahUKEwitzrmL3aXjAhWwVRUIHSEYCU8Q6AEwCXoECAgQAQ#v=onepage&q=c%23%20steepest%20descent%20method%20to%20find%20minima%20of%20two%20variable%20function&f=false This book excerpt] shows sample C# code for solving this task.
<br><br>

=={{header|Fortran}}==
'''Compiler:''' [[gfortran 8.3.0]]<br>
The way a FORTRAN programmer would do this is to differentiate the function symbolically, using the <code>diff</code> command in Maxima:
<pre>
(%i3) (x-1)*(x-1)*exp(-y^2)+y*(y+2)*exp(-2*x^2);
(%o3) (x-1)^2*%e^(-y^2)+%e^(-2*x^2)*y*(y+2)
(%i4) diff(%o3,x);
(%o4) 2*(x-1)*%e^(-y^2)-4*x*%e^(-2*x^2)*y*(y+2)
(%i5) diff(%o3,y);
(%o5) (-2*(x-1)^2*y*%e^(-y^2))+%e^(-2*x^2)*(y+2)+%e^(-2*x^2)*y
</pre>
and then have those expressions turned into FORTRAN statements automatically with the <code>fortran</code> command:
<pre>
(%i6) fortran(%o4);
      2*(x-1)*exp(-y**2)-4*x*exp(-2*x**2)*y*(y+2)
(%o6) done
(%i7) fortran(%o5);
      (-2*(x-1)**2*y*exp(-y**2))+exp(-2*x**2)*(y+2)+exp(-2*x**2)*y
(%o7) done
</pre>
The optimization subroutine GD reports back through the reverse-communication variable IFLAG, which lets the caller evaluate the function and the gradient itself between steps.
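For reference, each pass through GD applies the standard fixed-step gradient-descent update, with learning rate <math>\eta</math> equal to the constant ETA in the code:

<math>\mathbf{x}_{k+1} = \mathbf{x}_k - \eta\,\nabla f(\mathbf{x}_k)</math>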
<lang Fortran>      SUBROUTINE EVALFG (N, X, F, G)
      IMPLICIT NONE
      INTEGER N
      REAL X(N), F, G(N)
      F = (X(1) - 1.)**2 * EXP(-X(2)**2) +
     $    X(2) * (X(2) + 2.) * EXP(-2. * X(1)**2)
      G(1) = 2. * (X(1) - 1.) * EXP(-X(2)**2) - 4. * X(1) *
     $       EXP(-2. * X(1)**2) * X(2) * (X(2) + 2.)
      G(2) = (-2. * (X(1) - 1.)**2 * X(2) * EXP(-X(2)**2)) +
     $       EXP(-2. * X(1)**2) * (X(2) + 2.) +
     $       EXP(-2. * X(1)**2) * X(2)
      RETURN
      END

*-----------------------------------------------------------------------
* GD - Gradient descent.
*      G must be defined before the first call; see GDDEMO.
*
*___Name______Type________In/Out___Description_________________________
*   N         Integer     In       Number of variables
*   X(N)      Real        Both     Variables
*   G(N)      Real        Both     Gradient
*   TOL       Real        In       Relative convergence tolerance
*   IFLAG     Integer     Both     Reverse communication flag:
*                                    0 on input: begin
*                                    0 on output: done
*                                    1 on output: compute G, call again
*-----------------------------------------------------------------------
      SUBROUTINE GD (N, X, G, TOL, IFLAG)
      IMPLICIT NONE
      INTEGER N, IFLAG
      REAL X(N), G(N), TOL
      REAL ETA
      PARAMETER (ETA = 0.3)            ! learning rate
      INTEGER I
      REAL GNORM                       ! norm of gradient

      IF (IFLAG .EQ. 1) GO TO 20

   10 DO I = 1, N                      ! take step
        X(I) = X(I) - ETA * G(I)
      END DO
      IFLAG = 1
      RETURN                           ! let main program evaluate G

   20 GNORM = 0.                       ! convergence test
      DO I = 1, N
        GNORM = GNORM + G(I)**2
      END DO
      GNORM = SQRT(GNORM)
      IF (GNORM .LT. TOL) THEN
        IFLAG = 0
        RETURN                         ! success
      END IF
      GO TO 10
      END                              ! of GD

      PROGRAM GDDEMO
      IMPLICIT NONE
      INTEGER N
      PARAMETER (N = 2)
      INTEGER ITER, J, IFLAG
      REAL X(N), F, G(N), TOL

      X(1) = -0.1                      ! initial values
      X(2) = -1.0
      G(1) = 0.                        ! zero gradient makes the first
      G(2) = 0.                        ! GD pass a no-op step that then
                                       ! requests a real evaluation
      TOL = 1.E-6
      IFLAG = 0
      DO J = 1, 1000000
        CALL GD (N, X, G, TOL, IFLAG)
        IF (IFLAG .EQ. 1) THEN
          CALL EVALFG (N, X, F, G)
        ELSE
          ITER = J
          GO TO 50
        END IF
      END DO
      STOP 'too many iterations!'

   50 PRINT '(A, I7, A, F10.6, A, F10.6, A, F10.6)',
     $      'After ', ITER, ' steps, found minimum at x=',
     $      X(1), ' y=', X(2), ' of f=', F
      STOP 'program complete'
      END
</lang>
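To build and run with the compiler named above, something like the following should suffice (the file name <code>gddemo.f</code> is an assumption, not part of the original entry; <code>-std=legacy</code> merely silences standards warnings for the fixed-form source):
<pre>
gfortran -std=legacy -o gddemo gddemo.f
./gddemo
</pre>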
{{out}}
<pre>After 14 steps, found minimum at x= 0.107627 y= -1.223260 of f= -0.750063
STOP program complete
</pre>
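As a quick sanity check (a sketch, not part of the original entry), the reported point can be substituted back into the derivatives %o4 and %o5 from the Maxima session above; both components of the gradient should come out near zero:
<pre>
(%i8) subst([x=0.107627, y=-1.223260], [%o4, %o5]), numer;
</pre>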


=={{header|Go}}==