Gradient descent: Difference between revisions

m (→‎{{header|REXX}}: changed to use a much smaller tolerance, added wording to the REXX section header.)
m (→‎{{header|Phix}}: <small> unnecessary)
printf(1,"The minimum is at x = %.13f, y = %.13f for which f(x, y) = %.16f\n", {x[1], x[2], g(x)})</lang>
{{out}}
Results now match (at least) Algol 68/W, Fortran, Go, Julia, Raku, REXX, and Wren, to 6dp or better.<br>
Note that specifying a tolerance < 1e-7 causes an infinite loop in Phix, whereas REXX copes with a much smaller tolerance.<br>
Results on 32/64 bit Phix agree to 13dp, which I therefore choose to show in full here (but otherwise would not really trust).
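To illustrate the tolerance behaviour discussed above, here is a minimal sketch in Python of the same idea: gradient descent on the task's test function f(x, y) = (x−1)²·e^(−y²) + y·(y+2)·e^(−2x²), stopping when the step size drops below a tolerance. The step size alpha, starting point, and numerical-gradient spacing h are illustrative choices, not taken from any of the listed solutions; with double precision a tolerance far below ~1e-7 may indeed never be reached.

```python
import math

def f(p):
    # Test function from the Gradient descent task:
    # f(x, y) = (x-1)^2 * e^(-y^2) + y*(y+2) * e^(-2x^2)
    x, y = p
    return (x - 1)**2 * math.exp(-y**2) + y * (y + 2) * math.exp(-2 * x**2)

def grad(p, h=1e-7):
    # Central-difference numerical gradient (h is an illustrative choice)
    g = []
    for i in range(len(p)):
        hi = list(p); hi[i] += h
        lo = list(p); lo[i] -= h
        g.append((f(hi) - f(lo)) / (2 * h))
    return g

def descend(p, alpha=0.1, tol=1e-7, max_iter=100_000):
    # Fixed-step gradient descent; stop once the step norm falls below tol.
    # In double precision, a much smaller tol may never be satisfied,
    # which is why an overly tight tolerance can loop forever.
    for _ in range(max_iter):
        step = [alpha * gi for gi in grad(p)]
        p = [pi - si for pi, si in zip(p, step)]
        if math.hypot(*step) < tol:
            break
    return p

pmin = descend([0.1, -1.0])
print("minimum near x = %.6f, y = %.6f, f = %.6f" % (pmin[0], pmin[1], f(pmin)))
```

Starting from (0.1, −1.0) this converges to roughly x ≈ 0.1076, y ≈ −1.2233, in line with the results the solutions above agree on to 6dp or better.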
<pre>