Talk:Gradient descent

== Needs more information ==
::I first noticed that there was a discrepancy a couple of days back when I was trying to add a Wren example. My first attempt was a straight translation of the Go code, which gave results of: x[0] = 0.10781894131876, x[1] = -1.2231932529554.
 
::I then decided to switch horses and use zkl's 'tweaked' gradG function, which gave results very close to zkl itself, so I posted that. Incidentally, I wasn't surprised that there was a small discrepancy here as I'm using a rather crude Math.exp function (basically I apply the power function to e = 2.71828182845904523536) pending the inclusion of a more accurate one in the next version of Wren's standard library, which will call the C library function exp().
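::(As an aside, a minimal Go sketch of the "crude exp" point, assuming nothing about Wren's internals: building exp(x) from a power function and a truncated decimal constant for e can differ from a dedicated exp() in the last few digits, and over many iterations such differences can show up in the final result. The constant and the sample x value below are taken from this discussion; everything else is illustrative.)
<lang go>// A sketch of the "crude exp" idea described above: raising a truncated
// decimal constant for e to the power x, rather than calling a dedicated
// exp(). Go's math.Pow is accurate, so the printed difference may be tiny
// or even zero here; a cruder pow (as in a language without a library exp)
// can introduce larger errors, which then feed into every gradient step.
package main

import (
	"fmt"
	"math"
)

const eApprox = 2.71828182845904523536 // the constant quoted above

func crudeExp(x float64) float64 { return math.Pow(eApprox, x) }

func main() {
	x := -1.2231932529554
	fmt.Printf("pow(e, x) = %.17g\n", crudeExp(x))
	fmt.Printf("exp(x)    = %.17g\n", math.Exp(x))
	fmt.Printf("diff      = %g\n", crudeExp(x)-math.Exp(x))
}</lang>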
 
::So I don't know where all this leaves us. There are doubtless several factors at work here and, as you say, changing the initial guess leads to different results. Something else which leads to different results is whether one allows gradG to mutate 'x'. As the Go code stands, it copies 'x' to 'y' and so doesn't mutate the former. However, it looks to me as though some translations may be indirectly mutating 'x' (depending on whether arrays are reference or value types in those languages) by simply assigning 'x' to 'y'. If I make this change in the Go code, the results are: x[0] = 0.10773473656605767, x[1] = -1.2231782829927973 and in the Wren code: x[0] = 0.10757894411096, x[1] = -1.2230849416131, so it does make quite a difference. --[[User:PureFox|PureFox]] ([[User talk:PureFox|talk]]) 10:11, 3 September 2020 (UTC)
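::(For illustration, a minimal Go sketch of the copy-versus-alias distinction described above; it is not the task code, just the slice behaviour that makes the indirect mutation of 'x' possible in reference-semantics translations.)
<lang go>// A minimal sketch of the copy-versus-alias point above, not the task code:
// in Go, copying the slice contents leaves x untouched, whereas assigning
// the slice header aliases the same backing array, so writes through y
// also change x (which is what "y = x" does where arrays are reference types).
package main

import "fmt"

func main() {
	x := []float64{0.1, -1.0}

	// What the Go sample effectively does: copy the values, so x is preserved.
	yCopy := make([]float64, len(x))
	copy(yCopy, x)
	yCopy[0] += 0.5
	fmt.Println("after copy:  x =", x) // x is unchanged

	// What a straight "y = x" translation does in a reference-semantics language:
	// both names now share storage, so mutating y mutates x as well.
	yAlias := x
	yAlias[0] += 0.5
	fmt.Println("after alias: x =", x) // x[0] has changed
}</lang>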
 
:::Interesting.
:::I looked at the Go sample's gradG (which, as you say, a lot of the others use). I'm not sufficiently au fait with the mathematics to say how good an approximation the gradG function is, but I see it involves dividing by h, which starts out set to the tolerance and then gets halved on each iteration. It must be something like the actual gradient as the samples sort of agree. I hadn't noticed the possibility of the mutation of x - that's a good point.
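:::(For reference, a hedged sketch of the kind of finite-difference approximation gradG appears to be; the exact form in the Go sample may differ, but the idea of dividing a function difference by the step h is the same. The names gradApprox and the stand-in objective below are purely illustrative, not from the task.)
<lang go>// A sketch (not the task's actual gradG) of a finite-difference gradient:
// each partial derivative is estimated as (f(x + h*e_i) - f(x)) / h,
// so its accuracy depends directly on the step h.
package main

import "fmt"

func gradApprox(f func([]float64) float64, x []float64, h float64) []float64 {
	g := make([]float64, len(x))
	fx := f(x)
	y := make([]float64, len(x))
	copy(y, x) // work on a copy so that x itself is never mutated
	for i := range x {
		y[i] += h
		g[i] = (f(y) - fx) / h
		y[i] = x[i] // restore the perturbed component
	}
	return g
}

func main() {
	// A stand-in objective just to exercise the sketch; its exact gradient
	// at (3, -2) is (4, -4), so the approximation should be close to that.
	f := func(v []float64) float64 { return (v[0]-1)*(v[0]-1) + v[1]*v[1] }
	fmt.Println(gradApprox(f, []float64{3, -2}, 1e-7))
}</lang>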
 
:::I substituted the actual gradient function (as used in the Fortran sample) and removed h, and again I get the same results as Fortran and Julia (to 6 places). That the original Algol 68 sample agreed with those is possibly a coincidence, but I am now more confident that the result is in the region of the Julia/Fortran results.
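:::(A sketch of what substituting the actual gradient might look like, assuming the task's objective is f(x,y) = (x-1)^2 * e^(-y^2) + y(y+2) * e^(-2x^2); that assumption, and the gradExact name, are mine, and the Fortran sample referred to above should be taken as the authoritative form.)
<lang go>// A sketch of an analytic gradient, assuming the task's objective is
// f(x,y) = (x-1)^2 * exp(-y^2) + y*(y+2) * exp(-2*x^2); if the task in fact
// uses a different function, the derivatives below would change accordingly.
package main

import (
	"fmt"
	"math"
)

func gradExact(v []float64) []float64 {
	x, y := v[0], v[1]
	ey := math.Exp(-y * y)     // exp(-y^2)
	ex := math.Exp(-2 * x * x) // exp(-2x^2)
	return []float64{
		2*(x-1)*ey - 4*x*y*(y+2)*ex,      // df/dx
		-2*y*(x-1)*(x-1)*ey + 2*(y+1)*ex, // df/dy
	}
}

func main() {
	// If the assumed objective is right, both components should be small
	// near the results quoted above (x[0] ~ 0.1076, x[1] ~ -1.2232).
	fmt.Println(gradExact([]float64{0.107627, -1.223130}))
}</lang>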
 
:::I suspect that Julia is also working from the actual gradient, as it is (I presume) using a built-in minimising function that takes the exact gradient rather than an approximation.
:::--[[User:Tigerofdarkness|Tigerofdarkness]] ([[User talk:Tigerofdarkness|talk]]) 12:08, 3 September 2020 (UTC)
 
== promoted from draft? ==