Gradient descent: Difference between revisions

Note the different implementation of <code>grad</code>. I believe that the vector should be reset and only the partial derivative in a particular dimension should be used. For this reason, I get ''yet another'' result!
 
I could have used ∇ and Δ in the variable names, but it looked too confusing, so I've gone with <var>grad-</var> and <var>del-</var> instead.
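To make the "reset the vector, take one partial at a time" idea concrete, here is a minimal central-difference sketch, separate from the entry's own code below. The names <code>numeric-grad</code> and <code>descend</code>, the step <code>h</code>, the rate <code>eta</code> and the test function are all illustrative assumptions, not the entry's definitions.

<lang racket>#lang racket
;; Sketch only: a numerical gradient that, for each dimension, starts again
;; from the unperturbed point and nudges only that one coordinate, so the
;; vector is "reset" between partial derivatives.

(define h 1e-6)    ; finite-difference step (assumed)
(define eta 0.01)  ; learning rate (assumed)

;; numeric-grad : ((vectorof real) -> real) (vectorof real) -> (vectorof real)
(define (numeric-grad f v)
  (for/vector ([i (in-range (vector-length v))])
    ;; fresh copies of the original point; perturb only dimension i
    (define v+ (vector-copy v))
    (define v- (vector-copy v))
    (vector-set! v+ i (+ (vector-ref v i) h))
    (vector-set! v- i (- (vector-ref v i) h))
    (/ (- (f v+) (f v-)) (* 2 h))))

;; a plain fixed-step descent loop, run for a fixed number of iterations
(define (descend f v0 [steps 10000])
  (for/fold ([v v0]) ([_ (in-range steps)])
    (define g (numeric-grad f v))
    (for/vector ([vi (in-vector v)] [gi (in-vector g)])
      (- vi (* eta gi)))))

;; example: minimise the paraboloid (x-3)^2 + (y+2)^2
(define (paraboloid v)
  (+ (sqr (- (vector-ref v 0) 3))
     (sqr (+ (vector-ref v 1) 2))))

(descend paraboloid (vector 0.0 0.0))  ; ≈ #(3.0 -2.0)
</lang>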
 
<lang racket>#lang racket