Gradient descent: Difference between revisions

Note the different implementation of <code>grad</code>. I believe the vector should be reset for each dimension, so that only the partial derivative in that particular dimension is used. For this reason, I get _yet another_ result!
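The reset described above can be sketched as a central-difference numerical gradient, where each partial derivative perturbs exactly one coordinate of a fresh copy of the point (a minimal sketch; the function <var>f</var>, the point <var>x</var>, and the step <var>h</var> here are illustrative assumptions, not the task's own definitions):

<lang racket>#lang racket
;; sketch: numerical gradient by central differences,
;; resetting the vector before each partial derivative
(define (numeric-grad f x [h 1e-6])
  (for/vector ([i (in-range (vector-length x))])
    ;; fresh copies of x: only coordinate i is perturbed
    (define x+ (vector-copy x))
    (define x- (vector-copy x))
    (vector-set! x+ i (+ (vector-ref x i) h))
    (vector-set! x- i (- (vector-ref x i) h))
    (/ (- (f x+) (f x-)) (* 2 h))))</lang>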


I could have used ∇ and Δ in the variable names, but it looked too confusing, so I've gone with <var>grad-</var> and <var>del-</var>.


<lang racket>#lang racket