neuralnetworks.optimizer.gradient-descent

gradient-descent

(gradient-descent initial-learning-rate learning-rate-update-rate)

Creates a new instance of the gradient descent optimizer. It uses backtracking line search to find a good value of the learning rate (alpha), allowing it to converge faster.
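The following is a minimal sketch of the backtracking idea, assuming the learning rate is shrunk whenever a step fails to lower the cost; shrink-alpha, cost-fn, and the step rule here are illustrative, not the library's internals:

(defn shrink-alpha
  "Multiplies alpha by update-rate until stepping by alpha lowers the cost."
  [cost-fn thetas gradients alpha update-rate]
  (let [current-cost (:cost (cost-fn thetas))
        step         (fn [a] (mapv (fn [t g] (- t (* a g))) thetas gradients))]
    (loop [a alpha]
      (if (or (< (:cost (cost-fn (step a))) current-cost)
              (< a 1e-12)) ; guard against shrinking forever
        a
        (recur (* a update-rate))))))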

To disable backtracking line search, set learning-rate-update-rate to 1.0.

learning-rate-update-rate must be in the interval (0, 1].
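Hypothetical construction, assuming this namespace is aliased as gd (the alias and argument values are illustrative):

(require '[neuralnetworks.optimizer.gradient-descent :as gd])

;; shrink alpha to half its value whenever backtracking triggers
(def optimizer (gd/gradient-descent 0.01 0.5))

;; a learning-rate-update-rate of 1.0 disables backtracking line search
(def fixed-alpha-optimizer (gd/gradient-descent 0.01 1.0))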

The cost function must return both the gradients and the cost value, e.g.:

{:cost 1.5142
 :gradients [1.2 -0.5]}
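For illustration, a toy cost function returning this shape for f(thetas) = sum of squared thetas; squared-cost is a hypothetical name, not part of this library:

(defn squared-cost
  "Cost is the sum of squared thetas; the gradient of theta^2 is 2 * theta."
  [thetas]
  {:cost      (reduce + (map #(* % %) thetas))
   :gradients (mapv #(* 2.0 %) thetas)})

(squared-cost [1.2 -0.5])
;; => {:cost 1.69, :gradients [2.4 -1.0]} (up to floating-point rounding)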

update-thetas

(update-thetas thetas theta-gradients alpha)

Updates the thetas (weights) based on the provided gradients and alpha. The gradients must have the same dimensions as the thetas.
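A minimal sketch of the update rule described above (theta := theta - alpha * gradient), assuming the thetas are plain Clojure vectors; the library may operate on matrices instead:

(defn update-thetas-sketch
  "Subtracts alpha * gradient from each theta, element-wise."
  [thetas theta-gradients alpha]
  (mapv (fn [theta gradient] (- theta (* alpha gradient)))
        thetas
        theta-gradients))

(update-thetas-sketch [0.5 -0.3] [1.2 -0.5] 0.1)
;; => [0.38 -0.25] (up to floating-point rounding)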