ribs.emitters.opt.GradientOptBase¶
- class ribs.emitters.opt.GradientOptBase(theta0, lr)[source]¶
Base class for gradient-based optimizers.
Note
These optimizers are designed for gradient ascent rather than gradient descent.
These optimizers maintain a current solution point \(\theta\). The solution point is obtained with the theta property, updated by passing a gradient to step(), and reset to a new value with reset().
Your constructor may take in additional arguments beyond theta0 and lr, but expect that these two arguments will always be passed in.
- Parameters
theta0 (array-like) – Initial solution. 1D array.
lr (float) – Learning rate for the update.
Methods
reset(theta0) – Resets the solution point to a new value.
step(gradient) – Ascends the solution based on the given gradient.
Attributes
theta – The current solution point.
- abstract reset(theta0)[source]¶
Resets the solution point to a new value.
- Parameters
theta0 (array-like) – The new solution point. 1D array.
- abstract step(gradient)[source]¶
Ascends the solution based on the given gradient.
- Parameters
gradient (array-like) – The (estimated) gradient of the current solution point. 1D array.
- abstract property theta¶
The current solution point.
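As a minimal sketch of how a concrete optimizer could satisfy this interface, the class below implements plain gradient ascent with a fixed learning rate. The name GradientAscentOpt is hypothetical, not part of the library; in practice, such a class would subclass ribs.emitters.opt.GradientOptBase, which is omitted here so the sketch stays self-contained.

```python
import numpy as np


class GradientAscentOpt:
    """Hypothetical sketch: vanilla gradient ascent with a fixed lr.

    In real code this would subclass GradientOptBase and implement its
    abstract theta, reset(), and step() members.
    """

    def __init__(self, theta0, lr):
        self._theta = np.asarray(theta0, dtype=float).copy()
        self._lr = lr

    @property
    def theta(self):
        """The current solution point."""
        return self._theta

    def reset(self, theta0):
        """Resets the solution point to a new value."""
        self._theta = np.asarray(theta0, dtype=float).copy()

    def step(self, gradient):
        """Ascends the solution: theta <- theta + lr * gradient."""
        self._theta = self._theta + self._lr * np.asarray(gradient, dtype=float)
```

Note that step() moves *in the direction of* the gradient (ascent), matching the note above that these optimizers perform gradient ascent rather than descent.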