ribs.emitters.opt.GradientOptBase

class ribs.emitters.opt.GradientOptBase(theta0, lr)[source]

Base class for gradient-based optimizers.

Note

These optimizers are designed for gradient ascent rather than gradient descent.

These optimizers maintain a current solution point \(\theta\). The solution point is obtained with the theta property, and it is updated by passing a gradient to step(). Finally, the point can be reset to a new value with reset().

Subclass constructors may accept additional arguments beyond theta0 and lr, but these two arguments will always be passed in.
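The interface above can be sketched with a concrete subclass implementing plain gradient ascent. This is a minimal illustration, not pyribs code: a stand-in base class mirroring the documented interface is defined so the sketch runs without pyribs installed, and the subclass name GradientAscentOpt is hypothetical.

```python
import numpy as np
from abc import ABC, abstractmethod


class GradientOptBase(ABC):
    """Stand-in mirroring the documented interface (not the pyribs class)."""

    def __init__(self, theta0, lr):
        pass

    @abstractmethod
    def reset(self, theta0):
        """Resets the solution point to a new value."""

    @abstractmethod
    def step(self, gradient):
        """Ascends the solution based on the given gradient."""

    @property
    @abstractmethod
    def theta(self):
        """The current solution point."""


class GradientAscentOpt(GradientOptBase):
    """Plain gradient ascent: theta <- theta + lr * gradient."""

    def __init__(self, theta0, lr):
        self._lr = lr
        self.reset(theta0)

    def reset(self, theta0):
        # Copy so later steps do not mutate the caller's array.
        self._theta = np.asarray(theta0, dtype=float).copy()

    def step(self, gradient):
        # Ascent adds the scaled gradient (descent would subtract it).
        self._theta += self._lr * np.asarray(gradient, dtype=float)

    @property
    def theta(self):
        return self._theta


opt = GradientAscentOpt(theta0=[0.0, 0.0], lr=0.1)
opt.step([1.0, -2.0])
print(opt.theta)  # [ 0.1 -0.2]
```

Note that reset() replaces the stored point entirely, so the same optimizer instance can be reused for a new solution without reconstructing it.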

Parameters
  • theta0 (array-like) – Initial solution. 1D array.

  • lr (float) – Learning rate for the update.

Methods

reset(theta0)

Resets the solution point to a new value.

step(gradient)

Ascends the solution based on the given gradient.

Attributes

theta

The current solution point.

abstract reset(theta0)[source]

Resets the solution point to a new value.

Parameters

theta0 (array-like) – The new solution point. 1D array.

abstract step(gradient)[source]

Ascends the solution based on the given gradient.

Parameters

gradient (array-like) – The (estimated) gradient of the objective at the current solution point. 1D array.
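Since these optimizers perform gradient ascent (see the Note above), each step moves along the positive gradient direction to increase the objective. A hedged numeric sketch of this update rule, independent of pyribs:

```python
import numpy as np


def ascent_step(theta, gradient, lr):
    # Gradient *ascent*: add lr * gradient to increase the objective.
    return theta + lr * np.asarray(gradient, dtype=float)


# Maximize f(theta) = -(theta - 3)**2, whose gradient is -2 * (theta - 3).
# Repeated ascent steps drive theta toward the maximizer at 3.
theta = np.array([0.0])
for _ in range(100):
    theta = ascent_step(theta, -2.0 * (theta - 3.0), lr=0.1)
print(theta)  # approximately [3.]
```

To minimize instead, a caller would negate the gradient before passing it to step().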

abstract property theta

The current solution point.