ribs.emitters.GradientArborescenceEmitter¶
- class ribs.emitters.GradientArborescenceEmitter(archive, *, x0, sigma0, lr, ranker='2imp', selection_rule='filter', restart_rule='no_improvement', grad_opt='adam', grad_opt_kwargs=None, es='cma_es', es_kwargs=None, normalize_grad=True, bounds=None, batch_size=None, epsilon=1e-08, seed=None)[source]¶
Generates solutions with a gradient arborescence, with coefficients parameterized by an evolution strategy.
This emitter originates in Fontaine 2021. It leverages the gradient information of the objective and measure functions, generating new solutions around a solution point \(\boldsymbol{\theta}\) using gradient arborescence, with coefficients drawn from a Gaussian distribution. Essentially, this means that the emitter samples coefficients \(\boldsymbol{c_i} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})\) and creates new solutions \(\boldsymbol{\theta'_i}\) according to
\[\boldsymbol{\theta'_i} \gets \boldsymbol{\theta} + c_{i,0} \boldsymbol{\nabla} f(\boldsymbol{\theta}) + \sum_{j=1}^k c_{i,j}\boldsymbol{\nabla}m_j(\boldsymbol{\theta})\]
where \(k\) is the number of measures, and \(\boldsymbol{\nabla} f(\boldsymbol{\theta})\) and \(\boldsymbol{\nabla} m_j(\boldsymbol{\theta})\) are the objective and measure gradients of the solution point \(\boldsymbol{\theta}\), respectively.
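The update above can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the pyribs implementation; the names (`theta`, `grad_f`, `grad_m`, `mu`, `sigma`) are hypothetical, and the coefficient distribution is simplified to an isotropic Gaussian for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

solution_dim, k, batch_size = 4, 2, 5
theta = np.zeros(solution_dim)               # current solution point
grad_f = rng.normal(size=solution_dim)       # objective gradient at theta
grad_m = rng.normal(size=(k, solution_dim))  # k measure gradients at theta

# Stack gradients so row 0 is the objective gradient and rows 1..k are the
# measure gradients -- a (1 + k, solution_dim) Jacobian-style matrix.
jacobian = np.vstack([grad_f[None], grad_m])

# Sample coefficient vectors c_i ~ N(mu, sigma^2 I) (isotropic for brevity;
# the emitter adapts a full distribution with an ES such as CMA-ES).
mu, sigma = np.zeros(1 + k), 0.1
coeffs = rng.normal(loc=mu, scale=sigma, size=(batch_size, 1 + k))

# theta'_i = theta + c_{i,0} * grad_f + sum_j c_{i,j} * grad_m_j
solutions = theta + coeffs @ jacobian
print(solutions.shape)  # (5, 4)
```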
Based on how the solutions are ranked after being inserted into the archive (see ranker), the solution point \(\boldsymbol{\theta}\) is updated with gradient ascent, and the coefficient distribution parameters \(\boldsymbol{\mu}\) and \(\boldsymbol{\Sigma}\) are updated with an ES (the default ES is CMA-ES).
Note
Unlike non-gradient emitters, GradientArborescenceEmitter requires calling ask_dqd() and tell_dqd() (in this order) before calling ask() and tell() to communicate the gradient information to the emitter.
See also
Our DQD tutorial goes into detail on how to use this emitter: Generating Tom Cruise Images with DQD Algorithms
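The call-ordering contract above can be made concrete with a small mock. This is not pyribs code: `MockDQDEmitter` is a hypothetical stand-in whose only purpose is to show that ask_dqd()/tell_dqd() must run before ask()/tell() in each iteration, mirroring the RuntimeError documented for ask() and tell() below.

```python
import numpy as np

class MockDQDEmitter:
    """Hypothetical mock illustrating the DQD ask/tell ordering contract."""

    def __init__(self):
        self._grad_ready = False

    def ask_dqd(self):
        # Returns the single solution point maintained by the gradient optimizer.
        return np.zeros((1, 3))

    def tell_dqd(self, jacobian_batch):
        # Receiving the Jacobians unlocks the gradient arborescence step.
        self._grad_ready = True

    def ask(self):
        if not self._grad_ready:
            raise RuntimeError("Call ask_dqd() and tell_dqd() first.")
        return np.zeros((5, 3))

    def tell(self, objective_batch):
        # The next iteration must pass fresh gradients again.
        self._grad_ready = False

emitter = MockDQDEmitter()
sol = emitter.ask_dqd()                # 1. get the solution point
emitter.tell_dqd(np.zeros((1, 3, 3)))  # 2. pass objective/measure gradients
batch = emitter.ask()                  # 3. sampling is now allowed
emitter.tell(np.zeros(5))              # 4. report evaluation results
```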
- Parameters
archive (ribs.archives.ArchiveBase) – An archive to use when creating and inserting solutions. For instance, this can be ribs.archives.GridArchive.
x0 (np.ndarray) – Initial solution.
sigma0 (float) – Initial step size / standard deviation of the distribution of gradient coefficients.
lr (float) – Learning rate for the gradient optimizer.
ranker (Callable or str) – The ranker is a RankerBase object that orders the solutions after they have been evaluated in the environment. This parameter may be a callable (e.g. a class or a lambda function) that takes in no parameters and returns an instance of RankerBase, or it may be a full or abbreviated ranker name as described in ribs.emitters.rankers.
selection_rule ("mu" or "filter") – Method for selecting parents in CMA-ES. With "mu" selection, the first half of the solutions will be selected as parents, while in "filter", any solutions that were added to the archive will be selected.
restart_rule (int, "no_improvement", or "basic") – Method to use when checking for restarts. If given an integer, then the emitter will restart after this many iterations, where each iteration is a call to tell(). With "basic", only the default CMA-ES convergence rules will be used, while with "no_improvement", the emitter will restart when none of the proposed solutions were added to the archive.
grad_opt (Callable or str) – Gradient optimizer to use for the gradient ascent step of the algorithm. The optimizer is a GradientOptBase object. This parameter may be a callable (e.g. a class or a lambda function) which takes in the theta0 and lr arguments, or it may be a full or abbreviated name as described in ribs.emitters.opt.
grad_opt_kwargs (dict) – Additional arguments to pass to the gradient optimizer. See the gradient-based optimizers in ribs.emitters.opt for the arguments allowed by each optimizer. Note that we already pass in theta0 and lr.
es (Callable or str) – The evolution strategy is an EvolutionStrategyBase object that is used to adapt the distribution from which new gradient coefficients are sampled. This parameter may be a callable (e.g. a class or a lambda function) that takes in the parameters of EvolutionStrategyBase along with kwargs provided by the es_kwargs argument, or it may be a full or abbreviated optimizer name as described in ribs.emitters.opt.
es_kwargs (dict) – Additional arguments to pass to the evolution strategy optimizer. See the evolution-strategy-based optimizers in ribs.emitters.opt for the arguments allowed by each optimizer.
normalize_grad (bool) – If True (default), gradient information will be normalized. Otherwise, it will not be normalized.
bounds – This argument may be used for providing solution space bounds in the future. This emitter does not currently support solution space bounds, as bounding solutions for DQD algorithms such as CMA-MEGA is an open problem. Hence, this argument must be set to None.
batch_size (int) – Number of solutions to return in ask(). If not passed in, a batch size will be automatically calculated using the default CMA-ES rules. This does not account for the one solution returned by ask_dqd(), which is the solution point maintained by the gradient optimizer.
epsilon (float) – For numerical stability, we add a small epsilon when normalizing gradients in tell_dqd() – refer to the implementation here. Pass this parameter to configure that epsilon.
seed (int) – Value to seed the random number generator. Set to None to avoid a fixed seed.
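To illustrate how normalize_grad and epsilon interact, here is a minimal sketch of one plausible form of gradient normalization (an assumption about the general technique, not the library's exact code): each gradient is scaled toward unit length, with epsilon guarding against division by zero for near-zero gradients.

```python
import numpy as np

def normalize(grad, epsilon=1e-8):
    """Scale a gradient to (approximately) unit norm; epsilon avoids 0/0."""
    return grad / (np.linalg.norm(grad) + epsilon)

g = np.array([3.0, 4.0])
print(normalize(g))  # approximately [0.6, 0.8]
print(normalize(np.zeros(2)))  # stays finite thanks to epsilon: [0.0, 0.0]
```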
- Raises
ValueError – There is an error in x0 or initial_solutions.
ValueError – bounds is set even though it is not currently supported.
ValueError – If restart_rule, selection_rule, or ranker is invalid.
Methods
ask() – Samples new solutions from a gradient arborescence parameterized by a multivariate Gaussian distribution.
ask_dqd() – Samples a new solution from the gradient optimizer.
tell(solution_batch, objective_batch, ...[, ...]) – Gives the emitter results from evaluating solutions.
tell_dqd(solution_batch, objective_batch, ...) – Gives the emitter results from evaluating the gradient of the solutions.
Attributes
archive – The archive which stores solutions generated by this emitter.
batch_size – Number of solutions to return in ask().
batch_size_dqd – Number of solutions to return in ask_dqd().
epsilon – The epsilon added for numerical stability when normalizing gradients in tell_dqd().
itrs – The number of iterations for this emitter.
lower_bounds – (solution_dim,) array with lower bounds of solution space.
restarts – The number of restarts for this emitter.
solution_dim – The dimension of solutions produced by this emitter.
upper_bounds – (solution_dim,) array with upper bounds of solution space.
x0 – Initial solution for the optimizer.
- ask()[source]¶
Samples new solutions from a gradient arborescence parameterized by a multivariate Gaussian distribution.
The multivariate Gaussian is parameterized by the evolution strategy optimizer self._opt.
This method returns batch_size solutions, in addition to the one solution returned via ask_dqd().
- Returns
(batch_size, solution_dim) array – A batch of new solutions to evaluate.
- Raises
RuntimeError – This method was called without first passing gradients with calls to ask_dqd() and tell_dqd().
- ask_dqd()[source]¶
Samples a new solution from the gradient optimizer.
Call ask_dqd() and tell_dqd() (in this order) before calling ask() and tell().
- Returns
A new solution to evaluate.
- tell(solution_batch, objective_batch, measures_batch, status_batch, value_batch, metadata_batch=None)[source]¶
Gives the emitter results from evaluating solutions.
The solutions are ranked based on the rank() function defined by self._ranker.
- Parameters
solution_batch (array-like) – (batch_size, solution_dim) array of solutions generated by this emitter's ask() method.
objective_batch (array-like) – 1d array containing the objective function value of each solution.
measures_batch (array-like) – (batch_size, measure space dimension) array with the measure space coordinates of each solution.
status_batch (array-like) – 1d array of ribs.archives.AddStatus returned by a series of calls to archive's add() method.
value_batch (array-like) – 1d array of floats returned by a series of calls to archive's add() method. For what these floats represent, refer to ribs.archives.add().
metadata_batch (array-like) – 1d object array containing a metadata object for each solution.
- Raises
RuntimeError – This method was called without first passing gradients with calls to ask_dqd() and tell_dqd().
- tell_dqd(solution_batch, objective_batch, measures_batch, jacobian_batch, status_batch, value_batch, metadata_batch=None)[source]¶
Gives the emitter results from evaluating the gradient of the solutions.
- Parameters
solution_batch (array-like) – (batch_size, solution_dim) array of solutions generated by this emitter's ask_dqd() method.
objective_batch (array-like) – 1d array containing the objective function value of each solution.
measures_batch (array-like) – (batch_size, measure space dimension) array with the measure space coordinates of each solution.
jacobian_batch (array-like) – (batch_size, 1 + measure_dim, solution_dim) array consisting of Jacobian matrices of the solutions obtained from ask_dqd(). Each matrix should consist of the objective gradient of the solution followed by the measure gradients.
status_batch (array-like) – 1d array of ribs.archives.AddStatus returned by a series of calls to archive's add() method.
value_batch (array-like) – 1d array of floats returned by a series of calls to archive's add() method. For what these floats represent, refer to ribs.archives.add().
metadata_batch (array-like) – 1d object array containing a metadata object for each solution.
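As a worked example of the jacobian_batch layout, the following NumPy sketch (hypothetical variable names; the gradients here are random placeholders for real evaluation results) stacks each solution's objective gradient on top of its measure gradients to form the required (batch_size, 1 + measure_dim, solution_dim) array.

```python
import numpy as np

batch_size, measure_dim, solution_dim = 4, 2, 3
rng = np.random.default_rng(42)

# Placeholder gradients; in practice these come from evaluating the
# objective and measure functions on the solutions from ask_dqd().
obj_grads = rng.normal(size=(batch_size, solution_dim))
measure_grads = rng.normal(size=(batch_size, measure_dim, solution_dim))

# Row 0 of each matrix is the objective gradient; rows 1..measure_dim are
# the measure gradients, matching the layout described above.
jacobian_batch = np.concatenate([obj_grads[:, None, :], measure_grads], axis=1)
print(jacobian_batch.shape)  # (4, 3, 3)
```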
- property archive¶
The archive which stores solutions generated by this emitter.
- property batch_size_dqd¶
Number of solutions to return in ask_dqd().
This is always 1, as we only return the solution point in ask_dqd().
- Type
int
- property epsilon¶
The epsilon added for numerical stability when normalizing gradients in tell_dqd().
- Type
float
- property lower_bounds¶
(solution_dim,) array with lower bounds of solution space.
For instance, [-1, -1, -1] indicates that every dimension of the solution space has a lower bound of -1.
- Type
np.ndarray
- property upper_bounds¶
(solution_dim,) array with upper bounds of solution space.
For instance, [1, 1, 1] indicates that every dimension of the solution space has an upper bound of 1.
- Type
np.ndarray
- property x0¶
Initial solution for the optimizer.
- Type
np.ndarray