symforce.opt.optimizer module#
- class Optimizer(factors, optimized_keys=None, params=None, debug_stats=None, include_jacobians=None)[source]#
Bases:
object
A nonlinear least-squares optimizer
Typical usage is to construct an Optimizer from a set of factors and keys to optimize, and then call optimize() repeatedly with a Values.
Example creation with a single Factor:
    factor = Factor(
        [my_key_0, my_key_1, my_key_2], my_residual_function
    )
    optimizer = Optimizer(
        factors=[factor],
        optimized_keys=[my_key_0, my_key_1],
    )
And usage:
    initial_guess = Values(...)
    result = optimizer.optimize(initial_guess)
    print(result.optimized_values)
Example creation with an optimization_problem.OptimizationProblem using make_numeric_factors(). The linearization functions are generated in make_numeric_factors() and are linearized with respect to problem.optimized_keys():
    problem = OptimizationProblem(subproblems=[...], residual_blocks=...)
    factors = problem.make_numeric_factors("my_problem")
    optimizer = Optimizer(factors)
Example creation with an optimization_problem.OptimizationProblem using make_symbolic_factors(). The symbolic factors are converted into numeric factors when the optimizer is created, and are linearized with respect to the "optimized keys" passed to the optimizer; the linearization functions are generated during this conversion:
    problem = OptimizationProblem(subproblems=[...], residual_blocks=...)
    factors = problem.make_symbolic_factors("my_problem")
    optimizer = Optimizer(factors, problem.optimized_keys())
Wraps the C++ sym::Optimizer class in opt/optimizer.h, so the API is mostly the same and optimization results will be identical.
- Parameters:
factors (T.Iterable[T.Union[Factor, NumericFactor]]) – A sequence of either Factor or NumericFactor objects representing the residuals in the problem. If (symbolic) Factors are passed, they are converted to NumericFactors by generating linearization functions of the residual with respect to the keys in optimized_keys.
optimized_keys (T.Optional[T.Sequence[str]]) – A set of the keys to be optimized. Only required if symbolic factors are passed to the optimizer.
params (T.Optional[OptimizerParams]) – Params for the optimizer. Defaults are in OptimizerParams, except that verbose is True by default.
debug_stats (T.Optional[bool]) –
include_jacobians (T.Optional[bool]) –
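For reference, a minimal sketch of constructing with explicit params (using the Params alias documented below; the OptimizerParams field names iterations and verbose are assumptions):
    params = Optimizer.Params(iterations=100, verbose=False)  # field names assumed
    optimizer = Optimizer(
        factors=[factor],
        optimized_keys=[my_key_0, my_key_1],
        params=params,
    )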
- Params#
alias of
OptimizerParams
- Status#
alias of
optimization_status_t
- FailureReason#
alias of
levenberg_marquardt_solver_failure_reason_t
- class Result(initial_values, optimized_values, _stats)[source]#
Bases:
object
The result of an optimization, with additional stats and debug information
- Parameters:
initial_values (Values) –
optimized_values (Values) –
_stats (OptimizationStats) –
- initial_values#
The initial guess used for this optimization
- optimized_values#
The best Values achieved during the optimization (Values with the smallest error)
- iterations#
Per-iteration stats, if requested, like the error per iteration. If debug stats are turned on, also the Values and linearization per iteration.
- best_index#
The index into iterations for the iteration that produced the smallest error, i.e. result.iterations[best_index].values == optimized_values. This is not guaranteed to be the last iteration, if the optimizer tried additional steps which did not reduce the error.
- status#
What was the result of the optimization? (did it converge, fail, etc.)
- failure_reason#
If status == FAILED, why?
- best_linearization#
The linearization at best_index (at optimized_values), filled out if populate_best_linearization=True
- jacobian_sparsity#
The sparsity pattern of the jacobian, filled out if debug_stats=True and include_jacobians=True
- linear_solver_ordering#
The ordering used for the linear solver, filled out if debug_stats=True
- cholesky_factor_sparsity#
The sparsity pattern of the cholesky factor L, filled out if debug_stats=True
- property status: optimization_status_t#
- property failure_reason: levenberg_marquardt_solver_failure_reason_t#
- property best_linearization: Linearization | None#
- property jacobian_sparsity: sparse_matrix_structure_t#
- property cholesky_factor_sparsity: sparse_matrix_structure_t#
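As a sketch of checking the outcome of an optimization (continuing the usage example above; the specific enum member names of optimization_status_t, such as SUCCESS, are assumptions):
    result = optimizer.optimize(initial_guess)
    if result.status != Optimizer.Status.SUCCESS:  # enum member name assumed
        print(f"Optimization did not converge: {result.failure_reason}")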
- compute_all_covariances(optimized_value)[source]#
Compute the covariance matrix (J^T@J)^-1 for all optimized keys about a given linearization point
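A small usage sketch, assuming the method returns a dict mapping each optimized key to its covariance block as a numpy array:
    covariances = optimizer.compute_all_covariances(result.optimized_values)
    my_key_0_cov = covariances[my_key_0]  # square covariance block for this key (assumed return layout)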
- compute_covariances(optimized_value, keys)[source]#
Get covariances for the given subset of keys at the given linearization
This version is potentially much more efficient than computing the covariances for all keys in the problem.
Currently requires that keys be a prefix of the full problem's key list (the first keys, in the same order). It uses the Schur complement trick, so it will be most efficient if the Hessian has the following form, with C block diagonal:
    A = (  B    E )
        ( E^T   C )
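For example, a sketch assuming the same dict-of-arrays return convention as compute_all_covariances, with my_key_0 being the first key in the full problem's ordering:
    covariances = optimizer.compute_covariances(result.optimized_values, keys=[my_key_0])
    my_key_0_cov = covariances[my_key_0]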
- compute_full_covariance(optimized_value)[source]#
Get the full problem covariance at the given linearization
Unlike compute_covariances and compute_all_covariances, this includes the off-diagonal blocks, i.e. the cross-covariances between different keys.
The ordering of entries here is the same as the ordering of the keys in the linearization, which can be accessed via linearization_index().
May not be called before either optimize or linearize has been called.
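A usage sketch, assuming a single dense numpy array is returned, ordered according to linearization_index():
    # Full tangent-space covariance, including cross-covariances between keys
    full_covariance = optimizer.compute_full_covariance(result.optimized_values)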
- optimize(initial_guess, **kwargs)[source]#
Optimize from the given initial guess, and return the optimized Values and stats
- Parameters:
initial_guess (Values) – A Values containing the initial guess, should contain at least all the keys required by the factors passed to the constructor
num_iterations – If < 0 (the default), uses the number of iterations specified by the params at construction
populate_best_linearization – If true, the linearization at the best values will be filled out in the stats
kwargs (Any) –
- Returns:
The optimization results, with additional stats and debug information. See the Optimizer.Result documentation for more information
- Return type:
Optimizer.Result
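For instance, a sketch of overriding the per-call options documented above:
    result = optimizer.optimize(
        initial_guess,
        num_iterations=50,
        populate_best_linearization=True,
    )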
- linearize(values)[source]#
Compute and return the linearization at the given Values
- Parameters:
values (Values) –
- Return type:
Linearization
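A sketch of evaluating the problem at a given point without running an optimization (the residual attribute on the returned Linearization is an assumption):
    linearization = optimizer.linearize(initial_guess)
    print(linearization.residual)  # attribute name assumed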
- load_iteration_values(values_msg)[source]#
Load a values_t message into a Python Values by first creating a C++ Values, then converting back to the Python key names.
- Parameters:
values_msg (values_t) –
- Return type:
Values
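For example, a sketch of replaying per-iteration values when debug stats are enabled (assumes each entry of result.iterations carries a values_t message in its values field when debug_stats=True):
    for iteration in result.iterations:
        values_at_iteration = optimizer.load_iteration_values(iteration.values)  # field name assumed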
- linearization_index()[source]#
Get the index mapping keys to their positions in the linearized state vector. Useful for extracting blocks from the problem jacobian, hessian, or RHS
Returns: The index for the Optimizer’s problem linearization
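A sketch of using the index to pull one key's block out of the covariance from compute_full_covariance() (the offset and tangent_dim fields on index_entry_t are assumptions):
    index = optimizer.linearization_index()
    entry = index[my_key_0]
    start, dim = entry.offset, entry.tangent_dim  # field names assumed
    my_key_0_cov = full_covariance[start : start + dim, start : start + dim]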
- linearization_index_entry(key)[source]#
Get the index entry for a given key in the linearized state vector. Useful for extracting blocks from the problem jacobian, hessian, or RHS
- Parameters:
key (str) – The string key for a variable in the Python Values
- Return type:
index_entry_t
Returns: The index entry for the variable in the Optimizer’s problem linearization
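Similarly, a sketch of extracting one key's columns from a problem jacobian (assuming a jacobian attribute on the Linearization returned by linearize(), and the same offset and tangent_dim fields as above):
    entry = optimizer.linearization_index_entry(my_key_0)
    linearization = optimizer.linearize(result.optimized_values)
    key_jacobian_cols = linearization.jacobian[:, entry.offset : entry.offset + entry.tangent_dim]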