File marginalization.h

namespace sym

Typedefs

using MarginalizationFactorf = MarginalizationFactor<float>
using MarginalizationFactord = MarginalizationFactor<double>

Functions

std::vector<Key> ComputeMarginalizationKeyOrder(const std::unordered_set<Key> &keys_to_optimize, const std::unordered_set<Key> &keys_to_marginalize)
template<typename Scalar>
std::variant<MarginalizationFactor<Scalar>, Eigen::ComputationInfo> ComputeSchurComplement(const MatrixX<Scalar> &H, const VectorX<Scalar> &rhs, Scalar c, int delimiter)
template<typename Scalar>
std::variant<MarginalizationFactor<Scalar>, Eigen::ComputationInfo> Marginalize(const std::vector<Factor<Scalar>> &factors, const Values<Scalar> &values, const std::unordered_set<Key> &keys_to_optimize, const std::unordered_set<Key> &keys_to_marginalize)

Given the set of factors and keys to marginalize, computes the data needed for a marginalization factor. This assumes all factors passed in should be included in the computation.

Given the factors, we compute a linearization at the provided values. The system becomes:

E = 0.5 * | x_u |.T * | H_uu  H_ul | * | x_u | + | rhs_u |.T * | x_u | + 0.5 * c
          | x_l |     | H_lu  H_ll |   | x_l |   | rhs_l |     | x_l |

where x_u are the states to be marginalized and x_l are the Markov blanket (the states that remain and have factors that connect to the marginalized states). We use the Schur complement to eliminate x_u, giving the final system:

E = 0.5 * x_l.T * H * x_l + rhs.T * x_l + 0.5 * c'

where

H   = H_ll  - H_lu * H_uu^{-1} * H_ul  = H_ll  - H_ul.T * H_uu^{-1} * H_ul
rhs = rhs_l - H_lu * H_uu^{-1} * rhs_u = rhs_l - H_ul.T * H_uu^{-1} * rhs_u
c'  = c - rhs_u.T * H_uu^{-1} * rhs_u

There are a few references for the H and rhs expressions above (e.g. the OKVIS paper). For the constant term, find the optimum for x_u by taking the partial derivative with respect to x_u, giving x_u* = -H_uu^{-1} * (H_ul * x_l + rhs_u), then substitute and simplify.
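The Schur complement elimination above can be checked numerically. The following is a minimal sketch in NumPy (standing in for the Eigen/symforce types); the block names H_uu, H_ul, rhs_u, etc. mirror the derivation and are not symforce API calls:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric positive-definite Hessian, split into
# marginalized (u) and remaining (l) blocks.
n_u, n_l = 2, 3
J = rng.standard_normal((8, n_u + n_l))
H = J.T @ J
rhs = rng.standard_normal(n_u + n_l)
c = 4.2

H_uu, H_ul = H[:n_u, :n_u], H[:n_u, n_u:]
H_ll = H[n_u:, n_u:]
rhs_u, rhs_l = rhs[:n_u], rhs[n_u:]

H_uu_inv = np.linalg.inv(H_uu)

# Schur complement: eliminate x_u from the quadratic.
H_marg = H_ll - H_ul.T @ H_uu_inv @ H_ul
rhs_marg = rhs_l - H_ul.T @ H_uu_inv @ rhs_u
c_marg = c - rhs_u @ H_uu_inv @ rhs_u

def energy(x_u, x_l):
    x = np.concatenate([x_u, x_l])
    return 0.5 * x @ H @ x + rhs @ x + 0.5 * c

# For any x_l, plugging in the optimal x_u reproduces the marginalized energy.
x_l = rng.standard_normal(n_l)
x_u_opt = -H_uu_inv @ (H_ul @ x_l + rhs_u)  # optimum of x_u given x_l
e_full = energy(x_u_opt, x_l)
e_marg = 0.5 * x_l @ H_marg @ x_l + rhs_marg @ x_l + 0.5 * c_marg
assert np.isclose(e_full, e_marg)
```

The assertion verifies that minimizing the full quadratic over x_u yields exactly the reduced quadratic with the H, rhs, and c' expressions above.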

template<typename Scalar>
Factor<Scalar> CreateMarginalizationFactor(const MarginalizationFactor<Scalar> &marginalization_factor)

Create a symforce::Factord representing the marginalization prior, to be used in an optimization or a future marginalization operation. Building on the derivation above, we can substitute dx = dx' + delta_x and simplify:

e(x) ~= 0.5 * (dx' + delta_x).T * H * (dx' + delta_x) + rhs.T * (dx' + delta_x) + 0.5 * c
      = 0.5 * dx'.T * H * dx' + (rhs + H * delta_x).T * dx'
        + 0.5 * (delta_x.T * H * delta_x + 2 * rhs.T * delta_x + c)

Thus, the Hessian remains unchanged, the updated rhs is (rhs + H * delta_x), and the updated constant term is (delta_x.T * H * delta_x + 2 * rhs.T * delta_x + c).
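The linearization-point shift above can also be verified numerically. A minimal NumPy sketch (names mirror the derivation, not the symforce API): substituting dx = dx' + delta_x should leave H unchanged while updating rhs and the constant term as stated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
J = rng.standard_normal((6, n))
H = J.T @ J                        # symmetric Gauss-Newton Hessian
rhs = rng.standard_normal(n)
c = 1.7
delta_x = rng.standard_normal(n)   # shift of the linearization point

# Updated quadratic coefficients from the derivation.
rhs_new = rhs + H @ delta_x
c_new = delta_x @ H @ delta_x + 2 * rhs @ delta_x + c

def quad(H, rhs, c, dx):
    return 0.5 * dx @ H @ dx + rhs @ dx + 0.5 * c

# The shifted quadratic agrees with the original at the shifted point.
dx_prime = rng.standard_normal(n)
e_original = quad(H, rhs, c, dx_prime + delta_x)
e_shifted = quad(H, rhs_new, c_new, dx_prime)
assert np.isclose(e_original, e_shifted)
```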

template<typename ScalarType>
struct MarginalizationFactor
#include <marginalization.h>

Marginalization factors are linear approximations of information we remove from the optimization problem to bound its size. The struct contains all the data needed to create a marginalization factor.

We store the factor in a general quadratic form that becomes a term in a nonlinear least-squares optimization. Taking inspiration from the LevenbergMarquardtSolver doc string:

e(x) ~= 0.5 * dx.T * J.T * J * dx + b.T * J * dx + 0.5 * b.T * b

where J is the Jacobian of the residual function f(x) and b = f(x0) is the residual at the linearization point.

We store H = J.T * J (i.e. the Gauss-Newton approximation of the Hessian), rhs = J.T * b, and c = b.T * b (the 0.5 factors are dropped from the stored values by convention). The energy then simplifies to:

e(x) ~= 0.5 * dx.T * H * dx + rhs.T * dx + 0.5 * c

We end up solving the larger system in the form H' * x = b' to find the optimal value of x.
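A minimal NumPy sketch of this stored quadratic form (illustrating the math, not the symforce API): with H = J.T @ J, rhs = J.T @ b, and c = b @ b, the quadratic equals the Gauss-Newton cost 0.5 * ||J * dx + b||^2.

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.standard_normal((7, 3))   # Jacobian of the residual at x0
b = rng.standard_normal(7)        # residual f(x0)

# Stored quantities (0.5 factors dropped from the stored values).
H = J.T @ J
rhs = J.T @ b
c = b @ b

# The quadratic form matches the linearized least-squares energy.
dx = rng.standard_normal(3)
e_quadratic = 0.5 * dx @ H @ dx + rhs @ dx + 0.5 * c
e_least_squares = 0.5 * np.linalg.norm(J @ dx + b) ** 2
assert np.isclose(e_quadratic, e_least_squares)

# The Gauss-Newton step minimizes this quadratic by solving H * dx = -rhs.
dx_opt = np.linalg.solve(H, -rhs)
```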

Public Types

using Scalar = ScalarType
using LcmType = internal::MarginalizationFactorLcmTypeT<Scalar>

Public Functions

LcmType GetLcmType() const

Public Members

MatrixX<Scalar> H = {}
VectorX<Scalar> rhs = {}
Scalar c = {}
Values<Scalar> linearization_values = {}
std::vector<Key> keys

Public Static Functions

static MarginalizationFactor FromLcmType(const LcmType &msg)