symforce.opt.noise_models module¶
- class NoiseModel[source]¶
Bases: object
Base class for whitening unwhitened residuals and/or computing their associated error in a least-squares problem.
- abstract whiten(unwhitened_residual)[source]¶
Whiten the residual vector.
- Parameters:
unwhitened_residual (MatrixT) –
- Return type:
MatrixT
- class ScalarNoiseModel[source]¶
Bases: NoiseModel
Base class for noise models that apply a whitening function to each element of the unwhitened residual.
I.e., if f() is the whiten_scalar() function, each element of the whitened residual can be written as:
whitened_residual[i] = f(unwhitened_residual[i])
- abstract whiten_scalar(x, bounded_away_from_zero=False)[source]¶
A scalar-valued whitening function which is applied to each element of the unwhitened residual.
- whiten(unwhitened_residual)[source]¶
Whiten the unwhitened residual vector by applying whiten_scalar() to each element.
- Parameters:
unwhitened_residual (MatrixT) –
- Return type:
MatrixT
- whiten_norm(residual, epsilon=0.0)[source]¶
Whiten the norm of the residual vector.
Let f(x) be the whitening function here, and let x be the vector of residuals. We compute the whitened residual vector as w(x) = f(||x||) / ||x|| * x. Then, the overall residual is later computed as ||w(x)|| == f(||x||), and so we’re minimizing the whitened norm of the full residual for each point.
- Parameters:
residual (MatrixT) –
epsilon (float) –
- Return type:
MatrixT
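To make the element-wise structure concrete, here is a minimal sketch of a hypothetical ScalarNoiseModel subclass (the class name and constant-scale whitening are illustrative, not part of symforce); whiten() applies whiten_scalar() to each element, while whiten_norm() whitens only the norm of the residual:

```python
import symforce.symbolic as sf

from symforce.opt.noise_models import ScalarNoiseModel


class ConstantScaleNoiseModel(ScalarNoiseModel):
    """Hypothetical example: whitens each residual element by a constant scale."""

    def __init__(self, scale: sf.Scalar):
        self.scale = scale

    def whiten_scalar(self, x: sf.Scalar, bounded_away_from_zero: bool = False) -> sf.Scalar:
        # The element-wise whitening function f(); no singularity at x = 0, so the flag is unused
        return self.scale * x


model = ConstantScaleNoiseModel(scale=2)
residual = sf.V3(1, 2, 3)
whitened = model.whiten(residual)  # f() applied per element: [2, 4, 6]
whitened_norm = model.whiten_norm(residual)  # f(||x||) / ||x|| * x, which is also 2 * x here
```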
- class IsotropicNoiseModel(scalar_information=None, scalar_sqrt_information=None)[source]¶
Bases: ScalarNoiseModel
Isotropic noise model; equivalent to multiplying the squared residual by a scalar.
The cost used in the optimization is:
cost = 0.5 * information * unwhitened_residual.T * unwhitened_residual
such that:
cost = 0.5 * whitened_residual.T * whitened_residual
The whitened residual is:
whitened_residual = sqrt(information) * unwhitened_residual
- Parameters:
scalar_information (T.Optional[sf.Scalar]) – Scalar by which the least-squares error will be multiplied. In the context of probability theory, the information is the inverse of the variance of the unwhitened residual. The information represents the weight given to a specific unwhitened residual relative to other residuals used in the least-squares optimization.
scalar_sqrt_information (T.Optional[sf.Scalar]) – Square-root of scalar_information. If scalar_sqrt_information is specified, we avoid needing to take the square root of scalar_information. Note that only one of scalar_information and scalar_sqrt_information needs to be specified.
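As a usage sketch (the numeric values are illustrative only), constructing the model from scalar_information and whitening a residual might look like:

```python
import symforce.symbolic as sf

from symforce.opt.noise_models import IsotropicNoiseModel

# information = 4 weights the squared residual by 4 in the least-squares cost
noise_model = IsotropicNoiseModel(scalar_information=4)

unwhitened_residual = sf.V3(1.0, 2.0, 3.0)
# whitened_residual = sqrt(information) * unwhitened_residual = 2 * unwhitened_residual
whitened_residual = noise_model.whiten(unwhitened_residual)
```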
- classmethod from_variance(variance)[source]¶
Returns an IsotropicNoiseModel given a variance. Typically used when we treat the residual as a random variable with known variance, and wish to weight its cost according to the information gained by that measurement (i.e. the inverse of the variance).
- Parameters:
variance (float) – Typically the variance of the residual elements. Results in the cost:
cost = 0.5 * (1 / variance) * unwhitened_residual.T * unwhitened_residual
- Return type:
IsotropicNoiseModel
- classmethod from_sigma(standard_deviation)[source]¶
Returns an IsotropicNoiseModel given a standard deviation. Typically used when we treat the residual as a random variable with known standard deviation, and wish to weight its cost according to the information gained by that measurement (i.e. the inverse of the variance).
- Parameters:
standard_deviation (float) – The standard deviation of the residual elements. Results in the cost:
cost = 0.5 * (1 / sigma^2) * unwhitened_residual.T * unwhitened_residual
- Return type:
IsotropicNoiseModel
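For reference, a small sketch showing that from_variance() and from_sigma() should produce equivalent weightings (values are illustrative):

```python
from symforce.opt.noise_models import IsotropicNoiseModel

# Both correspond to information = 1 / variance = 1 / sigma^2 = 4
model_from_variance = IsotropicNoiseModel.from_variance(0.25)
model_from_sigma = IsotropicNoiseModel.from_sigma(0.5)
```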
- whiten_scalar(x, bounded_away_from_zero=False)[source]¶
Multiplies a single element of the unwhitened residual by sqrt(information) so that the least-squares cost associated with the element is scaled by information.
- Parameters:
x (float) – Single element of the unwhitened residual
bounded_away_from_zero (bool) – True if x is guaranteed to not be zero. Typically used to avoid extra ops incurred by using epsilons to avoid singularities at x = 0 when it’s known that x != 0. However, this argument is unused because there is no singularity at x = 0 for this whitening function.
- Return type:
float
- class DiagonalNoiseModel(information_diag=None, sqrt_information_diag=None)[source]¶
Bases: NoiseModel
Noise model with diagonal weighting matrix.
The cost used in the optimization is:
cost = 0.5 * unwhitened_residual.T * sf.diag(information_diag) * unwhitened_residual
where information_diag is a vector of scalars representing the relative importance of each element of the unwhitened residual.
The total cost is then:
cost = 0.5 * whitened_residual.T * whitened_residual
Thus, the whitened residual is:
whitened_residual = sf.diag(sqrt_information_diag) * unwhitened_residual
where sqrt_information_diag is the element-wise square root of information_diag.
- Parameters:
information_diag (T.Optional[T.Sequence[sf.Scalar]]) – List of elements of the diagonal of the information matrix. In the context of probability theory, this vector represents the inverse of the variance of each element of the unwhitened residual, assuming that each element is an independent random variable.
sqrt_information_diag (T.Optional[T.Sequence[sf.Scalar]]) – Element-wise square-root of information_diag. If specified, we avoid needing to take the square root of each element of information_diag. Note that only one of information_diag and sqrt_information_diag needs to be specified.
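A minimal usage sketch (values illustrative), weighting each element of a 3-vector residual differently:

```python
import symforce.symbolic as sf

from symforce.opt.noise_models import DiagonalNoiseModel

# Diagonal of the information matrix; one weight per residual element
noise_model = DiagonalNoiseModel(information_diag=[1.0, 4.0, 9.0])

unwhitened_residual = sf.V3(1.0, 1.0, 1.0)
# whitened_residual = diag(sqrt_information_diag) * unwhitened_residual = [1, 2, 3]
whitened_residual = noise_model.whiten(unwhitened_residual)
```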
- classmethod from_variances(variances)[source]¶
Returns a DiagonalNoiseModel given a list of variances, one for each element of the unwhitened residual.
Typically used when we treat the unwhitened residual as a sequence of independent random variables with known variances.
- classmethod from_sigmas(standard_deviations)[source]¶
Returns a DiagonalNoiseModel given a list of standard deviations, one for each element of the unwhitened residual.
Typically used when we treat the unwhitened residual as a sequence of independent random variables with known standard deviations.
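As with the isotropic case, a short sketch of the two constructors (values illustrative); variances of [0.01, 0.04] and standard deviations of [0.1, 0.2] should give the same per-element weights:

```python
from symforce.opt.noise_models import DiagonalNoiseModel

# Both correspond to information_diag = [100, 25]
model_from_variances = DiagonalNoiseModel.from_variances([0.01, 0.04])
model_from_sigmas = DiagonalNoiseModel.from_sigmas([0.1, 0.2])
```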
- class PseudoHuberNoiseModel(delta, scalar_information, epsilon=0.0)[source]¶
Bases: ScalarNoiseModel
A smooth loss function that behaves like the L2 loss for small x and the L1 loss for large x.
The cost used in the least-squares optimization will be:
cost = sum( pseudo_huber_loss(unwhitened_residual[i]) )
cost = 0.5 * whitened_residual.T * whitened_residual
where the sum is taken over the elements of the unwhitened residual.
This noise model applies the square-root of the pseudo-huber loss function to each element of the unwhitened residual such that the resulting cost used in the least-squares problem is the pseudo-huber loss. The pseudo-huber loss is defined as:
pseudo_huber_loss(x) = delta^2 * ( sqrt( 1 + scalar_information * (x/delta)^2 ) - 1)
The whitened residual is then:
whitened_residual[i] = sqrt( 2 * pseudo_huber_loss(unwhitened_residual[i]) )
- Parameters:
delta (sf.Scalar) – Controls the point at which the loss function transitions from the L2 to L1 loss. Must be greater than zero.
scalar_information (sf.Scalar) – Constant scalar weight that changes the steepness of the loss function. Can be considered the inverse of the variance of an element of the unwhitened residual.
epsilon (sf.Scalar) – Small value used to handle singularity at x = 0.
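A usage sketch under illustrative values; the element near zero is treated quadratically while the large element is down-weighted toward an L1-like penalty:

```python
import symforce.symbolic as sf

from symforce.opt.noise_models import PseudoHuberNoiseModel

# Transition from L2-like to L1-like behavior around |x| = delta = 1
noise_model = PseudoHuberNoiseModel(delta=1.0, scalar_information=1.0, epsilon=1e-10)

unwhitened_residual = sf.V2(0.1, 10.0)
# whitened_residual[i] = sqrt(2 * pseudo_huber_loss(unwhitened_residual[i]))
whitened_residual = noise_model.whiten(unwhitened_residual)
```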
- class BarronNoiseModel(alpha, scalar_information, x_epsilon, delta=1, alpha_epsilon=None)[source]¶
Bases: ScalarNoiseModel
- Noise model adapted from:
Barron, Jonathan T. “A general and adaptive robust loss function.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
This noise model applies a modified version of the “practical implementation” from Appendix B of the paper to each scalar element of an unwhitened residual. The Barron loss function is defined as:
barron_loss(x) = delta^2 * (b/d) * (( scalar_information * (x/delta)^2 / b + 1)^(d/2) - 1)
where:
b = |alpha - 2| + epsilon
d = alpha + epsilon if alpha >= 0 else alpha - epsilon
Here delta controls the point at which the loss function transitions from quadratic to robust. This is different from the original Barron loss function, and is designed to match the pseudo-huber loss function.
Thus, the cost used in the optimization will be:
cost = sum( barron_loss(unwhitened_residual[i]) )
cost = 0.5 * whitened_residual.T * whitened_residual
where the sum is taken over the elements of the unwhitened residual.
Thus, the whitened residual is:
whitened_residual[i] = sqrt( 2 * barron_loss(unwhitened_residual[i]) )
- Parameters:
alpha (sf.Scalar) –
Controls shape and convexity of the loss function. Notable values:
alpha = 2 -> L2 loss
alpha = 1 -> Pseudo-huber loss
alpha = 0 -> Cauchy loss
alpha = -2 -> Geman-McClure loss
alpha = -inf -> Welsch loss
delta (sf.Scalar) – Determines the transition point from quadratic to robust. Similar to “delta” as used by the pseudo-huber loss function.
scalar_information (sf.Scalar) – Scalar representing the inverse of the variance of an element of the unwhitened residual. Conceptually, we use scalar_information to whiten (in a probabilistic sense) the unwhitened residual before passing it through the Barron loss.
x_epsilon (sf.Scalar) – Small value used for handling the singularity at x == 0.
alpha_epsilon (T.Optional[sf.Scalar]) – Small value used for handling singularities around alpha.
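A usage sketch with illustrative values; alpha = 0 gives a Cauchy-like loss, while alpha = 2 would recover a plain L2 loss:

```python
import symforce.symbolic as sf

from symforce.opt.noise_models import BarronNoiseModel

noise_model = BarronNoiseModel(
    alpha=0.0,  # Cauchy-like robust loss
    scalar_information=1.0,
    x_epsilon=1e-10,
    delta=1.0,
    alpha_epsilon=1e-10,
)

unwhitened_residual = sf.V2(0.1, 10.0)
# whitened_residual[i] = sqrt(2 * barron_loss(unwhitened_residual[i]))
whitened_residual = noise_model.whiten(unwhitened_residual)
```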
- static compute_alpha_from_mu(mu, epsilon)[source]¶
Transforms mu, which ranges from 0 to 1, to alpha via alpha = 2 - 1/(1 - mu). Under this transformation alpha ranges from 1 to -inf, so the noise model starts as a pseudo-huber cost and transitions to a robust Welsch cost.
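A small sketch of the mapping (the epsilon value is illustrative):

```python
from symforce.opt.noise_models import BarronNoiseModel

# mu = 0   -> alpha = 2 - 1/(1 - 0)   = 1   (pseudo-huber-like)
# mu = 0.5 -> alpha = 2 - 1/(1 - 0.5) = 0   (Cauchy-like)
# mu -> 1  -> alpha -> -inf                 (Welsch-like)
alpha = BarronNoiseModel.compute_alpha_from_mu(mu=0.5, epsilon=1e-10)
```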