# Cameras Tutorial

This is an introductory walkthrough of the symbolic cameras package in symforce.

[1]:

import symforce.symbolic as sf
from symforce.notebook_util import display


Let’s start by creating a linear camera calibration object:

[2]:

linear_camera_cal = sf.LinearCameraCal.symbolic("cal")
display(linear_camera_cal)

<LinearCameraCal
focal_length=[cal.f_x, cal.f_y],
principal_point=[cal.c_x, cal.c_y],
distortion_coeffs=[]>


This lets us project and deproject points expressed in the camera frame, like so:

[3]:

camera_pixel = sf.V2.symbolic("p")
camera_ray, _ = linear_camera_cal.camera_ray_from_pixel(camera_pixel)
display(camera_ray)

$\displaystyle \left[\begin{matrix}\frac{- cal.c_{x} + p_{0}}{cal.f_{x}}\\\frac{- cal.c_{y} + p_{1}}{cal.f_{y}}\\1\end{matrix}\right]$
[4]:

camera_point_reprojected, _ = linear_camera_cal.pixel_from_camera_point(
    camera_ray,
)
display(camera_point_reprojected)

$\displaystyle \left[\begin{matrix}p_{0}\\p_{1}\end{matrix}\right]$
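Under the hood this roundtrip is just the pinhole model: unproject to a ray at unit depth, then project with a perspective divide. A minimal plain-Python sketch (with arbitrarily chosen intrinsics, not taken from the symbolic example above):

```python
fx, fy = 440.0, 400.0  # hypothetical focal lengths
cx, cy = 320.0, 240.0  # hypothetical principal point

def camera_ray_from_pixel(px):
    # Unproject: shift by the principal point, divide by focal length, unit depth
    return ((px[0] - cx) / fx, (px[1] - cy) / fy, 1.0)

def pixel_from_camera_point(p):
    # Project: perspective divide, scale by focal length, shift by principal point
    return (fx * p[0] / p[2] + cx, fy * p[1] / p[2] + cy)

pixel = (540.0, 340.0)
roundtrip = pixel_from_camera_point(camera_ray_from_pixel(pixel))
print(roundtrip)  # (540.0, 340.0)
```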

Using camera calibration objects, we can create cameras with additional parameters, such as an image size:

[5]:

linear_camera = sf.Camera(
    calibration=sf.LinearCameraCal(
        focal_length=(440, 400),
        principal_point=(320, 240),
    ),
    image_size=(640, 480),
)
display(linear_camera)

<Camera
CameraCal=<LinearCameraCal
focal_length=[440, 400],
principal_point=[320, 240],
distortion_coeffs=[]>
image_size=[640, 480]>


Now, when projecting points into the image frame, we can check whether the resulting point lies within the bounds determined by image_size:

[6]:

point_in_FOV = sf.V3(0, 0, 1)
point_outside_FOV = sf.V3(100, 0, 1)
for point in (point_in_FOV, point_outside_FOV):
    pixel, is_valid = linear_camera.pixel_from_camera_point(point)
    print(
        "point={} -> pixel={}, is_valid={}".format(
            point.to_storage(),
            pixel.to_storage(),
            is_valid,
        )
    )

point=[0, 0, 1] -> pixel=[320, 240], is_valid=1
point=[100, 0, 1] -> pixel=[44320, 240], is_valid=0
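The is_valid flag above can be reproduced with the same pinhole projection plus a bounds check. This is a simplified sketch (the actual symforce check may also account for points behind the camera):

```python
fx, fy, cx, cy = 440.0, 400.0, 320.0, 240.0
width, height = 640, 480

def pixel_from_camera_point(p):
    # Pinhole projection: perspective divide, then intrinsics
    return (fx * p[0] / p[2] + cx, fy * p[1] / p[2] + cy)

def in_view(px):
    # Valid only if the pixel lands inside the image bounds
    return 0 <= px[0] < width and 0 <= px[1] < height

pixel = pixel_from_camera_point((100.0, 0.0, 1.0))
print(pixel, in_view(pixel))  # (44320.0, 240.0) False
```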


We can also create a camera with a given pose:

[7]:

linear_posed_camera = sf.PosedCamera(
    pose=sf.Pose3(
        # camera is spun 180 degrees about y-axis
        R=sf.Rot3.from_yaw_pitch_roll(0, sf.pi, 0),
        t=sf.V3(),
    ),
    calibration=linear_camera.calibration,
    image_size=(640, 480),
)
display(linear_posed_camera)

<PosedCamera
Pose=<Pose3 R=<Rot3 <Q xyzw=[0, 1.0, 0, 0]>>, t=(0, 0, 0)>
Camera=<PosedCamera
CameraCal=<LinearCameraCal
focal_length=[440, 400],
principal_point=[320, 240],
distortion_coeffs=[]>
image_size=[640, 480]>>


The given pose can be used to transform points between a global frame and the image frame:

[8]:

global_point = sf.V3(0, 0, -1)
print(
    "point in global coordinates={} (in camera coordinates={})".format(
        global_point.to_storage(),
        (linear_posed_camera.pose * global_point).to_storage(),
    )
)

pixel, is_valid = linear_posed_camera.pixel_from_global_point(global_point)
print(
    "global_point={} -> pixel={}, is_valid={}".format(
        global_point.to_storage(), pixel.to_storage(), is_valid
    )
)

point in global coordinates=[0, 0, -1] (in camera coordinates=[0, 0, 1.0])
global_point=[0, 0, -1] -> pixel=[320, 240], is_valid=1
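The coordinate flip between the two frames is just the 180-degree rotation acting on the point; with zero translation (and a rotation that is its own inverse, as a 180-degree rotation is), the frame change reduces to one matrix-vector product. A plain-Python check:

```python
import math

# Rotation by pi about the y-axis, matching the camera pose above
c, s = math.cos(math.pi), math.sin(math.pi)
R = [[c, 0.0, s],
     [0.0, 1.0, 0.0],
     [-s, 0.0, c]]

p_global = [0.0, 0.0, -1.0]
# Apply the rotation: the global point lands at z = +1 in camera coordinates
p_cam = [sum(R[i][j] * p_global[j] for j in range(3)) for i in range(3)]
print(p_cam)  # approximately [0, 0, 1]
```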


We can also transform points in pixel coordinates back into the global frame (given a range):

[9]:

range_to_point = (global_point - linear_posed_camera.pose.t).norm()
global_point_reprojected, is_valid = linear_posed_camera.global_point_from_pixel(
    pixel, range_to_point=range_to_point
)
display(global_point_reprojected)

$\displaystyle \left[\begin{matrix}0\\0\\-1.0\end{matrix}\right]$
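This back-projection can be sketched as: unproject the pixel to a ray, scale the unit ray by the range, then transform into the global frame. A plain-Python version for this specific case (translation is zero, and the 180-degree-about-y rotation is its own inverse, so it simply negates x and z):

```python
import math

fx, fy, cx, cy = 440.0, 400.0, 320.0, 240.0
px, py, range_to_point = 320.0, 240.0, 1.0

# Unproject the pixel to a ray at unit depth, then scale the unit ray by the range
ray = [(px - cx) / fx, (py - cy) / fy, 1.0]
norm = math.sqrt(sum(v * v for v in ray))
p_cam = [v * range_to_point / norm for v in ray]

# Rotate into the global frame: 180 degrees about y negates x and z
p_global = [-p_cam[0], p_cam[1], -p_cam[2]]
print(p_global)  # [-0.0, 0.0, -1.0]
```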

Finally, we can warp points between two posed cameras, given the pixel location in the source camera, the inverse range to the point, and the target camera to warp into.

[10]:

# Perturb second camera slightly from first (small angular change in roll)
perturbed_rotation = linear_posed_camera.pose.R * sf.Rot3.from_yaw_pitch_roll(0, 0, 0.5)
target_posed_cam = sf.PosedCamera(
pose=sf.Pose3(R=perturbed_rotation, t=sf.V3()),
calibration=linear_camera.calibration,
)
# Warp pixel from source camera into target camera given inverse range
target_pixel, is_valid = linear_posed_camera.warp_pixel(
pixel=sf.V2(320, 240),
inverse_range=1.0,
target_cam=target_posed_cam,
)
display(target_pixel)

$\displaystyle \left[\begin{matrix}320\\458.520995937516\end{matrix}\right]$
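The warped pixel can be sanity-checked by hand: for a pure roll of 0.5 rad between the two cameras, the center pixel should shift vertically by f_y * tan(0.5). A small sketch, assuming the pinhole model used above:

```python
import math

f_y, c_y = 400.0, 240.0  # intrinsics of the linear camera above
roll = 0.5

# A pure roll rotates the optical axis; under the pinhole model the center
# pixel moves vertically by f_y * tan(roll)
expected_v = c_y + f_y * math.tan(roll)
print(expected_v)  # approximately 458.521
```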

In the examples above we used a linear calibration, but we can use other types of calibrations as well:

[11]:

atan_cam = sf.ATANCameraCal(
    focal_length=[380.0, 380.0],
    principal_point=[320.0, 240.0],
    omega=0.35,
)
camera_ray, is_valid = atan_cam.camera_ray_from_pixel(sf.V2(50.0, 50.0))
display(camera_ray)
pixel, is_valid = atan_cam.pixel_from_camera_point(camera_ray)
display(pixel)

$\displaystyle \left[\begin{matrix}-0.72576759882138\\-0.510725347318749\\1\end{matrix}\right]$
$\displaystyle \left[\begin{matrix}49.9999999999999\\50.0\end{matrix}\right]$
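The undistortion behind this result can be sketched in plain Python, assuming the FOV-style model (undistorted radius r_u = tan(r_d * omega) / (2 * tan(omega / 2))) that ATANCameraCal appears to implement; it reproduces the camera ray above:

```python
import math

fx, fy, cx, cy = 380.0, 380.0, 320.0, 240.0
omega = 0.35

# Normalized (distorted) coordinates of pixel (50, 50)
xd, yd = (50.0 - cx) / fx, (50.0 - cy) / fy
r_d = math.hypot(xd, yd)

# FOV-model undistortion: scale the distorted radius to the undistorted one
scale = math.tan(r_d * omega) / (2.0 * math.tan(omega / 2.0) * r_d)
ray = (xd * scale, yd * scale, 1.0)
print(ray)  # approximately (-0.7258, -0.5107, 1.0)
```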