Spectrum analysis of motion parallax:

Heading computation in a 3D cluttered scene

(Joint work with Mike Langer, McGill University)

Previous methods for estimating observer motion in a rigid 3D scene assume that image velocities can be measured at isolated points.  When the observer is moving through a cluttered 3D scene such as a forest, however, point-wise measurements of image velocity are difficult to obtain because multiple depths, and hence multiple velocities, are present in most local image regions.

This work introduces a method for estimating egomotion that avoids point-wise image velocity estimation as a first step.  Instead, the direction of motion parallax in each local image region is estimated with a spectrum-based method, and these directions are then combined to estimate the 3D observer motion directly.
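The idea behind the spectrum-based step can be illustrated with a toy sketch.  A layer translating with image velocity (vx, vy) puts its spectral power on the plane vx*fx + vy*fy + ft = 0; layers at several depths moving along one common direction give a family of such planes sharing a common axis.  The sketch below (an illustrative assumption, not the paper's implementation; the scene parameters, speed range, and voting rule are all made up for the demo) builds a two-layer additive-transparency sequence and recovers the shared parallax direction by testing, for each candidate direction, whether each spectral sample implies a speed in a plausible range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random textures translating along a COMMON direction at different
# speeds, as when surfaces at two depths share one local parallax
# direction.  All parameters here are assumptions for the demo.
N, T = 64, 32
theta_true = 30.0                      # true parallax direction, degrees
speeds = [0.5, 1.0]                    # pixels/frame for the two layers

fx1 = np.fft.fftfreq(N)
fy1 = np.fft.fftfreq(N)

def subpixel_shift(img, dx, dy):
    """Circularly shift an image by (dx, dy) via an FFT phase ramp."""
    ramp = np.exp(-2j * np.pi * (fx1[None, :] * dx + fy1[:, None] * dy))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))

layers = [rng.standard_normal((N, N)) for _ in speeds]
th = np.deg2rad(theta_true)
seq = np.zeros((T, N, N))
for t in range(T):
    for tex, s in zip(layers, speeds):
        seq[t] += subpixel_shift(tex, s * t * np.cos(th), s * t * np.sin(th))

# Each spectral sample implies a speed s = -ft / u, where u is the
# spatial-frequency component along a candidate direction phi; at the
# correct direction those implied speeds fall in a plausible range.
P = np.abs(np.fft.fftn(seq)) ** 2
ft = np.fft.fftfreq(T)[:, None, None]
fy = np.fft.fftfreq(N)[None, :, None]
fx = np.fft.fftfreq(N)[None, None, :]
disk = (fx ** 2 + fy ** 2) <= 0.25     # avoid temporally aliased samples

def direction_score(phi):
    u = np.cos(phi) * fx + np.sin(phi) * fy
    ok = disk & (np.abs(u) > 0.1)      # need a stable implied speed
    s = -ft / np.where(ok, u, 1.0)     # guard against division by zero
    return P[ok & (s >= 0.25) & (s <= 1.25)].sum()

phis = np.deg2rad(np.arange(0.0, 360.0, 5.0))
est_deg = np.rad2deg(phis[np.argmax([direction_score(p) for p in phis])])
print(f"estimated parallax direction: {est_deg:.0f} deg (true {theta_true:.0f})")
```

Note that the voting recovers only the common direction (and its sign), not the individual layer speeds; this is precisely what makes the approach usable where point-wise velocity estimation breaks down.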


Synthetic Sequences

Three different scene types were used: random (textured) squares, two-layer (additive) transparency, and randomly oriented (untextured) cylinders.  Three types of observer motion were used.
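The three motion cases below correspond to different settings of the standard instantaneous-motion flow equations (Longuet-Higgins and Prazdny).  As a hedged sketch, with an assumed focal length and sign convention, the image velocity of a point at depth Z can be written as:

```python
import numpy as np

def image_velocity(x, y, Z, T, W, f=100.0):
    """Instantaneous image velocity at image position (x, y) of a point
    at depth Z, for observer translation T = (Tx, Ty, Tz) and rotation
    W = (Wx, Wy, Wz).  Standard perspective flow equations; the focal
    length f and sign convention here are assumptions."""
    Tx, Ty, Tz = T
    Wx, Wy, Wz = W
    u = (x * Tz - f * Tx) / Z + (x * y / f) * Wx - (f + x * x / f) * Wy + y * Wz
    v = (y * Tz - f * Ty) / Z + (f + y * y / f) * Wx - (x * y / f) * Wy - x * Wz
    return u, v

# Case (i): forward motion -> expansion away from the focus of expansion
print(image_velocity(10.0, 20.0, 5.0, (0, 0, 1), (0, 0, 0)))   # (2.0, 4.0)
# Case (ii): forward motion + pan about Y adds a near-uniform component
print(image_velocity(10.0, 20.0, 5.0, (0, 0, 1), (0, 0.01, 0)))
# Case (iii): lateral motion + roll about Z -> translation + rotation field
print(image_velocity(10.0, 20.0, 5.0, (-1, 0, 0), (0, 0, 0.01)))
```

Note that only the translational terms scale with 1/Z, so two depths inside one image region produce two different speeds along a common direction, which is the situation the parallax-direction method is designed for.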

Click on the images below to see the (MPEG-compressed) movies.  The uncompressed sequences we used are available from the links below.  We performed twenty runs with each sequence; only the first run is given here.

Case (i): Forward motion




Uncompressed (PGM) sequence
PGM sequence
PGM sequence


Case (ii): Forward motion + pan (rotation about Y axis)




PGM sequence
PGM sequence
PGM sequence


Case (iii): Lateral motion (left) + roll (rotation about Z axis)




PGM sequence
PGM sequence
PGM sequence


Real Sequences



PGM sequence
PGM sequence