
Textured Occupancy Grids for Monocular
Localization Without Features

Julian Mason, Susanna Ricco, and Ronald Parr
Duke University Computer Science

Algorithm

In the paper Textured Occupancy Grids for Monocular Localization Without Features, we present a simple, novel algorithm for localizing a mobile robot. Our technique requires only a single camera and no feature extraction. Instead, it relies on a three-dimensional textured occupancy grid, in which each voxel has an associated color. This makes it possible to render the map as it would be seen from an arbitrary location:
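As a rough illustration of the data structure (in Python, with made-up class and method names; see our released code for the real implementation), a textured occupancy grid can be thought of as a regular voxel grid that stores an occupancy bit and an RGB color per voxel. Rendering a view then amounts to casting one ray per image pixel and returning the color of the first occupied voxel it hits:

    # Illustrative sketch of a textured occupancy grid: a regular 3D voxel grid
    # in which each occupied voxel also stores an RGB color. Names and layout
    # are for explanation only, not our released code.
    import numpy as np

    class TexturedOccupancyGrid:
        def __init__(self, dims, resolution, origin=(0.0, 0.0, 0.0)):
            # dims: (nx, ny, nz) voxel counts; resolution: meters per voxel side.
            self.resolution = resolution
            self.origin = np.asarray(origin, dtype=float)
            self.occupied = np.zeros(dims, dtype=bool)          # occupancy per voxel
            self.color = np.zeros(dims + (3,), dtype=np.uint8)  # RGB per voxel

        def world_to_voxel(self, p):
            # Convert a world-frame point (meters) to integer voxel indices.
            return tuple(((np.asarray(p) - self.origin) / self.resolution).astype(int))

        def insert_point(self, p, rgb):
            # Mark the voxel containing p as occupied and record its color.
            idx = self.world_to_voxel(p)
            if all(0 <= i < n for i, n in zip(idx, self.occupied.shape)):
                self.occupied[idx] = True
                self.color[idx] = rgb

        def cast_ray(self, origin, direction, max_range=10.0, step=None):
            # March along the ray and return the color of the first occupied
            # voxel, or None if the ray escapes the map. A renderer casts one
            # such ray per image pixel to produce views like the one shown below.
            step = step or 0.5 * self.resolution
            direction = np.asarray(direction) / np.linalg.norm(direction)
            t = 0.0
            while t < max_range:
                idx = self.world_to_voxel(origin + t * direction)
                if all(0 <= i < n for i, n in zip(idx, self.occupied.shape)):
                    if self.occupied[idx]:
                        return self.color[idx]
                t += step
            return None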

A hallway scene, as seen from a robot-mounted camera. A render of a textured occupancy grid, from (roughly) the same location.

We extend the classic particle filter algorithm for robot localization with a new sensor model that exploits the fidelity of these rendered views. We demonstrate empirically that our technique is competitive with laser rangefinder localization, even when faced with substantial map errors due to changes in structure and lighting.
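The sketch below illustrates the shape of such a measurement update: render the map from each particle's pose, score the rendering against the live camera image, and reweight the particle accordingly. The pixelwise-Gaussian likelihood here is only an illustration of the idea, not our exact sensor model; see the paper for the details.

    # Sketch of one particle-filter measurement update using rendered views.
    # `render` stands in for a renderer over the textured occupancy grid (for
    # example, one cast_ray() call per pixel of the camera model).
    import numpy as np

    def image_likelihood(rendered, observed, sigma=30.0):
        # Score agreement between a rendered view and the camera image with a
        # Gaussian on the mean squared per-pixel color difference (illustrative).
        diff = rendered.astype(float) - observed.astype(float)
        mse = np.mean(diff ** 2)
        return np.exp(-mse / (2.0 * sigma ** 2))

    def measurement_update(particles, weights, render, camera_image):
        # particles: candidate robot poses; weights: their current weights
        # (numpy array); render(pose) -> image of the map as seen from that pose.
        for i, pose in enumerate(particles):
            weights[i] *= image_likelihood(render(pose), camera_image)
        return weights / np.sum(weights)  # renormalize to a distribution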

An example comparing dead reckoning (green line) with our algorithm (red line) and laser rangefinder localization (blue line). The mean disagreement between our algorithm and laser rangefinder localization is 0.25 meters. This is a two-dimensional slice through a full 3D textured map, available in our datasets download, and visible in more detail below.
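Here, "mean disagreement" means the average Euclidean distance between time-aligned poses from the two trajectories; the snippet below is a generic illustration of that metric, not our evaluation script.

    # Illustrative only: mean disagreement between two time-aligned 2D trajectories.
    import numpy as np

    def mean_disagreement(traj_a, traj_b):
        # traj_a, traj_b: arrays of shape (T, 2) holding (x, y) positions in
        # meters, already associated by timestamp.
        diffs = np.asarray(traj_a) - np.asarray(traj_b)
        return float(np.mean(np.linalg.norm(diffs, axis=1)))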

Datasets

In addition to our code, we have posted all of the data used in this paper. Our dataset includes two extremely accurate ground truth maps and three robot trajectories. Our ground truth maps were built using a Riegl LMS-Z390 Laser Scanner, which generates three-dimensional colorized point clouds. These point clouds were then registered using laser retroreflectors, and show perfect loop closure with no alignment errors, even at 1cm resolution; a sketch of how such a cloud can be binned into a textured occupancy grid follows the list below. Our maps are:

  • The "lab" dataset, of a classic indoor laboratory environment 12x15x2.5 meters in size. This dataset contains roughly 20 million points.
  • The "hallway" dataset, seen above (in a two-dimensional slice) and below (as a flythrough, and as animated slices). This dataset is 17.5x22.5x3 meters in size, and contains roughly 21 million points.

Our robot mounts six cameras and a tilt-mounted Hokuyo laser rangefinder. The tilting Hokuyo allows us to collect three-dimensional point clouds from the mobile robot, while the cameras allow detailed appearance measurement. Although only two-dimensional laser data and a single camera were used in this paper, we have included everything. Our robot trajectories are:

  • A run through the lab dataset.
  • A run through the hallway dataset, taken at the same time the map was being built. As a result, the lighting and physical structure of the map match those observed by the robot.
  • A second run through the hallway dataset, taken three months after map creation, and at a different time of day. As a result, there are major changes in both lighting and the physical structure of the world; our algorithm is robust to these changes.

The videos below show different parts of our dataset: camera data from a hallway trajectory, three-dimensional renders of the hallway map, and a slice-by-slice view of the hallway map.

Images taken from one of our robot's cameras during a long loop of the hallway environment. Our algorithm used these data, along with the hallway map (visible in slices above and below), to generate the localized trajectory seen in the next video.

The rendered views corresponding to the camera images in the previous video. This video is from a robot tracking run, and demonstrates the excellent quality of our textured occupancy grids. This trajectory corresponds to the red line in the figure above. The bright green and yellow areas correspond to areas of missing data; see our paper for details.

A slice-by-slice view of the hallway environment, in true color (where black signifies unoccupied space). This map is included in our dataset at a resolution of 5cm per voxel; the original laser range data used to generate it are also included.
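For reference, extracting one such horizontal slice from a voxel grid (with unoccupied voxels drawn black, as in the video) can be done along these lines. The array layout matches the illustrative grid sketch above, not necessarily our released map format.

    # Sketch: extracting one horizontal slice of a textured occupancy grid for
    # display, with unoccupied voxels rendered black.
    import numpy as np

    def slice_image(occupied, color, z_index):
        # occupied: (nx, ny, nz) bool; color: (nx, ny, nz, 3) uint8.
        img = np.zeros(color.shape[:2] + (3,), dtype=np.uint8)  # black background
        mask = occupied[:, :, z_index]
        img[mask] = color[:, :, z_index][mask]
        return img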

What's next?

We invite you to read our paper, published at the 2011 IEEE International Conference on Robotics and Automation (ICRA). Then, download our datasets and code, and try it out for yourself! The download links are in the Downloads section below. The data file includes the data used to build our maps, the maps used in the paper, and the camera and laser data from the robot trajectories. The code file includes our implementation, along with installation instructions. Both files include detailed READMEs.

Questions

If you have questions, direct them to Julian Mason.

Downloads