Paper

Robust Visual Localization in Changing Lighting Conditions


Goal

Provide an illumination-robust visual localization algorithm for Astrobee*
(* a free-flying robot designed to autonomously navigate on the International Space Station (ISS))


Keypoints

  • the proposed algorithm follows the three steps below (sketched after this list)
    • fetch the current illumination level
    • select an appropriate map
      (find the nearest brightness by symmetric KL-divergence value)
    • adjust camera exposure time
      (if the estimated lighting condition is too dark,
      increase the exposure time; otherwise decrease it)
  • some constraints
    • camera with adjustable exposure time
    • operation within a fixed region
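
  A minimal sketch of the three-step cycle above, assuming per-level maps and
  exposure settings are available as lookup tables. All names here
  (run_localization_cycle, set_exposure_ms, estimate_level) are hypothetical;
  the paper publishes no reference code.

    # Hypothetical end-to-end loop; brightness recognition is passed in
    # as a callable and detailed later in these notes.
    def run_localization_cycle(img, camera, maps, exposure_ms, estimate_level):
        # Step 1: fetch the current illumination level (e.g. 5..135 lux).
        level = estimate_level(img)
        # Step 2: select the map built at the nearest brightness level.
        chosen_map = maps[level]
        # Step 3: adjust exposure -- longer when dark, shorter when bright.
        camera.set_exposure_ms(exposure_ms[level])
        return chosen_map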

Astrobee’s Current Localization System

  • Offline Map Construction + Online Image Localization
    • Offline Map Construction
      • why: fixed operating region + higher stability and accuracy than existing SLAM systems
      • how: the following four steps
        1. SfM (SURF)
        2. rebuilding the map with BRISK features
          • BRISK trades accuracy and robustness for efficiency relative to SURF
            • less accurate and robust (accuracy is critical for offline map building, so SfM uses SURF)
            • faster to compute (speed is critical for online localization)
        3. registration to the ISS coordinate system
          • the map is registered to the pre-defined ISS coordinate system
        4. vocabulary tree database
          • constructed for fast retrieval of similar images during localization
    • Online Image Localization
      • how: the following four steps (sketched after this list)
        1. detect features with BRISK
        2. vocabulary tree DB query
        3. P3P with RANSAC
        4. pose-only bundle adjustment
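
  A hedged OpenCV sketch of these four online steps. The vocabulary tree query
  is abstracted behind a hypothetical db.query interface, and
  cv2.solvePnPRefineLM stands in for the pose-only bundle adjustment;
  Astrobee's actual implementation differs.

    import cv2

    def localize(gray_img, db, K, dist_coeffs):
        # 1. Detect BRISK keypoints and descriptors in the query image.
        brisk = cv2.BRISK_create()
        kps, desc = brisk.detectAndCompute(gray_img, None)

        # 2. Vocabulary tree DB query (hypothetical interface): retrieve
        #    2D-3D correspondences from the most similar map images.
        pts3d, pts2d = db.query(desc, kps)   # float32 Nx3 and Nx2 arrays

        # 3. P3P inside RANSAC for an initial pose estimate.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d, pts2d, K, dist_coeffs, flags=cv2.SOLVEPNP_P3P)
        if not ok or inliers is None:
            return None                      # treated as a localization failure

        # 4. Pose-only refinement over the RANSAC inliers, standing in
        #    for the pose-only bundle adjustment step.
        idx = inliers.ravel()
        rvec, tvec = cv2.solvePnPRefineLM(
            pts3d[idx], pts2d[idx], K, dist_coeffs, rvec, tvec)
        return rvec, tvec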

Investigation for changing lighting condition (Setting)

  • images taken under ten different conditions
    simulating day, night and intermediate lighting levels on the ISS
    • levels are 5, 15, 30, 45, 60, 75, 90, 105, 120, and 135 lux
    • for each lighting level:
      • nearly 2500 images were recorded
      • a digital light meter at a fixed point on the wall measured the illuminance in lux
  • the robot slides freely on the surface of the table, constrained to motion in a two-dimensional plane
  • localization algorithm computes the full 6 DoF camera pose
  • an overhead camera and an AR tag on Astrobee measure the ground truth pose
  • an image is labelled as a failure if
    the estimated pose is outside the plane of the granite table (a 1.5 x 1.5 m area)
    or if localization fails outright (see the sketch after this list)
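
  A small sketch of this failure-labelling rule, assuming a table-centered
  coordinate frame (the frame placement is an assumption, not from the paper):

    TABLE_HALF_M = 0.75   # the granite table surface is a 1.5 x 1.5 m area

    def is_failure(est_position):
        """Label a localization result as a failure per the rule above.

        est_position: (x, y, z) in meters in a table-centered frame,
        or None if localization itself failed.
        """
        if est_position is None:
            return True
        x, y, _ = est_position
        return abs(x) > TABLE_HALF_M or abs(y) > TABLE_HALF_M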

Observation for changing lighting condition

  • maps built in dark conditions (below 45 lux) show over an 80% success rate
    regardless of the test images' lighting conditions.
  • maps built in bright conditions (over 60 lux) work well only with bright images.
    • why: the feature descriptors cannot describe features well in bright
      lighting conditions because many image intensity values are saturated,
      whereas saturation rarely occurs in dark conditions
      (see the snippet after this list).
  • the most effective combination for stable localization is
    to use a map constructed in the same lighting conditions as the test images.
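
  A tiny illustration of the saturation argument: the fraction of
  near-saturated pixels can be measured directly, and it grows with scene
  brightness (the 250 threshold is illustrative, not from the paper).

    import numpy as np

    def saturated_fraction(gray_img, thresh=250):
        """Fraction of pixels at or above a near-saturation intensity."""
        return float(np.mean(np.asarray(gray_img) >= thresh))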

Proposed algorithm

  • inserted between step 1 and step 2 of Online Image Localization
    (after BRISK feature detection, before the vocabulary tree DB query)
  • Brightness Recognition + Map Recommendation System
    • Brightness Recognition
      • on the premise that images taken at similar places under similar lighting
        will have similar intensity distributions
      • for each lighting level:
        • query that level's DB (find the most similar image in terms of BRISK features)
        • compute the symmetric KL-divergence between the intensity histograms
          of the query image and the retrieved image
          (the symmetric KL-divergence is a symmetrized form of relative entropy)
      • among all levels, choose the level whose retrieved image is most similar
        (the smallest symmetric KL-divergence value)
        • the smaller the divergence, the more similar the two intensity
          distributions (the nearest brightness); see the sketch after this list
      • Note that the proposed algorithm is designed for a camera with adjustable exposure time, not for a camera with automatic exposure control
    • Map Recommendation System
      • requires constructing multiple sparse maps (one per lighting level),
        but enables localization under changing lighting conditions
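
  For two normalized intensity histograms P and Q, the symmetric KL-divergence
  is D_sym(P, Q) = D_KL(P||Q) + D_KL(Q||P) = sum_i (p_i - q_i) * ln(p_i / q_i).
  A hedged sketch of the recognition loop under that definition; the per-level
  database interface (level_dbs, most_similar) is an assumption.

    import numpy as np

    def normalized_hist(gray_img, bins=256):
        h, _ = np.histogram(gray_img, bins=bins, range=(0, 256))
        h = h.astype(float) + 1e-8            # smooth away empty bins
        return h / h.sum()

    def symmetric_kl(p, q):
        # D(p||q) + D(q||p); smaller means more similar brightness.
        return float(np.sum((p - q) * np.log(p / q)))

    def recognize_brightness(query_img, query_desc, level_dbs):
        """Return the lux level whose retrieved image has the intensity
        distribution nearest to the query image's."""
        p = normalized_hist(query_img)
        best_level, best_div = None, float("inf")
        for level, db in level_dbs.items():
            match_img = db.most_similar(query_desc)   # BRISK-based DB query
            div = symmetric_kl(p, normalized_hist(match_img))
            if div < best_div:
                best_level, best_div = level, div
        return best_level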
