(Not finished) 3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions
Authors: Andy Zeng, Shuran Song, Matthias Nießner, Matthew Fisher, Jianxiong Xiao, Thomas Funkhouser
Posted on October 24, 2019
Related Works
Hand-crafted 3D Local Descriptors
many of these descriptors are now available in the Point Cloud Library (PCL)
are manually designed for specific applications or 3D data types
and therefore generalize poorly to new data modalities
Learned 2D Local Descriptors
However, in addition to being limited to 2D correspondences on images, multi-view stereo is difficult to scale up in practice and is prone to errors from missing correspondences on textureless or non-Lambertian surfaces, so it is not suitable for learning a 3D surface descriptor.
Learned 3D Global Descriptors
for example, 3D ShapeNets introduced 3D deep learning for modeling 3D shapes
however, their focus is on extracting features from complete 3D object models at a global level
Learned 3D Local Descriptors
[14] uses a 2D ConvNet descriptor to match local geometric features for mesh labeling
….
Self-supervised Deep Learning
…
Learning From Reconstructions
the goal is to learn a function ψ that maps the local volumetric region (or 3D patch) around a point on a 3D surface to a descriptor vector
ψ is learned using data from existing high-quality RGB-D scene reconstructions
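As a concrete illustration, below is a minimal PyTorch sketch of such a ψ: following the paper, the input is a 30×30×30 voxel grid of TSDF values around the point and the output is a 512-d descriptor, but the intermediate layers are illustrative placeholders rather than the paper's exact architecture.

```python
# Minimal sketch of a descriptor network psi (PyTorch assumed).
# The 30x30x30 TSDF input and 512-d output follow the paper;
# the conv/pool layer sizes are illustrative, not the original ones.
import torch
import torch.nn as nn

class PatchDescriptor(nn.Module):
    """Maps a local volumetric 3D patch to a descriptor vector."""
    def __init__(self, out_dim=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3), nn.ReLU(),   # 30^3 -> 28^3
            nn.MaxPool3d(2),                              # 28^3 -> 14^3
            nn.Conv3d(32, 64, kernel_size=3), nn.ReLU(),  # 14^3 -> 12^3
            nn.MaxPool3d(2),                              # 12^3 -> 6^3
            nn.Conv3d(64, 128, kernel_size=3), nn.ReLU(), # 6^3 -> 4^3
        )
        self.fc = nn.Linear(128 * 4 * 4 * 4, out_dim)

    def forward(self, x):                 # x: (B, 1, 30, 30, 30)
        return self.fc(self.features(x).flatten(1))  # (B, out_dim)

psi = PatchDescriptor()
patch = torch.randn(8, 1, 30, 30, 30)     # a batch of dummy TSDF patches
print(psi(patch).shape)                    # torch.Size([8, 512])
```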
the motivation for this approach is threefold
reconstruction datasets: provide a large and diverse set of training correspondences, since the local 3D patches around different observations of the same interest point can look very different due to sensor noise, viewpoint variance, and occlusion patterns
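Such correspondence pairs are typically consumed in a siamese setup with a contrastive loss of this general form: descriptors of matching patches are pulled together, non-matching ones pushed at least a margin apart. The sketch below reuses the hypothetical PatchDescriptor above; the margin value is an assumed hyperparameter, not taken from the paper.

```python
# Sketch of contrastive training on correspondence pairs harvested from
# reconstructions; the margin value is an assumed hyperparameter.
import torch
import torch.nn.functional as F

def contrastive_loss(desc_a, desc_b, is_match, margin=1.0):
    """Pull matching descriptors together; push non-matching ones
    at least `margin` apart (is_match is a 0/1 float tensor)."""
    d = F.pairwise_distance(desc_a, desc_b)              # (B,)
    loss = is_match * d.pow(2) + \
           (1 - is_match) * F.relu(margin - d).pow(2)
    return loss.mean()

# Usage with the PatchDescriptor sketch above: both patches go through
# the SAME network (siamese weights).
# loss = contrastive_loss(psi(patch_a), psi(patch_b), labels.float())
```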