Efficient Large Scale Multi-View Stereo for Ultra High Resolution Image Sets

We present a new approach to large-scale multi-view stereo matching, designed to operate on ultra-high-resolution image sets and to efficiently compute dense 3D point clouds. We show that, by using a robust descriptor for matching and high-resolution images, we can skip the computationally expensive steps that other algorithms require. As a result, our method has low memory requirements and low computational complexity while producing 3D point clouds that contain virtually no outliers, which makes it exceedingly well suited to large-scale reconstruction. The core of our algorithm is the dense matching of image pairs using DAISY descriptors, implemented so as to eliminate redundancies and optimize memory access. We validate our approach and compare it against other algorithms on a variety of challenging data sets. Below we present some of our results; note that all results shown here are point cloud renderings.
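The paper's own pipeline is not reproduced here, but the core idea of dense pairwise matching can be sketched in a few lines. The snippet below, a simplified illustration rather than the actual implementation, matches each pixel of a rectified stereo pair along its horizontal epipolar line by comparing local descriptors; a plain normalized-patch descriptor stands in for DAISY, and all function names and parameters are hypothetical.

```python
import numpy as np

def patch_descriptor(img, y, x, r=2):
    """Simple mean-subtracted, L2-normalized patch descriptor.

    A stand-in for DAISY: it describes the local appearance around
    (y, x) so that pixels can be compared across images.
    """
    p = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64).ravel()
    p -= p.mean()
    n = np.linalg.norm(p)
    return p / n if n > 0 else p

def match_scanline(left, right, y, x, max_disp, r=2):
    """Match pixel (y, x) of `left` along the same scanline of `right`.

    Assumes a rectified pair, so epipolar lines are horizontal and the
    corresponding pixel lies at (y, x - d) for some disparity d >= 0.
    Returns the disparity with the lowest descriptor distance.
    """
    d_left = patch_descriptor(left, y, x, r)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        xr = x - d
        if xr - r < 0:          # descriptor window would leave the image
            break
        cost = np.sum((d_left - patch_descriptor(right, y, xr, r)) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Tiny synthetic example: the right view is the left view shifted by 3 px.
rng = np.random.default_rng(0)
left = rng.random((20, 30))
right = np.roll(left, -3, axis=1)
disparity = match_scanline(left, right, y=10, x=15, max_disp=8)
```

In a full system the recovered disparities would be triangulated into 3D points; the actual method's efficiency comes from computing the descriptors densely once per image and reusing them across all pixel comparisons, which this per-pixel sketch deliberately omits.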

This research was conducted at the EPFL Computer Vision Laboratory in collaboration with Christoph Strecha and Pascal Fua, and makes extensive use of the DAISY descriptor.

Select one of the icons below to see the corresponding reconstruction video.

Statue Reconstruction

The sequence contains 127 18-megapixel images of a statue taken at different scales. The final point cloud contains 15.3 million points and is computed in 29.5 minutes. Click on the image for a video of the colorized point cloud.

Updated: Thursday, July 24, 2014 10:29:21 +0200