Facebook has launched a new feature that can produce a 3D photo from virtually any standard 2D picture.
The feature uses machine-learning techniques to infer the 3D structure of any image, whether it is a new shot or an old one, and converts it from 2D to 3D, making it useful for people with single-lens camera phones or tablets.
The feature was built by leveraging mobile-optimization techniques developed by Facebook AI and by training a convolutional neural network (CNN) on millions of pairs of public 3D images and their accompanying depth maps.
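The announcement does not include implementation details, but the underlying idea is a network that regresses a per-pixel depth map from a single RGB image. The PyTorch snippet below is a minimal, illustrative sketch of that training setup; the tiny architecture, layer sizes, loss function, and dummy data are assumptions for illustration, not Facebook's actual model or dataset.

```python
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Toy encoder-decoder that maps an RGB image to a one-channel depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyDepthNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# One training step on dummy (image, depth) pairs standing in for the
# public 3D-photo data described in the article.
images = torch.randn(4, 3, 128, 128)   # batch of RGB images
depths = torch.rand(4, 1, 128, 128)    # matching ground-truth depth maps
pred = model(images)
loss = loss_fn(pred, depths)
loss.backward()
optimizer.step()
```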
Previously, Facebook launched three features to support the 3D format and grow the product: 3D Photos in Stories, 3D Photos Creation on the Web, and 3D Photos Creation on Android.
The new feature was built through four methods:
- A network architecture built with a set of parameterizable, mobile-optimized neural building blocks.
- Automated architecture search to find an effective configuration of these blocks, enabling the system to perform the task in under a second on a wide range of devices.
- Quantization-aware training to leverage high-performance INT8 quantization on mobile while minimizing potential quality degradation from quantization (see the sketch after this list).
- Large amounts of training data derived from public 3D photos.
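To illustrate the quantization-aware training step mentioned above, here is a generic, hedged sketch using PyTorch's eager-mode QAT workflow. This is not Facebook's production pipeline; the small block, backend choice, and training loop are placeholders that only demonstrate the technique.

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class SmallBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # marks where FP32 -> INT8 conversion happens
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()  # marks where INT8 -> FP32 conversion happens

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = SmallBlock()
model.train()

# Attach a QAT configuration so fake-quantization observers are inserted.
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)

# Normal training loop: fake-quant ops simulate INT8 rounding so the
# weights adapt to the reduced precision during training.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(3):
    x = torch.randn(2, 3, 64, 64)
    loss = model(x).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Convert to an actual INT8 model for mobile-style inference.
model.eval()
int8_model = tq.convert(model)
```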
Facebook is also working on depth estimation for videos taken with mobile devices, which would open up new content-creation tools for users.