3DensiNet: A robust neural network architecture towards 3D volumetric object prediction from 2D image

Meng Wang1,2
Lingjing Wang1,2,3
Yi Fang1,2,3

1NYU Multimedia and Visual Computing Lab
2New York University Abu Dhabi
3New York University Tandon School of Engineering


3D volumetric object generation/prediction from a single 2D image is a challenging but meaningful task in 3D visual computing. In this paper, we propose a novel neural network architecture, named "3DensiNet", which uses a density heat-map as an intermediate supervision tool for 2D-to-3D transformation. Specifically, we first present a 2D density heat-map to 3D volumetric object encoding-decoding network, which outperforms the classical 3D autoencoder. We then show that predicting the density heat-map of a 2D image via a 2D-to-2D encoding-decoding network is feasible. In addition, we leverage an adversarial loss to fine-tune our network, which makes the generated/predicted 3D voxel objects more similar to the ground-truth voxel objects. Experimental results on 3D volumetric prediction from 2D images demonstrate the superior performance of 3DensiNet over other state-of-the-art techniques.
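The density heat-map serves as a 2D summary of the 3D volume that bridges the image and voxel domains. As a rough intuition only (this is our own illustrative construction, not necessarily the paper's exact definition), such a map can be formed by projecting voxel occupancy along one axis and normalizing:

```python
import numpy as np

def density_heatmap(voxels, axis=0):
    """Collapse a binary voxel grid to a 2D density map by summing
    occupancy counts along one axis, then normalizing to [0, 1]."""
    heat = voxels.sum(axis=axis).astype(float)
    peak = heat.max()
    return heat / peak if peak > 0 else heat

# Toy example: a 4x4x4 grid containing a solid 2x2x2 cube in one corner.
vox = np.zeros((4, 4, 4), dtype=np.uint8)
vox[:2, :2, :2] = 1
hm = density_heatmap(vox, axis=0)  # 4x4 map; corner cells are occupied twice along the axis
```

In this toy case the corner of the map reaches the maximum density 1.0 while empty regions stay at 0, giving the 2D-to-2D network a dense, spatially aligned target.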



News



Paper

Meng Wang, Lingjing Wang, Yi Fang

3DensiNet: A Robust Neural Network Architecture towards 3D Volumetric Object Prediction from 2D Image

ACM Multimedia, 2017

[Paper]
[Bibtex]


Model



Results






This webpage template was borrowed from MEPS.