3D volumetric object generation/prediction from a single 2D image
is a challenging yet meaningful task in 3D visual computing. In this paper, we propose a novel neural network architecture,
named "3DensiNet", which uses a density heat-map as an intermediate supervision tool for 2D-to-3D transformation. Specifically,
we first present a 2D density heat-map to 3D volumetric object
encoding-decoding network, which outperforms a classical 3D autoencoder. We then show that predicting the density
heat-map of a 2D image via a 2D-to-2D encoding-decoding network is feasible.
In addition, we leverage an adversarial loss to fine-tune our network,
which makes the generated/predicted 3D voxel objects
more similar to the ground-truth voxel objects. Experimental results
on 3D volumetric prediction from 2D images demonstrate the superior
performance of 3DensiNet over other state-of-the-art techniques in
handling 3D volumetric object generation/prediction from a single
2D image.
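The intermediate density heat-map supervision can be illustrated with a minimal sketch, assuming (as the abstract alone does not specify) that the heat-map is a depth-wise projection of voxel occupancy normalized to [0, 1], which would serve as the dense 2D target for the 2D-to-2D encoding-decoding stage:

```python
import numpy as np

def density_heat_map(voxels: np.ndarray) -> np.ndarray:
    """Project a binary voxel grid (D, H, W) to a 2D density heat-map (H, W).

    Assumption (not stated in the abstract): the heat-map counts occupied
    voxels along the viewing (depth) axis and is normalized to [0, 1],
    giving a dense 2D supervision target for the 2D-to-2D network.
    """
    density = voxels.astype(np.float32).sum(axis=0)  # accumulate over depth
    peak = density.max()
    return density / peak if peak > 0 else density

# Toy example: a 2x2x2 occupied block inside a 4x4x4 grid.
vox = np.zeros((4, 4, 4), dtype=np.uint8)
vox[1:3, 1:3, 1:3] = 1
hm = density_heat_map(vox)  # 4x4 map, peak value 1.0 over the block
```

In this sketch the heat-map is 1.0 where the projected voxel count is maximal and 0.0 in empty regions, so it compresses the 3D occupancy into a 2D signal that a 2D image can plausibly be regressed against.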