Combining Voxel and Normal Predictions for Multi-View 3D Sketching - HAL open archive
Journal article in Computers and Graphics, 2019

Combining Voxel and Normal Predictions for Multi-View 3D Sketching

Abstract

Recent works on data-driven sketch-based modeling use either voxel grids or normal/depth maps as geometric representations compatible with convolutional neural networks. While voxel grids can represent complete objects (including parts not visible in the sketches), their memory consumption restricts them to low-resolution predictions. In contrast, a single normal or depth map can capture fine details, but multiple maps from different viewpoints need to be predicted and fused to produce a closed surface. We propose to combine these two representations to address their respective shortcomings in the context of a multi-view sketch-based modeling system. Our method predicts a voxel grid common to all the input sketches, along with one normal map per sketch. We then use the voxel grid as a support for normal map fusion by optimizing its extracted surface such that it is consistent with the re-projected normals, while being as piecewise-smooth as possible overall. We compare our method with a recent voxel prediction system, demonstrating improved recovery of sharp features over a variety of man-made objects.
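The fusion step described above, optimizing an extracted surface so its normals agree with re-projected predictions while staying piecewise-smooth, can be illustrated with a minimal toy sketch. This is not the authors' implementation: it reduces the problem to a 1D height profile, uses plain gradient descent with a numerical gradient, and the function names (`segment_normals`, `fuse`, `energy`) and the regularization weight `lam` are assumptions chosen for illustration.

```python
import numpy as np

def segment_normals(z, dx=1.0):
    """Unit normals of the polyline (x_i, z_i): rotate tangents by 90 degrees."""
    dz = np.diff(z)
    t = np.stack([np.full_like(dz, dx), dz], axis=1)   # segment tangents
    n = np.stack([-t[:, 1], t[:, 0]], axis=1)          # rotate by 90 degrees
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def energy(z, target_n, lam):
    """Data term: match target normals; smoothness term: small 2nd differences."""
    data = np.sum((segment_normals(z) - target_n) ** 2)
    smooth = np.sum(np.diff(z, 2) ** 2)
    return data + lam * smooth

def fuse(z0, target_n, lam=0.1, lr=0.02, iters=200):
    """Gradient descent on the combined energy (numerical gradient, toy scale)."""
    z = z0.copy()
    eps = 1e-4
    for _ in range(iters):
        grad = np.zeros_like(z)
        for i in range(len(z)):
            for s in (+eps, -eps):
                zp = z.copy()
                zp[i] += s
                grad[i] += energy(zp, target_n, lam) * np.sign(s) / (2 * eps)
        z -= lr * grad
    return z
```

In the paper's setting the "target" normals come from the per-sketch normal-map predictions re-projected onto the surface extracted from the voxel grid; here they are simply the normals of a known profile, and the optimization pulls a perturbed profile back toward them.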
Main file: paper-RR.pdf (23.37 MB). Origin: files produced by the author(s).

Dates and versions

hal-02141469 , version 1 (27-05-2019)

Cite

Johanna Delanoy, David Coeurjolly, Jacques-Olivier Lachaud, Adrien Bousseau. Combining Voxel and Normal Predictions for Multi-View 3D Sketching. Computers and Graphics, 2019, Proceedings of Shape Modeling International 2019, 82, pp.65--72. ⟨10.1016/j.cag.2019.05.024⟩. ⟨hal-02141469⟩
141 views
182 downloads
