SCIENTIA SINICA Informationis, Volume 50, Issue 5: 675-691 (2020) https://doi.org/10.1360/SSI-2019-0096

## Structure-preserving shape completion of 3D point clouds with generative adversarial network

• Received: May 11, 2019
• Accepted: Sep 17, 2019
• Published: Apr 17, 2020

### Abstract

Because fine structures of a 3D point cloud are difficult to preserve during shape completion, this study proposes, within the generative adversarial network framework, a novel neural network for automatically repairing and completing 3D point cloud shapes. The network consists of a generator and a discriminator. The generator adopts an encoder-decoder structure and takes the incomplete 3D point cloud as input. First, it aligns the sampling-point positions and feature information of the input point cloud via an input transform and a feature transform. A weight-shared multi-layer perceptron then extracts local shape features for each sampling point, and a max-pooling layer followed by multi-layer perceptron encoding produces the feature codeword. Second, the feature codeword of the sampling points is combined with 2D grid coordinates, and the decoder converts the grid data into the missing part of the underlying point cloud through two successive three-layer perceptron folding operations. Finally, the generated missing part is merged with the input data to obtain the complete 3D point cloud shape. Meanwhile, the discriminator receives both real point clouds and the completed point clouds produced by the generator. It adopts the same encoder structure as the generator to classify point clouds as real or generated, and the classification results are fed back to optimize the generator, driving it to generate increasingly realistic point cloud shapes. Experimental results illustrate that, for both dense and sparse incomplete point cloud data, the proposed method effectively preserves the fine structures of the input point clouds while repairing the missing parts of the underlying shapes.
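The folding-based decoding and merging step described above can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: random weights stand in for the trained perceptrons, and the codeword dimension, layer widths, and helper names (`mlp`, `fold_decode`) are hypothetical. The point counts follow the dense setting in Table 1 (N = 12288 input points, M = 4096 grid points, N + M = 16384 output points).

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, dims):
    """Apply a small weight-shared MLP (random weights, ReLU hidden layers) row-wise."""
    for i, d in enumerate(dims):
        w = rng.standard_normal((x.shape[1], d)) * 0.1
        x = x @ w
        if i < len(dims) - 1:
            x = np.maximum(x, 0.0)
    return x

def fold_decode(codeword, grid, hidden=64):
    """Two successive folding operations: 2D grid + replicated codeword -> 3D points."""
    M = grid.shape[0]
    cw = np.tile(codeword, (M, 1))                          # replicate codeword per grid point
    p = mlp(np.hstack([cw, grid]), [hidden, hidden, 3])     # first three-layer fold: grid -> 3D
    p = mlp(np.hstack([cw, p]), [hidden, hidden, 3])        # second fold refines the 3D points
    return p

# Dense setting from Table 1: N = 12288 input points, M = 64 * 64 = 4096 grid points
N, side = 12288, 64
partial_input = rng.standard_normal((N, 3))                 # stand-in for the incomplete cloud
u, v = np.meshgrid(np.linspace(-1, 1, side), np.linspace(-1, 1, side))
grid = np.stack([u.ravel(), v.ravel()], axis=1)             # (4096, 2) 2D grid coordinates
codeword = rng.standard_normal(512)                         # stand-in for the encoder output

missing_part = fold_decode(codeword, grid)                  # (4096, 3) generated missing points
completed = np.vstack([partial_input, missing_part])        # merge -> (16384, 3) complete shape
print(completed.shape)
```

In a trained network the codeword comes from the PointNet-style encoder and the folds are learned, so the grid deforms onto the missing surface rather than a random manifold; the sketch only shows the shape flow.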

### References

[1] Gross M, Pfister H. Point Based Graphics. Burlington: Morgan Kaufmann, 2007.

[2] Miao Y W, Xiao C X. Geometric Processing and Shape Modeling of 3D Point-Sampled Models. Beijing: Science Press, 2014.

[3] Henry P, Krainin M, Herbst E. RGB-D mapping: using Kinect-style depth cameras for dense 3D modeling of indoor environments. Int J Robotics Res, 2012, 31: 647-663.

[4] Nealen A, Igarashi T, Sorkine O, et al. Laplacian mesh optimization. In: Proceedings of the 4th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, Kuala Lumpur, 2006. 381--389.

[5] Zhao W, Gao S, Lin H. A robust hole-filling algorithm for triangular mesh. Visual Comput, 2007, 23: 987-997.

[6] Kazhdan M, Hoppe H. Screened Poisson surface reconstruction. ACM Trans Graph, 2013, 32: 29.

[7] Wu J, Gao B B, Wei X S. Resource-constrained deep learning: challenges and practices. Sci Sin-Inf, 2018, 48: 501-510.

[8] Chang A X, Funkhouser T, Guibas L, et al. ShapeNet: an information-rich 3D model repository. 2015. arXiv

[9] Su H, Maji S, Kalogerakis E, et al. Multi-view convolutional neural networks for 3D shape recognition. In: Proceedings of IEEE International Conference on Computer Vision, Santiago, 2015. 945--953.

[10] Sharma A, Grau O, Fritz M. VConv-DAE: deep volumetric shape learning without object labels. In: Proceedings of European Conference on Computer Vision, Amsterdam, 2016. 236--250.

[11] Chen X Z, Ma H M, Wan J, et al. Multi-view 3D object detection network for autonomous driving. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, 2017. 1907--1915.

[12] Qi C R, Su H, Nießner M, et al. Volumetric and multi-view CNNs for object classification on 3D data. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 2016. 5648--5656.

[13] Nguyen D T, Hua B S, Tran K, et al. A field model for repairing 3D shapes. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 2016. 5676--5684.

[14] Dai A, Qi C R, Nießner M. Shape completion using 3D-encoder-predictor CNNs and shape synthesis. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, 2017. 5868--5877.

[15] Wang W Y, Huang Q G, You S Y, et al. Shape inpainting using 3D generative adversarial network and recurrent convolutional networks. In: Proceedings of IEEE International Conference on Computer Vision, Venice, 2017. 2298--2306.

[16] Qi C R, Su H, Mo K, et al. PointNet: deep learning on point sets for 3D classification and segmentation. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, 2017. 652--660.

[17] Yuan W T, Khot T, Held D, et al. PCN: point completion network. In: Proceedings of the 6th IEEE International Conference on 3D Vision, Verona, 2018. 728--737.

[18] Yang Y Q, Feng C, Shen Y R, et al. FoldingNet: point cloud auto-encoder via deep grid deformation. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, 2018. 206--215.

[19] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. In: Proceedings of Advances in Neural Information Processing Systems, Montreal, 2014. 2672--2680.

[20] Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. 2015. arXiv

[21] Zhang H, Goodfellow I, Metaxas D, et al. Self-attention generative adversarial networks. 2018. arXiv

[22] Arjovsky M, Chintala S, Bottou L. Wasserstein generative adversarial networks. In: Proceedings of the 34th International Conference on Machine Learning, Sydney, 2017. 214--223.

[23] Gulrajani I, Ahmed F, Arjovsky M, et al. Improved training of Wasserstein GANs. In: Proceedings of Advances in Neural Information Processing Systems, Long Beach, 2017. 5767--5777.

[24] Pauly M, Mitra N J, Wallner J, et al. Discovering structural regularity in 3D geometry. ACM Trans Graph, 2008, 27: 43.

[25] Li Y, Dai A, Guibas L. Database-assisted object retrieval for real-time 3D reconstruction. Comput Graphics Forum, 2015, 34: 435-446.

[26] Kim V G, Li W, Mitra N J. Learning part-based templates from large collections of 3D shapes. ACM Trans Graph, 2013, 32: 70.

[27] Pauly M, Mitra N J, Giesen J, et al. Example-based 3D scan completion. In: Proceedings of the 3rd Eurographics Symposium on Geometry Processing, Vienna, 2005. 23--32.

[28] Rock J, Gupta T, Thorsen J, et al. Completing 3D object shape from one depth image. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Boston, 2015. 2484--2493.

[29] Liu J, Yu F, Funkhouser T. Interactive 3D modeling with a generative adversarial network. In: Proceedings of IEEE International Conference on 3D Vision (3DV), 2017. 126--134.

[30] Fan H Q, Su H, Guibas L J. A point set generation network for 3D object reconstruction from a single image. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, 2017. 605--613.

[31] Wu Z R, Song S R, Khosla A, et al. 3D ShapeNets: a deep representation for volumetric shapes. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Boston, 2015. 1912--1920.

• Figure 1

(Color online) Structure of our structure-preserving shape completion network

• Figure 2

(Color online) Network structure for input transformation

• Figure 3

(Color online) Shape completion results using our shape completion approach. For each point cloud model, we show the original point cloud model, the completed model using our proposed method, and the ground truth, respectively.

• Figure 4

(Color online) Shape completion results for different levels of missing data

• Figure 5

(Color online) Generalization experiments for shape completion. (a) and (b) are the 25%-missing input data and corresponding completion results; (c) and (d) are the 50%-missing input data and corresponding completion results; (e) and (f) are the 75%-missing input data and corresponding completion results.

• Figure 6

(Color online) Comparisons of shape completion results for dense point cloud models. (a) Input point cloud; (b)-(d) shape completion results using the PCN method [17], FoldingNet method [18], and our proposed method, respectively; (e) ground truth.

• Figure 7

(Color online) Comparisons of shape completion results for sparse point cloud models. (a) Input point cloud; (b)-(d) shape completion results using the PCN method [17], FoldingNet method [18], and our proposed method, respectively; (e) ground truth.

• Table 1   Number of sampling points of our point cloud models

| Data types | #Sampling points of input models (N) | #Sampling points of 2D grids (M) | #Sampling points of output models (N+M) |
|---|---|---|---|
| Dense point clouds | 12288 | 4096 | 16384 |
| Sparse point clouds | 540 | 484 | 1024 |
• Table 2   Statistics of ECD error via different shape completion methods a)

| Data types | Point cloud models | PCN method [17] | FoldingNet method [18] | Our method |
|---|---|---|---|---|
| Dense point clouds | Desk lamp | 0.00549 | 0.00471 | **0.00159** |
| | Round table | 0.00406 | 0.00326 | **0.00112** |
| | Computer chair | 0.00630 | 0.00622 | **0.00208** |
| | Ceiling lamp | 0.00370 | 0.00334 | **0.00196** |
| | Basket | 0.01027 | 0.00781 | **0.00317** |
| | Bedside lamp | 0.00884 | 0.00536 | **0.00153** |
| | Headset | 0.01037 | 0.01322 | **0.00379** |
| | Flower vase | 0.00957 | 0.00993 | **0.00293** |
| Sparse point clouds | Guitar | 0.01029 | **0.00761** | 0.00812 |
| | Bar chair | 0.01638 | **0.01346** | 0.01498 |
| | Desk lamp | 0.01099 | 0.00699 | **0.00693** |
| | Bow chair | **0.00960** | 0.01284 | 0.01074 |
| | Bow-foot table | 0.02761 | 0.02428 | **0.01866** |
| | Floor lamp | 0.01614 | 0.01092 | **0.00811** |

a) The bold numbers represent the optimal results.
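Table 2 reports a per-model completion error (ECD) whose exact definition is not given in this excerpt. As an illustration of how such point-set errors are typically computed for completion results against ground truth, here is a symmetric Chamfer-distance sketch; treat it as an assumption standing in for the paper's metric, not its definition:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3):
    mean nearest-neighbor squared distance in both directions."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(axis=2)  # (N, M) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(1)
completed = rng.random((256, 3))        # stand-in for a completed point cloud
ground_truth = completed.copy()         # stand-in for the ground-truth cloud

print(chamfer_distance(completed, ground_truth))   # identical sets give zero error
```

Lower values mean the completed cloud lies closer to the ground truth, matching the direction of comparison in Table 2.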


Copyright 2020 China Science Publishing & Media Ltd. All rights reserved.