Preprint / Version 1

Noise Removal from Point Clouds through Sensor Fusion of LiDAR and Camera

Authors

  • Kenta Itakura ImVisionLabs Inc.
  • Takuya Hayashi ImVisionLabs Inc.
  • Yuto Kamiwaki ImVisionLabs Inc.
  • Pang-jo Chun Institute of Engineering Innovation, School of Engineering, The University of Tokyo

DOI:

https://doi.org/10.51094/jxiv.865

Keywords:

Cross calibration, LiDAR, Noise removal, Point cloud, Sensor fusion

Abstract

In this study, a method for removing noise from 3D point clouds through sensor fusion of LiDAR and a camera is introduced. First, point cloud measurements were performed with a Matterport Pro3 in Fukushima Prefecture, Japan. Bridges and other man-made structures were scanned, and people present in the scene were captured as well. The relative positions of the camera and LiDAR were adjusted using a checkerboard for cross-calibration. The intrinsic and extrinsic parameters were then obtained to map the 2D image onto the 3D point cloud. This allowed each point in the point cloud to be associated with a pixel in the image and enabled a segmentation-based approach to noise removal. Next, the region of a person was extracted in the 2D image and mapped into 3D space to classify the noise points belonging to that person in the point cloud. It was confirmed that this method can effectively classify people even when many other objects are present in the point cloud. The evaluation using recall, precision, and F1 score yielded mean values of 0.923, 0.878, and 0.889 over all samples, respectively, indicating that highly accurate noise removal is possible. The results of this study are expected to be useful for cleaning and pre-processing 3D point clouds. Future work includes evaluating the accuracy of this method on data containing a larger number of people.
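
Since the abstract only outlines the projection and filtering steps, the sketch below illustrates the underlying idea rather than the authors' implementation. It assumes a pinhole camera model with intrinsic matrix K and LiDAR-to-camera extrinsics (R, t) obtained from the checkerboard cross-calibration, a boolean person mask produced by the 2D segmentation step, and a hand-labeled ground-truth mask for the evaluation; the function names (project_to_image, remove_person_points, precision_recall_f1) are illustrative and not taken from the paper.

```python
import numpy as np

def project_to_image(points_lidar, K, R, t):
    """Map LiDAR points (N, 3) to pixel coordinates using the calibration.

    K is the 3x3 camera intrinsic matrix; R (3x3) and t (3,) are the
    LiDAR-to-camera extrinsics estimated from the checkerboard.
    Returns (N, 2) pixel coordinates and (N,) depths in the camera frame.
    """
    points_cam = points_lidar @ R.T + t        # LiDAR frame -> camera frame
    proj = points_cam @ K.T                    # apply intrinsics
    depths = proj[:, 2]
    # guard against division by zero for points on the image plane
    safe = np.where(np.abs(depths) > 1e-9, depths, 1e-9)
    pixels = proj[:, :2] / safe[:, None]       # perspective division
    return pixels, depths

def remove_person_points(points_lidar, person_mask, K, R, t):
    """Classify points whose projection lands inside the 2D person mask.

    person_mask is an (H, W) boolean array from the image segmentation step.
    Returns (cleaned_points, noise_points).
    """
    h, w = person_mask.shape
    pixels, depths = project_to_image(points_lidar, K, R, t)
    u = np.round(pixels[:, 0]).astype(int)
    v = np.round(pixels[:, 1]).astype(int)
    # only points in front of the camera and inside the image can be labeled
    visible = (depths > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    is_person = np.zeros(len(points_lidar), dtype=bool)
    is_person[visible] = person_mask[v[visible], u[visible]]
    return points_lidar[~is_person], points_lidar[is_person]

def precision_recall_f1(pred_noise, true_noise):
    """Per-point evaluation against a manually labeled ground-truth mask."""
    tp = np.sum(pred_noise & true_noise)
    precision = tp / max(np.sum(pred_noise), 1)
    recall = tp / max(np.sum(true_noise), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1
```

In practice the person mask would come from a 2D segmentation model applied to the camera image, and the same projection can be reused for every scan as long as the LiDAR-camera rig stays rigid after calibration.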

Conflicts of Interest Disclosure

There are no conflicts of interest to disclose.



Posted


Submitted: 2024-08-29 07:45:45 UTC

Published: 2024-09-03 02:21:28 UTC

Section

Architecture & Civil Engineering