
Quantitative analysis of different SLAM algorithms for geo-monitoring in an underground test field

Research Article

Open Access

Published: 01 February 2025


International Journal of Coal Science & Technology Volume 12, article number 7, (2025)

Abstract

Geo-monitoring provides quantitative and reliable information to identify hazards and adopt appropriate measures in a timely manner. However, this task inherently exposes monitoring staff to hazardous environments, especially in underground settings. Since the 2000s, robots have been widely applied in various fields, and many studies have focused on establishing autonomous mobile robotic systems and on solving the problem of underground navigation and mapping. However, only a few studies have conducted quantitative evaluations of these methods, and almost none have provided a systematic and comprehensive assessment of the suitability of mapping robots for underground geo-monitoring. In this study, a methodology for the objective and quantitative assessment of the applicability of SLAM methods in underground geo-monitoring is proposed. It comprises the development of an underground test field and a set of specific metrics that allow a detailed local accuracy analysis of point measurements, line segments, and areas using artificial targets. With this methodology, a series of repeated experimental measurements was performed with an autonomous driving robot and selected LiDAR- and visual-based SLAM methods. The resulting point clouds were compared with reference data measured by a total station and a terrestrial laser scanner. The accuracy and precision of the selected SLAM methods, as well as the verifiability and reliability of the results, are evaluated and discussed by analysing quantities such as the deviations of the control point coordinates, the cloud-to-cloud distances between the test and reference point clouds, and the normal vectors, centre point coordinates, and areas of the planar objects. The results demonstrate that HDL Graph SLAM achieves satisfactory precision, accuracy, and repeatability, with a mean cloud-to-cloud distance of 0.12 m (standard deviation 0.13 m) in an 80 m closed-loop measurement area. Although RTAB-Map exhibits better plane-capturing capabilities, its measurement results reveal instability and inaccuracies.

1. Introduction

1.1 Research background

Monitoring in general is the process of systematic and continuous observation and recording of activities taking place in a certain structure (Settles et al. 2008). In engineering geodesy, it is defined as one of the fundamental tasks of engineering surveying, and its main content is the measurement and detection of deformations and recording of the state of technical and natural objects (Kuhlmann et al. 2014; Cai et al. 2023). Geo-monitoring is a subtype of geodetic monitoring (Wagner 2017) that focuses on the geological objects affected by geogenic and anthropogenic activities. The quantitative and reliable information from geological monitoring is crucial for studying and understanding natural and anthropogenic environmental processes. It aids in identifying hazards, such as landslides, ground subsidence, and rockfalls, thereby enabling timely and appropriate measures to be taken (Wieser and Capra 2017).

Starting with the extensive coal mining activities of the late nineteenth century in industrializing areas, the need to observe the surface and connect the gathered information to underground movements and deformations became apparent. At that time, geo-monitoring was already being used in the mining industry. Today, geo-monitoring is widely applied in fields such as mining, geotechnical engineering, environmental monitoring, and natural hazard management. Particularly in mine surveying, geo-monitoring is applied throughout the entire mining value chain: it forms the basis for managing the safety, environmental, and operational aspects of mineral resource management (Benndorf 2021).

Geo-monitoring can be divided into surface and underground geo-monitoring. Surface geo-monitoring encompasses a wide variety of monitoring tasks, such as slope movement monitoring, contamination monitoring, tailing monitoring, and the monitoring of buildings near mining operations. In underground geo-monitoring, the main tasks are acquiring geometry data and, through repeated measurements, deriving deformation information (e.g., derivation of cavity convergences) or characterizing the rock mass (e.g., monitoring of fissures by comparing spatial monitoring data from different periods using hyperspectral cameras).

The currently applied surveying and mapping technology is extensively developed and mature. Various methods and techniques are used to carry out 3D metric measurements. Commonly used ones are photogrammetric techniques, conventional terrestrial techniques (such as total stations and levelling), satellite techniques (such as the Global Positioning System), and laser scanners (Prokos and Roumpos 2019). Among these, conventional techniques offer high accuracy but often require considerable manpower to carry out large numbers of measurement tasks over a long time. Satellite-based positioning techniques, on the other hand, are unsuitable for underground mines, because global navigation satellite system (GNSS) signals are unavailable and the surrounding rock limits communication (Yarkan et al. 2009). In contrast, laser scanners are better suited for automatic and continuous monitoring, enabling extensive and temporally dense observation. However, a terrestrial laser scanner (TLS) requires manual tripod setup and stationary scanning time. To overcome these limitations, handheld Simultaneous Localization and Mapping (SLAM) technology has been introduced and continuously developed. This advancement makes underground 3D mapping and scanning safer, quicker, more automated, highly repeatable, and more effective (Farella 2016). To further reduce manual input and move towards autonomous mapping, researchers have explored the use of truck-mounted 3D handheld light detection and ranging (LiDAR) scanners (Evanek et al. 2021) and drones (Canales and Sellers 2020) to create underground maps.

In the era of Mining 4.0 and digital innovations, automation and robotics are the leading key technologies in the mining industry, often seen as the ultimate vision of digital transformation (Barnewold and Lottermoser 2020). Various autonomous driving robots are continually being introduced into the mining industry and underground mines. Examples include autonomous driving technologies for mineral transport (Larsson et al. 2006), for hazard detection (Yinka-banjo and Bagula 2012), for exploration (Miller et al. 2020), and so on. Without a doubt, the field of geo-monitoring, which aims to improve disaster prevention and control, would benefit from the large-scale use of autonomous driving robots, especially in underground monitoring, where the working environment is extremely harsh, and adverse ground behaviour events, such as convergence and ground falls, can occur.

Since the 2000s, the “autonomous mine mapping campaign” has been conducted by the research team of the Field Robotics Centre of Carnegie Mellon University. In 2003, as one of the pioneers in underground mapping robotics, the robot “Groundhog” (Ferguson et al. 2003), whose main purpose was to explore and map abandoned mines, was presented in the United States. The designed system, a four-wheeled vehicle, was tested in multiple experiments and generated 2D and 3D maps of abandoned mines inaccessible to people, using a single LiDAR scanner on board. At that time, a 6DoF SLAM method developed with Bayesian estimation techniques was applied in autonomous mine mapping (Nuchter et al. 2004). Later, Bakambu and Polotski (2007) used heading-aided odometry on an autonomous surveying and navigation vehicle for localization. Through incremental displacements based on vehicle self-localization and local range data, the robotic system could produce 2D and 3D maps of the tunnel network. This meant that localization no longer relied completely on scan matching.

Afterwards, boosted by the developments and improvements in SLAM methods, various studies of autonomous underground mine mapping started to move towards the integration of multiple sensors and optimization of SLAM methods. For example, Neumann et al. (2014) proposed a multi-sensor underground mapping robot “Barney” using radar, cameras, multiple LiDARs and an IMU sensor. Losch et al. (2018) designed an autonomous underground robot equipped with IMU, depth RGB-D Camera, and 2D and 3D laser scanners, and showed a 2D grid map by RTAB-Map (Real-Time Appearance-Based Mapping; Labbé and Michaud 2019) which they obtained using a depth camera. Ren et al. (2019) proposed a 3D GICP-SLAM method, which could achieve low drift localization and point cloud map construction based on a 3D LiDAR sensor, for an underground mining environment.

1.2 Motivation and contributions

Geo-monitoring involves extensive, long-term, and repetitive mapping tasks, especially in underground environments. The use of SLAM methods for mapping is becoming common in the surveying field. This paper aims to answer the question: do SLAM methods fulfil the requirements of underground geo-monitoring and change detection in terms of precision and reliability?

A thorough examination of existing research reveals that most SLAM accuracy analyses focus on indoor or outdoor surface environments rather than underground ones. However, the underground environment has unique characteristics, such as the lack of absolute positioning (e.g. via GNSS), uneven ground, and narrow and complex geometries. In addition, water on the mining face, humidity, and dust limit the quality of sensor signals underground. Consequently, accuracy research carried out on the surface is not necessarily applicable to the underground environment.

Moreover, among the few studies on SLAM accuracy in underground environments, an even smaller number have been conducted in real underground environments rather than simulated ones. For example, Kim and Choi (2021) analysed absolute area errors between the 2D tunnel sections generated by an autonomous driving robot and those surveyed manually, demonstrating the advantages of the proposed pattern-based location estimation method. Ghosh et al. (2017) compared maps generated by a prototype mobile robot with those measured by a total station, analysed the absolute area differences and the average point-to-point deviations and finally proved that the generated mine map had good accuracy. Trybała et al. (2023) compared different LiDAR-based mobile mapping systems, carrying out their comprehensive accuracy and precision evaluation in a complex underground tunnel environment with reference data acquired with a TLS. However, the tests were conducted on handheld devices, without including any automated robotic measurement techniques.

Geo-monitoring requires consideration of local details, as a simple overall accuracy analysis is insufficient for assessing its applicability. To address this gap, the Institute for Mine Surveying and Geodesy at Technische Universität Bergakademie Freiberg (TU BAF) has designed and developed a real underground test site from the perspective of mine surveyors. In this test site, the accuracy and precision of selected SLAM methods, implemented on the designed robotic system, were systematically and quantitatively evaluated using mine surveying methods. The applicability of SLAM technology for mine surveying and geo-monitoring was also assessed.

The main contribution of this article is to present a methodology to objectively assess systematic and random uncertainty in SLAM-generated point clouds in an underground environment in the context of geo-monitoring. This, in particular the task of change detection, requires not only the classical average cloud-to-cloud distance but also the investigation of precision for different geometries, such as points, line segments, and areas, including their spatial orientation. The results make it possible to judge the performance of different SLAM algorithms and to optimise instrumentation and monitoring design. The methodology is demonstrated using two commonly used SLAM algorithms.

The proposed method makes it possible to assess the suitability and quantify the precision of different SLAM algorithms in specific underground environments. As such, it provides a piece of the puzzle needed to increase acceptance and to prove the reliability of results as the basis for operative use, transferring the technology to a higher Technology Readiness Level (TRL) according to the National Aeronautics and Space Administration (NASA; Mankins 1995) and European Union (“TRL” 2019) definitions.

1.3 Article outline

The outline of this work is as follows:

  (1) The multipurpose robotic research system developed for autonomous geo-monitoring, as well as for surveying and mapping of underground mining areas, is presented. This includes descriptions of the sensors it carries and the methods used to process the data.

  (2) The two designed underground test sites, which consist of an 80 m natural ring test site and a 20 m long narrow tunnel, are introduced. This includes a description of the configuration of the control points and the selection of targets.

  (3) The implementation of the experiments is shown, involving the construction of reference and test networks with different instruments.

  (4) The evaluation of the accuracy and precision of the selected SLAM methods, as well as the verifiability and reliability of the results, is demonstrated. This assessment compares the following metrics: (a) the centre point coordinates, width, height, and area, as well as the normal vectors, of the placed planar objects; (b) the cloud-to-cloud distances between the test point clouds and the reference point cloud; and (c) the differences between the measured and reference coordinates of the control points.

  (5) Finally, the applicability of the chosen methods and the factors affecting accuracy are discussed.
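Metric (b) in the outline above, the cloud-to-cloud distance, is typically computed as the nearest-neighbour distance from each test point to the reference cloud. The sketch below illustrates this with a k-d tree; it assumes numpy and scipy are available, and the clouds are synthetic stand-ins for the SLAM and TLS data.

```python
# Sketch of the cloud-to-cloud (C2C) distance metric: for each point of the
# test cloud, find its nearest neighbour in the reference cloud.
# The clouds below are invented for illustration.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(test_cloud, reference_cloud):
    """Return per-point nearest-neighbour distances from test to reference."""
    tree = cKDTree(reference_cloud)
    distances, _ = tree.query(test_cloud)
    return distances

# Toy example: a sparse reference cloud and a copy shifted by 5 cm along x.
rng = np.random.default_rng(42)
reference = rng.uniform(0, 10, size=(1000, 3))
test = reference + np.array([0.05, 0.0, 0.0])

d = cloud_to_cloud(test, reference)
print(f"mean C2C: {d.mean():.3f} m, std: {d.std():.3f} m")
```

In an evaluation like the one described here, the mean and standard deviation of these distances summarize accuracy and precision, respectively.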

2. Methodology

In this study, a geo-monitoring robot from the Institute of Mine Surveying and Geodesy at TU Freiberg was used, and a LiDAR-based and a camera-based SLAM method, HDL Graph SLAM (high-density LiDAR SLAM; Koide et al. 2019) and RTAB-Map (Real-Time Appearance-Based Mapping; Labbé and Michaud 2019) respectively, were tested in a real underground environment.

2.1 Description of the robot and equipment

The underground geo-monitoring and mapping robot used here (Fig. 1) is a customized Robotnik RB-Eken mobile robot (Guzman et al. 2016) built on a four-wheeled differential-drive off-road base. Its dimensions are 1242 mm × 763 mm × 845 mm in length, width, and height (1320 mm with the pan-tilt unit), and its ground clearance is 167 mm. It weighs 275 kg and can carry an additional payload of 150 kg. On the one hand, it is well suited for (limited) rugged-terrain deployment, with a maximum speed of 2.5 m/s and a climbing ability of 25°. On the other hand, its IP54 protection class (protected against dust and splashing water) also makes it suited to wet and dusty underground environments.

Fig. 1
figure 1

Model of the robot with multi-sensors

The robot is equipped with many modern and high-precision sensors. At the top of the robot is a pan-tilt unit that enables real-time movement of the sensors mounted on it, which include two halogen lights, a HySpex hyperspectral camera system, an Indurad ISDR-C radar, and a Sick TIM 571 2D laser scanner. On the four corners of the robot body, two radars and a 2D laser scanner for anti-collision protection are installed, together with a 16-line Robosense LiDAR for mapping (on the right front side of the robot). A ZED2 RGB-D stereo camera and an Axis M5525-E pan-tilt-zoom camera are mounted on the front of the robot. In the absence of a GNSS signal underground, an integrated high-precision, robust inertial measurement unit (IMU; King and Systems 1998) is placed at the centre of the robot to aid location estimation. This Northrop Grumman LCI-100N IMU determines the north heading precisely, independently of magnetic fields and without the need for GNSS data. In general, the configuration of this robot, specially designed for underground use, can meet the needs of autonomous mapping and geo-monitoring work such as mineral monitoring and hyperspectral imaging.

In this study, the Robosense 3D scanner and the ZED2 depth camera are used for mapping for the selected SLAM methods HDL Graph SLAM and RTAB-Map, respectively. Additionally, the information integrated by the IMU and the 2D scanners was also used as the initial information for the starting point. Table 1 lists the technical data of the sensors used in this study.

Table 1 Technical data of the sensors

Parameter            | Robosense RS-LiDAR-16 scanner      | ZED 2 stereo camera                | SICK TIM 571 scanner
Distance measurement | Time of flight                     | Triangulation                      | HDDM+
SLAM                 | HDL Graph SLAM                     | RTAB-Map                           | RTAB-Map
Range                | 0.4–80 m                           | 0.5–20 m                           | 0.5–25 m
Opening angle        | 360° (horizontal) + 30° (vertical) | 110° (horizontal) + 70° (vertical) | 270°
Resolution           | 0.18° (horizontal) + 2° (vertical) | –                                  | 0.33°
Ranging accuracy     | ± 2 cm                             | < 3% up to 3 m, < 5% up to 15 m    | systematic ± 60 mm, statistical < 20 mm
Installation height  | 62–69 cm                           | 60–63 cm                           | movable

Operation and software applications of the designed robot are implemented through the Robot Operating System (ROS; Quigley et al. 2009) on an internal central processing unit. ROS is an open-source meta-operating system for developing robots. It has contributed significantly to the standardization of sensor data formats, thereby improving interoperability between robot platforms and enabling the comparison of SLAM methods. It was introduced in 2007 by the Stanford Artificial Intelligence Laboratory as part of the Stanford AI Robot (STAIR) project and, from 2009 onwards, was primarily developed at Willow Garage. Since 2012, ROS has been supported by the newly founded non-profit Open Source Robotics Foundation (OSRF), which has coordinated, maintained, and further developed it since Willow Garage ended its operational activities in 2013. The main components and tasks of ROS are hardware abstraction, device drivers, frequently reused functionality, message exchange between programs or program parts, package management, and so on. Some commonly used automation methods, such as SLAM and simulation, are implemented in the ROS environment.

2.2 SLAM

To enable the robot eventually to drive autonomously while simultaneously localizing itself and collecting map information in an unknown underground environment without GNSS, the SLAM method was chosen. SLAM was introduced by Chatila and Laumond (1985), originally for the autonomous control of mobile robots. Since then, SLAM has been widely applied to self-driving cars, augmented reality visualization, and building 3D environment maps using sensors in constant movement.

SLAM is a group of algorithms for simultaneously estimating the state of a robot and mapping the environment it is in, using the robot's control inputs and sensor measurements. This problem can be explained by Fig. 2, where \({x}_{t}\) is the robot state (pose, consisting of orientation and position) at time \(t\), \({u}_{t}\) is the robot control, \({z}_{t}\) is the measurement made by the robot's sensors, and \(m\) is a set of landmarks (a map) observed by the robot at the respective position. The goal is to estimate \(x\) and \(m\) from \(u\) and \(z\). The basic workflow of a general SLAM algorithm can be summed up in six steps: (1) input of sensor data, (2) feature extraction, (3) feature matching, (4) pose estimation, (5) loop closure, and (6) map building (Kazerouni et al. 2022).

Fig. 2
figure 2

Graphical model of SLAM problem. (Thrun et al. 2006)
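The estimation problem described above, recovering the poses \(x\) from controls \(u\) and measurements \(z\), can be illustrated with a deliberately tiny example. The sketch below is a 1-D linear stand-in with invented numbers; real SLAM back-ends solve a large nonlinear version of this optimization.

```python
# Minimal 1-D illustration of the SLAM estimation problem: recover poses x
# from odometry constraints u and a loop-closure measurement z by linear
# least squares. Pose x0 is fixed at the origin; unknowns are x1 and x2.
import numpy as np

# Odometry u: each step reportedly moved +1.0 m.
# Loop closure z: x2 - x0 is observed as 1.8 m, i.e. the odometry drifted 0.2 m.
A = np.array([[ 1.0, 0.0],   # constraint x1 - x0 = 1.0  (u[0])
              [-1.0, 1.0],   # constraint x2 - x1 = 1.0  (u[1])
              [ 0.0, 1.0]])  # constraint x2 - x0 = 1.8  (z)
b = np.array([1.0, 1.0, 1.8])

x1, x2 = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"x1 = {x1:.3f} m, x2 = {x2:.3f} m")  # the 0.2 m drift is spread over the graph
```

The least-squares solution distributes the loop-closure discrepancy over all constraints, which is exactly the role of graph optimization in the methods tested here.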

Current multi-sensor systems applied to solving the SLAM problem consist of two core parts: the front-end and the back-end. The former, sensor data processing, analyses and associates the information collected by the sensors, estimates the relative change of the sensor pose, and ensures continuous map and robot-state updates. The estimated position from the front-end is then transmitted to the back-end, where the final positioning results are obtained through iterative optimization (He et al. 2022). The front-end can be divided into LiDAR-based and vision-based methods according to the sensor type. The main vision-based SLAM approach (visual SLAM) uses images acquired from monocular, stereo, and RGB-D cameras. It provides a wealth of information that is important for landmark detection; in addition, sensors for visual SLAM are inexpensive and easy to install. Compared with visual SLAM, LiDAR SLAM uses a laser sensor, which is significantly more precise than cameras and is not affected by external light. However, the point clouds generated by LiDAR SLAM are not as finely detailed as high-resolution images (Abaspur Kazerouni et al. 2022).

To study the different accuracies of LiDAR- and camera-based SLAM methods, HDL Graph SLAM and RTAB-Map were chosen, respectively. Both are relatively mature SLAM methods, and both provide a loop-closure detection function, which is crucial for performing accurate measurements in underground environments without global positioning solutions.

2.2.1 HDL Graph SLAM

HDL Graph SLAM is an open-source algorithm for real-time 6 DoF SLAM with a 3D LiDAR, which uses the Eigen and g2o libraries (Kummerle et al. 2011) for its core calculations. It has been tested with the geo-monitoring robot of the Institute for Mine Surveying and Geodesy at TU BAF in a lab environment, where the resulting map was confirmed to be topographically correct and suitable for navigation (Trybala et al. 2022). The algorithm is based on graph-based SLAM (Grisetti et al. 2010), in which the robot motion is represented as a graph: nodes represent robot poses at different points in time, and edges represent constraints between these poses, typically derived from sensor measurements. The algorithm has five main steps: data preprocessing, scan matching, plane detection, loop closure detection, and graph optimization. Scan matching estimates the subsequent sensor poses using the iterative closest point algorithm (ICP; Besl and McKay 1992), one of its variants (FastGICP; Koide, 2021b), or the normal distributions transform (NDT; Biber and Strasser 2003). Plane detection uses random sample consensus (RANSAC; Fischler and Bolles 1981). Graph optimization includes loop detection and pose graph optimization to compensate for the accumulated error of scan matching.
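The core of the scan-matching step can be made concrete with a small sketch: given corresponding points of two scans, the best-fit rigid transform has a closed-form solution via SVD. This is the inner alignment step of ICP; full ICP re-estimates the correspondences iteratively, and the 2-D "scan" below is synthetic.

```python
# One point-to-point alignment step as used inside ICP: closed-form best-fit
# rotation R and translation t (Kabsch/SVD method), assuming known
# correspondences. The toy scan below is invented for illustration.
import numpy as np

def align_svd(source, target):
    """Best-fit R, t minimizing ||R @ s_i + t - t_i||^2 over corresponding points."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(source.shape[1])
    D[-1, -1] = np.linalg.det(Vt.T @ U.T)          # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Toy 2-D scan: rotate by 10 degrees and shift, then recover the transform.
theta = np.radians(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -0.2])
source = np.random.default_rng(1).uniform(-5, 5, size=(200, 2))
target = source @ R_true.T + t_true

R, t = align_svd(source, target)
print("recovered rotation matrix:\n", R.round(4), "\ntranslation:", t.round(4))
```

With noiseless, exactly corresponding points the transform is recovered exactly; in practice the residual after convergence reflects sensor noise and mismatched correspondences.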

This study used the Robosense 3D LiDAR for mapping with HDL Graph SLAM. The acceleration information of the IMU was set as an initial constraint for graph optimization.

2.2.2 RTAB-Map

RTAB-Map is an open-source Graph-based visual SLAM. It can satisfy online requirements for long-term and large-scale environment mapping due to its thorough memory management approach implemented in loop closure detection. As a back-end algorithm, the main steps of RTAB-Map can be divided into three parts: data input and synchronization, optimization, and output.

The main inputs can be RGB-D or stereo images and laser scans. To handle different coordinate frames, tf (Transform; tf - ROS Wiki) is a necessary input. Beyond that, odometry is an external input from either a different source (such as wheel encoders) or the integrated odometry node of RTAB-Map. The input data streams then have to be synchronized, as the data rates of the sensors are unequal. Next, the graph optimization process takes place, using neighbour, loop-closure, and proximity links as constraints. When new links are added to the graph, the computed error is propagated throughout the graph to reduce odometry drift. The optimized graph can be assembled and exported as an OctoMap, a point cloud, or a 2D occupancy grid.

In this study, RGB-D images are the main input for the mapping process. The merged cloud topic from the two 2D scanners was adopted as an input for correction. For the odometry input, the wheel encoders and the visual odometry of the ZED2 camera were used. The IMU has not been used here yet, because it requires additional software and setup.

2.3 Design of the test site

The experiments took place in a real underground environment: 150 m deep in the research and education mine Reiche Zeche, Freiberg, Germany (Fig. 3).

Fig. 3
figure 3

Robot in the Silver Mine Freiberg, Reiche Zeche, Freiberg, Germany

To analyse the real mapping performance and measurement accuracy of the SLAM methods, two areas were selected as test sites (Fig. 4). One is a natural underground ring area (total length: 80 m) used to test the loop-closure function of the different SLAM methods, which compensates the accumulated error and has a great impact on accuracy. Considering that real underground environments mostly consist of narrow, long tunnels rather than large or ring-shaped areas, a 20 m long and very narrow tunnel was selected as the second test area. This tunnel has an average width of 1.7 m, with the narrowest section being only 0.95 m.

Fig. 4
figure 4

Overview of the test area

These two test areas are designed differently based on their evaluation purpose. Control points were used for the 20 m long tunnel to analyse measurement accuracy specifically and accurately. For the 80 m ring test area, planar objects were placed to analyse the accuracy changes in points, line segments, and areas.
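For the planar objects, the quantities evaluated later (centre point and normal vector) can be estimated from the point-cloud patch of each object by a least-squares plane fit. The sketch below uses the standard SVD approach; the patch is synthetic and numpy is assumed.

```python
# Estimate a planar target's centre and unit normal from its point-cloud patch:
# the centroid is the mean, and the normal is the singular vector belonging to
# the smallest singular value (least-squares plane fit). Patch data is invented.
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the least-squares plane through points."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return centroid, Vt[-1]   # direction of least variance = plane normal

# Synthetic patch lying in the z = 1.5 plane with 2 mm measurement noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(500, 2))
z = 1.5 + rng.normal(0, 0.002, size=500)
patch = np.column_stack([xy, z])

centroid, normal = fit_plane(patch)
print("centroid:", centroid.round(3), "normal:", normal.round(3))
```

Comparing the fitted centroid and normal of each object between SLAM and reference clouds then yields the positional and orientational deviations discussed in the evaluation.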

As one of the decisive factors determining the specific placement of the control points and planar objects, the driving routes (trajectories) of the robot were planned for the test sites first. In the 20 m long tunnel, the position behind the cross (Fig. 5) served as the starting point, to enhance the recognizability of the point cloud and ensure that the robot could turn. Conversely, in the 80 m ring area, a starting point with many feature points was chosen, which is very important for the SLAM methods to make the loop-closure function work more effectively (Fig. 6).

Fig. 5
figure 5

Design of the 20 m long tunnel

Fig. 6
figure 6

Design of 80 m ring area

With the available trajectories, two test areas were set up taking into account the performance of the sensors used. The technical data of the sensors are shown in Table 1.

The setup work for the 20 m long tunnel involved determining the target type, number, and arrangement of the control points. As the target, the 10 cm diameter ball, commonly used in mobile and terrestrial scanning, was chosen. There were three options for placing the ball: on the floor, roof, or wall of the tunnel. A floor target was ruled out first because the robot's base plate is only 167 mm above the ground. Since the robot has a height of 1.32 m with the pan-tilt unit, the ball can only be hung at a height (\({h}_{\text{ball}}\)) of at least 1.4 m above the tunnel floor to ensure that the robot can pass through the tunnel safely. The sensors are installed on the robot at a height (\({h}_{\text{sensor}}\)) of 65 cm above the ground, and the vertical opening angle \(({\alpha }_{\text{opening}})\) of the scanner is only 30° (Fig. 7). This means that within a distance of 2.8 m (the visual blind zone, VBZ) the sensors cannot see the ball target, as follows from the tunnel geometry:

Fig. 7
figure 7

Robot mine simulation sketch

$$\text{VBZ}= \frac{{h}_{\text{ball}}-{h}_{\text{sensor}}}{\text{tan}\left(\frac{{\alpha }_{\text{opening}}}{2}\right)}$$

At the same time, to ensure that the target sphere can be identified, at least two points must be scanned on it in the vertical and horizontal directions. The scanner's angular resolutions (\({\alpha }_{\text{resolution}}\)) are 0.18° horizontally and 2° vertically; according to the arc formula with \(\rho = \frac{180^\circ }{\uppi }\), this leads to a maximum distance (\(r\)) of about 3 m at which the targets (\(b = 10 \space\text{cm}\)) can still be identified.

$$r= \frac{b \cdot \rho }{{\alpha }_{\text{resolution}}}$$

These two prerequisites (visual blind zone of 2.8 m and the maximum distinguishable distance of 3 m) make the roof target suboptimal.
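The two constraints can be checked numerically from the sensor geometry given above (heights in metres, angles in degrees); this small sketch only reproduces the arithmetic of the two formulas.

```python
# Numeric check of the two target-placement constraints derived above.
import math

h_ball, h_sensor = 1.4, 0.65    # target height and sensor height above the floor (m)
alpha_opening = 30.0            # vertical opening angle of the 3D scanner (deg)
vbz = (h_ball - h_sensor) / math.tan(math.radians(alpha_opening / 2.0))
print(f"visual blind zone: {vbz:.2f} m")                  # about 2.8 m

b = 0.10                        # ball target diameter (m)
alpha_resolution = 2.0          # vertical angular resolution of the scanner (deg)
r_max = b / math.radians(alpha_resolution)                # arc formula, b = r * alpha
print(f"maximum identification distance: {r_max:.2f} m")  # just under 3 m
```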

For the above reasons, it was decided to fix the control points on both sides of the tunnel walls at the same height as the sensors on the robot, ensuring that the control points remain accessible for a long time. However, vein tunnels are often complex in shape and have many blind spots and dead ends, so it was eventually decided to place control points approximately every 2 m to make sure the targets can be recognized. The arrangement of the control points is shown in Fig. 5.

For the 80 m ring area, four planar objects were chosen for the accuracy and precision evaluation instead of control points. They were placed at different distances from the starting point and at varying distances from each other. Table 2 shows the locations and sizes of the placed planar objects. Object 1 is a door, located 11 m from the starting point and directly in front of the robot. Objects 2 and 3 are wooden boards, positioned approximately 38 m and 50 m from the starting point, on the right and left sides relative to the direction of travel, respectively. Object 4 is another door, situated 70 m from the starting point.

Table 2 Size and location of the placed planar objects

Planar object | Length (m) | Width (m) | Area (m²) | Distance from starting point (m)
1             | 1.816      | 0.823     | 1.495     | 11
2             | 0.957      | 0.425     | 0.407     | 38
3             | 0.956      | 0.427     | 0.408     | 50
4             | 1.831      | 0.985     | 1.804     | 70

3. Reference network construction

To evaluate the accuracy and precision of the robot with both SLAM methods and different sensors, a reference network was constructed for the two test areas using two instruments: a Trimble S8 total station for determining the coordinates of the 18 control points, which offers 1″ angular accuracy and an electronic distance measurement precision of 1 mm + 1 ppm (Trimble S8 datasheet), and a Riegl VZ-2000i TLS, whose point cloud served as the reference, with an accuracy of 5 mm and a range of up to 2500 m (Riegl datasheet 2022).

The coordinates of the 18 control points were determined with the total station from 5 fixed points. The point cloud acquisition with the TLS was divided into 3 scan positions for the 20 m long tunnel and 25 scan positions for the 80 m ring test area. The individual scan positions were registered into two point clouds in RiSCAN Pro (the operating and processing software of RIEGL; RIEGL Laser Measurement Systems GmbH 2022) using tie points and plane patches. The mean errors (standard deviations) of the adjustments are 0.0019 m (20 m long tunnel) and 0.0013 m (80 m ring area). Subsequently, the point clouds were subsampled to a minimum point spacing of 2 mm to allow efficient processing.

4. Experiments

The entire experimental process is divided into four steps: test measurements, optimization of the experiments, the series of experimental measurements, and data processing.

4.1 Test experimental measurements and optimization

In both prepared test areas, real-time test measurements with the robot were first carried out to verify whether the robot could pass through the tunnel, whether the point clouds generated by the sensors were correct and usable for the evaluation, and whether there were problems with loop closure. The testing phase consisted of repeating the measurements in both areas in the same manner a total of 10 times. After the test measurements, the following problems were identified:

(1) Loss of odometry information when using RTAB-Map, caused by the system losing track of visual features.

(2) Significant ghosting and drift in the point cloud collected by both SLAM methods in the 20 m tunnel (Figs. 8a and 9a).

Fig. 8

Comparison: original point cloud versus result point cloud from separate measurements of RTAB-Map

Fig. 9

Comparison: original point cloud versus result point cloud from separate measurements of HDL Graph SLAM

Because of the above problems, the following solutions were carried out:

For (1): There are different causes of odometry loss (“Rtabmap GitHub” 2023), such as a lack of distinctive visual features, no nearby features, and the robot's velocity. After repeated experiments, it was found that driving speeds greater than 0.5 m/s sometimes cause odometry loss and unsuccessful loop closure. To address this problem, the robot was driven at a low speed of about 0.15 m/s during the experiments. The lack of features was tackled by using lighting mounted on the robot and by sticking photogrammetric black-and-white targets on the tunnel walls where there were not enough naturally distinctive features.

For (2): During the test phase, the robot starts from the starting point, follows the designed path (Fig. 5), drives to the end of the tunnel, turns around, and returns to the starting point along the same route. It then turns around again so that it finishes at the starting point with its original orientation. After following this path, the resulting point clouds show so much ghosting and drift that the control points cannot be recognized (Figs. 8a and 9a). Therefore, Interactive SLAM (Koide et al. 2021a), recommended for use with HDL Graph SLAM, and the standalone application of RTAB-Map were used to manually add additional loop closures and to adjust the scan-matching parameters. However, the issues with the resulting point clouds partly persisted: incorrect registration still caused a doubling of objects in the final point cloud (Fig. 9b). The results indicate that the error grows especially when the robot rotates in place and when it travels at a higher driving speed. This problem was solved by using 2 scan positions for the robot, measuring the outward and return runs separately, and discarding the measurements during the turns. This resulted in a new point cloud (Figs. 8b and 9c) in which the centre point of a 10 cm sphere can be clearly identified (Figs. 8c, d and 9d, e).

4.2 Measurements and processing

During the experimental measurements, the robot was controlled manually with a controller to enter the mine and conduct multiple measurements. Four final 3D data acquisitions were carried out for each of the two test areas. This number of measurements is sufficient to derive meaningful comparison metrics for precision and accuracy. The specific working steps of each measurement are as follows:

  • Check the published sensor topics for relevant data.

  • Record a rosbag (“Rosbag-ROS Wiki”) file (*.bag) to save the raw data collected by the sensors.

  • Check whether the information from the bag data is correct.

  • Replay the recorded bag file to perform offline SLAM with parameter tuning and post-processing.

For HDL Graph SLAM, the configured *.launch file was utilized to generate a raw data folder, containing the map and odometry information of each pose. This data can be reprocessed in Interactive SLAM to verify and modify the loop closures, discover additional loops, and refine the graph edges, aiming for a better final adjustment of the pose graph. The resulting point cloud data can be exported as a *.pcd file for further analysis.

For RTAB-Map, the raw rosbag data was used to extract image frames, which can be imported into RTAB-Map to initiate the mapping function. The loop closure function in RTAB-Map can be performed either automatically or manually. After satisfactory post-processing, the point cloud was exported as a *.ply file.

Since the reference network and the experiment network were not constructed simultaneously, manual cleaning of the exported point clouds was required to eliminate false errors caused by movable objects at certain positions, which may have changed during the experimental period (e.g., tools used by miners).

5. Evaluation

5.1 Evaluation methods

After the generation and preprocessing of point clouds, the evaluation of point cloud quality was carried out. The comparative analysis work is divided into two parts. One focuses on the control point coordinates for the 20 m long tunnel, and the other on the plane analysis for the 80 m ring area.

5.1.1 80 m ring test area

The comparisons were conducted using CloudCompare. The point clouds generated by both SLAM methods were first roughly aligned to the reference point cloud collected by the TLS through manual alignment and then finely registered using the ICP algorithm. The registration was performed with a point sampling limit of 1 500 000 and with farthest point removal enabled. The cloud-to-cloud distance was computed as the first metric using a point-to-plane method, in which a local plane is estimated for each point from the 6 closest points of the reference point cloud. Then, the placed planar objects were extracted from the point clouds and fitted as planes, enabling subsequent comparisons of the height, width, and area of the planar objects (second metric), of their centre point coordinates and normal vectors (third metric), and of the Root Mean Square Error of the plane fit (RMSE; last metric).
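The point-to-plane cloud-to-cloud distance can be sketched as follows. This is a minimal NumPy/SciPy illustration of the principle (local plane through the 6 nearest reference points), not CloudCompare's implementation; the function name is our own:

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_distances(test_pts, ref_pts, k=6):
    """For each test point, fit a local plane to its k nearest
    reference points (centroid plus smallest-variance direction
    from an SVD) and return the orthogonal distance to that plane."""
    tree = cKDTree(ref_pts)
    _, idx = tree.query(test_pts, k=k)
    dists = np.empty(len(test_pts))
    for i, nn_idx in enumerate(idx):
        nn = ref_pts[nn_idx]
        centroid = nn.mean(axis=0)
        # the right-singular vector of the smallest singular value
        # is the local plane normal
        _, _, vt = np.linalg.svd(nn - centroid)
        normal = vt[-1]
        dists[i] = abs(np.dot(test_pts[i] - centroid, normal))
    return dists
```

The mean and standard deviation of these distances correspond to the C2C statistics reported in Sect. 5.2.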

5.1.2 20 m long tunnel

The comparison work is done with the software RiSCAN Pro and CloudCompare in the reference coordinate system Rauenberg Datum 83 / 3-degree Gauss-Krüger Zone 4 (RD83 GKK 4 Zone), a two-dimensional geodetic reference system commonly used for underground mining maps in Saxony, Germany.

The control point targets in the HDL Graph SLAM point clouds were extracted and fitted as spheres using RiSCAN Pro for later registration with the TLS point cloud. The coordinates of the centre points of the fitted spheres were calculated automatically by RiSCAN Pro and accepted as the control point coordinates.

For the point cloud collected by the ZED2 camera using RTAB-Map, the control point coordinates were determined with CloudCompare, since RiSCAN Pro does not accept .ply files from RTAB-Map, and using the accepted .pcd point cloud files without real colour makes it quite challenging to identify the spheres. The point cloud from the TLS was exported from RiSCAN Pro and imported into CloudCompare with the selected reference coordinate system for registration. The point cloud from RTAB-Map was registered through manual alignment and the ICP algorithm, similar to the process described for the 80 m ring area. Subsequently, the target spheres in the RTAB-Map point cloud were manually extracted and fitted as spheres. The centre point coordinates can be calculated automatically after the fitting process.
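The sphere fitting underlying this step can be illustrated with a short algebraic least-squares sketch (our own minimal implementation, not the RiSCAN Pro or CloudCompare code):

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit returning (centre, radius).
    From |p - c|^2 = r^2 we get 2 p.c + (r^2 - |c|^2) = |p|^2,
    which is linear in the centre c and the scalar r^2 - |c|^2."""
    points = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = float(np.sqrt(sol[3] + centre @ centre))
    return centre, radius
```

The fitted centre then serves as the estimated control point coordinate, exactly as described for the 10 cm spheres above.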

The calculated control point coordinates from both point clouds were compared with the reference coordinates measured by the total station as the first quality metric. The statistics of the unsigned cloud-to-cloud distances from each SLAM point cloud to the reference were also computed as the second accuracy metric in the 20 m tunnel.

5.2 Results

5.2.1 Comparison of general information

The functionality of the two SLAM methods, their loop closure performance, and the point cloud density were compared. The results are listed in Table 3.

Table 3 Comparison of general information on HDL Graph SLAM and RTAB-Map

| Metrics                     | HDL Graph SLAM                         | RTAB-Map                                  |
|-----------------------------|----------------------------------------|-------------------------------------------|
| Output                      | Point cloud data, raw data folder      | Many options                              |
| Point cloud density         | 5 Mio points                           | 25 Mio points                             |
| Real-time measurement       | Shows point cloud                      | Shows details of loop closure             |
| Sphere and plane extraction | Difficult with strip-like point cloud  | Simpler with denser coloured point cloud  |
| Loop closure                | Almost always successful in testing    | Only successful with manual loop closure  |
| Post-processing             | Interactive SLAM                       | Standalone application                    |
| Further developments        | No                                     | Yes                                       |
| Environmental dependence    | No                                     | Yes                                       |

Firstly, the point clouds collected by different sensors demonstrate their unique properties. As shown in Fig. 10, the point cloud generated by the camera is denser than that collected by the laser scanner, because the Robosense is a 16-line LiDAR and therefore produces strip-like point clouds. Additionally, the point cloud from the LiDAR shows many holes in the tunnel roof due to its limited opening angle (Table 1), making the tunnel floor (blue points) visible through the tunnel roof (red points) in this top view graphic (Fig. 10). In contrast, the camera avoids this issue with its wider opening angle and different point cloud generation method.

Fig. 10

Top view of point cloud collected by RTAB-Map (left) and HDL Graph SLAM (right) showing point cloud density

Both selected SLAM methods support offline and real-time measurements, but RTAB-Map additionally offers a loop closure management function with rtabmap_viz for observing and managing loops during measurements. Due to the variety of input types, RTAB-Map supports a wider range of export file formats. Additionally, the true-colour point cloud makes sphere and plane extraction simpler with RTAB-Map than with HDL Graph SLAM. However, automatic loop closure was almost always successful with HDL Graph SLAM, unless driving at a very high speed, while with RTAB-Map it was not always successful during preliminary testing, even with manual loop closure. This loop closure task is straightforward for RTAB-Map using a depth camera in colour-rich outdoor environments, but proves challenging in featureless, dark, and damp underground conditions. Unclosed loops can be post-processed manually, which is done with a built-in standalone application in RTAB-Map but requires external software (Interactive SLAM) for HDL Graph SLAM. In summary, the programming logic of HDL Graph SLAM is clear, making it accessible even for beginners. RTAB-Map is a powerful method capable of handling various measurement and localization tasks; however, due to its versatility, it can be challenging to determine the optimal parameter settings for a given purpose.

5.2.2 Results analysis of 80 m ring area

As the first metric to evaluate the accuracy of the point clouds, the cloud-to-cloud distances were calculated using CloudCompare software. The results indicate, as expected, that HDL Graph SLAM is more accurate than RTAB-Map. The mean distances between the reference network and HDL Graph SLAM and RTAB-Map are 0.123 m (with a standard deviation of 0.133 m) and 0.37 m (with a standard deviation of 0.385 m), respectively. This can be explained by the relative lack of loop closures and the drift of RTAB-Map.

Next, the capabilities of each SLAM method and the used sensors to capture and display planes and line segments are compared, i.e. the width, height and area of the placed planar objects acquired by HDL Graph SLAM and RTAB-Map are compared with the true values measured by a tape measure (Table 4).

Table 4 Analysis of the width (W in m), height (H in m) and area (A in m²) of the placed planar objects (subscripts: T = TLS, H = HDL Graph SLAM, R = RTAB-Map)

| Item | | \(x_{\mathrm{true}}\) | \(x_T\) | \(d_T\) | \({\overline{x}}_H\) | \(d_H\) | \(\sigma_H\) | \(\sigma_{{\overline{x}}_H}\) | \({\overline{x}}_R\) | \(d_R\) | \(\sigma_R\) | \(\sigma_{{\overline{x}}_R}\) |
|------|---|--------|--------|---------|--------|---------|-------|-------|--------|---------|--------|-------|
| S1 | W | 0.823 | 0.847 | −0.024 | 0.834 | 0.011 | 0.012 | 0.006 | 0.936 | 0.089 | 0.054 | 0.038 |
| S1 | H | 1.800 | 1.788 | 0.012 | 1.813 | 0.013 | 0.005 | 0.002 | 1.771 | −0.017 | 0.044 | 0.031 |
| S1 | A | 1.481 | 1.514 | −0.033 | 1.512 | 0.031 | 0.023 | 0.011 | 1.657 | 0.142 | 0.104 | 0.074 |
| S2 | W | 0.425 | 0.455 | −0.030 | 0.460 | 0.035 | 0.022 | 0.011 | 0.457 | 0.002 | 0.057 | 0.040 |
| S2 | H | 0.957 | 0.967 | −0.010 | 0.956 | −0.001 | 0.023 | 0.011 | 1.024 | 0.057 | 0.066 | 0.047 |
| S2 | A | 0.407 | 0.440 | −0.033 | 0.439 | 0.032 | 0.023 | 0.012 | 0.470 | 0.030 | 0.066 | 0.046 |
| S3 | W | 0.427 | 0.451 | −0.024 | 0.453 | 0.026 | 0.010 | 0.005 | 0.472 | 0.021 | 0.066 | 0.047 |
| S3 | H | 0.957 | 0.969 | −0.012 | 0.936 | −0.021 | 0.091 | 0.045 | 0.951 | −0.018 | 0.030 | 0.021 |
| S3 | A | 0.409 | 0.437 | −0.028 | 0.444 | 0.036 | 0.042 | 0.021 | 0.448 | 0.011 | 0.064 | 0.045 |
| S4 | W | 0.985 | 1.029 | −0.044 | 0.997 | 0.012 | 0.017 | 0.009 | 0.938 | −0.091 | 0.017 | 0.012 |
| S4 | H | 1.831 | 1.862 | −0.031 | 1.861 | 0.030 | 0.021 | 0.010 | 1.877 | 0.015 | 0.075 | 0.053 |
| S4 | A | 1.804 | 1.916 | −0.112 | 1.855 | 0.052 | 0.038 | 0.019 | 1.761 | −0.155 | 0.0774 | 0.055 |
In Table 4, \(\overline{x}\) is the mean value of the multiple measurements and \(d\) is the difference between the true value and the measured value. The standard deviation \(\sigma\) of the measurements describes the degree of variability or dispersion in a set of data points and can be calculated for the multiple measurements with the following formula:

$${\upsigma } = \sqrt {\frac{{\left( {vv} \right)}}{n - 1}}$$

where \(n\) is the number of experiments conducted, which is 4 for HDL Graph SLAM and 2 for RTAB-Map. Although 4 experiments were conducted for each SLAM method, the point clouds generated by RTAB-Map were usable from only 2 experiments due to failed loop closures. Further, \(n-1\) is the number of degrees of freedom and \(v\) is the deviation from the mean value. For the standard deviation of the area of the placed planar objects, the law of error propagation must be considered, because the area \(A\) is a function of the height \(a\) and the width \(b\):

$$\sigma_{A} = \sqrt {\left( {\frac{\partial A}{{\partial a}} \cdot \sigma_{a} } \right)^{2} + \left( {\frac{\partial A}{{\partial b}} \cdot \sigma_{b} } \right)^{2} } = \sqrt {\left( {b \cdot \sigma_{a} } \right)^{2} + \left( {a \cdot \sigma_{b} } \right)^{2} }$$

The standard deviation of the mean value indicates the precision or reliability of the sample mean as an estimate of the population mean. It can be calculated using the following formula:

$${\upsigma }_{{\overline{x}}} = \frac{{\upsigma }}{\sqrt n }$$
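As a sketch, the three formulas above can be combined in a few lines of Python (illustrative helper names, not code from the study):

```python
import numpy as np

def precision_stats(samples):
    """Sample standard deviation sigma = sqrt([vv]/(n-1)) and the
    standard deviation of the mean sigma/sqrt(n)."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    v = samples - samples.mean()        # deviations from the mean
    sigma = np.sqrt((v @ v) / (n - 1))
    return sigma, sigma / np.sqrt(n)

def area_sigma(a, b, sigma_a, sigma_b):
    """Propagated standard deviation of the area A = a * b."""
    return np.sqrt((b * sigma_a) ** 2 + (a * sigma_b) ** 2)
```

Applied to the four HDL Graph SLAM repetitions of, say, a width measurement, `precision_stats` yields the \(\sigma_H\) and \(\sigma_{\overline{x}_H}\) columns of Table 4, while `area_sigma` yields the propagated uncertainty for the area rows.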

Table 4 reveals that both methods and sensors exhibit acceptable capabilities in capturing planes and line segments, with mean differences of 2.5 cm for HDL Graph SLAM and 5 cm for RTAB-Map. Notably, for plane 4, the point cloud collected by HDL Graph SLAM has even smaller errors than that collected by the TLS. This can be attributed to the continuous scanning of the SLAM approach: because plane 4 lies in the shadow of the cabinet and the outer wall, it is difficult to find the correct edge in the TLS point cloud (acquired from fixed set-up positions) when extracting the plane.

Next, the differences in centre coordinates and normal vectors are discussed, which were calculated in the same way as described above. As shown in Table 5, 75% of the differences for HDL Graph SLAM are within 20 cm. Conversely, the centre coordinates obtained from RTAB-Map are not comparable: the differences are not only random but also very large, with the maximum discrepancy reaching 2 m, which is unacceptable. The significant differences occur predominantly in the horizontal direction, while the differences in the vertical direction are mostly within 10 cm. This is due to the input from the wheel encoder and visual odometry, which are inherently inaccurate when the robot turns in place in a narrow and dark tunnel. The substantial horizontal odometry drift is also a primary reason why loops detected from the images were rejected by RTAB-Map.

Table 5 Analysis of surface centre coordinates (subscripts: H = HDL Graph SLAM, R = RTAB-Map)

| Item | | TLS (m) | \({\overline{x}}_H\) (m) | \(d_H\) (m) | \(\sigma_H\) (m) | \(\sigma_{{\overline{x}}_H}\) (m) | \({\overline{x}}_R\) (m) | \(d_R\) (m) | \(\sigma_R\) (m) | \(\sigma_{{\overline{x}}_R}\) (m) |
|------|---|----------|----------|---------|-------|-------|----------|---------|-------|-------|
| S1 | x | −18.008 | −17.907 | 0.101 | 0.148 | 0.074 | −18.401 | −0.393 | 0.608 | 0.430 |
| S1 | y | −3.639 | −3.778 | −0.140 | 0.287 | 0.144 | −4.624 | −0.985 | 0.650 | 0.459 |
| S1 | z | −2.799 | −2.918 | −0.119 | 0.395 | 0.197 | −2.761 | 0.037 | 0.111 | 0.078 |
| S2 | x | 2.727 | 2.574 | −0.153 | 0.132 | 0.066 | 1.185 | −1.542 | 0.457 | 0.323 |
| S2 | y | −16.570 | −16.535 | 0.036 | 0.070 | 0.035 | −14.289 | 2.281 | 0.756 | 0.535 |
| S2 | z | −3.261 | −3.434 | −0.173 | 0.088 | 0.044 | −3.141 | 0.120 | 0.162 | 0.115 |
| S3 | x | 3.042 | 2.825 | −0.216 | 0.373 | 0.186 | 2.868 | −0.174 | 0.144 | 0.102 |
| S3 | y | −5.941 | −5.919 | 0.021 | 0.044 | 0.022 | −4.549 | 1.391 | 0.942 | 0.666 |
| S3 | z | −3.244 | −3.454 | −0.210 | 0.249 | 0.124 | −3.155 | 0.090 | 0.077 | 0.055 |
| S4 | x | 2.611 | 2.448 | −0.163 | 0.050 | 0.025 | 5.542 | 2.931 | 1.441 | 1.019 |
| S4 | y | 12.768 | 12.575 | −0.192 | 0.081 | 0.040 | 12.166 | −0.602 | 0.535 | 0.378 |
| S4 | z | −2.726 | −3.040 | −0.314 | 0.256 | 0.128 | −2.779 | −0.054 | 0.032 | 0.023 |

The normal vector is another important metric, as it describes the orientation of the captured surface. The values are listed in Table 6, where θ is the angle in degrees between the normal vector of the centre point measured by TLS (\(\overrightarrow{a}\)) and that obtained by the SLAM methods (\(\overrightarrow{b}\)), calculated as follows:

$${\uptheta } = {\text{arccos}}\left( {\frac{{\vec{a} \cdot \vec{b}}}{{\left| {\vec{a}} \right| \cdot \left| {\vec{b}} \right|}}} \right) \cdot \left( {\frac{180^\circ }{\uppi }} \right)$$
Table 6 Analysis of the normal vector of the placed planar objects (unitless components; subscripts: H = HDL Graph SLAM, R = RTAB-Map)

| Item | | TLS | \({\overline{x}}_H\) | \(\sigma_H\) | \(\sigma_{{\overline{x}}_H}\) | \(\theta_H\) | Plane | \(\theta_{\text{on plane},H}\) | \({\overline{x}}_R\) | \(\sigma_R\) | \(\sigma_{{\overline{x}}_R}\) | \(\theta_R\) | \(\theta_{\text{on plane},R}\) |
|------|---|--------|--------|-------|-------|---------|-------|----------|--------|-------|-------|----------|----------|
| S1 | x | 0.889 | 0.893 | 0.002 | 0.001 | 0.726° | xy | 0.563° | 0.742 | 0.061 | 0.043 | 14.666° | 14.645° |
| S1 | y | 0.459 | 0.450 | 0.004 | 0.002 | | yz | 1.009° | 0.667 | 0.069 | 0.049 | | 1.047° |
| S1 | z | −0.004 | 0.004 | 0.011 | 0.005 | | xz | 0.514° | −0.018 | 0.047 | 0.033 | | 1.132° |
| S2 | x | 0.986 | 0.990 | 0.005 | 0.002 | 4.220° | xy | 4.082° | 0.985 | 0.007 | 0.005 | 6.716° | 6.720° |
| S2 | y | 0.158 | 0.087 | 0.089 | 0.045 | | yz | 21.353° | 0.041 | 0.200 | 0.142 | | 36.787° |
| S2 | z | −0.058 | −0.077 | 0.034 | 0.017 | | xz | 1.081° | −0.063 | 0.006 | 0.004 | | 0.293° |
| S3 | x | 0.798 | 0.796 | 0.006 | 0.003 | 0.857° | xy | 0.486° | 0.659 | 0.930 | 0.658 | 12.075° | 12.113° |
| S3 | y | −0.588 | −0.597 | 0.011 | 0.005 | | yz | 1.286° | −0.743 | 1.051 | 0.743 | | 1.112° |
| S3 | z | 0.130 | 0.118 | 0.007 | 0.003 | | xz | 0.820° | 0.111 | 0.156 | 0.111 | | −1.150° |
| S4 | x | 0.629 | 0.625 | 0.013 | 0.006 | 0.448° | xy | 0.286° | 0.794 | 0.047 | 0.033 | 13.726° | 13.703° |
| S4 | y | 0.777 | 0.780 | 0.010 | 0.005 | | yz | 0.441° | 0.605 | 0.061 | 0.043 | | 1.347° |
| S4 | z | −0.001 | 0.005 | 0.025 | 0.012 | | xz | 0.549° | −0.015 | 0.015 | 0.010 | | 0.991° |

The results again show larger differences for RTAB-Map. This motivated a further analysis in which the angles are considered separately on the XY, XZ, and YZ planes, by projecting the vectors onto the respective plane and setting the unused coordinate component to zero.
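A minimal sketch of this angle computation, including the projection onto a coordinate plane (our own illustrative helpers, following the arccos formula given above):

```python
import numpy as np

def angle_deg(a, b):
    """Angle in degrees between two (normal) vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # clip guards against round-off slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def projected_angle_deg(a, b, drop_axis):
    """Angle after projecting both vectors onto a coordinate plane by
    zeroing one component (drop axis 2 -> XY plane, 1 -> XZ, 0 -> YZ)."""
    a = np.asarray(a, float).copy()
    b = np.asarray(b, float).copy()
    a[drop_axis] = 0.0
    b[drop_axis] = 0.0
    return angle_deg(a, b)
```

With the TLS normal as `a` and a SLAM normal as `b`, `angle_deg` gives the θ column of Table 6 and `projected_angle_deg` the per-plane angles.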

To visualise the distribution characteristics of the data for comparison, separate scatter plots and two-dimensional histograms were created for the residuals of the centre point coordinates, normal vectors, and areas of the placed planar objects. Firstly, the two-dimensional histogram in Fig. 11 reveals a greater concentration (with one outlier) of the HDL Graph SLAM data, which indicates its higher repeatability. A more detailed analysis shows that the outlier stems from a height measurement.

Fig. 11

Two-dimensional histogram and scatter plots to show differences in width, height, and area of the placed planar objects

Figure 12 shows the differences between the centre point coordinates of the placed planar objects collected by both SLAM methods and the reference network. As previously analysed, the results of HDL Graph SLAM are reliable, whereas the RTAB-Map data in the horizontal direction is completely irregular and random.

Fig. 12

Two-dimensional histogram and scatter plots of centre point coordinates of the placed planar objects

The two-dimensional histogram and scatter plots of the normal vector analysis (Fig. 13) show that the results in the vertical direction from both SLAM methods exhibit acceptable accuracy and repeatability. However, the results from RTAB-Map in the horizontal direction show greater differences, and the HDL Graph SLAM data contains a noticeable outlier.

Fig. 13

Two-dimensional histogram and scatter plots of the normal vector of the placed planar objects

Finally, the RMSEs of fitting the plane, which describes how well and evenly the robot captured the surface, are compared as the last metric to analyse the 80 m ring area. The metric was calculated for all measurement sessions of each method (Fig. 14).

Fig. 14

Boxplot of the plane fitting RMSE for each algorithm, for all surfaces combined over all measurement sessions

5.2.3 Results analysis in 20 m long tunnel

The cloud-to-cloud distance was calculated, where the mean distance from the point cloud of HDL Graph SLAM to TLS is 0.042 m with a standard deviation of 0.050 m and the mean distance of RTAB-Map is 0.165 m with a standard deviation of 0.142 m.

The coordinates of the 18 control points from HDL Graph SLAM and RTAB-Map were compared with the reference values measured with the total station. The differences in all coordinates are listed in Table 7, which clearly shows systematic errors. This may be due to a phenomenon common in SLAM: errors accumulate with increasing distance from the starting point. However, it could also be caused by the different methods of registering the point clouds to the ground truth data; the authors have not undertaken targeted investigations in this regard. The point differences are visualised as line graphs showing the absolute differences in the vertical (Fig. 15) and horizontal (Fig. 16) directions. The odd and even points are plotted separately, since the points are fixed on both sides of the tunnel (Fig. 5) and opposite points have almost the same distance from the starting point, which matters when considering the impact of distance on accuracy. The results clearly show that HDL Graph SLAM has maximum absolute differences of 10 cm horizontally and 4 cm vertically, while RTAB-Map exhibits 39 cm and 12 cm, respectively.

Table 7 Analysis of centre point coordinates of 18 control points

| Point Nr | HDL dEasting (m) | HDL dNorthing (m) | HDL dHeight (m) | RTAB dEasting (m) | RTAB dNorthing (m) | RTAB dHeight (m) |
|----------|------------------|-------------------|-----------------|-------------------|--------------------|------------------|
| 1  | −0.001 | −0.002 | −0.014 | 0.068  | 0.038  | −0.024 |
| 2  | 0.047  | −0.034 | −0.028 | −0.119 | −0.026 | −0.025 |
| 3  | 0.030  | 0.015  | −0.006 | −0.251 | −0.118 | −0.076 |
| 4  | −0.080 | −0.006 | −0.027 | −0.182 | −0.107 | −0.055 |
| 5  | 0.000  | −0.001 | 0.012  | 0.160  | 0.067  | −0.054 |
| 6  | −0.014 | 0.018  | 0.007  | −0.013 | −0.122 | −0.016 |
| 7  | −0.020 | 0.010  | −0.001 | 0.093  | 0.213  | −0.005 |
| 8  | −0.002 | 0.010  | 0.016  | −0.100 | 0.142  | −0.012 |
| 9  | −0.024 | 0.020  | 0.011  | −0.146 | 0.519  | −0.056 |
| 10 | −0.038 | 0.020  | 0.011  | −0.351 | 0.377  | −0.019 |
| 11 | 0.053  | −0.004 | −0.006 | −0.247 | 0.683  | −0.032 |
| 12 | −0.064 | 0.021  | 0.019  | −0.446 | 0.586  | 0.048  |
| 13 | 0.067  | −0.027 | −0.005 | −0.371 | 0.621  | 0.028  |
| 14 | 0.036  | −0.005 | −0.016 | −0.620 | 0.701  | 0.051  |
| 15 | 0.082  | −0.035 | 0.044  | −0.398 | 0.419  | −0.063 |
| 16 | 0.075  | 0.024  | 0.002  | −0.628 | 0.482  | 0.015  |
| 17 | −0.128 | −0.033 | −0.040 | −0.406 | 0.227  | −0.106 |
| 18 | 0.047  | −0.056 | 0.039  | −0.681 | 0.278  | −0.026 |

Fig. 15

Line graph of differences of centre point coordinates in the vertical direction

Fig. 16

Line graph of differences of centre point coordinates in the horizontal direction

6. Conclusions and discussion

This article presents a methodology to objectively and quantitatively assess the applicability of SLAM methods implemented on a multi-sensor robot system for underground geo-monitoring. Based on the unique characteristics of underground conditions and the accuracy requirements of geo-monitoring, an underground test site was developed together with a set of specific metrics. These metrics include not only the control point coordinates and the classical cloud-to-cloud distance between test and reference point clouds, but also the centre point coordinates, normal vectors, and areas of placed planar objects. The introduced metrics allow a detailed local analysis to assess whether significant changes in underground tunnel geometry can be detected. Using the proposed methodology, a LiDAR-based and a visual-based SLAM method were evaluated at the established test site, which consists of a 20 m long tunnel and an 80 m natural ring area.

The results indicate that the LiDAR-based HDL Graph SLAM exhibits greater resilience to environmental factors, delivering point clouds with centimetre-level accuracy and precision. In the 20 m long narrow tunnel without loop closure, HDL Graph SLAM achieved maximum absolute differences of 10 cm horizontally and 4 cm vertically when compared to the control point coordinates measured by the total station. In the 80 m ring test area with loop closure, mean differences of 2.5 cm (width, height, and area), 12 cm (centre point), and 1.5° (angle between normal vectors) were observed for the surface capture of the placed planar objects. In contrast, the camera-based RTAB-Map displayed heightened sensitivity to environmental conditions, resulting in measurement inaccuracies and irregularities. In the 20 m long tunnel, the absolute horizontal difference reached up to 40 cm. Within the 80 m ring test area, unsuccessful loop closure led to errors at the metre level.

Compared to the officially stated 5 mm accuracy of the TLS-based reference point cloud, even the well-performing HDL Graph SLAM is less accurate. However, the continuity (which can mitigate the impact of shadowed areas on the results) and the timeliness (estimated in our tests as around 1/30th of the time required for TLS) of measurements using SLAM methods, coupled with the potential for colourisation of point clouds acquired through visual SLAM, offer promising prospects for utilising robots in underground geo-monitoring.

It is worth noting that factors such as parameter settings in the standalone application of RTAB-Map, sensor performance, and the accuracy of plane and sphere registration and extraction can directly impact the results of this comparison. Additionally, the main problem of RTAB-Map is the horizontal error caused by the horizontal drift of odometry. This can be reduced by using more accurate IMU data. The achievable accuracy still needs to be analysed in future research with the proposed methodology from this article.

Finally, it is essential to acknowledge that, due to limited time and resources, only two selected SLAM methods were analysed in this study. The analysis was based solely on the robot operating in a single loop at a low and uniform speed on a relatively flat area without a train track. However, the driving velocity and the ground flatness can have a significant impact on the accuracy of the algorithms. Therefore, future research will prioritize broader comparisons of SLAM methods and explore specific factors affecting errors, such as ground flatness, driving style, and speed.

References

[1] Abaspur Kazerouni I, Fitzgerald L, Dooly G, Toal D (2022) A survey of state-of-the-art on visual SLAM. Expert Syst Appl 205:117734
[2] Bakambu JN, Polotski V (2007) Autonomous system for navigation and surveying in underground mines. J Field Robotics 24:829–847. https://doi.org/10.1002/rob.20213
[3] Barnewold L, Lottermoser BG (2020) Identification of digital technologies and digitalization trends in the mining industry. Int J Min Sci Technol 30:747–757. https://doi.org/10.1016/j.ijmst.2020.07.003
[4] Benndorf J (2021) Geomonitoring und Markscheidewesen als integrativer Teil des Umweltmanagements in der Rohstoff-und Energiebranche–zukünftige Aufgaben. Allg Vermess 7:237–247
[5] Besl PJ, McKay ND (1992) A method for registration of 3-D shapes. In: IEEE transactions on pattern analysis and machine intelligence, pp 239-256. https://doi.org/10.1109/34.121791
[6] Biber P, Strasser W (2003) The normal distributions transform: a new approach to laser scan matching. In: IEEE/RSJ international conference on intelligent robots and systems, pp 2743–2748. https://doi.org/10.1109/IROS.2003.1249285
[7] Cai YF, Jin YT, Wang ZY, Chen T, Wang YR, Kong WY, Xiao W, Li XJ, Lian XG, Hu HF (2023) A review of monitoring, calculation, and simulation methods for ground subsidence induced by coal mining. Int J Coal Sci Technol 10(1):32. https://doi.org/10.1007/s40789-023-00595-4
[8] Canales C C, Sellers E S (2020) Structural recognition and rock mass characterization in underground mines: a UAV and LiDAR mapping based approach. In: Castro R, Báez F, Suzuki K (eds), MassMin: proceedings of the eighth international conference & exhibition on mass mining, University of Chile, Santiago, pp 1302–1312. https://doi.org/10.36487/ACG_repo/2063_97
[9] Chatila R, Laumond J (1985) Position referencing and consistent world modelling for mobile robots. In: IEEE international conference on robotics and automation proceedings. https://doi.org/10.1109/ROBOT.1985.1087373.
[10] Evanek N, Slaker B, Iannacchione A, Miller T (2021) LiDAR mapping of ground damage in a heading reorientation case study. Int J Min Sci Technol 31:67–74. https://doi.org/10.1016/j.ijmst.2020.12.018
[11] Farella EM (2016) 3D Mapping of underground environment with a hand-held laser. Bollettino della società italiana di fotogrammetria e topografia, 2, 1-10.
[12] Ferguson D, Morris A, Hähnel D, Baker C, Omohundro Z, Reverte C, Thayer S, Whittaker C, Whittaker W, Burgard W, Thrun S (2003) An autonomous robotic system for mapping abandoned mines. In: advances in neural information processing systems. MIT Press
[13] Fischler MA, Bolles RC (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. https://doi.org/10.1145/358669.358692
[14] Ghosh D, Samanta B, Chakravarty D (2017) Multi-sensor data fusion for 6D pose estimation and 3D underground mine mapping using autonomous mobile robot. Int J Image Data Fusion 8:173–187. https://doi.org/10.1080/19479832.2016.1226966
[15] Rtabmap GitHub (2023) RTAB-Map GitHub. URL: http://introlab.github.io/rtabmap/ (accessed 15.08.23)
[16] Grisetti G, Kümmerle R, Stachniss C, Burgard W (2010) A tutorial on graph-based SLAM. IEEE Intell Transp Syst Mag 2(4):31–43. https://doi.org/10.1109/MITS.2010.939925
[17] Guzman R, Navarro R, Beneto M, Carbonell D (2016) Robotnik—professional service robotics applications with ROS. In: Koubaa A (ed) Robot operating system (ROS). Springer International Publishing, Cham, pp 253–288. https://doi.org/10.1007/978-3-319-26054-9_10
[18] He X, Gao W, Sheng C, Zhang Z, Pan S, Duan L, Zhang H, Xinyu L (2022) LiDAR-visual-inertial odometry based on optimized visual point-line features. Remote Sens 14(3):622. https://doi.org/10.3390/rs14030622
[19] Kim H, Choi Y (2021) Location estimation of autonomous driving robot and 3D tunnel mapping in underground mines using pattern matched LiDAR sequential images. Int J Min Sci Technol 31:779–788. https://doi.org/10.1016/j.ijmst.2021.07.007
[20] King AD (1998) Inertial navigation – forty years of evolution. GEC Review 13
[21] Koide K, Miura J, Menegatti E (2019) A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement. Int J Adv Robot Syst. https://doi.org/10.1177/1729881419841532
[22] Koide K, Miura J, Yokozuka M, Oishi S, Banno A (2021a) Interactive 3D graph SLAM for map correction. IEEE Robot Autom Lett 6:40–47. https://doi.org/10.1109/LRA.2020.3028828
[23] Koide K, Yokozuka M, Oishi S, Banno A (2021b) Globally consistent 3D LiDAR mapping with GPU-accelerated GICP matching cost factors. IEEE Robot Autom Lett 6(4):8591–8598. https://doi.org/10.1109/LRA.2021.3113043
[24] Kuhlmann H, Schwieger V, Wieser A, Niemeier W (2014) Engineering geodesy – definition and core competencies. In: FIG Congress 2014: engaging the challenges – enhancing the relevance, Kuala Lumpur, Malaysia, 16–21 June 2014, paper 6962
[25] Kümmerle R, Grisetti G, Strasdat H, Konolige K, Burgard W (2011) g2o: a general framework for graph optimization. In: 2011 IEEE international conference on robotics and automation (ICRA), Shanghai, China, pp 3607–3613. https://doi.org/10.1109/ICRA.2011.5979949
[26] Labbé M, Michaud F (2019) RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. J Field Robot 36:416–446. https://doi.org/10.1002/rob.21831
[27] Larsson J, Broxvall M, Saffiotti A (2006) A navigation system for automated loaders in underground mines. In: Corke P, Sukkariah S (eds) Field and service robotics: results of the 5th international conference. Springer Berlin Heidelberg, Berlin, Heidelberg, pp 129–140. https://doi.org/10.1007/978-3-540-33453-8_12
[28] Lösch R, Grehl S, Donner M, Buhl C, Jung B (2018) Design of an autonomous robot for mapping, navigation, and manipulation in underground mines. In: 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS), Madrid, pp 1407–1412
[29] Mankins JC (1995) Technology readiness levels: a white paper. NASA, Office of Space Access and Technology, Advanced Concepts Office
[30] Miller ID, Cladera F, Cowley A, Shivakumar SS, Lee ES, Jarin-Lipschitz L, Bhat A, Rodrigues N, Zhou A, Cohen A, Kulkarni A, Laney J, Taylor, CJ, Kumar V (2020) Mine tunnel exploration using multiple quadrupedal robots. In: IEEE robotics and automation letters, vol 5, no. 2, pp 2840–2847
[31] Neumann T, Ferrein A, Kallweit S, Scholl I (2014) Towards a mobile mapping robot for underground mines. In: Proceedings of the 2014 PRASA, RobMech and AfLaT international joint symposium. Cape Town, South Africa, pp 1–7
[32] Nüchter A, Surmann H, Lingemann K, Hertzberg J, Thrun S (2004) 6D SLAM with an application in autonomous mine mapping. In: IEEE international conference on robotics and automation (ICRA 2004), New Orleans, LA, USA, vol 2, pp 1998–2003. https://doi.org/10.1109/ROBOT.2004.1308117
[33] Prokos A, Roumpos C (2019) Ground deformation monitoring techniques at continuous surface lignite mines. In: Conference 4th joint international symposium on deformation monitoring (JISDM). Athens, Greece, pp 98–99
[34] Quigley M, Gerkey B, Conley K, Faust J, Foote T, Leibs J, Wheeler R, Ng AY (2009) ROS: an open-source Robot Operating System. In: Proc. open-source software workshop of the international conference on robotics and automation (ICRA), Kobe, Japan
[35] Ren Z, Wang L, Bi L (2019) Robust GICP-Based 3D LiDAR SLAM for underground mining environment. Sensors (Basel) 19:2915. https://doi.org/10.3390/s19132915
[36] RIEGL Laser Measurement Systems GmbH (2022) RiSCAN PRO. URL: http://www.riegl.com/products/software-packages/riscan-pro/ (accessed 16.06.23)
[37] RIEGL VZ-2000i datasheet (2022) URL: http://www.riegl.com/uploads/tx_pxpriegldownloads/RIEGL_VZ-2000i_Datasheet_2022-09-27.pdf (accessed 15.08.23)
[38] rosbag – ROS Wiki (n.d.) URL: http://wiki.ros.org/rosbag (accessed 15.08.23)
[39] Settles E, Göttle A, von Poschinger A (2008) Slope monitoring methods, work package 6, p 19
[40] SLAM – MATLAB & Simulink – MathWorks Deutschland. URL: https://de.mathworks.com/help/nav/slam.html (accessed 25.08.23)
[41] tf – ROS Wiki. URL: http://wiki.ros.org/tf (accessed 21.08.23)
[42] Theilig T (2017) HDDM+ – innovative technology for distance measurement from SICK, 8022027
[43] Thrun S, Burgard W, Fox D (2006) Probabilistic robotics. The MIT Press, Cambridge, MA, p 311
[44] Trimble S8 total station datasheet (2005) URL: https://www.laserinst.com/content/s8%20datasheet.pdf (accessed 16.06.23)
[45] Technology readiness levels (TRL): extract from Part 19 – Commission Decision C(2014) 4995. European Commission, ec.europa.eu, 2014 (retrieved 11 November 2019)
[46] Trybała P, Kujawa P, Romańczukiewicz K, Szrek A, Remondino F (2023) Designing and evaluating a portable LiDAR-based SLAM system. Int Arch Photogr Remote Sens Sp Inf Sci XLVIII-1/W3-2023:191–198. https://doi.org/10.5194/isprs-archives-XLVIII-1-W3-2023-191-2023
[47] Trybała P, John A, Köhler C, Benndorf J, Blachowski J (2022) Towards a mine 3D dense mapping mobile robot: a system design and preliminary accuracy evaluation. Markscheidewesen 129(1)
[48] Wagner A (2017) New geodetic monitoring approaches using image assisted total stations. PhD thesis, Technical University of Munich, Munich, Germany, p. 19
[49] Wieser A, Capra A (2017) Special Issue: Deformation Monitoring. Appl Geomat 9:79–80. https://doi.org/10.1007/s12518-017-0192-0
[50] Yarkan S, Guzelgoz S, Arslan H, Murphy R (2009) Underground mine communications: a survey. IEEE Commun Surv Tutorials 11:125–142. https://doi.org/10.1109/SURV.2009.090309
[51] Yinka-Banjo C, Bagula A (2012) Autonomous multi-robot behaviours for safety inspection under the constraints of underground mine terrains. Ubiquitous Comput Commun J 7(5):1316

Funding

This research was supported by the German Academic Scholarship Foundation, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation; project number 422117092), and the Saxon Ministry of Science and Arts.

About this article

Cite this article

Li, J., Benndorf, J. & Trybała, P. Quantitative analysis of different SLAM algorithms for geo-monitoring in an underground test field. Int J Coal Sci Technol 12, 7 (2025).
  • Received

    30 November 2023

  • Revised

    17 June 2024

  • Accepted

    26 July 2024


  • DOI

    https://doi.org/10.1007/s40789-025-00745-w
