
MapEval: Towards Unified, Robust and Efficient SLAM Map Evaluation Framework

Xiangcheng Hu1 · Jin Wu1 · Mingkai Jia1 · Hongyu Yan1 · Yi Jiang2 · Binqian Jiang1
Wei Zhang1 · Wei He3 · Ping Tan1*†

1HKUST     2CityU     3USTB

†Project Lead   *Corresponding Author


MapEval Framework Overview

Overview

MapEval is a comprehensive framework for evaluating point cloud maps in SLAM systems, addressing two fundamentally distinct aspects of map quality assessment:

  1. Global Geometric Accuracy: Measures the absolute geometric fidelity of the reconstructed map compared to ground truth. This aspect is crucial as SLAM systems often accumulate drift over long trajectories, leading to global deformation.

  2. Local Structural Consistency: Evaluates the preservation of local geometric features and structural relationships, which is essential for tasks like obstacle avoidance and local planning, even when global accuracy may be compromised.

These complementary aspects require different evaluation approaches, as global drift may exist despite excellent local reconstruction, or conversely, good global alignment might mask local inconsistencies. Our framework provides a unified solution through both traditional metrics and novel evaluation methods based on optimal transport theory.

News

  • 2025/07/06: Used TBB to accelerate MME calculation, updated parameter settings, and added more configuration examples.
  • 2025/05/05: Added new test data and removed simulation code.
  • 2025/03/05: Formally published in IEEE RAL!
  • 2025/02/25: Paper accepted!
  • 2025/02/12: Source code released!
  • 2025/02/05: Paper resubmitted.
  • 2024/12/19: Paper submitted to IEEE RAL!

Key Features

Traditional Metrics Implementation

  • Accuracy (AC): Point-level geometric error assessment
  • Completeness (COM): Map coverage evaluation
  • Chamfer Distance (CD): Bidirectional point cloud difference (a distance-metric sketch follows this list)
  • Mean Map Entropy (MME): Information-theoretic local consistency metric
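
The three distance-based metrics above (AC, COM, CD) all reduce to nearest-neighbor distance statistics between the estimated and ground-truth clouds. The following is a minimal, hypothetical NumPy/SciPy sketch, not the project's C++ implementation; est_pts and gt_pts are assumed to be already-aligned N x 3 arrays, and the 0.2 m inlier threshold mirrors the first accuracy_level entry in config.yaml.

import numpy as np
from scipy.spatial import cKDTree

def nn_distances(src, dst):
    # Distance from each point in src to its nearest neighbor in dst
    dists, _ = cKDTree(dst).query(src, k=1)
    return dists

def accuracy(est_pts, gt_pts, inlier_thresh=0.2):
    # RMSE of est -> gt distances, restricted to inliers within the threshold
    d = nn_distances(est_pts, gt_pts)
    inliers = d[d < inlier_thresh]
    return np.sqrt(np.mean(inliers ** 2)) if inliers.size else np.inf

def completeness(gt_pts, est_pts, thresh=0.2):
    # Fraction of ground-truth points covered by the estimated map
    return float(np.mean(nn_distances(gt_pts, est_pts) < thresh))

def chamfer_distance(est_pts, gt_pts):
    # Symmetric mean nearest-neighbor distance (one common Chamfer variant)
    return nn_distances(est_pts, gt_pts).mean() + nn_distances(gt_pts, est_pts).mean()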

Novel Proposed Metrics

  • Average Wasserstein Distance (AWD): Robust global geometric accuracy assessment (see the Gaussian-Wasserstein sketch below)
  • Spatial Consistency Score (SCS): Enhanced local consistency evaluation

Evaluation Metrics Illustration
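
AWD builds on the closed-form 2-Wasserstein distance between Gaussians fitted to corresponding voxels of the two maps, and SCS scores local consistency from related voxel-wise statistics. Below is a hypothetical sketch of that core computation, assuming voxels_est and voxels_gt map a voxel index to the points falling inside it; the actual voxel handling and weighting in MapEval may differ.

import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_squared(mu1, cov1, mu2, cov2):
    # Closed-form squared 2-Wasserstein distance between two Gaussians
    sqrt_c1 = sqrtm(cov1)
    cross = sqrtm(sqrt_c1 @ cov2 @ sqrt_c1)
    # sqrtm can introduce tiny imaginary parts numerically; keep the real part
    w2_sq = np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * np.real(cross))
    return float(max(0.0, w2_sq))

def voxel_gaussian(points):
    # Mean and covariance of the points inside one voxel
    return points.mean(axis=0), np.cov(points.T)

def average_wasserstein_distance(voxels_est, voxels_gt):
    # Average W2 distance over voxels observed in both maps
    shared = voxels_est.keys() & voxels_gt.keys()
    dists = []
    for key in shared:
        mu1, c1 = voxel_gaussian(voxels_est[key])
        mu2, c2 = voxel_gaussian(voxels_gt[key])
        dists.append(np.sqrt(gaussian_w2_squared(mu1, c1, mu2, c2)))
    return float(np.mean(dists)) if dists else np.inf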

Experimental Results

Simulation Experiments

[Figures: noise sensitivity and outlier robustness results]

Real-World Experiments

[Figures: map evaluation via localization accuracy; map evaluation in diverse environments]

Efficiency and Parameter Analysis

[Figures: efficiency and parameter analysis]

Datasets

The following datasets are supported and used for evaluation:

Dataset | Description
--- | ---
MS-Dataset | Multi-session mapping dataset
FusionPortable (FP) / FusionPortableV2 | Multi-sensor fusion dataset
Newer College (NC) | Outdoor autonomous navigation dataset
GEODE Dataset (GE) | Degenerate SLAM dataset

[Figure: dataset previews]

Quick Start

Dependencies

Test Data

Download the test data using password: 1

Sequence | Preview | Test PCD | Ground Truth PCD
--- | --- | --- | ---
MCR_slow | MCR_slow preview | map.pcd | map_gt.pcd
PK01 | PK01 preview | map.pcd | gt.pcd

Installation and Usage

1. Install Open3D

Note: a newer CMake version may be required.

git clone https://github.com/isl-org/Open3D.git
cd Open3D && mkdir build && cd build   
cmake ..
make install

2. Configure Parameters

Set and review the parameters in config.yaml:

# accuracy_level: vector5d, we mainly use the result of the first element
# For small inlier ratios, try larger values, e.g., for outdoors: [0.5, 0.3, 0.2, 0.1, 0.05]
accuracy_level: [0.2, 0.1, 0.08, 0.05, 0.01]

# initial_matrix: vector16d, the initial transformation matrix for registration
# Ensure correct format to avoid YAML::BadSubscript errors
initial_matrix:
  - [1.0, 0.0, 0.0, 0.0]
  - [0.0, 1.0, 0.0, 0.0]
  - [0.0, 0.0, 1.0, 0.0]
  - [0.0, 0.0, 0.0, 1.0]
  
# vmd_voxel_size: outdoor: 2.0-4.0; indoor: 2.0-3.0
vmd_voxel_size: 3.0

3. Compile MapEval

git clone https://github.com/JokerJohn/Cloud_Map_Evaluation.git
cd Cloud_Map_Evaluation/map_eval && mkdir build && cd build
cmake ..
make

4. Run Evaluation

./map_eval

This evaluates a point cloud map generated by a SLAM system against a ground truth point cloud map and calculates related metrics.

Evaluation Results

Visualization

Error Visualization

The framework generates rendered distance-error maps with color coding:

  • Raw distance-error map (10cm): Shows error for all points
  • Inlier distance-error map (2cm): Shows error for matched points only
  • Color scheme: a red-green-blue colormap encodes distance error over the 0-10 cm range (a small rendering sketch follows the figure below)

Error Visualization
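
As a rough illustration of this kind of rendering, per-point errors can be clamped to the 0-10 cm range and pushed through a colormap before being attached to an Open3D point cloud. This is a hypothetical sketch, not MapEval's renderer; the exact colormap and its orientation in the released code may differ.

import numpy as np
import open3d as o3d
import matplotlib.pyplot as plt

def colorize_by_error(points, errors, max_err=0.10):
    # Clamp errors to [0, max_err] and map them through a jet colormap
    t = np.clip(np.asarray(errors) / max_err, 0.0, 1.0)
    colors = plt.get_cmap("jet")(t)[:, :3]  # drop the alpha channel
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points))
    pcd.colors = o3d.utility.Vector3dVector(colors)
    return pcd

# Example: o3d.io.write_point_cloud("error_map.pcd", colorize_by_error(pts, errs))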

Evaluation Without Ground Truth

If a ground truth map is not available, only Mean Map Entropy (MME) can be evaluated; lower values indicate better local consistency. Make sure MME computation is enabled via the evaluate_mme option in config.yaml.

MME Evaluation
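
For reference, MME is commonly computed as the mean differential entropy of the local point covariance around each map point. The following is a hypothetical NumPy/SciPy sketch with an assumed fixed search radius; the released TBB-accelerated C++ implementation and its parameters may differ.

import numpy as np
from scipy.spatial import cKDTree

def mean_map_entropy(points, radius=0.3, min_neighbors=10):
    # Lower MME indicates a crisper, more self-consistent map
    points = np.asarray(points)
    tree = cKDTree(points)
    entropies = []
    for neighbor_idx in tree.query_ball_point(points, r=radius):
        if len(neighbor_idx) < min_neighbors:
            continue  # skip sparse neighborhoods with unreliable covariance
        cov = np.cov(points[neighbor_idx].T)
        det = np.linalg.det(2.0 * np.pi * np.e * cov)
        if det > 0:
            entropies.append(0.5 * np.log(det))
    return float(np.mean(entropies)) if entropies else float("nan")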

Mesh Reconstruction

A simple mesh can be reconstructed from the point cloud map:

Mesh Reconstruction

5. Output Files

The evaluation generates the following result files:

Output Files

6. Voxel Error Visualization

For detailed voxel error visualization, use error-visualization.py:

pip install numpy matplotlib scipy
python3 error-visualization.py

[Figures: voxel error visualizations]

Frequently Asked Questions

How to Obtain Initial Pose?

Use CloudCompare to align the LiDAR-Inertial Odometry (LIO) map to the ground truth map:

  1. Roughly translate and rotate the LIO point cloud map to align with the GT map
  2. Manually register the moved LIO map (aligned) to the GT map (reference)
  3. Extract the terminal transform output T and use it as the initial pose matrix (see the formatting sketch below)

[Figures: CloudCompare alignment steps]
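
CloudCompare reports the final registration result as a 4x4 matrix; its four rows are what goes into the initial_matrix field of config.yaml. A hypothetical helper for formatting that matrix follows (the identity below is only a placeholder for CloudCompare's actual output):

import numpy as np

def to_initial_matrix_yaml(T):
    # Format a 4x4 transform as the initial_matrix block expected by config.yaml
    T = np.asarray(T)
    assert T.shape == (4, 4)
    rows = "\n".join("  - [{:.6f}, {:.6f}, {:.6f}, {:.6f}]".format(*row) for row in T)
    return "initial_matrix:\n" + rows

print(to_initial_matrix_yaml(np.eye(4)))  # replace np.eye(4) with the CloudCompare transform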

Difference Between Raw and Inlier Rendered Maps

  • Raw rendered map (left): Color-codes error for all points in the estimated map. Points without correspondences in the ground truth map are assigned maximum error (20cm) and rendered in red.

  • Inlier rendered map (right): Excludes non-overlapping regions and colors only inlier points after point cloud matching. Contains only a subset of the original estimated map points (a small sketch follows the figure below).

Credit: John-Henawy in issue #5

Raw vs Inlier Maps
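
Concretely, the inlier map keeps only estimated points whose nearest ground-truth neighbor falls within the inlier threshold, while the raw map keeps every point and saturates unmatched ones at the maximum error. A minimal, hypothetical sketch of that split (threshold values are illustrative):

import numpy as np
from scipy.spatial import cKDTree

def raw_and_inlier_errors(est_pts, gt_pts, inlier_thresh=0.2, max_err=0.2):
    # Nearest ground-truth distance for every estimated point
    d, _ = cKDTree(gt_pts).query(est_pts, k=1)
    raw_errors = np.minimum(d, max_err)   # raw map: unmatched points saturate at max_err
    inlier_mask = d < inlier_thresh       # inlier map: keep only matched points
    return raw_errors, inlier_mask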

Applicable Scenarios

1. With Ground Truth Map

All metrics (AC, COM, CD, MME, AWD, SCS) are applicable.

2. Without Ground Truth Map

Only Mean Map Entropy (MME) can be used for evaluation.

Important considerations:

  • Maps must be built under comparable conditions and at the same scale
    • Do not compare a pure LIO map against a SLAM map that has undergone loop closure optimization; loop closure modifies the local point cloud structure and makes the MME comparison unreliable.
    • Comparing MME between different LIO maps is valid.

Credit: @Silentbarber, ZOUYIyi in issue #4 and issue #7

Citation

If you find this work useful for your research, please cite our paper:

@article{hu2025mapeval,
  title={MapEval: Towards Unified, Robust and Efficient SLAM Map Evaluation Framework}, 
  author={Xiangcheng Hu and Jin Wu and Mingkai Jia and Hongyu Yan and Yi Jiang and Binqian Jiang and Wei Zhang and Wei He and Ping Tan},
  journal={IEEE Robotics and Automation Letters},
  year={2025},
  volume={10},
  number={5},
  pages={4228-4235},
  doi={10.1109/LRA.2025.3548441}
}

@article{wei2024fpv2,
  title={{FusionPortableV2}: A Unified Multi-Sensor Dataset for Generalized {SLAM} Across Diverse Platforms and Scalable Environments},
  author={Wei, Hexiang and Jiao, Jianhao and Hu, Xiangcheng and Yu, Jingwen and Xie, Xupeng and Wu, Jin and Zhu, Yilong and Liu, Yuxuan and Wang, Lujia and Liu, Ming},
  journal={The International Journal of Robotics Research},
  pages={02783649241303525},
  year={2024},
  publisher={SAGE Publications Sage UK: London, England}
}

Related Works

The following research works have utilized MapEval for map evaluation:

Work | Description | Publication | Metrics Used
--- | --- | --- | ---
LEMON-Mapping | Multi-Session Point Cloud Mapping | arXiv 2025 | MME
CompSLAM | Multi-Modal Localization and Mapping | arXiv 2025 | AWD/SCS
GEODE | SLAM Dataset | IJRR 2025 | -
ELite | LiDAR-based Lifelong Mapping | ICRA 2025 | AC/CD
PALoc | Prior-Assisted 6-DoF Localization | T-MECH 2024 | AC/CD
MS-Mapping | Multi-Session LiDAR Mapping | arXiv 2024 | AC/CD/MME
FusionPortableV2 | SLAM Dataset | IJRR 2024 | COM/CD

Contributors

We thank all contributors to this project:

Star History

Star History Chart

License

This project is licensed under the MIT License - see the LICENSE file for details.