Xiangcheng Hu1 · Jin Wu1 · Mingkai Jia1 · Hongyu Yan1 · Yi Jiang2 · Binqian Jiang1
Wei Zhang1 · Wei He3 · Ping Tan1*†
1HKUST 2CityU 3USTB
†Project Lead *Corresponding Author
MapEval is a comprehensive framework for evaluating point cloud maps in SLAM systems, addressing two fundamentally distinct aspects of map quality assessment:
- **Global Geometric Accuracy**: Measures the absolute geometric fidelity of the reconstructed map compared to ground truth. This aspect is crucial as SLAM systems often accumulate drift over long trajectories, leading to global deformation.
- **Local Structural Consistency**: Evaluates the preservation of local geometric features and structural relationships, which is essential for tasks like obstacle avoidance and local planning, even when global accuracy may be compromised.
These complementary aspects require different evaluation approaches, as global drift may exist despite excellent local reconstruction, or conversely, good global alignment might mask local inconsistencies. Our framework provides a unified solution through both traditional metrics and novel evaluation methods based on optimal transport theory.
- 2025/07/06: Use TBB to accelerate MME calculation, update parameter settings and add more configuration examples.
- 2025/05/05: Add new test data and remove simulation codes.
- 2025/03/05: Formally published in IEEE RAL!
- 2025/02/25: Paper accepted!
- 2025/02/12: Source code released!
- 2025/02/05: Paper resubmitted.
- 2024/12/19: Paper submitted to IEEE RAL!
- Accuracy (AC): Point-level geometric error assessment
- Completeness (COM): Map coverage evaluation
- Chamfer Distance (CD): Bidirectional point cloud difference
- Mean Map Entropy (MME): Information-theoretic local consistency metric
- Average Wasserstein Distance (AWD): Robust global geometric accuracy assessment
- Spatial Consistency Score (SCS): Enhanced local consistency evaluation
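As a rough illustration of two of these metrics (toy data and illustrative sizes, not MapEval's C++ implementation), the Chamfer distance and the Gaussian 2-Wasserstein distance that underlies AWD can be sketched in a few lines:

```python
# Illustrative sketches of Chamfer Distance (CD) and the per-Gaussian
# 2-Wasserstein distance behind AWD; not MapEval's actual implementation.
import numpy as np
from scipy.linalg import sqrtm
from scipy.spatial import cKDTree

def chamfer_distance(est, gt):
    """Bidirectional mean nearest-neighbour distance between two clouds."""
    d_eg, _ = cKDTree(gt).query(est)   # est -> gt distances
    d_ge, _ = cKDTree(est).query(gt)   # gt -> est distances
    return d_eg.mean() + d_ge.mean()

def gaussian_w2(mu1, cov1, mu2, cov2):
    """2-Wasserstein distance between two Gaussians; AWD averages such
    distances over corresponding local Gaussians of the two maps."""
    root = sqrtm(cov2)
    cross = np.real(sqrtm(root @ cov1 @ root))
    sq = np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross)
    return float(np.sqrt(max(sq, 0.0)))

rng = np.random.default_rng(0)
gt = rng.uniform(size=(1000, 3))                    # toy ground-truth cloud
est = gt + rng.normal(scale=0.01, size=gt.shape)    # noisy copy of the map
print(chamfer_distance(est, gt))
print(gaussian_w2(np.zeros(3), np.eye(3), np.ones(3), np.eye(3)))
```

Identical clouds give a Chamfer distance of exactly zero, and identical covariances reduce the Wasserstein term to the plain Euclidean distance between the means.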
*Figures (omitted): noise sensitivity and outlier robustness comparisons.*
*Figures (omitted): map evaluation via localization accuracy, and map evaluation in diverse environments.*
The following datasets are supported and used for evaluation:
Dataset | Description |
---|---|
MS-Dataset | Multi-session mapping dataset |
FusionPortable (FP) and FusionPortableV2 | Multi-sensor fusion dataset |
Newer College (NC) | Outdoor autonomous navigation dataset |
GEODE Dataset (GE) | Degenerate SLAM dataset |
- Open3D 0.15.1 (>= 0.11)
- Eigen 3.3.7
- PCL 1.10.0
- yaml-cpp
- TBB 2020.1
- Ubuntu 20.04
Download the test data using password: 1
Sequence | Test PCD | Ground Truth PCD |
---|---|---|
MCR_slow | map.pcd | map_gt.pcd |
PK01 | map.pcd | gt.pcd |
Note: A higher version of CMake may be required.
```bash
git clone https://github.com/isl-org/Open3D.git
cd Open3D && mkdir build && cd build
cmake ..
make install
```
Set and review the parameters in config.yaml:
```yaml
# accuracy_level: vector5d; we mainly use the result of the first element.
# For small inlier ratios, try larger values, e.g. for outdoors: [0.5, 0.3, 0.2, 0.1, 0.05]
accuracy_level: [0.2, 0.1, 0.08, 0.05, 0.01]

# initial_matrix: vector16d, the initial transformation matrix for registration.
# Ensure the format is correct to avoid YAML::BadSubscript errors.
initial_matrix:
  - [1.0, 0.0, 0.0, 0.0]
  - [0.0, 1.0, 0.0, 0.0]
  - [0.0, 0.0, 1.0, 0.0]
  - [0.0, 0.0, 0.0, 1.0]

# vmd_voxel_size: outdoor: 2.0-4.0; indoor: 2.0-3.0
vmd_voxel_size: 3.0
```
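A quick way to catch a malformed `initial_matrix` before running the C++ binary is to parse it in Python. A minimal sketch (assuming PyYAML is installed; the config text is inlined here, but in practice you would load `config.yaml`):

```python
# Illustrative sanity check: parse the initial_matrix block and verify it
# is a valid 4x4 homogeneous transform. A malformed nested list is a
# common cause of YAML::BadSubscript errors on the C++ side.
import numpy as np
import yaml

cfg_text = """
initial_matrix:
  - [1.0, 0.0, 0.0, 0.0]
  - [0.0, 1.0, 0.0, 0.0]
  - [0.0, 0.0, 1.0, 0.0]
  - [0.0, 0.0, 0.0, 1.0]
"""
cfg = yaml.safe_load(cfg_text)   # in practice: yaml.safe_load(open("config.yaml"))
T = np.array(cfg["initial_matrix"], dtype=float)
assert T.shape == (4, 4), "initial_matrix must be 4 rows of 4 values"
assert np.allclose(T[3], [0.0, 0.0, 0.0, 1.0]), "last row must be [0 0 0 1]"
print("initial_matrix OK")
```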
```bash
git clone https://github.com/JokerJohn/Cloud_Map_Evaluation.git
cd Cloud_Map_Evaluation/map_eval && mkdir build && cd build
cmake ..
make
./map_eval
```
This evaluates a point cloud map generated by a SLAM system against a ground truth point cloud map and calculates related metrics.
The framework generates rendered distance-error maps with color coding:
- Raw distance-error map (10cm): Shows error for all points
- Inlier distance-error map (2cm): Shows error for matched points only
- Color scheme: R→G→B represents distance error levels from 0-10cm
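As a rough sketch of such a color coding (assuming blue encodes low error and red encodes high error, consistent with unmatched points being rendered in red; not necessarily the exact ramp MapEval uses):

```python
# Illustrative blue -> green -> red error ramp over [0, max_err] metres.
import numpy as np

def error_to_rgb(err, max_err=0.10):
    """Map per-point distance errors to RGB; 0 -> blue, mid -> green, max -> red."""
    t = np.clip(np.asarray(err, dtype=float) / max_err, 0.0, 1.0)
    r = np.clip(2.0 * t - 1.0, 0.0, 1.0)   # ramps up over the upper half
    b = np.clip(1.0 - 2.0 * t, 0.0, 1.0)   # ramps down over the lower half
    g = 1.0 - r - b                        # peaks at mid-range error
    return np.stack([r, g, b], axis=-1)

print(error_to_rgb([0.0, 0.05, 0.10]))
```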
If ground truth is not available, only Mean Map Entropy (MME) can be evaluated; lower values indicate better consistency. To skip the MME computation when it is not needed, set `evaluate_mme: false` in `config.yaml`.
A simple mesh can be reconstructed from the point cloud map:
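As a minimal 2.5D sketch of this idea (not MapEval's code), a terrain-like cloud can be triangulated with a Delaunay triangulation of its XY footprint; for general 3D maps, Poisson surface reconstruction (e.g. in Open3D) is the usual choice:

```python
# Illustrative 2.5D meshing: Delaunay-triangulate the XY projection of a
# point cloud. Suitable only for roughly height-field-like maps.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.uniform(size=(500, 3))   # stand-in for a map point cloud
tri = Delaunay(pts[:, :2])         # triangulate the XY footprint
faces = tri.simplices              # (n_faces, 3) vertex indices into pts
print(faces.shape)
```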
The evaluation generates the following result files:
For detailed voxel error visualization, use error-visualization.py:
```bash
pip install numpy matplotlib scipy
python3 error-visualization.py
```
Use CloudCompare to align the LiDAR-Inertial Odometry (LIO) map to the ground truth map:
- Roughly translate and rotate the LIO point cloud map to align with the GT map
- Manually register the moved LIO map (aligned) to the GT map (reference)
- Extract the transform `T` printed in the terminal and use it as the initial pose matrix
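The 4×4 matrix that CloudCompare prints maps row by row onto `initial_matrix` in `config.yaml`; the values below are purely illustrative:

```yaml
# Illustrative values only: paste the four rows of the transform that
# CloudCompare prints after manual registration.
initial_matrix:
  - [0.999, -0.035, 0.002, 1.250]
  - [0.035, 0.999, -0.011, -0.480]
  - [-0.002, 0.011, 1.000, 0.075]
  - [0.000, 0.000, 0.000, 1.000]
```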
- **Raw rendered map** (left): Color-codes error for all points in the estimated map. Points without correspondences in the ground truth map are assigned the maximum error (20cm) and rendered in red.
- **Inlier rendered map** (right): Excludes non-overlapping regions and colors only inlier points after point cloud matching. Contains only a subset of the original estimated map points.
Credit: John-Henawy in issue #5
With a ground truth map, all metrics (AC, COM, CD, MME, AWD, SCS) are applicable. Without one, only Mean Map Entropy (MME) can be used for evaluation.
Important considerations:
- Maps must be on the same scale
- Do not compare a raw LIO map with a SLAM map that has undergone loop closure optimization: loop closure modifies the local point cloud structure, making MME comparisons inaccurate
- Can compare MME between different LIO maps
Credit: @Silentbarber, ZOUYIyi in issue #4 and issue #7
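As a rough illustration of what MME measures (a toy sketch, not MapEval's TBB-accelerated implementation; the radius and neighbor threshold are illustrative), each point contributes the differential entropy of a Gaussian fitted to its local neighborhood, so a crisper, more self-consistent map scores lower:

```python
# Toy Mean Map Entropy: average per-point Gaussian entropy of local
# neighbourhoods. Lower values indicate better local consistency.
import numpy as np
from scipy.spatial import cKDTree

def mean_map_entropy(points, radius=0.3, min_neighbors=10):
    tree = cKDTree(points)
    entropies = []
    for idx in tree.query_ball_point(points, r=radius):
        if len(idx) < min_neighbors:
            continue                          # too sparse for a stable covariance
        cov = np.cov(points[idx].T)           # 3x3 local covariance
        det = np.linalg.det(2.0 * np.pi * np.e * cov)
        if det > 0:
            entropies.append(0.5 * np.log(det))
    return np.mean(entropies) if entropies else np.nan

rng = np.random.default_rng(0)
plane = rng.uniform(size=(2000, 2))
crisp = np.c_[plane, rng.normal(scale=0.001, size=2000)]  # thin, consistent wall
noisy = np.c_[plane, rng.normal(scale=0.05, size=2000)]   # smeared-out wall
print(mean_map_entropy(crisp), mean_map_entropy(noisy))   # crisp < noisy
```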
If you find this work useful for your research, please cite our paper:
@article{hu2025mapeval,
title={MapEval: Towards Unified, Robust and Efficient SLAM Map Evaluation Framework},
author={Xiangcheng Hu and Jin Wu and Mingkai Jia and Hongyu Yan and Yi Jiang and Binqian Jiang and Wei Zhang and Wei He and Ping Tan},
journal={IEEE Robotics and Automation Letters},
year={2025},
volume={10},
number={5},
pages={4228-4235},
doi={10.1109/LRA.2025.3548441}
}
@article{wei2024fpv2,
title={FusionPortableV2: A unified multi-sensor dataset for generalized SLAM across diverse platforms and scalable environments},
author={Wei, Hexiang and Jiao, Jianhao and Hu, Xiangcheng and Yu, Jingwen and Xie, Xupeng and Wu, Jin and Zhu, Yilong and Liu, Yuxuan and Wang, Lujia and Liu, Ming},
journal={The International Journal of Robotics Research},
pages={02783649241303525},
year={2024},
publisher={SAGE Publications Sage UK: London, England}
}
The following research works have utilized MapEval for map evaluation:
Work | Description | Publication | Metrics Used |
---|---|---|---|
LEMON-Mapping | Multi-Session Point Cloud Mapping | arXiv 2025 | MME |
CompSLAM | Multi-Modal Localization and Mapping | arXiv 2025 | AWD/SCS |
GEODE | SLAM Dataset | IJRR 2025 | - |
ELite | LiDAR-based Lifelong Mapping | ICRA 2025 | AC/CD |
PALoc | Prior-Assisted 6-DoF Localization | T-MECH 2024 | AC/CD |
MS-Mapping | Multi-Session LiDAR Mapping | arXiv 2024 | AC/CD/MME |
FusionPortableV2 | SLAM Dataset | IJRR 2024 | COM/CD |
We thank all contributors to this project.
This project is licensed under the MIT License - see the LICENSE file for details.