MOE LiDAR Moving Event Benchmark
Last updated: 2026-05-09
Why It Matters
MOE is a dense LiDAR Moving Event Detection (MED) dataset and benchmark. It targets a narrower question than semantic segmentation: which LiDAR points are generated by moving objects or moving events?
That distinction matters for static map building, dynamic obstacle filtering, and airside map QA. A moving aircraft tug, pedestrian, bus, or baggage cart can leave ghost structure in a LiDAR map even if the object class is known.
Dataset Snapshot
| Field | MOE detail | Practical note |
|---|---|---|
| Full name | MOE: A Dense LiDAR MOving Event Dataset, Detection Benchmark and LeaderBoard | Treat as a MED benchmark, not a general semantic segmentation dataset |
| Source types | Ten sequences from high-fidelity simulation and real-world / campus-style scenes | Useful density and diversity, but still not an airport apron dataset |
| Public structure | Sequence folders such as 00, 01, 02; each contains pcd, label, and gt_poses.txt, with labels present only for sequences whose annotations are released | Supports reproducible replay with ground-truth poses |
| Point cloud format | xxxxxx.pcd with x,y,z information | Simple to load with Open3D or PCL |
| Label row | Moveable_ID Moving_Status Class_ID | Separates movable object membership from moving status |
| Pose format | KITTI-style first three rows of a 4x4 transform in gt_poses.txt | Convenient for MOS and map-cleaning baselines |
| Competition split | Sequences 05-09 are used for the CodaLab MED competition | Those sequences publish point clouds and poses without public motion labels |
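To sanity-check this layout before writing any evaluation code, a minimal loader sketch is shown below. The folder layout and the KITTI-style 3x4 pose rows follow the table above; the `.label` extension, whitespace-separated integer label rows, and `Moving_Status == 1` meaning "moving" are assumptions to confirm against the repository README.

```python
# Minimal MOE sequence loader sketch. File names, the label extension, and
# the label encoding are assumptions; verify against the MOE repository.
import numpy as np
import open3d as o3d

def load_scan(pcd_path: str) -> np.ndarray:
    """Read one scan as an (N, 3) array of x, y, z points."""
    cloud = o3d.io.read_point_cloud(pcd_path)
    return np.asarray(cloud.points)

def load_labels(label_path: str) -> np.ndarray:
    """Read per-point rows of (Moveable_ID, Moving_Status, Class_ID).
    Assumes whitespace-separated integers, one row per point."""
    return np.loadtxt(label_path, dtype=np.int64).reshape(-1, 3)

def load_poses(pose_path: str) -> np.ndarray:
    """Read KITTI-style poses: each line holds the first three rows of a
    4x4 transform (12 floats); append [0, 0, 0, 1] to close each matrix."""
    rows = np.loadtxt(pose_path, dtype=np.float64).reshape(-1, 3, 4)
    bottom = np.tile([0.0, 0.0, 0.0, 1.0], (rows.shape[0], 1, 1))
    return np.concatenate([rows, bottom], axis=1)  # (num_scans, 4, 4)

points = load_scan("00/pcd/000000.pcd")
labels = load_labels("00/label/000000.label")
poses = load_poses("00/gt_poses.txt")
moving_mask = labels[:, 1] == 1  # assumes Moving_Status == 1 marks moving points
# Transform scan 0 into the world frame using its ground-truth pose.
world_points = points @ poses[0, :3, :3].T + poses[0, :3, 3]
```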
Benchmark Table
The MOE repository reports mean IoU over sequences 00, 01, and 02, with moving points treated as positive. The table below is the repository benchmark, not a live leaderboard scrape.
| Method | Family | Seq 00 IoU | Seq 01 IoU | Seq 02 IoU | Mean IoU | Transfer note |
|---|---|---|---|---|---|---|
| DOD | Online non-learning | 0.786 | 0.142 | 0.595 | 0.508 | Strong in dense indoor/crowd-style cases; weaker on outdoor sequence 01 |
| InsMOS | Learning-based MOS | 0.495 | 0.282 | 0.379 | 0.385 | Best learning baseline in the published MOE table, but trained on SemanticKITTI |
| ERASOR | Offline non-learning | 0.378 | 0.028 | 0.627 | 0.344 | Map-cleaning baseline; not designed for low-latency event output |
| Octomap | Offline non-learning | 0.328 | 0.031 | 0.652 | 0.337 | Occupancy baseline; sensitive to scene and range settings |
| Dynablox | Online non-learning | 0.320 | 0.195 | 0.492 | 0.336 | Incremental mapping flavor; useful for runtime dynamic masking |
| Removert | Offline non-learning | 0.297 | 0.028 | 0.421 | 0.249 | Good static-map reference but weak on MOE sequence 01 |
| M-detector | Online non-learning MED | 0.305 | 0.174 | 0.044 | 0.174 | Point-stream detector; repository score reflects this dataset/protocol, not its latency advantage |
| MotionBEV | Learning-based BEV | 0.002 | 0.055 | 0.069 | 0.042 | Shows poor cross-domain transfer when trained elsewhere |
Metrics
| Metric | Definition / reporting guidance | Why it matters |
|---|---|---|
| Moving-event IoU | TP / (TP + FP + FN) for moving points | Core MOE ranking metric |
| Per-sequence IoU | Report each sequence, not only mean IoU | MOE results vary sharply by scene structure |
| Mean of per-scan IoU | Average IoU scan by scan when required by the protocol | More sensitive to sparse dynamic frames than pooled IoU |
| Dynamic precision | Fraction of predicted moving points that are truly moving | Controls false dynamic erosion of static map structure |
| Dynamic recall | Fraction of true moving points detected | Controls ghost leakage into maps and occupancy history |
| Latency | Point-level, scan-level, and P95/P99 processing delay | MED is valuable only if the result arrives before mapping/planning consumes the points |
| Pose sensitivity | Score under pose noise, timestamp drift, and extrinsic perturbation | MOE provides poses, but real fleets do not get perfect alignment |
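The sketch below is one hedged NumPy reading of these definitions, not the official MOE evaluation code. It assumes binary per-point arrays where 1 marks a moving point, and it makes the pooled-versus-per-scan distinction from the table concrete: pooled IoU concatenates all scans before scoring, while mean per-scan IoU averages scan scores and is swayed by frames with few dynamic points.

```python
# Hedged metric sketch for MOE-style moving-event evaluation. The
# empty-denominator convention (score 1.0) is a choice, not a MOE rule.
import numpy as np

def moving_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Moving-event IoU = TP / (TP + FP + FN), moving points positive."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    denom = tp + fp + fn
    return float(tp) / denom if denom > 0 else 1.0

def dynamic_precision(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of predicted moving points that are truly moving."""
    predicted = np.sum(pred == 1)
    return float(np.sum((pred == 1) & (gt == 1))) / predicted if predicted else 1.0

def dynamic_recall(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of true moving points that were detected."""
    actual = np.sum(gt == 1)
    return float(np.sum((pred == 1) & (gt == 1))) / actual if actual else 1.0

def pooled_iou(preds: list, gts: list) -> float:
    """Score all scans as one point set."""
    return moving_iou(np.concatenate(preds), np.concatenate(gts))

def mean_per_scan_iou(preds: list, gts: list) -> float:
    """Average scan-level scores; sensitive to sparse dynamic frames."""
    return float(np.mean([moving_iou(p, g) for p, g in zip(preds, gts)]))
```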
Airside Transfer
| MOE asset | Use it for | Do not claim until local data exists |
|---|---|---|
| Dense moving-object scenes | Stress moving/static separation under crowded conditions | Aircraft pushback, towbar coupling, belt-loader alignment, and GSE staging |
| Simulation sequences | Verify label parsing, pose format, and metric code | Domain performance on reflective apron concrete and airport lighting |
| Public baselines | Compare offline cleaners, online occupancy methods, and learned MOS under one protocol | A single production default; the published table has strong sequence dependence |
| CodaLab held-out split | External regression signal if submitting methods | Safety-case evidence; leaderboard scores are not airside acceptance tests |
Validation Guidance
- Reproduce the repository benchmark on sequences `00`-`02` before adding private data.
- Keep sequence-level scores in dashboards; MOE sequence `01` behaves very differently from `00` and `02`.
- Evaluate both online point output and delayed frame output when a method supports both.
- Add static-preservation metrics alongside moving IoU if the output feeds map cleaning (see the sketch after this list).
- For airside replay, label movable-static states separately: parked GSE, staged carts, chocks, cones, stopped aircraft, and starting-to-move actors.
- Use MOE to choose candidate algorithms, then retune thresholds on local sensor range, scan pattern, vehicle speed, and localization quality.
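A minimal sketch of the static-preservation idea referenced above, under the same binary per-point convention as the metrics sketch (1 = moving). The metric name and the gate thresholds are illustrative assumptions, not a MOE standard.

```python
# Static-preservation check to pair with moving IoU in map-cleaning
# pipelines. Name and thresholds are illustrative, not from MOE.
import numpy as np

def static_preservation_rate(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of truly static points the method keeps (does not erase)."""
    static = gt == 0
    kept = static & (pred == 0)
    return float(np.sum(kept)) / np.sum(static) if np.sum(static) else 1.0

def accept_tuning(pred: np.ndarray, gt: np.ndarray,
                  min_recall: float = 0.90,
                  min_preservation: float = 0.995) -> bool:
    """Example gate: require both moving recall and static preservation
    before accepting a threshold change. Thresholds are placeholders."""
    actual = np.sum(gt == 1)
    recall = float(np.sum((pred == 1) & (gt == 1))) / actual if actual else 1.0
    return recall >= min_recall and static_preservation_rate(pred, gt) >= min_preservation
```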
Sources
- MOE project site: https://sites.google.com/view/moe-dataset
- MOE dataset repository: https://github.com/DeepDuke/MOE-Dataset
- MOE CodaLab competition: https://codalab.lisn.upsaclay.fr/competitions/18028
- MOE IROS 2024 record: https://researchportal.hkust.edu.hk/en/publications/moe-a-dense-lidar-moving-event-dataset-detection-benchmark-and-le-2/
- M-detector paper: https://www.nature.com/articles/s41467-023-44554-8
- Local context: 30-autonomy-stack/perception/datasets-benchmarks/moving-static-separation-mos-datasets.md