MOE LiDAR Moving Event Benchmark

Last updated: 2026-05-09

Why It Matters

MOE is a dense LiDAR Moving Event Detection (MED) dataset and benchmark. It targets a narrower question than semantic segmentation: which LiDAR points are generated by moving objects or moving events?

That distinction matters for static map building, dynamic obstacle filtering, and airside map QA. A moving aircraft tug, pedestrian, bus, or baggage cart can leave ghost structure in a LiDAR map even if the object class is known.

Dataset Snapshot

| Field | MOE detail | Practical note |
| --- | --- | --- |
| Full name | MOE: A Dense LiDAR MOving Event Dataset, Detection Benchmark and LeaderBoard | Treat as a MED benchmark, not a general semantic segmentation dataset |
| Source types | Ten sequences from high-fidelity simulation and real-world / campus-style scenes | Useful density and diversity, but still not an airport apron dataset |
| Public structure | Sequence folders such as 00, 01, 02; each contains pcd, label, and gt_poses.txt where labels are released | Supports reproducible replay with ground-truth poses |
| Point cloud format | xxxxxx.pcd with x, y, z information | Simple to load with Open3D or PCL |
| Label row | Moveable_ID Moving_Status Class_ID | Separates movable-object membership from moving status |
| Pose format | KITTI-style: first three rows of a 4x4 transform in gt_poses.txt | Convenient for MOS and map-cleaning baselines |
| Competition split | Sequences 05-09 are used for the CodaLab MED competition | Those sequences publish point clouds and poses without public motion labels |
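These formats are simple enough to load in a few lines of Python. The sketch below follows the folder layout and formats listed in the table; the `.label` file extension, the whitespace-separated integer label rows, and the function names are our assumptions, not part of the published spec.

```python
# Minimal MOE sequence loader sketch (assumptions flagged inline).
from pathlib import Path

import numpy as np
import open3d as o3d  # pip install open3d


def load_scan(seq_dir: Path, scan_id: str):
    """Load one scan (e.g. scan_id='000000'), returning points and labels."""
    pcd = o3d.io.read_point_cloud(str(seq_dir / "pcd" / f"{scan_id}.pcd"))
    points = np.asarray(pcd.points)  # (N, 3): x, y, z only, per the format note
    # Columns: Moveable_ID, Moving_Status, Class_ID. Plain-text integer rows
    # and the .label extension are assumptions; adjust to the actual release.
    labels = np.loadtxt(seq_dir / "label" / f"{scan_id}.label", dtype=np.int64)
    return points, labels.reshape(-1, 3)


def load_poses(seq_dir: Path) -> np.ndarray:
    """Parse KITTI-style gt_poses.txt: 12 floats per line, the top three
    rows of a 4x4 sensor-to-world transform."""
    top_rows = np.loadtxt(seq_dir / "gt_poses.txt").reshape(-1, 3, 4)
    poses = np.tile(np.eye(4), (top_rows.shape[0], 1, 1))
    poses[:, :3, :4] = top_rows
    return poses  # (num_scans, 4, 4)


if __name__ == "__main__":
    seq = Path("MOE/00")  # hypothetical local path to sequence 00
    points, labels = load_scan(seq, "000000")
    pose = load_poses(seq)[0]
    # Put the scan into the world frame for replay or map accumulation.
    world = points @ pose[:3, :3].T + pose[:3, 3]
```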

Benchmark Table

The MOE repository reports mean IoU over sequences 00, 01, and 02, with moving points treated as positive. The table below is the repository benchmark, not a live leaderboard scrape.

| Method | Family | Seq 00 IoU | Seq 01 IoU | Seq 02 IoU | Mean IoU | Transfer note |
| --- | --- | --- | --- | --- | --- | --- |
| DOD | Online non-learning | 0.786 | 0.142 | 0.595 | 0.508 | Strong in dense indoor/crowd-style cases; weaker on outdoor sequence 01 |
| InsMOS | Learning-based MOS | 0.495 | 0.282 | 0.379 | 0.385 | Best learning baseline in the published MOE table, but trained on SemanticKITTI |
| ERASOR | Offline non-learning | 0.378 | 0.028 | 0.627 | 0.344 | Map-cleaning baseline; not designed for low-latency event output |
| Octomap | Offline non-learning | 0.328 | 0.031 | 0.652 | 0.337 | Occupancy baseline; sensitive to scene and range settings |
| Dynablox | Online non-learning | 0.320 | 0.195 | 0.492 | 0.336 | Incremental mapping flavor; useful for runtime dynamic masking |
| Removert | Offline non-learning | 0.297 | 0.028 | 0.421 | 0.249 | Good static-map reference but weak on MOE sequence 01 |
| M-detector | Online non-learning MED | 0.305 | 0.174 | 0.044 | 0.174 | Point-stream detector; repository score reflects this dataset/protocol, not its latency advantage |
| MotionBEV | Learning-based BEV | 0.002 | 0.055 | 0.069 | 0.042 | Shows poor cross-domain transfer when trained elsewhere |

Metrics

| Metric | Definition / reporting guidance | Why it matters |
| --- | --- | --- |
| Moving-event IoU | TP / (TP + FP + FN) for moving points | Core MOE ranking metric |
| Per-sequence IoU | Report each sequence, not only mean IoU | MOE results vary sharply by scene structure |
| Mean of per-scan IoU | Average IoU scan by scan when required by the protocol | More sensitive to sparse dynamic frames than pooled IoU |
| Dynamic precision | Fraction of predicted moving points that are truly moving | Controls false dynamic erosion of static map structure |
| Dynamic recall | Fraction of true moving points detected | Controls ghost leakage into maps and occupancy history |
| Latency | Point-level, scan-level, and P95/P99 processing delay | MED is valuable only if the result arrives before mapping/planning consumes the points |
| Pose sensitivity | Score under pose noise, timestamp drift, and extrinsic perturbation | MOE provides poses, but real fleets do not get perfect alignment |
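A hedged sketch of the scoring math in the table follows: moving-event IoU, dynamic precision, dynamic recall, per-scan averaging, and a small pose perturbation for the sensitivity check. The IoU form TP / (TP + FP + FN) is from the table above; the function names, epsilon guard, and noise magnitudes are illustrative choices, not the official MOE evaluation code.

```python
# Scoring sketch for boolean per-point masks (True = moving).
import numpy as np


def moving_event_scores(pred: np.ndarray, gt: np.ndarray) -> dict:
    """IoU = TP / (TP + FP + FN), plus dynamic precision and recall."""
    tp = np.count_nonzero(pred & gt)
    fp = np.count_nonzero(pred & ~gt)
    fn = np.count_nonzero(~pred & gt)
    eps = 1e-9  # guards scans with no moving points at all
    return {
        "iou": tp / (tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
    }


def mean_per_scan_iou(scan_pairs) -> float:
    """Average IoU scan by scan rather than pooling TP/FP/FN sequence-wide;
    this weighting is more sensitive to sparse dynamic frames."""
    return float(np.mean([moving_event_scores(p, g)["iou"] for p, g in scan_pairs]))


def perturb_pose(pose: np.ndarray, sigma_t: float = 0.05,
                 sigma_r: float = 0.005, rng=None) -> np.ndarray:
    """Return a copy of a 4x4 pose with random translation (meters) and
    rotation (radians, via Rodrigues) noise, for pose-sensitivity sweeps."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = pose.copy()
    noisy[:3, 3] += rng.normal(0.0, sigma_t, 3)
    w = rng.normal(0.0, sigma_r, 3)  # axis-angle rotation noise
    theta = np.linalg.norm(w)
    if theta > 1e-12:
        k = w / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
        noisy[:3, :3] = R @ pose[:3, :3]
    return noisy
```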

Airside Transfer

| MOE asset | Use it for | Do not claim until local data exists |
| --- | --- | --- |
| Dense moving-object scenes | Stress moving/static separation under crowded conditions | Aircraft pushback, towbar coupling, belt-loader alignment, and GSE staging |
| Simulation sequences | Verify label parsing, pose format, and metric code | Domain performance on reflective apron concrete and airport lighting |
| Public baselines | Compare offline cleaners, online occupancy methods, and learned MOS under one protocol | A single production default; the published table has strong sequence dependence |
| CodaLab held-out split | External regression signal if submitting methods | Safety-case evidence; leaderboard scores are not airside acceptance tests |

Validation Guidance

  1. Reproduce the repository benchmark on sequences 00-02 before adding private data.
  2. Keep sequence-level scores in dashboards; MOE sequence 01 behaves very differently from 00 and 02.
  3. Evaluate both online point output and delayed frame output when a method supports both.
  4. Add static-preservation metrics alongside moving IoU if the output feeds map cleaning (a minimal sketch follows this list).
  5. For airside replay, label movable-static states separately: parked GSE, staged carts, chocks, cones, stopped aircraft, and starting-to-move actors.
  6. Use MOE to choose candidate algorithms, then retune thresholds on local sensor range, scan pattern, vehicle speed, and localization quality.
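For step 4, a minimal static-preservation check might look like the following. The metric name and the function are ours, not part of the MOE protocol; it measures how many ground-truth static points survive once predicted movers are removed, since high moving IoU paired with low preservation means the filter is eroding real map structure.

```python
# Illustrative static-preservation metric for map-cleaning pipelines.
import numpy as np


def static_preservation(pred_moving: np.ndarray, gt_moving: np.ndarray) -> float:
    """pred_moving, gt_moving: boolean per-point masks (True = moving).
    Returns the fraction of ground-truth static points kept after removing
    all points the method predicts as moving."""
    static = ~gt_moving
    n_static = np.count_nonzero(static)
    if n_static == 0:
        return 1.0  # nothing static to preserve in this scan
    kept = static & ~pred_moving
    return np.count_nonzero(kept) / n_static
```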

Sources

Research notes compiled from public sources.