
Moving/Static Separation and MOS Datasets

Last updated: 2026-05-09

Why It Matters

Moving object segmentation (MOS) is the bridge between raw perception and map-safe autonomy. Semantic labels can say "car" or "person", but MOS answers whether that object is moving now, was moving during mapping, or should be excluded from a persistent map. This distinction is critical for airside operations because aircraft, tugs, belt loaders, dollies, buses, cones, and ground crew can alternate between static, movable-static, and dynamic states within the same stand.

The public benchmarks below are useful for screening algorithms, but none is a complete airport-apron benchmark. Use them to test point-wise motion separation, sensor-pattern robustness, and map-cleaning inputs before collecting local airside data.

Dataset/Benchmark Table

| Dataset / benchmark | Source URL | Domain and sensors | Labels / task | Best use | Main transfer risk |
|---|---|---|---|---|---|
| SemanticKITTI / SemanticKITTI-MOS | https://semantic-kitti.org/index.html | KITTI odometry urban driving, 10 Hz spinning automotive LiDAR | Point-wise semantic labels with moving and non-moving traffic-participant classes; MOS task added later | Baseline for moving-vs-static segmentation, class-conditioned dynamic analysis, and compatibility with many LiDAR methods | Road traffic and one sensor family do not cover aircraft geometry, low-speed GSE, or multi-LiDAR apron rigs |
| LiDAR-MOS / LMNet | https://github.com/PRBonn/LiDAR-MOS | Sequential spinning LiDAR with ego-motion compensation, evaluated on SemanticKITTI MOS | Binary per-point moving/static output from temporal range residuals | Fast learned MOS baseline for SLAM filtering, map pre-cleaning, and dynamic occupancy masking | Residual features are sensitive to ego-pose error, timestamp drift, rolling LiDAR distortion, and non-rotating LiDAR patterns |
| HeLiMOS | https://sites.google.com/view/helimos/dataset | KAIST05 from HeLiPR with Velodyne VLP-16, Ouster OS2-128, Livox Avia, and Aeva Aeries II | 12,188 labeled point clouds using SemanticKITTI-MOS-style unlabeled/static/dynamic labels | Sensor-transfer benchmark for heterogeneous and solid-state LiDAR MOS | Urban campus dynamics are still not apron dynamics; labels are MOS only, not aircraft/GSE semantics |
| MOE | https://sites.google.com/view/moe-dataset | Ten simulated and real LiDAR sequences with dense moving-object activity | Moving Event Detection (MED) benchmark and CodaLab competition over held-out sequences | Stress testing dense moving-event detection and comparing offline, online, and learned methods | MED emphasizes moving events, so static preservation and map-layer policy still need separate evaluation |
| Dynablox dataset | https://projects.asl.ethz.ch/datasets/dynablox/ | Indoor and outdoor OS0-128 LiDAR sequences with pedestrians and atypical motion such as bouncing balls and rolling luggage | Dynamic-object detection for incremental mapping and object-aware planning | Testing detection of non-road, non-vehicle dynamic objects in cluttered indoor/outdoor spaces | Small mobility-platform scenes do not validate long-range road or airport-scale open-area performance |
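The LiDAR-MOS row above mentions temporal range residuals as the core motion cue. The following is a minimal sketch of that idea, assuming two ego-motion-compensated range images are already available; the threshold values and function name are illustrative, not LMNet's actual parameters.

```python
import numpy as np

def range_residual_mask(curr_range, prev_range, rel_thresh=0.1, min_range=0.5):
    """Flag pixels whose range changed sharply between two ego-motion-
    compensated range images (H x W, metres; 0 = no return).

    Toy version of the residual cue behind LMNet-style MOS: a large
    normalized range difference hints at a moving surface. Real systems
    feed these residuals into a learned network rather than thresholding.
    """
    valid = (curr_range > min_range) & (prev_range > min_range)
    residual = np.zeros_like(curr_range)
    residual[valid] = np.abs(curr_range[valid] - prev_range[valid]) / curr_range[valid]
    return valid & (residual > rel_thresh)
```

For example, a pixel that measured 10 m in the previous scan and 8 m now yields a residual of 0.25 and is flagged as moving, while pixels with no return in either scan stay unflagged. This also makes the table's transfer risk concrete: any ego-pose or timestamp error shifts `prev_range` and inflates residuals on static structure.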

Metrics

| Metric | What to report | Why it matters |
|---|---|---|
| Moving IoU / IoU_MOS | IoU for the dynamic or moving class, per sequence and per distance band | The moving class is sparse; mean scores can hide missed dynamic actors |
| Static IoU / static preservation | IoU or precision for static points, with thin-structure breakdowns | Over-removing static points can damage localization maps and occupancy priors |
| Mean IoU / F1 | Class-balanced aggregate over static and dynamic labels | Useful for leaderboard comparison, but not sufficient for safety decisions |
| Dynamic precision and recall | False dynamic and missed dynamic rates, split by actor class where labels allow | Airside planners care more about missed moving GSE and people than a single aggregate |
| Latency and deadline misses | Mean, P95, P99 per scan on target hardware | A MOS mask that arrives late cannot protect mapping, tracking, or planning |
| Ego-motion sensitivity | Score under pose noise, timestamp offsets, and per-LiDAR extrinsic perturbations | Temporal residual MOS can degrade when localization is imperfect |
| Sensor-pattern robustness | Per-sensor scores before and after multi-LiDAR fusion | HeLiMOS shows why spinning and solid-state LiDAR cannot be assumed equivalent |
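The first four metrics in the table reduce to confusion-matrix counts over per-point masks. A minimal per-scan scorer, assuming boolean prediction and ground-truth arrays (the function name and `valid` convention are this sketch's, not any benchmark's official tooling):

```python
import numpy as np

def mos_metrics(pred_moving, gt_moving, valid=None):
    """Per-scan MOS scores from boolean per-point masks.

    `valid` can exclude unlabeled points (e.g. SemanticKITTI-MOS label 0)
    so they count toward neither class.
    """
    pred = np.asarray(pred_moving, bool)
    gt = np.asarray(gt_moving, bool)
    if valid is not None:
        pred, gt = pred[valid], gt[valid]
    tp = np.sum(pred & gt)      # correctly flagged moving points
    fp = np.sum(pred & ~gt)     # static points eroded as "moving"
    fn = np.sum(~pred & gt)     # moving points leaked into the static set
    tn = np.sum(~pred & ~gt)    # correctly kept static points
    eps = 1e-9                  # guard against empty classes
    return {
        "iou_moving": tp / (tp + fp + fn + eps),
        "iou_static": tn / (tn + fp + fn + eps),
        "precision_moving": tp / (tp + fp + eps),
        "recall_moving": tp / (tp + fn + eps),
    }
```

To report per distance band, mask points on sensor range before calling; for latency, collect per-scan wall-clock times on the target hardware and report `np.percentile(times, [95, 99])` alongside the mean.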

Airside/Indoor/Outdoor Transfer

| Transfer path | Use public data for | Do not claim until validated locally |
|---|---|---|
| Outdoor road to airside | Algorithm bring-up, SemanticKITTI compatibility, moving vehicle and pedestrian masks | Aircraft, wing/tail sweep, belt-loader conveyors, dollies, chocks, cones, FOD, reflective markings, and jet-bridge occlusions |
| Heterogeneous LiDAR to airside | Per-sensor MOS behavior, solid-state scan-pattern failures, multi-LiDAR fusion policy | Final sensor placement and synchronized fused-cloud behavior on the target vehicle |
| Indoor/outdoor clutter to airside | Unusual moving objects and occlusion around ramps, stairs, corridors, and clutter | Open apron degeneracy, long flat concrete, aircraft-scale occluders, and weather/floodlighting |
| MOS to map cleaning | First-pass dynamic masks before SLAM or offline map cleaners | Permanent deletion from the localization map without multi-session evidence |
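Before trusting any of these transfer paths, the ego-motion sensitivity metric from the previous section can be measured directly: perturb the poses fed to the MOS pipeline and re-score. A minimal SE(3) perturbation sketch, assuming 4x4 homogeneous pose matrices; the noise magnitudes are illustrative defaults, not calibrated values.

```python
import numpy as np

def perturb_pose(T, trans_sigma=0.05, rot_sigma_deg=0.5, rng=None):
    """Apply small random SE(3) noise to a 4x4 pose matrix, simulating
    localization error for an ego-motion-sensitivity sweep."""
    rng = np.random.default_rng(rng)
    noisy = np.asarray(T, float).copy()
    # Translation noise, metres per axis.
    noisy[:3, 3] += rng.normal(0.0, trans_sigma, 3)
    # Rotation noise: small angle about a random axis (Rodrigues formula).
    angle = np.deg2rad(rng.normal(0.0, rot_sigma_deg))
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    noisy[:3, :3] = R @ noisy[:3, :3]
    return noisy
```

Sweeping `trans_sigma` and `rot_sigma_deg` and plotting moving IoU against noise level shows how quickly a residual-based method degrades when localization is imperfect, before any airside deployment decision.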

Validation Guidance

  1. Reproduce SemanticKITTI-MOS or LiDAR-MOS results first to verify data formatting, ego-motion compensation, and label remapping.
  2. Run HeLiMOS per sensor, not only on a fused cloud. Treat large score differences across VLP-16, OS2-128, Livox, and Aeva-style patterns as deployment risks.
  3. Add MOE and Dynablox to expose dense dynamic scenes and non-vehicle motion before tuning on private airside data.
  4. For airside acceptance, label at least parked, starting-to-move, moving, and stopped-again states for GSE and aircraft-adjacent equipment. A single moving/static binary label is not enough for map lifecycle decisions.
  5. Keep false-positive static erosion and false-negative dynamic leakage separate. False positives reduce map density; false negatives can put moving objects into maps or occupancy history.
  6. Report MOS results alongside downstream effects: SLAM residuals, static-map ghost rate, tracker false tracks, and planner hard-brake events.
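For step 1, the label remapping is a common stumbling block. A sketch of collapsing SemanticKITTI labels to the three MOS classes follows; the MOS output ids (0/9/251) follow the LiDAR-MOS repository's convention as best I recall it, and both the ids and the moving-class range should be checked against the semantic-kitti-api configuration files before use.

```python
import numpy as np

UNLABELED, STATIC, MOVING = 0, 9, 251  # LiDAR-MOS-style MOS ids (verify!)

def remap_semantickitti_to_mos(raw_labels):
    """Collapse SemanticKITTI labels to unlabeled/static/moving.

    .label files store uint32 values: lower 16 bits = semantic id,
    upper 16 bits = instance id. Moving traffic-participant classes
    use semantic ids 252 and above; 0 (unlabeled) and 1 (outlier)
    carry no motion evidence.
    """
    sem = np.asarray(raw_labels, np.uint32) & 0xFFFF  # drop instance bits
    mos = np.full(sem.shape, STATIC, np.uint32)
    mos[sem >= 252] = MOVING
    mos[(sem == 0) | (sem == 1)] = UNLABELED
    return mos
```

Getting this remap wrong (e.g. forgetting to mask the instance bits) silently corrupts every downstream IoU, which is exactly why step 1 asks for a reproduction run before any new experiments.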

Sources

Research notes compiled from the public dataset and benchmark pages linked in the tables above.