
HeLiMOS: Heterogeneous LiDAR MOS

Last updated: 2026-05-09

Why It Matters

Most LiDAR MOS work has been built around mechanically spinning automotive LiDAR. HeLiMOS matters because it benchmarks moving object segmentation across four heterogeneous LiDAR sensors, including solid-state sensors with irregular scan patterns.

That makes it a strong proxy for airside vehicles using mixed LiDAR suites. It is still an urban-campus benchmark, not an airport apron benchmark.

Dataset Snapshot

| Field | HeLiMOS detail | Practical note |
| --- | --- | --- |
| Paper | IROS 2024 dataset for MOS in 3D point clouds from heterogeneous LiDAR sensors | Use as a sensor-transfer benchmark |
| Base data | KAIST05 sequence from the HeLiPR dataset | Single route/sequence family, repeated urban locations |
| Sensors | Velodyne VLP-16, Ouster OS2-128, Livox Avia, Aeva Aeries II | Two spinning omnidirectional sensors and two solid-state sensors |
| Dynamic actors | Buses, pedestrians, bicyclists, cars, and other urban moving objects | Good for sensor pattern effects, limited for GSE and aircraft |
| Labels | SemanticKITTI-MOS-style unlabeled, static, dynamic | Easy to adapt existing MOS loaders |
| Size | 12,188 labeled point clouds on the public site | Large enough for per-sensor evaluation |
| Download | KAIST05 IROS 2024 zip, listed as 35 GB, 48 GB decompressed | Plan storage and data governance up front |
| Point clouds | Deskewed point clouds are provided | Reduces one source of temporal label noise |
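
Because the labels follow the SemanticKITTI-MOS convention, existing loaders should adapt with little work. The sketch below shows one plausible way to read a scan and its per-point labels; the 4-float point layout and the 9/251 label ids are SemanticKITTI conventions assumed here, not confirmed HeLiMOS specifics.

```python
# Minimal sketch of a SemanticKITTI-MOS-style loader. The 4-float point layout
# (x, y, z, intensity) and the label ids 9 / 251 are SemanticKITTI conventions
# assumed here; verify them against the HeLiMOS release notes.
import numpy as np

def load_scan(bin_path: str, label_path: str):
    # Point cloud: flat float32 file, reshaped to one record per point.
    # FMCW sensors may ship extra channels, so check the record width per sensor.
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    # Labels: one uint32 per point; lower 16 bits semantic id, upper 16 bits instance id.
    raw = np.fromfile(label_path, dtype=np.uint32)
    semantic = raw & 0xFFFF
    # Collapse to MOS classes: 0 = unlabeled, 1 = static, 2 = dynamic.
    mos = np.zeros_like(semantic)
    mos[semantic == 9] = 1      # assumed static id
    mos[semantic == 251] = 2    # assumed moving id
    return points, mos
```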

Sensor Transfer Matrix

| Sensor | Pattern | What it tests | Airside lesson |
| --- | --- | --- | --- |
| Velodyne VLP-16 | Low-channel spinning | Sparse conventional LiDAR | Low-density edge cases near cones, legs, tow bars, and distant GSE |
| Ouster OS2-128 | High-channel spinning | Dense conventional LiDAR | Strong reference sensor for static/dynamic separation |
| Livox Avia | Solid-state, irregular | Non-repetitive field coverage | Range-image MOS assumptions can break on irregular scans |
| Aeva Aeries II | FMCW solid-state | Solid-state scan pattern with a velocity-capable sensor family | Check whether the deployed stack uses raw point patterns, velocity, or both |
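
The Livox row is where the trouble usually starts for teams coming from spinning LiDAR: many range-view MOS methods first project each scan onto a regular azimuth-elevation grid, and that projection is where irregular patterns hurt. The sketch below is a generic spherical projection, not HeLiMOS tooling; the resolution and vertical field of view are illustrative values.

```python
# Generic spherical range-image projection used by many range-view MOS methods.
# The 64 x 2048 resolution and +/-22.5 deg vertical FOV are illustrative, not
# HeLiMOS or sensor-specific settings.
import numpy as np

def project_to_range_image(points, h=64, w=2048, fov_up_deg=22.5, fov_down_deg=-22.5):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-8
    yaw = np.arctan2(y, x)                         # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))   # elevation
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = (((yaw + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
    v = (((fov_up - pitch) / (fov_up - fov_down)) * h).astype(int).clip(0, h - 1)
    image = np.full((h, w), -1.0, dtype=np.float32)
    image[v, u] = r  # pixel collisions keep the last point written
    # Spinning sensors fill this grid densely; a non-repetitive pattern such as
    # the Livox Avia leaves large empty regions and piles many points into a few
    # pixels, which is the failure mode behind the "irregular scans" caveat above.
    return image
```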

Tasks and Metrics

| Task | Metric | Use |
| --- | --- | --- |
| Moving object segmentation | Intersection-over-Union for static and dynamic MOS labels | Compare MOS methods across sensors |
| Sensor generalization | Per-sensor mIoU before fused averages | Expose scan-pattern dependence |
| Cross-training | Train on SemanticKITTI or HeLiMOS and evaluate per sensor | Separate road-data generalization from sensor adaptation |
| Static map building | Preservation rate, rejection rate, and F1 score | Bridge MOS masks into map-cleaning decisions |
| Qualitative review | Per-sensor dynamic masks on the same scene | Find geometry failures hidden by aggregate metrics |
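
For the first two rows, the evaluation itself is simple enough to sketch. Assuming the 0 = unlabeled, 1 = static, 2 = dynamic encoding from the loader sketch above, per-sensor IoU and mIoU reduce to a few lines; this mirrors the usual MOS evaluation recipe rather than an official HeLiMOS script.

```python
# Per-sensor MOS IoU and mIoU, assuming 0 = unlabeled (ignored), 1 = static,
# 2 = dynamic. Mirrors the standard MOS recipe, not official HeLiMOS tooling.
import numpy as np

def mos_iou(pred: np.ndarray, gt: np.ndarray) -> dict:
    valid = gt != 0  # unlabeled points are excluded from the score
    scores = {}
    for name, cls in (("static", 1), ("dynamic", 2)):
        tp = np.sum((pred == cls) & (gt == cls))
        fp = np.sum((pred == cls) & (gt != cls) & valid)
        fn = np.sum((pred != cls) & (gt == cls))
        scores[name] = tp / max(tp + fp + fn, 1)
    scores["mIoU"] = 0.5 * (scores["static"] + scores["dynamic"])
    return scores

# Report one row per sensor before averaging anything (keys are illustrative):
# for sensor in ("vlp16", "os2_128", "avia", "aeries2"):
#     print(sensor, mos_iou(pred_by_sensor[sensor], gt_by_sensor[sensor]))
```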

Labeling and Tooling

| Component | Source detail | Why it matters |
| --- | --- | --- |
| Automatic labeling | Paper describes instance-aware static map building and tracking-based false-label filtering | Reduces manual labeling burden but is not a perfect oracle |
| Site tooling | HeLiMOS site notes ERASOR2 + TOSS for the automatic labeling pipeline | Important for understanding label biases |
| Toolbox saver | Deskews and saves individual LiDAR and pose data in HeLiMOS format | Useful when adapting private multi-LiDAR logs |
| Toolbox merger | Synchronizes and merges individual LiDAR data into one cloud | Supports multi-sensor label propagation |
| Toolbox propagator | Backpropagates labels from merged clouds to individual clouds | Enables per-sensor MOS evaluation |
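
The merger and propagator rows describe a merge, label, back-propagate loop. A minimal way to approximate the back-propagation step, assuming the merged labeled cloud and the individual sensor cloud are already expressed in a common frame, is a nearest-neighbor lookup with a distance gate; the actual toolbox may use a different association rule.

```python
# Sketch of nearest-neighbor label back-propagation from a merged, labeled
# cloud to one sensor's cloud. Assumes both clouds share the same frame; the
# 0.2 m distance gate is an illustrative threshold.
import numpy as np
from scipy.spatial import cKDTree

def propagate_labels(merged_xyz, merged_labels, sensor_xyz, max_dist=0.2):
    tree = cKDTree(merged_xyz)
    dist, idx = tree.query(sensor_xyz, k=1)   # closest labeled point per sensor point
    labels = merged_labels[idx].copy()
    labels[dist > max_dist] = 0               # no labeled neighbor nearby -> unlabeled
    return labels
```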

Comparison With Other MOS Data

| Dataset | What HeLiMOS adds | Remaining gap |
| --- | --- | --- |
| SemanticKITTI-MOS | Heterogeneous sensors and solid-state patterns | Less benchmark history and one core HeLiPR sequence |
| LiDAR-MOS / LMNet | Sensor-transfer stress test for range-view residual methods | Does not solve semantic class or map lifecycle policy |
| MOE | Sensor-pattern generalization rather than dense moving-event scenes | MOE-style MED latency and competition split are separate |
| KTH map cleaning | Per-point MOS labels that can feed static map building | Map-level PR/RR still needs accumulated maps |

Airside Transfer

| Airside question | Use HeLiMOS for | Still collect locally |
| --- | --- | --- |
| Which LiDAR pattern fails first? | Per-sensor MOS degradation across spinning and solid-state LiDAR | Actual mounted sensors, extrinsics, vibration, and apron ranges |
| Should MOS run per sensor or fused? | Compare single-sensor labels and merged-cloud behavior | Multi-LiDAR synchronization and blind zones on the vehicle |
| Can road-trained MOS generalize? | Test SemanticKITTI-trained baselines on HeLiMOS | Aircraft, GSE, cones, chocks, FOD, reflective markings, and wet concrete |
| Can MOS help map cleaning? | Use static-map metrics from the HeLiMOS task page | Permanent vs movable-static map layer decisions |

Validation Guidance

  1. Report per-sensor results before any fused average.
  2. Keep sensor model, mount pose, deskewing status, and timestamp policy in the benchmark artifact.
  3. Compare range-view, BEV, point-based, and 4D sparse methods because scan-pattern sensitivity differs by representation.
  4. Use HeLiMOS labels to stress sensor transfer, then use local apron data to stress object taxonomy and low-speed motion.
  5. For map cleaning, track both dynamic rejection and static preservation; MOS IoU alone does not prove map safety (a metric sketch follows this list).
  6. Audit automatic labels around stopped or starting objects, because tracking-based cleanup can encode assumptions about motion history.
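
For item 5, the preservation and rejection rates from the task table reduce to two ratios over an accumulated map, as popularized by ERASOR-style map cleaning. A minimal sketch, assuming boolean per-point masks for ground-truth static points and for points kept after cleaning:

```python
# Map-cleaning metrics over an accumulated map. Inputs are boolean masks per
# map point: is_static_gt marks ground-truth static points, kept_mask marks
# points retained after cleaning.
import numpy as np

def map_cleaning_scores(is_static_gt: np.ndarray, kept_mask: np.ndarray) -> dict:
    pr = np.sum(is_static_gt & kept_mask) / max(np.sum(is_static_gt), 1)     # preservation rate
    rr = np.sum(~is_static_gt & ~kept_mask) / max(np.sum(~is_static_gt), 1)  # rejection rate
    f1 = 2 * pr * rr / max(pr + rr, 1e-9)
    return {"PR": pr, "RR": rr, "F1": f1}
```

A high rejection rate paired with a collapsing preservation rate means the cleaner is erasing real structure, which is exactly the failure mode MOS IoU alone can hide.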

Sources

Compiled from publicly available research notes and dataset pages.