LiDAR Map Cleaning and Dynamic Removal

Executive Summary

LiDAR map cleaning removes transient, dynamic, ghost, and artifact points from accumulated point-cloud maps so localization, planning, QA, and annotation operate on a stable representation of the environment. It is broader than online moving-object segmentation. A production airside stack needs both runtime dynamic masks and offline static-map cleaning.

Core methods include ERASOR, Removert, MapCleaner, ERASOR++, 4dNDF, and MOS-style evaluation such as LiDAR-MOS and HeLiMOS. The safest map lifecycle separates four layers (a code sketch follows the list):

  • Static persistent map: surveyed structure used for localization.
  • Movable-static layer: aircraft, GSE, cones, barriers, and staged equipment.
  • Dynamic layer: moving objects observed during a run.
  • Artifact layer: weather, ghost, multipath, saturation, and sensor contamination.
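
A minimal sketch of this layering as a per-point data model. `MapLayer` and `MapPoint` are illustrative names, not from any cited codebase; the extra `UNKNOWN` value matches the fifth assignment option used in the pipeline below.

```python
from dataclasses import dataclass
from enum import Enum

class MapLayer(Enum):
    STATIC = "static"                  # surveyed structure used for localization
    MOVABLE_STATIC = "movable_static"  # aircraft, GSE, cones, barriers, staged equipment
    DYNAMIC = "dynamic"                # moving objects observed during a run
    ARTIFACT = "artifact"              # weather, ghost, multipath, saturation, contamination
    UNKNOWN = "unknown"                # evidence so far is insufficient to classify

@dataclass
class MapPoint:
    x: float
    y: float
    z: float
    layer: MapLayer
    source_scan: str                   # provenance: which scan produced the point
    timestamp: float
```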

Technique Taxonomy

| Family | Methods | Main evidence | Best use |
| --- | --- | --- | --- |
| Visibility/range-image cleaning | Removert | Query-to-map range inconsistency and multiresolution revert | Offline map cleaning with pose uncertainty |
| Pseudo-occupancy cleaning | ERASOR | Egocentric pseudo-occupancy ratio and ground refinement | Removing object traces from accumulated maps |
| Terrain and voting cleaning | MapCleaner | Terrain model, object-part separation, local observation voting | Learning-free map cleaning with ground-aware processing |
| Enhanced occupancy coding | ERASOR++ | Height coding descriptor and dynamic-bin tests | More precise occupancy-based dynamic-bin identification |
| Neural implicit 4D mapping | 4dNDF | Time-dependent TSDF, sparse feature grids, learned static extraction | Research-grade dynamic scene reconstruction and map extraction |
| Online MOS | LiDAR-MOS, 4DMOS, HeLiMOS-style evaluation | Moving/static point labels over time | Runtime masking and dataset evaluation |
| Multi-session consensus | Fleet map lifecycle | Persistence across days/shifts | Production promotion or rejection of map changes |
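
The pseudo-occupancy idea in the ERASOR row can be made concrete with a short sketch. This is a simplified reconstruction, not the published algorithm: the bin geometry, the `floor` value, the threshold, and the helper names `pseudo_occupancy` and `scan_ratio_test` are all illustrative choices rather than ERASOR's defaults.

```python
import numpy as np

def pseudo_occupancy(z: np.ndarray, floor: float = -1.0) -> float:
    """Vertical span of the points in one polar bin.

    The span is a cheap stand-in for volumetric occupancy: a tall span
    suggests the bin held an object, not just ground.
    """
    if z.size == 0:
        return 0.0
    return float(z.max() - min(float(z.min()), floor))

def scan_ratio_test(map_z: np.ndarray, query_z: np.ndarray,
                    ratio_thresh: float = 0.2) -> bool:
    """Flag one bin as a dynamic candidate (simplified scan ratio test).

    If the accumulated map shows much more vertical occupancy than the
    current query scan, the map likely holds the trace of an object that
    has since moved away. Ground refinement (not shown) would then put
    ground points back before the bin's remaining points are removed.
    """
    m = pseudo_occupancy(map_z)
    q = pseudo_occupancy(query_z)
    if max(m, q) == 0.0:
        return False
    return (min(m, q) / max(m, q)) < ratio_thresh and m > q
```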

Map Lifecycle Pipeline

  1. Collect synchronized LiDAR, pose, GNSS/INS, wheel/IMU, weather, and sensor-health logs.
  2. Produce a high-quality trajectory using LIO/SLAM plus loop closure and control points.
  3. Build an initial raw map and preserve raw scan provenance.
  4. Apply runtime dynamic masks if available, but do not trust them as final map truth.
  5. Run offline cleaning with ERASOR, Removert, MapCleaner, ERASOR++, or another validated method.
  6. Compare multiple cleaners or parameter sets and inspect disagreement (see the sketch after this list).
  7. Assign map points to static, movable-static, dynamic, artifact, or unknown layers.
  8. Validate localization on the cleaned map and on the raw-map baseline.
  9. Publish a map package with cleaner configuration, diagnostics, and QA evidence.
  10. Update production maps only through change-control and multi-session evidence.
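
Steps 5 and 6 are easiest to operate when every cleaner is wrapped behind a common interface. A minimal sketch follows; no such shared API exists across the cited tools, so `CleanerFn` and `compare_cleaners` are assumptions for illustration.

```python
from typing import Callable, Dict, Tuple
import numpy as np

# A cleaner is assumed to be wrapped as: points (N, 3) -> boolean keep-mask (N,).
CleanerFn = Callable[[np.ndarray], np.ndarray]

def compare_cleaners(points: np.ndarray,
                     cleaners: Dict[str, CleanerFn]) -> Tuple[Dict[str, np.ndarray], float]:
    """Run several cleaners on the same raw map and quantify disagreement.

    Returns per-cleaner keep masks plus the fraction of points on which at
    least two cleaners disagree; that fraction drives manual-QA routing.
    """
    masks = {name: fn(points) for name, fn in cleaners.items()}
    stacked = np.stack(list(masks.values()))               # (n_cleaners, n_points)
    disagree = stacked.any(axis=0) & ~stacked.all(axis=0)  # kept by some, removed by others
    return masks, float(disagree.mean())
```

Segments with a high disagreement fraction feed directly into the decision rules below.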

Deployment Decision Rules

| Scenario | Rule |
| --- | --- |
| Single survey pass with aircraft present | Do not promote aircraft surfaces into the static localization map. |
| Same object appears across one shift | Keep it in the movable-static or unknown layer until cross-session policy confirms persistence. |
| Cleaner removes static stand equipment | Reject or retune the map build; static erosion is a localization risk. |
| Cleaner disagreement is high | Route the segment to manual QA or collect more data. |
| Dynamic ratio is high in a segment | Add a dedicated quiet survey or use multi-session cleaning. |
| Open apron has a low static inlier count after cleaning | Use additional anchors, GNSS/INS, radar, or map landmarks; do not over-clean. |
| Wet or reflective artifacts appear in the map | Assign the points to the artifact layer; do not train or localize on them. |
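
These rules can be encoded as a change-control gate. A hedged sketch: `SegmentStats`, its fields, and every threshold below are illustrative assumptions, not recommended operating points.

```python
from dataclasses import dataclass

@dataclass
class SegmentStats:
    """Per-segment QA statistics (field names illustrative)."""
    static_erosion_detected: bool
    cleaner_disagreement: float   # fraction of points the cleaners disagree on
    dynamic_ratio: float          # fraction of points labeled dynamic
    sessions_observed: int        # independent sessions confirming the content

def promotion_gate(s: SegmentStats) -> str:
    """Map the decision table onto change-control actions (thresholds illustrative)."""
    if s.static_erosion_detected:
        return "reject"       # static erosion is a localization risk
    if s.cleaner_disagreement > 0.05:
        return "manual_qa"    # high cleaner disagreement: human review
    if s.dynamic_ratio > 0.30:
        return "resurvey"     # schedule a dedicated quiet survey pass
    if s.sessions_observed < 2:
        return "hold"         # await multi-session evidence
    return "promote"
```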

Method Comparison

| Method | Strength | Weakness | Airside note |
| --- | --- | --- | --- |
| ERASOR | Fast, explainable pseudo-occupancy and ground-aware removal | Can erode static structure under pose/sparsity issues | Strong baseline for vehicle/person traces; validate around aircraft gear and stand objects |
| Removert | Revert stage recovers false removals caused by pose/projection error | Needs good poses and range-image adaptation | Good for preserving static airport geometry after aggressive removal |
| MapCleaner | Terrain model plus observation voting; learning-free | Terrain assumptions can fail with ramps, curbs, and unusual apron equipment | Useful where ground/object separation is reliable |
| ERASOR++ | Adds a height coding descriptor and dynamic-bin tests to improve bin decisions | Newer research baseline; implementation maturity must be checked | Promising for complex vertical structure |
| 4dNDF | Learns a time-dependent implicit representation and extracts a static map | GPU/optimization cost and research-stage deployment | Useful for offline QA and future dense reconstruction, not a first production cleaner |
| MOS networks | Runtime dynamic labels; can catch moving actors early | Training-domain and sensor-pattern sensitivity | HeLiMOS-style multi-LiDAR evaluation is valuable for airside rigs |

Failure Modes

  • Dynamic objects parked during mapping become persistent static clutter.
  • Temporarily absent static objects are interpreted as removed infrastructure.
  • Static erosion removes thin or low structures needed by localization.
  • Ground segmentation mistakes remove ramps, curbs, chocks, tow bars, or aircraft gear.
  • Pose error creates false disagreement and aggressive removal.
  • Learned dynamic masks fail on airport-specific classes not present in road datasets.
  • Cleaned maps improve appearance but reduce scan-matching observability; a crude check is sketched below.
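
The last failure mode is measurable before deployment. A crude sketch of a translational degeneracy check in the spirit of observability analysis for scan matching; the normalization and any pass/fail threshold are application-specific assumptions.

```python
import numpy as np

def translational_degeneracy(normals: np.ndarray) -> float:
    """Ratio of smallest to largest eigenvalue of sum(n n^T) over matched normals.

    normals: (N, 3) unit surface normals of map points a scan would match.
    A value near 0 means translation is weakly constrained along one
    direction, e.g. an over-cleaned open apron with little vertical structure.
    """
    info = normals.T @ normals          # 3x3 translational information matrix
    eigvals = np.linalg.eigvalsh(info)  # ascending eigenvalues
    return float(eigvals[0] / max(eigvals[-1], 1e-9))
```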

Airside Validation Guidance

Build validation sets from:

  • Quiet survey passes and busy operational passes on the same route.
  • Stands with aircraft present and absent.
  • GSE staging areas across multiple shifts.
  • Wet and dry apron captures.
  • Night and day captures with reflective markings.
  • De-icing and winter operations where allowed.
  • Repeated gate layouts to test localization aliasing.

Metrics (a per-point sketch of the first two follows the list):

  • Static preservation rate by infrastructure class.
  • Dynamic rejection rate by actor class.
  • Movable-static classification accuracy.
  • Map ghost rate per 100 m or per stand.
  • Localization ATE/RPE, residual, inlier count, and degeneracy.
  • Change-detection precision across map versions.
  • Manual QA burden per kilometer or per stand.
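
The first two metrics have a direct per-point form when labeled ground truth exists. A minimal sketch, assuming boolean per-point labels; the function and argument names are illustrative.

```python
from typing import Tuple
import numpy as np

def preservation_and_rejection(gt_dynamic: np.ndarray,
                               removed: np.ndarray) -> Tuple[float, float]:
    """Static preservation rate and dynamic rejection rate from per-point labels.

    gt_dynamic: (N,) bool, True where ground truth marks the map point dynamic.
    removed:    (N,) bool, True where the cleaner removed the point.
    """
    n_static = max(int((~gt_dynamic).sum()), 1)
    n_dynamic = max(int(gt_dynamic.sum()), 1)
    preservation = float((~gt_dynamic & ~removed).sum()) / n_static
    rejection = float((gt_dynamic & removed).sum()) / n_dynamic
    return preservation, rejection
```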

Implementation Notes

  • Store point provenance: source scan, timestamp, pose, cleaner decision, and map layer (a record sketch follows this list).
  • Use a rejected-points review workflow; do not discard dynamic or artifact layers.
  • Compare ERASOR and Removert as complementary baselines before adopting a single default.
  • Use MapCleaner/ERASOR++/4dNDF as evaluation candidates where their assumptions match the data.
  • Treat 4dNDF as offline research/QA until runtime, uncertainty, and maintainability are proven.
  • Use HeLiMOS-style labels to evaluate multi-LiDAR rigs separately and after fusion.
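
The first two notes combine naturally into an append-only provenance log. A sketch using JSON Lines; the record fields mirror the note above, but the schema itself is an assumption, not an existing format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PointProvenance:
    """One provenance record per map point (field names illustrative)."""
    source_scan: str   # scan file or ID the point came from
    timestamp: float   # acquisition time
    pose_id: str       # trajectory pose used to project the point
    cleaner: str       # e.g. "erasor", "removert"
    decision: str      # "keep", "remove", "revert"
    layer: str         # static / movable_static / dynamic / artifact / unknown

def append_record(path: str, rec: PointProvenance) -> None:
    """Append one record as a JSON line so rejected points stay reviewable."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(rec)) + "\n")
```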

Sources

Research notes compiled from public sources.