Dynamic Map Cleaning Benchmarks

Last updated: 2026-05-09

Why It Matters

Dynamic map cleaning removes ghost trails, parked-then-removed objects, moving actors, and transient clutter from point-cloud maps. It is not the same as runtime obstacle detection. A cleaned map is used for localization, simulation, annotation, map QA, and change control, so false deletion of static structure can be as damaging as leaving dynamic ghosts behind.

For airside autonomy, the risk is amplified: aircraft, tugs, carts, buses, cones, barriers, and service equipment can dominate a survey pass but should not automatically become permanent localization structure.

Dataset/Benchmark Table

Benchmark / method: KTH Dynamic Map Benchmark
Source URL: https://kth-rpl.github.io/DynamicMap_Benchmark/
Scope: Unified dynamic-point removal benchmark for point-cloud maps; includes KITTI, Argoverse 2, KTH campus, semi-indoor, and two-floor sequences
Evaluation style: Methods output clean maps; evaluation extracts labels from the output cloud and compares against human-labeled ground truth where available
Best use: Reproducible comparison across offline and online cleaners such as OctoMap variants, ERASOR, Removert, Dynablox, DUFOMap, BeautyMap, and DeFlow
Main transfer risk: Some sequences have no ground truth; road/campus/semi-indoor data does not capture aircraft-scale movable objects

Benchmark / method: ERASOR
Source URL: https://github.com/LimHyungTae/ERASOR
Scope: Egocentric pseudo-occupancy ratio and ground-aware refinement for static 3D map building
Evaluation style: Preservation/rejection-style metrics on SemanticKITTI-derived labels
Best use: Strong, explainable offline baseline for removing dynamic traces while preserving ground
Main transfer risk: Pose error, sparse scan patterns, and ground-plane assumptions can erode ramps, curbs, low objects, or aircraft gear

Benchmark / method: Removert
Source URL: https://github.com/gisbi-kim/removert
Scope: Multiresolution range-image remove-then-revert map construction
Evaluation style: Validated on KITTI using SemanticKITTI labels as dynamic/static ground truth
Best use: Complementary baseline that explicitly recovers likely false removals
Main transfer risk: Requires good poses and projection parameters; consistently observed parked objects can remain in the static map

Benchmark / method: MapCleaner
Source URL: https://www.mdpi.com/2072-4292/14/18/4496
Scope: Terrain modeling plus local-observation voting for moving-point identification
Evaluation style: Reports PR, RR, and a combined score on SemanticKITTI sequences 00, 01, 02, 05, and 07
Best use: Learning-free map cleaning with explicit terrain/object separation
Main transfer risk: Terrain model can fail on non-road surfaces, overhangs, ramps, and apron equipment
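The evaluation style these benchmarks share (a cleaner emits an output map, which is then compared point-by-point against a labeled ground-truth map) can be sketched as a nearest-neighbour match. This is an illustrative reimplementation with a placeholder match tolerance, not the KTH benchmark's actual tooling, which uses more efficient spatial indexing:

```python
import numpy as np

def match_output_to_ground_truth(gt_points, gt_is_dynamic, clean_points, tol=0.05):
    """Label each ground-truth map point as retained or removed in the cleaned map.

    gt_points:     (N, 3) ground-truth map points
    gt_is_dynamic: (N,) bool, True where the point belongs to a dynamic object
    clean_points:  (M, 3) points in the cleaner's output map
    tol:           match radius in metres (placeholder, voxel-scale tolerance)
    """
    # Brute-force nearest neighbour for clarity; real benchmark tooling
    # would use a KD-tree or voxel hashing instead of an O(N*M) distance matrix.
    dists = np.linalg.norm(gt_points[:, None, :] - clean_points[None, :, :], axis=2)
    retained = dists.min(axis=1) <= tol

    # Confusion counts: static points should be retained, dynamic points removed.
    static_kept     = int(np.sum(retained & ~gt_is_dynamic))
    static_removed  = int(np.sum(~retained & ~gt_is_dynamic))  # static erosion
    dynamic_kept    = int(np.sum(retained & gt_is_dynamic))    # residual ghosts
    dynamic_removed = int(np.sum(~retained & gt_is_dynamic))
    return static_kept, static_removed, dynamic_kept, dynamic_removed
```

The four counts feed directly into the preservation/rejection metrics defined in the next section.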

Metrics

Metric: Preservation rate (PR)
Definition / reporting guidance: Fraction of ground-truth static map points retained in the clean map
Acceptance signal: High PR for localization landmarks, lane/stand markings, poles, curbs, terminal edges, and docking features

Metric: Rejection rate (RR)
Definition / reporting guidance: Fraction of ground-truth dynamic points removed from the clean map
Acceptance signal: High RR for moving or transient vehicles, people, carts, buses, and aircraft/GSE ghosts

Metric: Combined score
Definition / reporting guidance: Benchmark-specific aggregate of preservation and rejection, reported with PR and RR rather than alone
Acceptance signal: Useful for ranking, but do not let a high score hide static erosion in safety-critical areas

Metric: Static erosion by class
Definition / reporting guidance: False removal rate for ground, walls, poles, signs, curbs, chocks, stand equipment, and aircraft-adjacent infrastructure
Acceptance signal: Near-zero erosion for features used by localization or collision margins

Metric: Ghost rate
Definition / reporting guidance: Remaining dynamic or transient points per 100 m, per stand, or per map tile
Acceptance signal: Low ghost density in planning and localization layers

Metric: Localization impact
Definition / reporting guidance: Scan-to-map residuals, inlier ratio, covariance, ATE/RPE, and relocalization success before and after cleaning
Acceptance signal: Cleaned map must improve or preserve localization health

Metric: Runtime and resource use
Definition / reporting guidance: Offline processing time per km or per stand, memory, GPU/CPU, and parameter sensitivity
Acceptance signal: Predictable processing for fleet map operations
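The first three metrics reduce to simple ratios over labeled point counts. A minimal sketch, where the harmonic-mean aggregate is an illustrative choice rather than the exact combined score of any benchmark listed above:

```python
def preservation_rate(static_kept, static_total):
    # Fraction of ground-truth static points retained in the clean map.
    return static_kept / static_total

def rejection_rate(dynamic_removed, dynamic_total):
    # Fraction of ground-truth dynamic points removed from the clean map.
    return dynamic_removed / dynamic_total

def combined_score(pr, rr):
    # Illustrative harmonic-mean aggregate; report it alongside PR and RR,
    # never alone, so static erosion cannot hide behind one number.
    return 2 * pr * rr / (pr + rr) if (pr + rr) > 0 else 0.0
```

A cleaner that deletes everything scores RR = 1.0 but PR = 0.0, which is why the guidance above insists on reporting PR and RR together with any aggregate.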

Airside/Indoor/Outdoor Transfer

Domain: Road driving
What transfers: Dynamic vehicle and pedestrian trails, SemanticKITTI/KITTI formatting, range-image and occupancy baselines
What must be revalidated: Aircraft geometry, low-speed GSE, repetitive stands, reflective paint, open concrete, and temporary ramp equipment

Domain: Campus / semi-indoor
What transfers: People around platforms, clutter, repeated scans, non-road movement
What must be revalidated: Airside traffic rules, large moving aircraft, equipment staging, and apron weather exposure

Domain: Indoor multi-floor
What transfers: Irregular LiDAR patterns, non-road structure, vertical complexity
What must be revalidated: Long-range outdoor map quality, GNSS/INS alignment, and geodetic map control

Domain: Airside
What transfers: Map lifecycle policy, movable-static layering, aircraft-present/absent comparisons
What must be revalidated: Must be measured with local sensors, local ODD, and airport operations constraints

Validation Guidance

  1. Benchmark at least ERASOR, Removert, and MapCleaner on the same input maps before selecting a default cleaner.
  2. Preserve raw scans, poses, and rejected points. A production map package should be auditable back to the source observations and cleaner decisions.
  3. Run cleaning on both quiet survey passes and busy operational passes. A cleaner that only works on sparse dynamics is not enough for aircraft stands.
  4. Compare localization on raw, cleaned, and over-cleaned maps. Reject a method if the map looks cleaner but localization residuals, degeneracy, or relocalization failures worsen.
  5. Keep movable-static objects in a separate layer until cross-session evidence decides whether they are persistent infrastructure, temporary equipment, or dynamic clutter.
  6. Add fault injection: pose jitter, missing LiDAR, time offset, wet ground, nighttime reflections, and low static-feature apron segments.
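The pose-jitter fault in step 6 can be injected by perturbing each scan pose before cleaning. A minimal sketch, with placeholder noise magnitudes that would need tuning to the actual sensor and ODD:

```python
import numpy as np

def jitter_pose(rotation, translation, rot_sigma_rad=0.002, trans_sigma_m=0.02, rng=None):
    """Perturb a scan pose (3x3 rotation matrix, 3-vector translation).

    Rotation noise is applied as a small random axis-angle rotation via
    Rodrigues' formula; translation noise is isotropic Gaussian. The sigma
    defaults are illustrative placeholders, not validated fault parameters.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Random axis-angle vector with small magnitude.
    w = rng.normal(0.0, rot_sigma_rad, 3)
    theta = np.linalg.norm(w)
    if theta > 0:
        k = w / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        # Rodrigues' formula: dR = I + sin(theta) K + (1 - cos(theta)) K^2
        dR = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    else:
        dR = np.eye(3)

    return dR @ rotation, translation + rng.normal(0.0, trans_sigma_m, 3)
```

Running the full cleaning pipeline on jittered poses, then recomputing PR, RR, static erosion, and localization impact, shows how sensitive each cleaner is to the pose quality it will actually see in fleet operations.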

Sources

Research notes compiled from the publicly available sources linked above.