
LiDAR Artifact Removal Validation

Executive Summary

LiDAR artifact removal changes the sensor evidence available to perception, localization, mapping, and planning. That makes it a safety-relevant function. Validation must prove both sides of the tradeoff: the removal layer suppresses false measurements, and it does not hide real hazards or remove localization-critical static structure.

For SOTIF-style safety argumentation, the central claim should be narrow: artifact removal reduces unreasonable risk from foreseeable LiDAR insufficiencies under the validated ODD, while monitored degradation states trigger fallback behavior when the filtered cloud is no longer sufficient.

Validation Scope

| Scope item | Include | Exclude from claim |
| --- | --- | --- |
| Classical filtering | SOR, ROR, DROR, DSOR, LIOR, DDIOR, D-LIOR, IDSOR, DVIOR, SDOR, LIDSOR | Unvalidated transfer to new LiDARs or cover materials. |
| Learned weather removal | LIORNet-style learned denoising where evaluated | Using learned confidence as safety truth without independent checks. |
| Sensor artifacts | Ghosts, multipath, retroreflector bloom, saturation, blockage, dust | Hardware faults covered by a separate diagnostic case. |
| Dynamic map cleaning | ERASOR, Removert, MapCleaner, ERASOR++, 4dNDF, FreeDOM, STATIC-LIO-style dynamic-point removal | Runtime deletion of obstacles from the planning world without tracking/fusion. |
| Airside ODD | Rain, snow, fog, dust, road spray, de-icing mist, wet apron, reflective equipment | Public-road-only results as final airport evidence. |
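The classical filters in scope share one mechanism: a point survives only if it has enough neighbors within a search radius, and DROR scales that radius with range so sparse distant returns are not pruned as aggressively as near-field snow. A minimal numpy sketch of the DROR idea; parameter names and defaults are illustrative, not taken from any specific implementation, and the brute-force distance matrix stands in for the KD-tree a real pipeline would use:

```python
import numpy as np

def dror_filter(points, alpha_deg=0.16, beta=3.0, k_min=3, sr_min=0.04):
    """Dynamic Radius Outlier Removal (DROR) sketch.

    points: (N, 3) array. The search radius grows with range so that
    distant, naturally sparse returns are not discarded as snow.
    """
    ranges = np.linalg.norm(points[:, :2], axis=1)
    # per-point radius: beta * expected azimuthal point spacing at that range
    radii = np.maximum(sr_min, beta * ranges * np.deg2rad(alpha_deg))
    # brute-force pairwise distances; use a KD-tree for real clouds
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = (dists <= radii[:, None]).sum(axis=1) - 1  # exclude self
    return points[neighbors >= k_min]
```

With these defaults, a dense surface patch at 5 m keeps its points while isolated airborne returns with no neighbors inside the range-scaled radius are removed.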

Hazard and Failure Taxonomy

| Hazard | Cause | Safety consequence | Required evidence |
| --- | --- | --- | --- |
| False obstacle retained | Weather or ghost point survives filtering | Unnecessary stop, route blockage, planner instability | False-positive rate by artifact type and scenario. |
| Real obstacle removed | Filter classifies person/object as artifact | Collision risk | False deletion rate on safety-critical classes. |
| Localization observability loss | Filter removes too much static structure | Pose error, wrong scan match, degraded recovery | Static inlier count, residuals, degeneracy, pose error. |
| Map pollution | Dynamic or ghost points enter static map | Future localization or planning errors | Map ghost rate and static preservation metrics. |
| Silent sensor degradation | Filter hides blockage or saturation | Operation outside safe perception envelope | Health monitor detection and ODD transition logs. |
| Domain transfer failure | Filter tuned on road snow used on airport mist/spray | Unknown perception failure | Target-domain validation and change-control records. |

Artifact Test Matrix

| Family | Test examples | Required labels |
| --- | --- | --- |
| Snow | Falling snow, accumulated snow, plowed snow banks | Noise, static, dynamic, safety-critical object. |
| Rain | Light, heavy, tropical downpour, road spray | Rain/spray points and real obstacle points. |
| Fog/mist/steam | Natural fog, de-icing mist, engine/APU steam | Backscatter, attenuated real surfaces, objects behind plume. |
| Dust/FOD | Jet blast dust, prop wash dust, rubber debris | Dust cloud, solid FOD, static background. |
| Wet surfaces | Standing water, wet concrete, glycol film | Ground, below-ground multipath, true obstacles. |
| Reflectors | Cones, vests, signs, apron markings | True object extent, bloom points, saturation sectors. |
| Ghost/multipath | Terminal glass, aircraft skin, wet mirrors | Physical object, reflective surface, ghost point. |
| Dynamic map clutter | Aircraft, tugs, buses, carts, people | Static, dynamic, movable-static, unknown. |
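The required labels across these families can be collapsed into one per-point vocabulary so that every test slice uses the same ground truth. A sketch of such a vocabulary; the names are illustrative and should be aligned with whatever labeling tool is actually in use:

```python
from enum import Enum, auto

class PointLabel(Enum):
    """Per-point ground-truth vocabulary sketch covering the test matrix."""
    STATIC = auto()            # permanent structure (buildings, fences)
    DYNAMIC = auto()           # moving actors (aircraft, GSE, people)
    MOVABLE_STATIC = auto()    # parked aircraft, carts: static now, not forever
    WEATHER_ARTIFACT = auto()  # snow, rain, spray, fog/dust backscatter
    GHOST_MULTIPATH = auto()   # terminal glass, wet-mirror reflections
    BLOOM = auto()             # retroreflector intensity bloom / saturation
    UNKNOWN = auto()           # ambiguous returns, excluded from scoring
```

Keeping movable-static and unknown as first-class labels avoids silently scoring ambiguous points as either hits or misses.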

Metrics

| Layer | Metrics |
| --- | --- |
| Point filtering | Artifact precision/recall, static preservation rate, safety-critical false deletion rate, removal ratio by range/sector/intensity. |
| Detection/tracking | False obstacle rate, missed object rate, track fragmentation, track latency, class-specific performance. |
| Localization | ICP/NDT/VGICP inliers, residual distribution, Hessian degeneracy, ATE/RPE, relocalization success. |
| Mapping | Ghost trail rate, dynamic rejection rate, static preservation rate, map completeness, map thickness, cross-session consistency. |
| Runtime assurance | ODD state transition accuracy, sensor cleaning trigger precision/recall, controlled-stop latency, radar-primary transition behavior. |
| Compute | Runtime percentile, memory, queue delay, worst-case latency under dense weather. |
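The point-filtering metrics reduce to simple counts once every point carries a ground-truth label and the filter's keep/remove decision. A sketch, assuming string labels where "artifact" marks points that should be removed and a configurable set of safety-critical classes (the class names are illustrative):

```python
import numpy as np

def point_filter_metrics(labels, keep, critical=("person", "vehicle")):
    """Point-level filter metrics from ground-truth labels and a keep mask.

    labels: per-point ground-truth strings; "artifact" should be removed.
    keep:   boolean mask of points the filter retained.
    """
    labels = np.asarray(labels)
    keep = np.asarray(keep, dtype=bool)
    removed = ~keep
    is_artifact = labels == "artifact"
    tp = np.sum(removed & is_artifact)   # artifact correctly removed
    fp = np.sum(removed & ~is_artifact)  # real point falsely removed
    fn = np.sum(keep & is_artifact)      # artifact retained
    static = labels == "static"
    crit = np.isin(labels, critical)
    return {
        "artifact_precision": tp / max(tp + fp, 1),
        "artifact_recall": tp / max(tp + fn, 1),
        "static_preservation": np.sum(keep & static) / max(np.sum(static), 1),
        "critical_false_deletion": np.sum(removed & crit) / max(np.sum(crit), 1),
    }
```

Reporting all four together enforces the acceptance rule below that false-positive reduction alone is never sufficient.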

Acceptance Rules

| Rule | Rationale |
| --- | --- |
| Raw and removed clouds must be logged for every validation run. | Without removed evidence, false deletion cannot be investigated. |
| No filter may be accepted on false-positive reduction alone. | The main safety risk is often deleting real obstacles. |
| Thresholds are LiDAR-model and cover-specific. | Intensity and saturation behavior do not transfer cleanly. |
| Weather-mode activation must be justified by diagnostics or ODD state. | Aggressive filters in clear weather can reduce useful structure. |
| Localization validation must include open apron and reflective terminal-edge cases. | Airside geometry can be sparse and aliased. |
| Static map updates require multi-session evidence. | A parked aircraft or bus is not long-term structure by default. |
| Filtered-cloud sufficiency must be monitored online. | A clean but sparse cloud can still be unsafe. |
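The online sufficiency rule can be as simple as binning the filtered cloud into azimuth sectors and flagging any sector whose point count drops below a floor. A minimal sketch; sector count, range cap, and the minimum-points threshold are placeholders that must be set per LiDAR model and ODD state:

```python
import numpy as np

def sufficiency_check(points, n_sectors=8, min_points=200, max_range=40.0):
    """Flag azimuth sectors of the filtered cloud with too few points.

    Returns per-sector counts and a boolean 'degraded' mask; any True
    entry should feed the ODD state machine, not silently continue.
    """
    az = np.arctan2(points[:, 1], points[:, 0])  # [-pi, pi]
    in_range = np.linalg.norm(points[:, :2], axis=1) <= max_range
    sector = ((az + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    counts = np.bincount(sector[in_range], minlength=n_sectors)
    return counts, counts < min_points
```

This catches the "clean but sparse" failure mode: a filter that removes all artifacts and most of a sector's real structure still trips the monitor.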

Airside-Specific Validation Guidance

Use airport-specific scenario slices:

  • Gate approach with parked aircraft and moving GSE.
  • Wet stand at night with retroreflective markings and cones.
  • De-icing pad perimeter with steam/mist and glycol residue.
  • Jet blast or prop wash dust plume.
  • Heavy rain route with road spray from service vehicles.
  • Terminal glass and repeated gate geometry.
  • Open apron with few vertical features and high sun.
  • Snow-covered or partially plowed apron with hidden markings.

Use at least three validation outputs:

  • Point-level artifact report for the LiDAR team.
  • Perception and localization report for autonomy integration.
  • Safety case artifact with ODD decision traces, residual risks, and fallback actions.

HeLiMOS-Style Evaluation

HeLiMOS is useful as a pattern because it evaluates moving object segmentation across heterogeneous LiDAR sensors and scan patterns. For airside artifact removal, use the same idea:

  • Label static, dynamic, movable-static, weather artifact, ghost/multipath, and unknown.
  • Preserve per-sensor labels for spinning, solid-state, FMCW, and merged clouds.
  • Report metrics per LiDAR type instead of only on the fused cloud.
  • Back-propagate labels from merged clouds to individual sensors when needed.
  • Include sensor-specific failure cases, not just aggregate F1.
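Per-sensor reporting amounts to keeping the sensor type attached to every labeled point and aggregating metrics by that key rather than over the fused cloud. A sketch of the aggregation, shown here for dynamic-point recall only; the record format and class names are illustrative:

```python
from collections import defaultdict

def per_sensor_report(records):
    """Per-LiDAR-type recall of the 'dynamic' class.

    records: iterable of (sensor_type, gt_label, pred_label) tuples,
    e.g. sensor_type in {"spinning", "solid_state", "fmcw"}.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for sensor, gt, pred in records:
        if gt == "dynamic":
            totals[sensor] += 1
            hits[sensor] += pred == "dynamic"
    return {s: hits[s] / totals[s] for s in totals}
```

A fused-cloud score can hide a sensor that fails systematically; the per-key breakdown surfaces it directly.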

Safety Case Hooks

For ISO 21448/SOTIF alignment, connect artifact removal to:

  • Known hazardous behavior caused by sensor or algorithm performance insufficiency.
  • Foreseeable weather, reflectivity, blockage, and dynamic-object scenarios.
  • Verification of design measures: filtering, health monitoring, fallback, ODD restriction.
  • Validation in target operational conditions.
  • Operation-phase monitoring, data collection, and change management.

The claim should remain bounded. Artifact removal can support safe perception; it cannot prove that LiDAR alone is sufficient in all adverse conditions.

For static objects that do not belong in the persistent map, pair this validation file with the Airside Dynamic Map-Cleaning Benchmark. The benchmark separates false retention of transient clutter from false deletion of valid structure, which is the core safety tradeoff in dynamic/static map cleaning.
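That tradeoff is measurable with two complementary rates over ground-truth map labels: preservation of genuinely static structure (penalizing false deletion) and rejection of transient clutter (penalizing false retention). A sketch over per-voxel boolean labels; the voxel-level framing is an assumption, not a prescribed benchmark format:

```python
import numpy as np

def map_cleaning_tradeoff(gt_static, kept):
    """Core map-cleaning tradeoff as two rates.

    gt_static: True where the voxel is genuinely long-term structure.
    kept:      True where the cleaner retains the voxel in the map.
    """
    gt_static = np.asarray(gt_static, dtype=bool)
    kept = np.asarray(kept, dtype=bool)
    preservation = np.sum(kept & gt_static) / max(np.sum(gt_static), 1)
    rejection = np.sum(~kept & ~gt_static) / max(np.sum(~gt_static), 1)
    return preservation, rejection
```

Reporting both numbers side by side prevents a cleaner from looking good by maximizing one rate at the expense of the other.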

Sources

Research notes compiled from public sources.