Solid-State LiDAR and Photonic Integrated Circuits for Airside Autonomous GSE

Table of Contents

  1. Introduction & Motivation
  2. LiDAR Beam Steering Technologies
  3. FMCW vs ToF: Measurement Principles
  4. Silicon Photonics Integration
  5. Solid-State LiDAR Products and Specifications
  6. Optical Phased Arrays (OPA)
  7. MEMS Mirror LiDAR
  8. Flash LiDAR
  9. Airside-Specific Requirements
  10. Reliability and Lifetime Analysis
  11. Integration with the reference airside AV stack
  12. Cost Roadmap and Market Dynamics
  13. Migration Strategy: Mechanical to Solid-State
  14. Implementation Roadmap
  15. Key Takeaways
  16. References

1. Introduction & Motivation

1.1 The Mechanical LiDAR Problem for 24/7 Airside Operations

The reference airside AV stack's current fleet uses 4-8 RoboSense mechanical/semi-mechanical LiDARs per vehicle (RSHELIOS, RSBP). These sensors rely on spinning optical assemblies or MEMS mirrors, moving parts that suffer accelerated wear in the harsh airside environment:

| Stress Factor | Impact on Mechanical LiDAR | Airside Severity |
|---|---|---|
| Vibration | Bearing wear, optical misalignment | High — tarmac surface, loading/unloading |
| Temperature cycling | Thermal expansion of bearings, lubricant degradation | Extreme — -20°C to +60°C tarmac |
| De-icing chemicals | Seal degradation, ingress | Severe — glycol/potassium formate spray |
| Jet blast | Mechanical shock, debris impact | Severe — 100+ km/h gusts at gate |
| Dust/particulate | Abrasion of optical window, bearing contamination | Moderate — rubber particles, sand |
| 24/7 duty cycle | 16-20 hours/day continuous rotation | Very high — 5,800-7,300 hours/year |

Mechanical LiDAR MTBF in airside conditions: Estimated 15,000-25,000 hours (2-3.5 years at airside duty cycles), with degraded performance well before failure. For a 50-vehicle fleet with 6 LiDARs each (300 sensors) running 5,800-7,300 hours/year, this implies roughly 70-150 LiDAR replacements per year at $2,000-5,000 each, on the order of $150K-750K/year in sensor replacement alone.
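
A minimal sketch of that replacement-cost arithmetic, assuming a constant failure rate of 1/MTBF and the figures quoted above (the MTBF/price pairings are illustrative, not measured data):

```python
# Rough fleet-economics check: expected failures per year under a
# constant failure rate (1/MTBF) and the resulting replacement bill.
# The MTBF/unit-price pairings below are illustrative assumptions.

def annual_replacements(n_sensors: int, mtbf_hours: float, duty_hours: float) -> float:
    """Expected sensor failures per year across the whole fleet."""
    return n_sensors * duty_hours / mtbf_hours

fleet_sensors = 50 * 6      # 50 vehicles x 6 LiDARs
duty = 7_300                # upper end of 5,800-7,300 hours/year

for mtbf, unit_price in [(15_000, 5_000), (25_000, 2_000)]:
    n = annual_replacements(fleet_sensors, mtbf, duty)
    print(f"MTBF {mtbf:,} h: ~{n:.0f} failures/yr, ~${n * unit_price:,.0f}/yr")
```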

1.2 The Solid-State Promise

Solid-state LiDAR eliminates all moving parts, offering:

  • 10-100x longer MTBF: 100,000+ hours projected (semiconductor-level reliability)
  • Smaller form factor: Matchbox-sized sensors vs. current hockey-puck mechanical units
  • Lower power: 5-15W vs. 15-30W for mechanical spinning LiDAR
  • Instant-on: No spin-up time (mechanical LiDARs need 2-5 seconds)
  • Per-point velocity: FMCW solid-state provides instantaneous radial velocity on every measurement
  • Immunity to solar interference: Coherent detection in FMCW inherently rejects ambient light

1.3 Technology Readiness

As of early 2026, solid-state LiDAR technologies are at varying TRL levels:

| Technology | TRL | Status | First Mass Production |
|---|---|---|---|
| MEMS mirror | TRL 8-9 | In production vehicles | 2022 (Livox, Innoviz) |
| Flash LiDAR | TRL 7-8 | Automotive qualification | 2024-2025 (Ibeo, Continental) |
| OPA (silicon photonics) | TRL 6-7 | Pre-production prototypes | 2026-2028 (Voyant, Analog Photonics) |
| FMCW + OPA | TRL 5-7 | Advanced demos, early products | 2026-2028 (Aeva, SiLC) |
| Photonic focal plane array | TRL 5-6 | First prototypes | 2027-2029 (Voyant Helium) |

2. LiDAR Beam Steering Technologies

2.1 Technology Comparison

┌─────────────────────────────────────────────────────────────────┐
│              LIDAR BEAM STEERING TAXONOMY                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  MECHANICAL              SEMI-SOLID-STATE      FULLY SOLID-STATE│
│  ┌──────────┐           ┌──────────┐          ┌──────────────┐  │
│  │ Spinning │           │ MEMS     │          │ OPA (Phased  │  │
│  │ mirror   │           │ mirror   │          │ Array)       │  │
│  │          │           │          │          │              │  │
│  │ Velodyne │           │ Innoviz  │          │ Voyant       │  │
│  │ Ouster   │           │ Livox    │          │ Analog       │  │
│  │ RoboSense│           │ MicroVis │          │ Photonics    │  │
│  └──────────┘           └──────────┘          ├──────────────┤  │
│                                               │ Flash        │  │
│  Moving: rotor/motor    Moving: MEMS mirror   │ (illuminator)│  │
│  Lifetime: 15-30K hrs   Lifetime: 30-50K hrs  │              │  │
│  Power: 15-30W          Power: 8-20W          │ Ibeo         │  │
│                                               │ Continental  │  │
│                                               ├──────────────┤  │
│                                               │ Focal Plane  │  │
│                                               │ Array (FPA)  │  │
│                                               │              │  │
│                                               │ Voyant Helium│  │
│                                               │              │  │
│                                               │ No moving    │  │
│                                               │ parts at all │  │
│                                               │ Lifetime:    │  │
│                                               │ 100K+ hrs    │  │
│                                               │ Power: 3-10W │  │
│                                               └──────────────┘  │
└─────────────────────────────────────────────────────────────────┘

2.2 Beam Steering Methods Compared

| Method | Moving Parts | Scan Rate | FoV | Angular Res | Power | Cost Trend |
|---|---|---|---|---|---|---|
| Spinning mirror | Motor + mirror assembly | 10-20 Hz | 360° × 40° | 0.1-0.4° | 15-30W | Flat ($500-2000) |
| MEMS mirror | Micro-mirror (1-3mm) | 5-30 Hz | 120° × 25° | 0.05-0.2° | 8-20W | Declining ($300-1000) |
| OPA | None (electronic) | ~GHz point-to-point | 50-120° × 20° | 0.01-0.05° | 3-10W | Steep decline (→$100) |
| Flash | None (flood illuminator) | Frame rate (10-30 Hz) | 20-120° × 10-30° | 0.1-0.5° | 5-15W | Moderate ($200-800) |
| Focal plane array | None (2D on-chip steering) | Electronic | Up to 180° | 0.01-0.1° | 3-8W | Steep decline (→$50-200) |

2.3 Scan Patterns

Mechanical spinning: Continuous 360° rotation with uniform angular sampling. Well-suited to the current reference airside AV stack configuration (360° coverage from fewer sensors).

MEMS raster: Lissajous or raster patterns. Non-uniform point density with higher density at edges (mechanical turnaround). Some designs (Livox, Innoviz) use optimized patterns for more uniform coverage.

OPA random access: Electronic steering enables arbitrary point placement. Can dynamically allocate resolution — high density on nearby aircraft, sparse on empty taxiway. This is transformative for foveated perception strategies (see active-perception-sensor-scheduling.md).

Flash: Entire FoV illuminated simultaneously. Uniform point density but limited range (lower energy per point). Best for short-range, wide-FoV applications (docking, close-range obstacle detection).
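
The random-access property is what enables foveation. A toy point-budget scheduler makes the idea concrete (region names, weights, and the budget are hypothetical, not any vendor's API):

```python
# Toy foveated-scan scheduler for a random-access (OPA) LiDAR.
# A fixed per-frame point budget is split across regions of interest
# in proportion to priority weights. All names and weights are illustrative.

def allocate_points(budget: int, regions: dict[str, float]) -> dict[str, int]:
    """Proportionally divide `budget` points across weighted regions."""
    total = sum(regions.values())
    return {name: int(budget * w / total) for name, w in regions.items()}

frame_budget = 100_000  # points per frame
regions = {
    "aircraft_nose_gear": 5.0,   # dense sampling near the docking target
    "personnel_zone": 3.0,
    "empty_taxiway": 0.5,        # sparse background coverage
}
print(allocate_points(frame_budget, regions))
# {'aircraft_nose_gear': 58823, 'personnel_zone': 35294, 'empty_taxiway': 5882}
```

A mechanical spinner cannot do this: its angular sampling is fixed by the rotation rate, so every azimuth gets the same point density regardless of scene content.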


3. FMCW vs ToF: Measurement Principles

3.1 Time-of-Flight (Current reference airside AV stack Approach)

Pulsed ToF:
                ┌──┐                          ┌──┐
  Transmit:     │  │                          │  │
  ──────────────┘  └──────────────────────────┘  └──────
                                                        
                         ┌──┐                          ┌──┐
  Receive:               │  │                          │  │
  ───────────────────────┘  └──────────────────────────┘  └──
                         
                ├────────┤
                  Δt = 2R/c
                  
  Range R = c·Δt/2
  Precision: ~1-5 cm (limited by pulse width and timing jitter)
  Velocity: NOT measured (requires multi-frame tracking)

Limitations for airside:

  • No per-point velocity → MOT tracker required for dynamic objects
  • Vulnerable to solar interference (905nm near-IR competes with sunlight)
  • Range-energy tradeoff: longer range needs higher peak power (eye safety limit)
  • Multi-echo interference from rain, fog, de-icing spray

3.2 Frequency-Modulated Continuous Wave (FMCW)

FMCW LiDAR:
                  Frequency
                    ↑    /\    /\    /\
  Transmit (TX):    │   /  \  /  \  /  \
                    │  /    \/    \/    \
                    │ /                  \
                    └──────────────────────→ Time
                    
                    ↑      /\    /\    /\
  Receive (RX):     │     /  \  /  \  /
  (delayed+shifted) │    /    \/    \/
                    │   /
                    └──────────────────────→ Time
                    
  Beat frequency: f_beat = f_range + f_doppler
  
  Upsweep:   f_beat_up   = (2·B·R)/(c·T) + (2·v·f_0)/c
  Downsweep: f_beat_down = (2·B·R)/(c·T) - (2·v·f_0)/c
  
  Range:    R = c·T·(f_up + f_down)/(4·B)
  Velocity: v = c·(f_up - f_down)/(4·f_0)
  
  Where: B = chirp bandwidth, T = chirp period, f_0 = center frequency
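
A quick numerical check of these relations (parameter values are illustrative, loosely modeled on a 1550 nm automotive FMCW design):

```python
# Recover range and radial velocity from triangular-chirp FMCW beat
# frequencies, using the formulas above. Parameters are illustrative.
C = 299_792_458.0        # speed of light, m/s

def fmcw_range_velocity(f_up, f_down, B, T, f0):
    """R = c*T*(f_up+f_down)/(4B);  v = c*(f_up-f_down)/(4*f0)."""
    R = C * T * (f_up + f_down) / (4 * B)
    v = C * (f_up - f_down) / (4 * f0)
    return R, v

B = 1e9                  # 1 GHz chirp bandwidth
T = 10e-6                # 10 us chirp period
f0 = C / 1550e-9         # ~193 THz optical carrier at 1550 nm

# Forward-model a target at 100 m approaching at 5 m/s, then invert:
R_true, v_true = 100.0, 5.0
f_range = 2 * B * R_true / (C * T)
f_dopp = 2 * v_true * f0 / C
R, v = fmcw_range_velocity(f_range + f_dopp, f_range - f_dopp, B, T, f0)
print(f"R = {R:.3f} m, v = {v:.3f} m/s")  # recovers 100 m and 5 m/s
```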

FMCW advantages for airside:

| Capability | Benefit for Airside |
|---|---|
| Per-point instantaneous velocity | Detect approaching aircraft (1-5 km/h pushback), GSE, personnel without tracking delay |
| Coherent detection | Immune to solar interference — critical for open-air tarmac operations |
| Lower peak power | 10-100mW CW vs. 10-100W pulsed → inherently eye-safe at Class 1 |
| Range precision | mm-level (vs cm-level for ToF) — better for precision docking |
| Interference rejection | Heterodyne detection rejects other LiDAR signals, ambient light, headlights |
| Weather resilience | Coherent gain suppresses multi-path scattering in fog/rain by ~10-20 dB |

FMCW challenges:

| Challenge | Current State |
|---|---|
| Point rate | 100K-1M points/s (vs. 1-3M for mechanical ToF) — improving rapidly |
| Complexity | Requires stable laser source, balanced photodetectors, DSP — silicon photonics solves this |
| Range | 100-300m demonstrated (vs. 200-300m mechanical ToF) — sufficient for airside (max 200m needed) |
| Cost | Currently 2-5x ToF — rapidly converging with silicon photonics volume |

3.3 Velocity Measurement Value

For airside operations, per-point velocity is transformative:

```python
# Current approach: velocity requires multi-frame tracking.
# Problem: 3-5 frames of latency (300-500 ms at 10 Hz) to establish velocity.
#
# FMCW approach: instantaneous velocity per point.
from dataclasses import dataclass

import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN


@dataclass
class JetBlastZone:
    """2D boundary of a detected jet blast region (ground plane)."""
    boundary: np.ndarray   # (M, 2) convex hull vertices
    max_velocity: float    # m/s


class FMCWPointCloud:
    """Point cloud with per-point radial velocity from FMCW LiDAR."""

    def __init__(self):
        self.x = []          # meters
        self.y = []          # meters
        self.z = []          # meters
        self.intensity = []  # reflectivity
        self.velocity = []   # m/s radial velocity (positive = approaching)

    def get_points(self, mask):
        """(N, 5) array [x, y, z, intensity, velocity] of the masked points."""
        pts = np.column_stack([self.x, self.y, self.z,
                               self.intensity, self.velocity])
        return pts[mask]

    def detect_approaching_objects(self, threshold_velocity=0.5):
        """
        Instantly identify approaching objects without tracking.

        In current ToF pipeline: requires 3-5 frames of MOT tracking.
        With FMCW: single scan, zero-latency velocity.
        """
        # Positive radial velocity = approaching (see __init__ convention)
        approaching_mask = np.array(self.velocity) > threshold_velocity
        approaching_points = self.get_points(approaching_mask)

        # Cluster approaching points on their XYZ coordinates
        clusters = DBSCAN(eps=0.5, min_samples=5).fit(approaching_points[:, :3])

        objects = []
        for label in set(clusters.labels_) - {-1}:   # -1 = DBSCAN noise
            cluster_mask = clusters.labels_ == label
            cluster_points = approaching_points[cluster_mask]
            objects.append({
                'centroid': cluster_points[:, :3].mean(axis=0),
                'velocity': cluster_points[:, 4].mean(),  # mean radial velocity
                'num_points': int(cluster_mask.sum()),
                'extent': np.ptp(cluster_points[:, :3], axis=0),  # bounding box
            })

        return objects

    def jet_blast_boundary_detection(self):
        """
        Detect jet blast boundaries via the velocity field.

        FMCW can detect particle velocity in the jet exhaust plume.
        This is invisible to ToF LiDAR, camera, and radar; thermal
        cameras detect temperature but not flow velocity. FMCW LiDAR
        is the ONLY sensor that can detect both the spatial extent
        AND the flow velocity of jet blast.
        """
        # Jet blast particles (dust, water droplets) carry measurable velocity
        high_velocity_mask = np.abs(np.array(self.velocity)) > 5.0  # >5 m/s

        if high_velocity_mask.sum() > 50:
            blast_points = self.get_points(high_velocity_mask)
            # Fit a 2D convex hull to the high-velocity region (ground plane)
            hull = ConvexHull(blast_points[:, :2])
            boundary = blast_points[hull.vertices, :2]
            return JetBlastZone(boundary,
                                max_velocity=float(np.max(np.abs(blast_points[:, 4]))))

        return None
```
4. Silicon Photonics Integration

4.1 What is Silicon Photonics?

Silicon photonics (SiPh) fabricates optical components — waveguides, modulators, photodetectors, couplers — on standard silicon wafers using CMOS-compatible processes. This enables:

  1. Mass production: Existing semiconductor fabs (TSMC, GlobalFoundries, Tower Semiconductor) can produce photonic chips at automotive volumes
  2. Monolithic integration: Laser source, beam steering, receiver, and signal processing on a single chip
  3. Cost reduction: From $1,000+ per discrete-optics LiDAR to projected $50-200 at volume
  4. Size reduction: Entire LiDAR transceiver in <1 cm² die area

4.2 Key Components on Silicon

┌─────────────────────────────────────────────────────┐
│           SILICON PHOTONICS LiDAR-ON-CHIP           │
│                                                     │
│  ┌─────────┐   ┌──────────┐   ┌───────────────┐   │
│  │ Laser   │──→│ Frequency│──→│ Optical Phased │──→ FREE SPACE
│  │ Source   │   │ Chirp    │   │ Array (OPA)    │   │  (TO TARGET)
│  │ (III-V  │   │ Modulator│   │ 128-8192       │   │
│  │ bonded) │   │          │   │ emitters       │   │
│  └─────────┘   └──────────┘   └───────────────┘   │
│       │                            ↑                │
│       │         ┌──────────┐       │                │
│       └────────→│ Reference│   ┌───┴──────────┐    │
│                 │ Splitter │   │ Phase Shifters│    │
│                 └─────┬────┘   │ (thermo-optic │    │
│                       │        │  or PN-junction│    │
│                       ↓        │  per channel) │    │
│                 ┌──────────┐   └───────────────┘    │
│                 │ Balanced  │                        │
│  FREE SPACE ──→│ Photo-    │──→ Electrical beat     │
│  (FROM TARGET)  │ detectors │    frequency signal   │
│                 │ (Ge/Si)   │                        │
│                 └──────────┘                        │
│                       │                              │
│                       ↓                              │
│                 ┌──────────┐                        │
│                 │ ADC +    │──→ Range + Velocity    │
│                 │ DSP      │    per point           │
│                 └──────────┘                        │
└─────────────────────────────────────────────────────┘

4.3 Material Platforms

| Platform | Waveguide Loss | Advantage | Limitation | Vendors |
|---|---|---|---|---|
| SOI (Silicon on Insulator) | 1-2 dB/cm | CMOS compatible, high index contrast | No native laser, limited to 1.1-4 μm | Most SiPh fabs |
| SiN (Silicon Nitride) | 0.01-0.1 dB/cm | Ultra-low loss, wider transparency | Lower index contrast, larger waveguides | LioniX, Ligentec |
| III-V on Si (hybrid) | 0.5-2 dB/cm | Native laser, photodetector | Complex bonding, lower yield | Aeva, Intel |
| InP | 0.5-1 dB/cm | Full photonic integration | Not CMOS compatible, expensive | Lumentum, II-VI |

For LiDAR: Hybrid III-V/Si or III-V/SiN approaches dominate because they combine the active gain medium (laser) from III-V semiconductors with the low-loss passive routing of silicon or silicon nitride waveguides.

4.4 Integration Levels

Level 1: Discrete optics (current)
  Laser → fiber → lens → mirror → target → lens → fiber → detector
  Components: 20-50 discrete parts
  Assembly: manual alignment, hours per unit
  Cost: $1,000-5,000

Level 2: Hybrid photonic module
  Laser chip → wire-bond → SiPh chip (waveguides + PDs) → package
  Components: 5-10 die + package
  Assembly: automated pick-and-place, minutes per unit
  Cost: $200-800 (Aeva Atlas, SiLC Eyeonic)

Level 3: Monolithic SiPh LiDAR-on-chip
  Single die: laser + modulator + OPA + detector + DSP
  Components: 1 die + package
  Assembly: wafer-level, seconds per unit
  Cost: $50-200 at volume (Voyant Helium target)

Level 4: Fully integrated sensor SoC (future)
  LiDAR transceiver + point cloud processor + interface on single package
  Cost: $20-100 at high volume (2030+ projection)

5. Solid-State LiDAR Products and Specifications

5.1 FMCW Solid-State Products (2024-2026)

| Product | Company | Beam Steering | Range | Points/s | FoV | Velocity | Form Factor | Price (est.) | Status |
|---|---|---|---|---|---|---|---|---|---|
| Atlas | Aeva | MEMS + FMCW | 300m @10% | ~2M | 120° × 30° | ±200 m/s per point | Automotive-grade | $500-800 | Production 2025 |
| Aeries II | Aeva | MEMS + FMCW | 500m @10% | ~2M | 120° × 30° | ±200 m/s per point | Premium | $1,000-2,000 | Available H1 2026 |
| Helium | Voyant | Photonic FPA (no moving parts) | 100-200m (est.) | 12K-100K pixels | Up to ~180° | Per-point | <150g, <50 cm³ | $200-500 (target) | Prototype CES 2026 |
| Eyeonic Vision | SiLC | MEMS + FMCW | 50-1,250m | Variable | Multiple configs | Per-point + polarization | Industrial module | $2,000-5,000 | Available now |
| Scantinel | Scantinel | MEMS + FMCW | 200m | 500K | 120° × 30° | Per-point | Automotive | $500-1,500 | Sampling 2025 |

5.2 Semi-Solid-State Products (In Production)

| Product | Company | Beam Steering | Range | Points/s | FoV | Price (est.) | Status |
|---|---|---|---|---|---|---|---|
| HAP | Livox | Non-repetitive MEMS | 450m @10% | 240K | 120° × 25° | $800-1,200 | Production |
| InnovizTwo | Innoviz | MEMS | 300m | 14.4M eq | 120° × 25° | $500-1,000 | Production in BMW |
| AT128 | Hesai | Hybrid solid-state | 200m | 1.53M | 120° × 25.4° | $300-800 | Mass production |
| MX | RoboSense | MEMS | 200m | 750K | 120° × 25° | $500-1,000 | Production |

5.3 Comparison with Current reference airside AV stack Sensors

| Parameter | RoboSense RSHELIOS (current) | Aeva Atlas (FMCW) | Voyant Helium (full SS) |
|---|---|---|---|
| Beam steering | Spinning | MEMS | Photonic FPA |
| Moving parts | Motor + mirror | MEMS mirror (~1mm) | None |
| Range | 150m @10% | 300m @10% | 100-200m (est.) |
| Points/s | 1.2M | ~2M | 12K-100K |
| FoV | 360° × 32° | 120° × 30° | Configurable |
| Per-point velocity | No | Yes | Yes |
| Power | ~15W | ~12W | ~5W (est.) |
| Weight | ~600g | ~500g | <150g |
| Operating temp | -20 to 60°C | -40 to 85°C | -40 to 85°C (target) |
| MTBF (est.) | 20,000 hrs | 40,000+ hrs | 100,000+ hrs |
| IP rating | IP67 | IP67/69K | IP67 (target) |
| Price | $1,500-3,000 | $500-800 | $200-500 (volume) |

6. Optical Phased Arrays (OPA)

6.1 Operating Principle

An OPA steers a laser beam by controlling the phase of light emitted from an array of closely spaced waveguide emitters. By varying the relative phase between emitters, the constructive interference direction changes, steering the beam without any moving parts.

OPA Beam Steering:

  Phase shifters:   φ₁  φ₂  φ₃  φ₄  φ₅  φ₆  φ₇  φ₈
                    │   │   │   │   │   │   │   │
  Emitters:         ▼   ▼   ▼   ▼   ▼   ▼   ▼   ▼
                    ○   ○   ○   ○   ○   ○   ○   ○
                    
  Wavefront (φ₁=φ₂=...=φ₈):     Wavefront (linear phase ramp):
  
    ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃               ╲ ╲ ╲ ╲ ╲ ╲ ╲ ╲
    ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃                ╲ ╲ ╲ ╲ ╲ ╲ ╲ ╲
    ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓                 ↘ ↘ ↘ ↘ ↘ ↘ ↘ ↘
    Straight ahead                   Steered to angle θ
    
  Steering angle: sin(θ) = λ·Δφ / (2π·d)
  Where: λ = wavelength, Δφ = phase increment between emitters, d = emitter spacing
  
  Max angle: θ_max = arcsin(λ / 2d)  (grating lobe limit)
  For d = λ/2: θ_max = 90° (hemisphere)
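
The steering relations above can be checked numerically (illustrative parameters only):

```python
import math

# Steering angle and grating-lobe limit for a 1D OPA, per the formulas above.

def steering_angle_deg(wavelength, pitch, dphi):
    """sin(theta) = lambda * dphi / (2*pi*d)."""
    return math.degrees(math.asin(wavelength * dphi / (2 * math.pi * pitch)))

def max_angle_deg(wavelength, pitch):
    """Grating-lobe limit: theta_max = asin(lambda / (2*d))."""
    return math.degrees(math.asin(min(1.0, wavelength / (2 * pitch))))

lam = 1550e-9
print(max_angle_deg(lam, 775e-9))                    # d = lambda/2 -> 90.0
print(max_angle_deg(lam, 2e-6))                      # 2 um pitch -> ~22.8 deg
print(steering_angle_deg(lam, 775e-9, math.pi / 4))  # ~14.48 deg
```

This is why the sub-wavelength 775 nm pitch mentioned in Section 6.4 matters: any coarser pitch shrinks the lobe-free steering window well below the ±60° airside target.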

6.2 OPA Performance State-of-the-Art (2025-2026)

| Parameter | 2022 SOTA | 2025 SOTA | Airside Requirement |
|---|---|---|---|
| Channel count | 128 | 512-8192 | 256+ for adequate resolution |
| Steering range (azimuth) | ±25° | ±50-60° | ±60° (3-4 sensors for 360°) |
| Steering range (elevation) | Fixed or ±5° | ±10-15° | ±15° for close-range objects |
| Angular resolution | 0.1° | 0.01-0.05° | <0.1° for personnel at 100m |
| Switching speed | ~μs per point | ~ns per point | <100 μs (non-critical) |
| Side lobe suppression | -10 to -15 dB | -15 to -25 dB | <-20 dB (avoid ghost detections) |
| Power efficiency | 5-10% wall-plug | 10-20% | Higher is better for battery GSE |

6.3 2D Steering Approaches

Wavelength + OPA (λ-OPA):

  • OPA steers in one axis (azimuth)
  • Wavelength tuning steers in the other axis (elevation) via dispersive grating
  • Advantage: Only 1D phase control needed (simpler electronics)
  • Limitation: Requires tunable laser (~40nm range for ±10° elevation)
  • Used by: Aeva (early), academic demonstrations

2D OPA:

  • True 2D array of emitters with 2D phase control
  • Advantage: Full electronic 2D steering, no tunable laser needed
  • Limitation: N² phase shifters for N×N array (power, complexity)
  • Used by: Voyant Helium (focal plane array variant)

Switch + OPA:

  • Optical switch selects between multiple fixed OPA sub-arrays
  • Each sub-array covers a different angular sector
  • Advantage: Lower phase-shifter count, higher power efficiency
  • Used by: Intel (research), various academic groups

6.4 OPA Challenges and Solutions

| Challenge | Impact | Solution | Status |
|---|---|---|---|
| Grating lobes | Ghost beams at wrong angles | Sub-wavelength emitter pitch (<775nm for 1550nm) | Achieved in SiN |
| Power efficiency | High insertion loss limits range | Optimized waveguide design, SiN platforms | 15-20% demonstrated |
| Beam quality | Poor far-field pattern | Apodization, non-uniform spacing | Active research |
| Phase noise | Pointing jitter | Feedback calibration, thermal stabilization | Solved for slow scan |
| Crosstalk | Adjacent channel interference | Isolation trenches, differential drive | Improving |
| 2D scaling | N² complexity | Hierarchical addressing, row-column drive | Research stage |

7. MEMS Mirror LiDAR

7.1 MEMS Technology Overview

MEMS (Micro-Electro-Mechanical Systems) mirrors use microscale mirrors (1-5mm diameter) actuated by electrostatic, electromagnetic, or piezoelectric forces to steer laser beams. They represent the most mature "semi-solid-state" approach.

7.2 MEMS Mirror Types

| Type | Actuation | Mirror Size | Scan Angle | Resonant Freq | Durability |
|---|---|---|---|---|---|
| Electrostatic comb-drive | Voltage (30-200V) | 1-2mm | ±10-15° | 1-10 kHz | High (no contact) |
| Electromagnetic | Current coil | 2-5mm | ±15-25° | 0.5-5 kHz | Moderate |
| Piezoelectric (PZT) | Voltage (1-20V) | 1-3mm | ±10-20° | 1-20 kHz | Good |

7.3 MEMS Reliability for Airside

MEMS mirrors are sensitive to:

  • Mechanical shock: Airport operations involve loading/unloading impacts (5-50g)
    • MEMS can handle 1,000-2,000g shock (inherent advantage of microscale)
    • Much better than macroscale spinning bearings
  • Vibration: MEMS resonant frequency (1-20 kHz) well above vehicle vibration spectrum (<500 Hz)
    • Vibration isolation requirements minimal
  • Fatigue: MEMS flexures can achieve 10¹¹-10¹² cycles before failure
    • At 5 kHz scan rate, 24/7 operation: ~1.6×10¹¹ cycles/year → 1-10 year fatigue life
    • This is the primary MEMS lifetime limiter for 24/7 airside operations
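
The cycle arithmetic in the bullets above is easy to verify (a sketch using the quoted scan rate and cycle-life limits; the result brackets the stated 1-10 year range):

```python
# Fatigue-life check for a resonant MEMS mirror: cycles accumulated per
# year of 24/7 operation, and the life implied by the quoted 1e11-1e12
# flexure cycle limits. Arithmetic only; figures are the document's.
scan_hz = 5_000                  # resonant scan rate
hours_per_year = 24 * 365        # 24/7 duty
cycles_per_year = scan_hz * hours_per_year * 3_600
print(f"{cycles_per_year:.2e} cycles/year")   # ~1.6e11, as quoted

for cycle_life in (1e11, 1e12):
    print(f"{cycle_life:.0e} cycle limit -> {cycle_life / cycles_per_year:.1f} years")
```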

MEMS verdict for airside: Significantly better than mechanical spinning (5-10x lifetime improvement) but still has moving parts subject to fatigue. A bridge technology toward fully solid-state OPA/FPA.


8. Flash LiDAR

8.1 Operating Principle

Flash LiDAR illuminates the entire FoV simultaneously with a single laser pulse or CW modulation, then images the reflected light onto a 2D detector array (like a camera but measuring distance per pixel).

Flash LiDAR:

  ┌─────────┐                        ┌──────────────┐
  │ Laser   │──→ Diffuser ──→ Wide   │              │
  │ Source   │        illumination    │   SCENE      │
  └─────────┘                        │              │
                                     └──────┬───────┘

                                     Reflected light

                                     ┌──────↓───────┐
                                     │ Imaging Lens  │
                                     └──────┬───────┘

                                     ┌──────↓───────┐
                                     │ SPAD Array    │
                                     │ 128×128 to    │
                                     │ 1024×1024     │
                                     │ pixels        │
                                     │               │
                                     │ Each pixel:   │
                                     │ - ToF range   │
                                     │ - intensity   │
                                     └───────────────┘

8.2 Flash LiDAR for Airside Applications

Advantages:

  • No moving parts, no beam steering electronics → highest reliability
  • Entire frame captured simultaneously → no motion distortion
  • Simple optics → lowest cost at volume
  • Ideal for short-range docking applications (wide FoV, dense point cloud at close range)

Limitations:

  • Range limited by eye-safe power spread across FoV: typically 20-50m for Class 1
  • Resolution limited by detector array pixel count (currently 128×128 to 640×640)
  • Lower SNR per pixel than scanned approaches
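
The range limitation follows directly from the energy budget: a flood illuminator shares each (eye-safe) pulse across every pixel, and pulse averaging only partially compensates. A hedged back-of-envelope sketch (all numbers illustrative):

```python
import math

# Flash-LiDAR energy-budget sketch: per-pixel energy falls with pixel
# count, and shot-noise-limited averaging recovers SNR only as sqrt(n).
# Pulse energy and array size below are illustrative assumptions.

def per_pixel_energy(pulse_energy_uj: float, n_pixels: int) -> float:
    """Energy (uJ) allotted to one pixel's patch of scene, ideal optics."""
    return pulse_energy_uj / n_pixels

def snr_gain_from_averaging(n_pulses: int) -> float:
    """Shot-noise-limited SNR improvement from averaging n_pulses returns."""
    return math.sqrt(n_pulses)

e = per_pixel_energy(10.0, 128 * 128)   # 10 uJ pulse over a 128x128 SPAD array
print(f"{e * 1000:.4f} nJ per pixel")   # ~0.61 nJ: a ~16,000x dilution
print(snr_gain_from_averaging(100))     # 10.0: 100-pulse averaging -> 10x SNR
```

A scanned sensor puts the whole pulse on one point at a time, which is why scanned ToF reaches 150-300m while Class-1 flash units stay in the 20-50m regime.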

Best airside application: Docking sensors. A flash LiDAR at each docking interface provides:

  • Dense point cloud at 0-10m range
  • No mechanical wear
  • Fast frame rate (30-100 Hz)
  • Cost-effective ($200-500 per unit)

8.3 SPAD Array Detectors

Single-Photon Avalanche Diode (SPAD) arrays are the enabling detector technology for flash LiDAR:

| Parameter | 2022 | 2025 | Target |
|---|---|---|---|
| Array size | 256×256 | 512×512 | 1024×1024 |
| Pixel pitch | 10-15 μm | 6-10 μm | 3-5 μm |
| PDE (Photon Detection Efficiency) | 20-30% | 30-45% | >50% |
| DCR (Dark Count Rate) | 100-500 cps | 50-200 cps | <50 cps |
| Timing jitter | 100-200 ps | 50-100 ps | <30 ps |
| Time gating | Yes | Yes, faster | Sub-ns programmable |

9. Airside-Specific Requirements

9.1 Environmental Challenges Unique to Airports

| Challenge | Description | Impact on LiDAR | Solid-State Advantage |
|---|---|---|---|
| De-icing spray | Propylene glycol / potassium formate aerosol | Blocks optical path, coats optics | Sealed flat window easier to clean; self-heating with SiPh power dissipation |
| Jet blast | 100+ km/h exhaust at >200°C | Mechanical shock, dust/debris impact | No moving parts to damage; FMCW detects blast velocity |
| Fuel vapors | Jet A kerosene fumes | Potential optical absorption at some wavelengths | 1550nm not affected by Jet A vapor |
| Tarmac heat shimmer | Radiative heat creates optical turbulence | Beam wander, range noise | FMCW coherent detection less sensitive to turbulence |
| FOD | Small objects 1-10cm | Requires high angular resolution | OPA adaptive resolution can focus on suspicious areas |
| Aircraft reflections | Large specular surfaces | Multi-path, saturation | FMCW rejects multi-path via coherence gating |
| 24/7 operations | 16-20 hrs/day per vehicle | Wear-out of moving parts | Semiconductor-level reliability (100K+ hrs) |
| Solar glare | Direct sunlight, tarmac reflection | Saturation, reduced SNR | FMCW coherent detection rejects solar background |

9.2 Range and Resolution Requirements

| Operational Scenario | Required Range | Required Angular Res | Required Range Precision | Priority |
|---|---|---|---|---|
| Personnel detection | 100m | <0.15° | <5cm | Critical |
| Aircraft detection | 200m | <0.2° | <10cm | Critical |
| FOD detection | 50m | <0.05° | <2cm | High |
| Precision docking | 5m | <0.02° | <5mm | High |
| GSE detection | 150m | <0.2° | <10cm | Critical |
| Taxiway boundaries | 100m | <0.2° | <5cm | Medium |
| Jet blast detection | 80m | N/A (velocity field) | N/A | High (FMCW only) |

9.3 Sensor Configuration for Solid-State Migration

CURRENT (Mechanical):                    FUTURE (Solid-State):
                                         
4-8× RoboSense RSHELIOS/RSBP           4× FMCW OPA (long-range 360° coverage)
  - 360° spinning each                  + 4× Flash LiDAR (short-range docking)
  - Overlapping coverage                + 2× FMCW narrow-FoV (forward, rear)
  - No velocity                         
                                         Total: 10 sensors
Total: 4-8 sensors                       + Per-point velocity
                                         + Adaptive resolution
                                         + No moving parts
                                         + Lower total power
                                         
Power: 60-240W                          Power: 30-80W (est.)
Weight: 2.4-4.8 kg                      Weight: 0.3-1.5 kg
Cost: $6K-24K                           Cost: $3K-8K (at volume)
MTBF: ~20K hrs                          MTBF: ~80-100K+ hrs

9.4 Eye Safety at 1550nm

Most solid-state FMCW LiDARs operate at 1550nm (vs. 905nm for most ToF):

| Parameter | 905nm (ToF) | 1550nm (FMCW) |
|---|---|---|
| Eye safety limit (MPE) | 1 mW/cm² | 100 mW/cm² |
| Retinal hazard | High (focused by the eye onto the retina) | Low (absorbed in the cornea and aqueous humor before reaching the retina) |
| Safe power (Class 1) | ~1 mW peak | ~10 mW CW |
| Range at safe power | 100-200m (pulsed gain) | 100-300m (coherent gain compensates) |
| Airport relevance | Workers near sensors | 100x more margin for ground crew safety |

1550nm is strongly preferred for airside — ground crew work within 1-2m of vehicle-mounted sensors. The 100x higher eye-safety limit at 1550nm provides critical safety margin.


10. Reliability and Lifetime Analysis

10.1 Failure Mode Comparison

| Component | Mechanical LiDAR | MEMS LiDAR | OPA Solid-State |
|---|---|---|---|
| Bearing failure | Primary failure mode (wear-out) | N/A | N/A |
| Mirror fatigue | N/A | Eventual (10¹¹ cycles) | N/A |
| Laser degradation | GaAs 905nm (20K-40K hrs) | Same | InP/InGaAs 1550nm (50K-100K hrs) |
| Connector fatigue | Rotating joint (weak point) | Fixed | Fixed |
| Thermal cycling | Differential expansion | Low mass = fast equalization | Monolithic = no differential |
| Contamination | Seal failure → internal | Sealed package | Hermetic package |
| Electronics | Driver board | Driver ASIC | Integrated on die |

10.2 Projected MTBF

Using MIL-HDBK-217F reliability prediction methodology and adjusting for airside environmental factors (K_env = 2.5 for ground mobile, airfield):

| Technology | Base MTBF (benign) | Airside MTBF (K=2.5) | Annual Failure Rate (24/7) |
|---|---|---|---|
| Mechanical spinning | 50,000 hrs | 20,000 hrs | ~35% |
| MEMS mirror | 80,000 hrs | 32,000 hrs | ~22% |
| OPA (silicon photonics) | 200,000 hrs | 80,000 hrs | ~9% |
| Flash (SPAD array) | 250,000 hrs | 100,000 hrs | ~7% |
| Photonic FPA | 250,000+ hrs | 100,000+ hrs | ~7% |
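
The table values follow from the stated derating and duty cycle (a first-order constant-failure-rate model, using the K_env quoted above):

```python
# Reproduce the MTBF table: airside MTBF = base MTBF / K_env, with a
# first-order annual failure rate DUTY/MTBF for ~7,300 duty hours/year.
K_ENV = 2.5      # MIL-HDBK-217F-style environment factor (ground mobile, airfield)
DUTY = 7_300     # hours/year (16-20 h/day)

def airside_mtbf(base_hours: float) -> float:
    return base_hours / K_ENV

def annual_failure_rate(mtbf_hours: float) -> float:
    """lambda * t approximation; valid while DUTY/MTBF << 1."""
    return DUTY / mtbf_hours

for name, base in [("mechanical", 50_000), ("MEMS", 80_000),
                   ("OPA", 200_000), ("flash", 250_000)]:
    m = airside_mtbf(base)
    print(f"{name:10s} {m:9,.0f} h  {annual_failure_rate(m):5.1%}/yr")
```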

10.3 Fleet-Level Impact

For a 50-vehicle fleet with 6 LiDARs each (300 total sensors):

| Technology | Annual Sensor Failures | Replacement Cost/Year | Downtime Hours/Year |
|---|---|---|---|
| Mechanical (current) | ~105 | $210-525K | ~525-1,050 hrs |
| MEMS | ~66 | $66-264K | ~330-660 hrs |
| OPA solid-state | ~27 | $14-54K | ~135-270 hrs |
| Flash | ~21 | $4-11K | ~105-210 hrs |

Solid-state migration saves $150-450K/year in sensor replacement for a 50-vehicle fleet, plus 400-800 hours of reduced downtime.


11. Integration with the reference airside AV stack

11.1 Point Cloud Format Compatibility

FMCW LiDAR produces extended point clouds with velocity. ROS integration requires extending PointCloud2 message:

python
#!/usr/bin/env python3
"""
ROS driver shim for FMCW solid-state LiDAR.
Extends standard PointCloud2 with per-point radial velocity.
"""

import rospy
import numpy as np
from sensor_msgs.msg import PointCloud2, PointField
from sensor_msgs import point_cloud2
from std_msgs.msg import Header


# Extended point fields for FMCW LiDAR
FMCW_POINT_FIELDS = [
    PointField('x', 0, PointField.FLOAT32, 1),
    PointField('y', 4, PointField.FLOAT32, 1),
    PointField('z', 8, PointField.FLOAT32, 1),
    PointField('intensity', 12, PointField.FLOAT32, 1),
    PointField('radial_velocity', 16, PointField.FLOAT32, 1),  # FMCW-specific
    PointField('snr', 20, PointField.FLOAT32, 1),  # Signal-to-noise ratio
    PointField('ring', 24, PointField.UINT16, 1),  # Scan line index
    PointField('time', 26, PointField.FLOAT32, 1),  # Per-point timestamp
]


class FMCWLidarBridge:
    """Bridge FMCW solid-state LiDAR to ROS PointCloud2."""
    
    def __init__(self):
        self.pub = rospy.Publisher('/fmcw_lidar/points', PointCloud2, queue_size=2)
        self.pub_velocity = rospy.Publisher('/fmcw_lidar/velocity_cloud', 
                                            PointCloud2, queue_size=2)
        
        # Backward compatibility: publish standard XYZI cloud on legacy topic
        self.pub_compat = rospy.Publisher('/rslidar_points', PointCloud2, queue_size=2)
    
    def publish_scan(self, fmcw_data):
        """
        Convert FMCW LiDAR native data to ROS PointCloud2.
        
        Publishes on two topics:
        1. Full FMCW cloud with velocity (for new FMCW-aware pipeline)
        2. Standard XYZI cloud (backward compatible with existing PointPillars/GTSAM)
        """
        header = Header()
        header.stamp = rospy.Time.now()
        header.frame_id = 'fmcw_lidar'
        
        # Full FMCW point cloud
        full_cloud = point_cloud2.create_cloud(header, FMCW_POINT_FIELDS, fmcw_data)
        self.pub.publish(full_cloud)
        
        # Backward-compatible XYZI cloud (drop velocity, SNR)
        xyzi_data = fmcw_data[:, :4]  # x, y, z, intensity only
        compat_fields = FMCW_POINT_FIELDS[:4]
        compat_cloud = point_cloud2.create_cloud(header, compat_fields, xyzi_data)
        self.pub_compat.publish(compat_cloud)
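`publish_scan` above treats `fmcw_data` as an (N, 8) array whose columns follow `FMCW_POINT_FIELDS`. A synthetic example of that layout (numpy assumed; `ring` and `time` are packed as floats here purely for illustration, whereas the wire format uses UINT16 for `ring`):

```python
import numpy as np

# Column order mirrors FMCW_POINT_FIELDS:
# x, y, z, intensity, radial_velocity, snr, ring, time
n = 4
fmcw_data = np.zeros((n, 8), dtype=np.float32)
fmcw_data[:, 0] = [10.0, 12.0, 15.0, 20.0]   # x (m)
fmcw_data[:, 1] = [-1.0, 0.0, 1.0, 2.0]      # y (m)
fmcw_data[:, 2] = 0.5                        # z (m)
fmcw_data[:, 3] = 80.0                       # intensity
fmcw_data[:, 4] = [-5.1, -5.0, -4.9, 3.2]    # radial velocity (m/s)
fmcw_data[:, 5] = 25.0                       # SNR (dB)
fmcw_data[:, 6] = np.arange(n)               # scan line index
fmcw_data[:, 7] = np.linspace(0.0, 0.1, n)   # per-point time offset (s)

# What the backward-compatible /rslidar_points topic carries:
xyzi = fmcw_data[:, :4]
```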

11.2 Velocity-Enhanced Perception Pipeline

CURRENT PIPELINE:                    FMCW-ENHANCED PIPELINE:
                                     
PointCloud (XYZI)                    PointCloud (XYZI + V)
      │                                    │
      ↓                                    ├──→ Velocity segmentation
  PointPillars                             │    (static vs dynamic in
  (Detection)                              │     single scan, no tracker)
      │                                    ↓
      ↓                              PointPillars+V
  CenterPoint                        (Velocity-augmented detection)
  Tracker (MOT)                            │
      │                                    ↓
      ↓                              Simplified tracker
  Velocity estimate                  (velocity already measured)
  (3-5 frame delay)                        │
      │                                    ↓
      ↓                              Instant velocity
  Frenet planner                     (zero-frame delay!)


                                     Frenet planner
                                     (velocity-aware costs)
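The "velocity segmentation" stage in the diagram above can be sketched in a few lines: for a static point, the Doppler return is just ego-motion projected onto the line of sight, so any point whose measured radial velocity deviates from that prediction is dynamic. A minimal numpy sketch, where the 0.5 m/s threshold and the sign convention (v_r = -v_ego · d for static points, matching Section 11.3) are assumptions, not vendor specifications:

```python
import numpy as np

def segment_by_velocity(points, radial_velocity, ego_velocity, threshold=0.5):
    """Split a single FMCW scan into static and dynamic points.

    For a static point the measured radial velocity is the projection
    of ego-motion onto the line of sight:
        v_r ≈ -v_ego · d   (d = unit vector from sensor to point)
    Points whose residual exceeds `threshold` (m/s) are flagged dynamic.
    No tracker, no frame-to-frame association needed.
    """
    norms = np.linalg.norm(points, axis=1, keepdims=True)
    directions = points / np.maximum(norms, 1e-9)  # guard against origin points
    expected_static = -directions @ ego_velocity   # predicted v_r if static
    residual = np.abs(radial_velocity - expected_static)
    dynamic_mask = residual > threshold
    return points[~dynamic_mask], points[dynamic_mask]
```

Example: driving forward at 5 m/s, a wall point dead ahead returns v_r ≈ -5 m/s and is classified static, while a point with v_r inconsistent with ego-motion is flagged dynamic in the same scan.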

11.3 GTSAM Localization Enhancement

FMCW velocity data improves ego-velocity estimation for GTSAM:

```cpp
// Add FMCW ego-velocity factor to the GTSAM factor graph.
// Uses per-point radial velocity to estimate vehicle velocity.
// Sketch against the GTSAM 4.x boost::optional Jacobian API.

#include <gtsam/navigation/NavState.h>
#include <gtsam/nonlinear/NonlinearFactor.h>

class FMCWVelocityFactor : public gtsam::NoiseModelFactor1<gtsam::NavState> {
    gtsam::Vector3 measured_velocity_;  // From FMCW Doppler

public:
    FMCWVelocityFactor(gtsam::Key key, const gtsam::Vector3& measured_v,
                       const gtsam::SharedNoiseModel& model)
        : NoiseModelFactor1(model, key), measured_velocity_(measured_v) {}

    gtsam::Vector evaluateError(const gtsam::NavState& state,
                                 boost::optional<gtsam::Matrix&> H) const override {
        // Error = predicted velocity - measured velocity
        gtsam::Vector3 predicted_v = state.velocity();
        if (H) {
            // Jacobian: NavState tangent is [rot(3), pos(3), vel(3)];
            // the error depends only on the velocity block.
            *H = gtsam::Matrix::Zero(3, 9);
            (*H).block<3,3>(0, 6) = gtsam::Matrix3::Identity();  // d(error)/d(vel)
        }
        return predicted_v - measured_velocity_;
    }
};

// Usage in the GTSAM factor graph:
// 1. Estimate ego-velocity from static-point FMCW velocities
// 2. Static points: radial_velocity ≈ -v_ego · point_direction
// 3. RANSAC to separate static/dynamic, estimate v_ego
// 4. Add FMCWVelocityFactor with the estimated v_ego
```
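Steps 1-3 of the usage comments reduce to a small linear problem: stacking v_r = -d · v_ego over many points gives an overdetermined system solvable by least squares, with a couple of residual-gating rounds standing in for full RANSAC. A hedged numpy sketch (iteration count and 0.5 m/s inlier threshold are illustrative, not tuned values):

```python
import numpy as np

def estimate_ego_velocity(directions, radial_velocities, iters=3, inlier_thresh=0.5):
    """Estimate ego-velocity from per-point FMCW Doppler.

    Model: static points satisfy v_r = -d · v_ego, so stacking all points
    yields the linear system  -D v_ego ≈ v_r, solved by least squares.
    A few residual-gating rounds discard dynamic points (large residuals),
    a crude stand-in for full RANSAC. Assumes the inlier set keeps at
    least 3 non-coplanar directions; otherwise the system is degenerate.
    """
    mask = np.ones(len(directions), dtype=bool)
    v_ego = np.zeros(3)
    for _ in range(iters):
        v_ego, *_ = np.linalg.lstsq(-directions[mask],
                                    radial_velocities[mask], rcond=None)
        residual = np.abs(radial_velocities + directions @ v_ego)
        mask = residual < inlier_thresh
    return v_ego, mask
```

The resulting `v_ego` is what would feed the `FMCWVelocityFactor` above; `mask` doubles as the static/dynamic split for the perception pipeline.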

11.4 Adaptive Resolution for Active Perception

OPA beam steering enables dynamic resolution allocation:

```python
class AdaptiveResolutionManager:
    """
    Manage OPA LiDAR scan pattern based on scene context.

    Unlike mechanical LiDAR (fixed scan pattern), OPA can allocate
    more points to regions of interest and fewer to empty space.
    """

    # Region-of-interest priorities for airside
    ROI_PRIORITIES = {
        'aircraft_zone': 1.0,       # Highest: near aircraft
        'personnel_area': 0.9,      # Very high: where people work
        'active_taxiway': 0.7,      # High: traffic areas
        'docking_approach': 1.0,    # Highest: precision needed
        'empty_taxiway': 0.2,       # Low: open space
        'sky': 0.0,                 # None: waste of points
    }

    def compute_scan_pattern(self, scene_context, total_points_budget=100000):
        """
        Allocate the point budget across the FoV based on scene context.

        Returns: list of (azimuth, elevation, dwell_time) tuples
        """
        regions = scene_context.get_regions()

        # Allocate points proportional to priority × solid angle
        total_weighted_area = sum(
            r.solid_angle * self.ROI_PRIORITIES.get(r.type, 0.3)
            for r in regions
        )
        if total_weighted_area == 0:
            return []  # no region worth scanning; avoid division by zero

        scan_points = []
        for region in regions:
            priority = self.ROI_PRIORITIES.get(region.type, 0.3)
            n_points = int(total_points_budget *
                           (region.solid_angle * priority) / total_weighted_area)

            # Generate a uniform point grid within the region
            points = region.generate_uniform_grid(n_points)
            scan_points.extend(points)

        return scan_points
```

12. Cost Roadmap and Market Dynamics

12.1 LiDAR Cost Trajectory

| Year | Mechanical Spinning | MEMS | OPA Solid-State | Flash |
|------|---------------------|------|-----------------|-------|
| 2020 | $4,000-10,000 | $1,000-3,000 | N/A (lab only) | $2,000-5,000 |
| 2023 | $1,500-5,000 | $500-1,500 | $5,000+ (prototype) | $1,000-3,000 |
| 2025 | $1,000-3,000 | $300-800 | $1,000-3,000 | $500-1,500 |
| 2027 (proj.) | $800-2,000 | $200-500 | $200-800 | $200-500 |
| 2030 (proj.) | $500-1,500 | $100-300 | $50-200 | $100-300 |

12.2 Volume Dependency

Cost per unit ($)

10K ┤ ○ 
    │  ╲
 5K ┤   ╲  Mechanical (limited scaling)
    │    ╲─────────────────────────────
 2K ┤     ╲
    │      ○ 
 1K ┤       ╲  MEMS (moderate scaling)
    │        ╲──────────────────────
500 ┤         ╲
    │          ╲
200 ┤           ○  OPA/SiPh (semiconductor scaling)
    │            ╲
100 ┤             ╲─────────────────
 50 ┤              ╲
    │               ╲_______________
    └──┬──┬──┬──┬──┬──┬──┬──┬──┬──→
      1K 10K 100K 1M 10M         Units/year
      
Key insight: Silicon photonics follows semiconductor cost curves.
At automotive volumes (1M+ units/year), OPA LiDAR approaches $50-100.
At reference airside AV stack volumes (100-1000 units/year), OPA premium still significant.
Strategy: Use automotive-volume sensors designed for cars, adapt for airside.

12.3 Market Consolidation (2025-2028)

The LiDAR market is consolidating rapidly:

  • Ibeo: Filed insolvency (2022), acquired by MicroVision
  • Velodyne + Ouster: Merged (2023), now focused on industrial/robotics
  • Quanergy: Bankrupt (2022)
  • Cepton: Taken private by Koito (deal announced 2024, completed 2025)
  • Innoviz: Secured BMW production deal, largest MEMS deployment
  • Aeva: Only pure-play FMCW company with automotive design win
  • Voyant: Newest entrant, most aggressive silicon photonics approach

Implication for reference airside AV stack: Don't bet on a single vendor. Design sensor interfaces that can swap between RoboSense (current), Aeva (FMCW mid-term), and silicon photonics OPA (long-term). The ROS PointCloud2 abstraction already provides this.


13. Migration Strategy: Mechanical to Solid-State

13.1 Phased Migration

Phase 1 (2025-2026): EVALUATE
├── Add 1-2 Aeva Atlas (FMCW) alongside existing RoboSense
├── Run dual-stack: existing pipeline + FMCW-enhanced pipeline
├── Validate velocity data utility for tracking, jet blast detection
├── No changes to safety-critical pipeline
└── Cost: $5-10K hardware + $15-20K integration

Phase 2 (2026-2027): AUGMENT
├── Replace 2 of 6 RoboSense with FMCW (forward-facing)
├── Add flash LiDAR for docking (2-4 per vehicle)
├── Modify PointPillars to use velocity channel
├── FMCW velocity feeds GTSAM ego-velocity factor
└── Cost: $10-20K hardware + $25-35K software

Phase 3 (2027-2028): PRIMARY TRANSITION
├── Majority FMCW sensors (4 of 6 positions)
├── Keep 2 mechanical as redundancy during transition
├── Full velocity-aware perception pipeline
├── Adaptive resolution active perception
└── Cost: $15-25K hardware + $20-30K software

Phase 4 (2028-2030): FULL SOLID-STATE
├── All sensors solid-state (OPA + flash)
├── Remove mechanical LiDAR entirely
├── Per-point velocity as primary input to planner
├── Fleet-wide sensor standardization
└── Cost: $10-20K per vehicle (sensor swap)

13.2 Backward Compatibility Requirement

Critical: Each phase must maintain backward compatibility with existing perception pipeline. The ROS PointCloud2 message format enables this — FMCW sensors publish standard XYZI on the same topic, with velocity as an optional additional field.
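The optional-field check this relies on can be a one-liner over the PointCloud2 field list. A sketch of the `has_velocity_field` helper used by the dual-stack code below (the field name matches `FMCW_POINT_FIELDS` from Section 11.1):

```python
def has_velocity_field(cloud_msg):
    """True if this PointCloud2 carries per-point FMCW radial velocity."""
    return any(f.name == 'radial_velocity' for f in cloud_msg.fields)
```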

```python
class DualStackPerception:
    """
    Run the existing ToF pipeline and the FMCW-enhanced pipeline in parallel.
    Simplex pattern: FMCW-enhanced as AC, existing ToF as BC.

    extract_xyzi / extract_xyziv / has_velocity_field are project helpers
    (PointCloud2 field extraction), assumed defined elsewhere.
    """

    def __init__(self):
        # Existing pipeline (backward compatible)
        self.tof_pipeline = PointPillarsDetector()  # Standard XYZI input

        # New FMCW-enhanced pipeline
        self.fmcw_pipeline = VelocityAwareDetector()  # XYZIV input

        # Safety monitor compares both
        self.safety_monitor = PerceptionSafetyMonitor()

    def process(self, cloud_msg):
        """
        Dual-stack processing during migration phases.
        """
        # Extract standard XYZI for the backward-compatible pipeline
        xyzi = extract_xyzi(cloud_msg)
        tof_detections = self.tof_pipeline.detect(xyzi)

        # Extract XYZIV for the FMCW-enhanced pipeline (if velocity available)
        if has_velocity_field(cloud_msg):
            xyziv = extract_xyziv(cloud_msg)
            fmcw_detections = self.fmcw_pipeline.detect(xyziv)

            # Safety monitor: cross-check both pipelines
            if self.safety_monitor.agree(tof_detections, fmcw_detections):
                return fmcw_detections  # Use enhanced results
            else:
                rospy.logwarn("Perception disagreement — using conservative ToF pipeline")
                return tof_detections  # Fall back to the proven pipeline
        else:
            return tof_detections  # No FMCW data available
```

14. Implementation Roadmap

14.1 Timeline and Costs

| Phase | Duration | Hardware Cost | Software Cost | Total | Key Deliverable |
|-------|----------|---------------|---------------|-------|-----------------|
| Phase 1: Evaluation | 8 weeks | $5-10K | $15-20K | $20-30K | FMCW data characterization report |
| Phase 2: Augmentation | 12 weeks | $10-20K | $25-35K | $35-55K | Velocity-enhanced PointPillars, flash docking |
| Phase 3: Primary transition | 16 weeks | $15-25K | $20-30K | $35-55K | Majority FMCW fleet, adaptive resolution |
| Phase 4: Full solid-state | 12 weeks | $10-20K/vehicle | $10-15K | $20-35K + per-vehicle | All-solid-state fleet |
| Total | 48 weeks | $40-75K + fleet | $70-100K | $110-175K + fleet | |

14.2 Risk Mitigation

| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| FMCW range insufficient for airside | Low | High | Evaluate multiple vendors; 200m is sufficient, most claim 300m+ |
| OPA resolution inadequate | Medium | Medium | MEMS as bridge; flash LiDAR for close-range |
| Software integration complexity | Medium | Medium | Dual-stack approach allows gradual migration |
| Vendor bankruptcy/exit | Medium | High | Vendor-agnostic ROS interface; multi-source strategy |
| Point rate too low for PointPillars | Low | High | Accumulate 2-3 scans; OPA point rates increasing rapidly |
| 1550nm atmospheric absorption | Low | Low | 1550nm absorption is minimal in normal atmosphere; FMCW coherent gain compensates |
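The "accumulate 2-3 scans" mitigation for low point rate is a small ring buffer; in practice each scan must first be ego-motion-compensated into a common frame before stacking, which this minimal sketch omits. Assumes scans arrive as numpy (N, 4+) arrays:

```python
from collections import deque

import numpy as np


class ScanAccumulator:
    """Stack the most recent n_scans clouds into one denser cloud so that
    sparse solid-state scans meet PointPillars' expected input density.

    Caveat: real use requires transforming each buffered scan by the
    ego-motion between scans; raw stacking smears points when moving.
    """

    def __init__(self, n_scans=3):
        self.buffer = deque(maxlen=n_scans)  # oldest scan drops automatically

    def add(self, scan):
        self.buffer.append(scan)
        return np.vstack(self.buffer)
```

After the buffer fills, each call returns a cloud with n_scans times the single-scan point count at the cost of up to (n_scans - 1) frame periods of extra latency on the oldest points.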

15. Key Takeaways

  1. Solid-state LiDAR saves $150-450K/year for a 50-vehicle fleet through reduced sensor replacement, less downtime, and lower maintenance labor

  2. FMCW provides per-point velocity at zero additional latency — eliminates 300-500ms tracking delay for velocity estimation, transformative for jet blast detection and approaching-object early warning

  3. 1550nm eye safety is critical for airside — 100x higher safe power limit vs 905nm protects ground crew working within 1-2m of sensors

  4. Voyant Helium (photonic focal plane array) is the long-term target — truly no moving parts, semiconductor scaling economics, <150g per sensor. Prototype at CES 2026

  5. Aeva Atlas is the near-term practical choice — FMCW + MEMS, 300m range, automotive-grade, production 2025. Best bridge technology while OPA matures

  6. Migration is incremental, not big-bang — dual-stack Simplex pattern (FMCW-enhanced AC, existing ToF BC) allows gradual transition with zero safety regression

  7. Adaptive OPA resolution enables foveated perception — dynamically allocate more points to aircraft/personnel zones, fewer to empty taxiway. 2-5x effective resolution improvement in regions that matter

  8. Flash LiDAR solves docking — wide-FoV, dense, no moving parts, $200-500/unit. Complementary to long-range FMCW for docking applications requiring ±5cm precision

  9. Silicon photonics cost curves follow semiconductors — current $1,000-3,000 FMCW sensors should reach $50-200 at automotive volumes (2028-2030). At reference airside AV stack volumes (100-1000/year), the way to capture that pricing is to adopt sensors built for automotive programs rather than niche parts

  10. FMCW velocity data improves GTSAM localization — direct ego-velocity measurement from static-point Doppler provides additional factor for fusion alongside IMU, wheel odometry, and RTK-GPS


16. References

Papers and Standards

  1. Poulton, C.V. et al., "Large-scale silicon nitride nanophotonic phased arrays at infrared and visible wavelengths," Optics Letters, 2017
  2. Rogers, C. et al., "A universal 3D imaging sensor on a silicon photonics platform," Nature, 2021
  3. Hsu, C.P. et al., "A Review and Perspective on Optical Phased Array for Automotive LiDAR," IEEE Journal of Selected Topics in Quantum Electronics, 2021
  4. Li, B. et al., "A large-scale microelectromechanical-systems-based silicon photonics LiDAR," Nature, 2022
  5. Sun, X. et al., "Si Photonics FMCW LiDAR Chip with Solid-State Beam Steering by Interleaved Coaxial Optical Phased Array," Micromachines, 2023
  6. Isaac, B.J. et al., "Photonic-electronic integrated circuit-based coherent LiDAR engine," Nature Communications, 2024
  7. Fan, G. et al., "Progress and prospects for LiDAR-oriented optical phased arrays based on photonic integrated circuits," npj Nanophotonics, 2025
  8. Luo, G. et al., "Requirements for next-generation integrated photonic FMCW LiDAR sources," Nature Communications, 2025

Products and Companies

  1. Aeva, "Atlas — Automotive-Grade 4D LiDAR," Product Specification, 2025
  2. Aeva, "Aeries II — Camera-Level Resolution 4D LiDAR," Product Specification, 2026
  3. Voyant Photonics, "Helium Platform — Fully Solid-State 4D FMCW LiDAR," Press Release, December 2025
  4. SiLC Technologies, "Eyeonic Vision System — FMCW LiDAR with Polarization," Product Specification, 2024
  5. Innoviz Technologies, "InnovizTwo — Automotive-Grade MEMS LiDAR," Product Specification, 2024

Market Analysis

  1. Yole Group, "LiDAR for Automotive and Industrial 2025," Market Report, 2025
  2. PatSnap, "LiDAR Sensor Technology Landscape for Autonomous 2026," Analysis, 2026
Related Documents

  • 20-av-platform/sensors/sensor-degradation-health-monitoring.md — Sensor health monitoring framework
  • 20-av-platform/sensors/multi-lidar-calibration.md — Multi-LiDAR calibration (applies to mixed ToF+FMCW fleet)
  • 30-autonomy-stack/perception/overview/active-perception-sensor-scheduling.md — Adaptive resolution scheduling (OPA enables this)
  • 30-autonomy-stack/planning/autonomous-docking-precision-positioning.md — Docking use case for flash LiDAR
  • 30-autonomy-stack/perception/overview/radar-lidar-fusion-adverse-weather.md — Weather resilience (FMCW LiDAR + 4D radar complementary)

Research notes compiled from public sources.