Incident Reporting and Post-Market Monitoring

Last updated: 2026-05-09

Post-market monitoring is how the safety case stays alive after launch. It captures incidents, near misses, field performance, misuse, ODD drift, cybersecurity events, customer/site reports, and corrective actions. Incident reporting is the regulated subset of that system. The operational rule is simple: collect and preserve enough evidence to decide reportability quickly, report on time when required, and feed the lessons back into the safety case and release gates.

Practical Evidence and Artifact Model

| Artifact | Contents | Purpose |
| --- | --- | --- |
| Incident intake record | Event ID, time, site, vehicles, people/assets involved, initial severity, reporter, system state | Starts traceable handling |
| Reportability matrix | Jurisdiction, regulator/customer contract, trigger criteria, deadline, owner, submission status | Prevents missed reporting clocks |
| Forensic evidence package | Vehicle manifest, active map/model/config/calibration, rosbags/MCAPs, video, telemetry, operator actions, cloud logs | Supports investigation and external reports |
| Near-miss record | Trigger, closest approach, safety margin, intervention, ODD condition, contributing factors | Leading indicator for safety improvement |
| Post-market monitoring plan | Signals monitored, thresholds, sampling, review cadence, escalation criteria, safety-case linkage | Meets continuous assurance expectations |
| Trend review | Periodic incident and near-miss rates by site, vehicle, ODD, version, map tile, operator, weather | Detects drift and recurring hazards |
| Corrective and preventive action | Root cause, containment, long-term action, verification, owner, due date | Closes the loop |
| Safety-case change request | Claims, assumptions, hazards, requirements, or evidence affected | Keeps assurance current |
| External communication log | Regulator, airport/site authority, customer, insurer, public statement, timestamps | Preserves transparency record |

The incident evidence package should be immutable after preservation. Investigators may derive working copies, but raw evidence and the active deployment manifest must remain unchanged.
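One way to enforce that immutability in practice is to seal the evidence package with content hashes at preservation time, then re-verify before any external report. A minimal sketch (the directory layout, function names, and manifest format here are illustrative, not a prescribed tooling interface):

```python
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large rosbags/MCAPs are never loaded whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def seal_evidence(evidence_dir: Path, manifest_path: Path) -> dict:
    """Record a digest for every file in the package at preservation time."""
    manifest = {
        str(p.relative_to(evidence_dir)): sha256_file(p)
        for p in sorted(evidence_dir.rglob("*")) if p.is_file()
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest

def verify_evidence(evidence_dir: Path, manifest_path: Path) -> list[str]:
    """Return files whose current digest no longer matches the sealed manifest."""
    sealed = json.loads(manifest_path.read_text())
    return [
        rel for rel, digest in sealed.items()
        if sha256_file(evidence_dir / rel) != digest
    ]
```

Investigators then work only on copies; `verify_evidence` returning an empty list is the check that raw evidence was never touched.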

Reporting and Monitoring Obligations

United States road AV reporting

NHTSA Standing General Order 2021-01 was first issued on 2021-06-29 and has been amended in 2021, 2023, and 2025. NHTSA's public SGO page states that identified manufacturers and operators must report certain crashes involving ADS or SAE Level 2 ADAS. The current reporting form and trigger definitions should be checked directly before making a submission because the order and data elements have changed over time.

U.S. airport AGVS coordination

FAA Part 139 CertAlert 24-02, published on 2024-02-15, addresses autonomous ground vehicle systems technology on certificated airports. It emphasizes early FAA coordination for airport AGVS activities. For airside fleets, reportability may also arise from airport SMS, tenant contracts, airport operations procedures, insurer terms, and local civil aviation authority requirements, even when no road-vehicle SGO applies.

EU AI Act post-market monitoring and incidents

Regulation (EU) 2024/1689 entered into force on 2024-08-01. Article 72 establishes post-market monitoring for high-risk AI systems, and Article 73 establishes serious incident reporting. The European Commission published draft guidance and a reporting template for serious AI incidents on 2025-09-26, with consultation closing on 2025-11-07.

As of 2026-05-09, a political agreement announced on 2026-05-07 would delay high-risk AI obligations to 2027-12-02 for stand-alone high-risk AI systems and 2028-08-02 for high-risk AI systems embedded in products. The Council press release says the agreement is provisional and still requires endorsement, legal-linguistic revision, and formal adoption. Until the final legal text is published, keep both the current AI Act dates and the provisional amended dates in the regulatory watch log.

SOTIF and aviation AI monitoring

ISO 21448:2022 includes activities during the operation phase needed to achieve and maintain SOTIF. EASA's AI Roadmap 2.0 was published on 2023-05-10, and EASA NPA 2025-07 was published on 2025-11-10 to propose detailed specifications and AMC/GM for AI trustworthiness in aviation in response to the EU AI Act.

Deployment Operations

1. Define reportable and internally reportable events

Use a two-layer taxonomy:

| Layer | Examples |
| --- | --- |
| Externally reportable candidates | Injury, fatality, tow-away or asset-damage threshold, vulnerable road user involvement, aircraft/critical infrastructure strike, serious AI incident, airport SMS reportable event, cybersecurity incident |
| Internally reportable leading indicators | Near miss, hard brake, MRC activation, emergency stop, remote intervention, ODD exit, perception-map disagreement, repeated rule violation, operator confusion |

Do not wait for a final root cause before preserving evidence or starting the reportability assessment; reporting clocks may already be running.
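The reportability matrix can be mechanized so that matching an event against each regime immediately yields the tightest deadline and its owner. A sketch under stated assumptions: the regime names, trigger attributes, and hour counts below are placeholders, and real deadlines must come from the current legal text and contracts, not from code defaults.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class ReportingRegime:
    regime: str            # label for the regulator/customer regime (illustrative)
    trigger: str           # event attribute that starts this regime's clock
    deadline_hours: int    # hours from clock start to required submission
    owner: str             # named reportability owner for this regime

def reporting_deadlines(event_attrs: set[str], clock_start: datetime,
                        matrix: list[ReportingRegime]) -> list[tuple[str, datetime, str]]:
    """Return (regime, due-by, owner) for every regime whose trigger matches
    the event, sorted so the tightest clock is handled first."""
    due = [
        (r.regime, clock_start + timedelta(hours=r.deadline_hours), r.owner)
        for r in matrix if r.trigger in event_attrs
    ]
    return sorted(due, key=lambda d: d[1])
```

Calling this at intake, before any root-cause work, is what keeps the "report on time" rule independent of investigation progress.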

2. Preserve evidence immediately

For SEV-0 and SEV-1 events:

  1. Freeze vehicle deployment manifest and active map/model/config/calibration IDs.
  2. Preserve pre-event and post-event sensor data, telemetry, logs, operator UI actions, remote-assistance sessions, and cloud traces.
  3. Preserve site context: weather, lighting, traffic, work orders, NOTAMs, construction, shift handover, maintenance state.
  4. Lock evidence against deletion and overwriting.
  5. Assign a reportability owner and a safety investigation owner.
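Step 1, freezing the deployment manifest, can be as simple as serializing the active IDs into the evidence package under the event ID, using an immutable record so the snapshot cannot be edited later. The field names below are illustrative; the real manifest should carry whatever version identifiers the fleet actually tracks.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)  # frozen: the snapshot cannot be mutated after capture
class DeploymentManifest:
    vehicle_id: str
    software_version: str
    map_id: str
    model_id: str
    config_id: str
    calibration_id: str

def freeze_manifest(manifest: DeploymentManifest, event_id: str) -> str:
    """Serialize the active IDs into the evidence package, keyed by event ID."""
    return json.dumps({"event_id": event_id, **asdict(manifest)}, sort_keys=True)
```

The frozen snapshot is what lets an investigator later reproduce exactly which map, model, and calibration were active at the time of the event.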

3. Review field data on a cadence

Post-market monitoring should include:

  • Daily safety operations review for severe events and open containment actions.
  • Weekly trend review for near misses, interventions, ODD exits, and version correlations.
  • Monthly safety performance indicator review by site and vehicle type.
  • Release-specific enhanced monitoring after OTA/model/map/config changes.
  • Quarterly safety-case review of assumptions that field evidence has weakened or confirmed.
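The weekly trend review above amounts to normalizing event counts by exposure per slice and flagging slices that cross a threshold. A minimal sketch, assuming near misses are keyed by site and software version and exposure is tracked in operating hours (both assumptions, not a mandated schema):

```python
from collections import defaultdict

def near_miss_rates(events: list[dict],
                    exposure_hours: dict[tuple[str, str], float]
                    ) -> dict[tuple[str, str], float]:
    """Near misses per 1,000 operating hours, keyed by (site, software version)."""
    counts: dict[tuple[str, str], int] = defaultdict(int)
    for e in events:
        counts[(e["site"], e["version"])] += 1
    return {k: 1000.0 * n / exposure_hours[k] for k, n in counts.items()}

def breaches(rates: dict[tuple[str, str], float],
             threshold: float) -> list[tuple[str, str]]:
    """Slices whose rate exceeds the review threshold, worst first."""
    over = [k for k, r in rates.items() if r > threshold]
    return sorted(over, key=lambda k: rates[k], reverse=True)
```

Normalizing by exposure matters because a raw count comparison penalizes the busiest site; the same slicing keys extend naturally to ODD, map tile, operator, and weather.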

4. Feed corrective actions back into release gates

Every closed incident should answer:

  • Did the incident invalidate an ODD assumption?
  • Did a safety monitor miss or detect late?
  • Should a scenario be added to simulation or track testing?
  • Does a model dataset need new coverage?
  • Does a map or site procedure need change?
  • Does operator training or HMI wording need change?
  • Does SUMS require a new release gate?

Risks and Failure Modes

| Failure mode | Consequence | Control |
| --- | --- | --- |
| Reporting clock missed | Regulatory penalty and trust loss | Reportability matrix and named owner on every severe event |
| Severity downplayed too early | Evidence lost or reporting delayed | Classify high until evidence supports downgrade |
| Near misses ignored | Leading indicators never become fixes | Near-miss thresholds and trend reviews |
| Privacy conflicts with evidence retention | Evidence is deleted or over-shared | Legal hold path plus restricted access and redaction policy |
| Root cause stops at "operator error" | System contributors remain | Human factors and system-factor review |
| External narratives diverge from evidence | Regulator/customer trust degrades | Single source of truth, communications lead, decision log |
| Post-market data not linked to safety case | Assurance becomes stale | Safety-case change requests for incidents and trends |
| Multiple reporting regimes conflict | Late, duplicate, or inconsistent reports | Regulator/customer matrix and legal/regulatory review |
Related

  • 60-safety-validation/safety-case/safety-incidents-lessons.md
  • 60-safety-validation/safety-case/safety-case-evidence-traceability.md
  • 50-cloud-fleet/operations/fleet-sre-incident-response.md
  • 50-cloud-fleet/observability/fleet-anomaly-root-cause-attribution.md
  • 40-runtime-systems/data-logging/on-vehicle-data-triage-selective-upload.md
  • 60-safety-validation/runtime-assurance/runtime-verification-monitoring.md
  • 60-safety-validation/standards-certification/certification-guide.md

Sources

Public research notes collected from public sources.