
Waymo: Comprehensive Autonomous Vehicle Technology Stack

Last updated: March 2026


Table of Contents

  1. Company Overview
  2. Vehicle Platform
  3. Sensor Suite
  4. Onboard Compute
  5. Autonomy Software Stack
  6. Machine Learning & AI
  7. Mapping & Localization
  8. Simulation Platform
  9. Cloud & Data Infrastructure
  10. Programming Languages & Tools
  11. Safety & Redundancy Architecture
  12. Fleet Operations & Ride-Hailing
  13. Regulatory & Safety Record
  14. Key Partnerships & Suppliers
  15. Research & Publications
  16. Engineering Organization
  17. Competitive Differentiators
  18. Open Source & Datasets

1. Company Overview

Corporate Identity

  • Legal Name: Waymo LLC
  • Parent Company: Alphabet Inc. (majority owner)
  • Headquarters: Mountain View, California, USA
  • Founded: January 17, 2009 (as Google Self-Driving Car Project); spun out as Waymo in December 2016
  • Employees: ~3,776 (as of early 2026)
  • Valuation: $126 billion (post-money, February 2026)
  • Annual Revenue Run Rate: ~$350 million (late 2025)
  • Co-CEOs: Dmitri Dolgov (technology), Tekedra Mawakana (business)

Funding History

Round | Date | Amount Raised | Post-Money Valuation | Lead Investors
Series A | March 2020 | $2.25B (grew to $3.2B) | ~$30B | Silver Lake, Mubadala, Magna, Andreessen Horowitz
Series B | June 2021 | $2.5B | ~$30B | Alphabet and external investors
Series C | October 2024 | $5.6B | $45B | Alphabet, Andreessen Horowitz, Silver Lake, Fidelity, Tiger Global, T. Rowe Price
Series D | February 2026 | $16B | $126B | Dragoneer, DST Global, Sequoia Capital, Alphabet
Total Raised | -- | ~$27.1B | -- | --

Key Milestones

  • 2009: Google Self-Driving Car Project launched at Google X by Sebastian Thrun
  • 2010: Project publicly revealed by the New York Times (Oct 9)
  • 2012: First autonomous vehicle licensed in Nevada (modified Prius)
  • 2015: First fully driverless ride on public roads (Austin, TX; no steering wheel)
  • 2016: Spun out from Google X as "Waymo" under Alphabet
  • 2017: Launched Early Rider Program in Phoenix metro with Chrysler Pacificas
  • 2018: Launched Waymo One commercial ride-hailing in Phoenix (with safety drivers)
  • 2019: Surpassed 10 billion simulated miles
  • 2020: Fully driverless (rider-only) rides began in Phoenix; 5th-gen hardware unveiled
  • 2021: Rider-only rides begin in San Francisco; co-CEO structure established
  • 2022: Expanded rider-only operations in SF; Waymo One rides begin in SF
  • 2023: Waymo Via trucking unit closed to focus on robotaxis; Chrysler Pacifica fleet retired
  • 2024: Launched in Los Angeles; 6th-gen Waymo Driver announced; surpassed 100K weekly paid rides
  • 2025: Launched in Austin and Atlanta; 450K+ weekly rides; 2,500 robotaxis in service; Magna factory opened in Mesa, AZ
  • 2026: 6th-gen Waymo Driver begins autonomous operations; $16B raised; Waymo World Model unveiled; 200M autonomous miles logged; expansion to 10+ cities

2. Vehicle Platform

Vehicle Generations

Vehicle | Partner OEM | Generation | Status | Battery | Architecture | Approx. Fleet Count
Toyota Prius | Toyota | Gen 1-2 | Retired (2009-2012) | Hybrid | N/A | ~12
Lexus RX450h | Toyota/Lexus | Gen 3 | Retired (2012-2017) | Hybrid | N/A | ~30
Chrysler Pacifica Hybrid | FCA/Stellantis | Gen 4 | Retired (2017-2023) | 16 kWh PHEV (33 mi EV range) | N/A | ~1,000
Jaguar I-PACE | Jaguar Land Rover | Gen 5 | Active | 90 kWh Li-ion, 234 mi EPA range | 400V | ~1,500+ (scaling to 3,500 by end 2026)
Waymo Ojai (Zeekr RT) | Geely/Zeekr | Gen 6 | Active (Feb 2026) | 93 kWh LFP | 800V | Initial deployments; ramping through 2026

Jaguar I-PACE (5th Generation Waymo Driver)

The all-electric Jaguar I-PACE has served as Waymo's primary commercial vehicle since 2020. Key specifications:

  • Drivetrain: Dual-motor AWD, 394 hp combined
  • Battery: 90 kWh liquid-cooled LG Chem lithium-ion
  • EPA Range: 234 miles (377 km)
  • Charging: 100 kW DC fast charge (0-80% in ~40 min)
  • Body: Aluminum-intensive monocoque
  • Sensor Hardware: 5th-gen Waymo Driver with 29 cameras, 5 LiDAR, 6 radar, external audio receivers
  • Sensor Pod: Roof-mounted pod with spinning LiDAR (Laser Bear Honeycomb evolution), plus front/rear bumper pods containing cameras and radar. The pod design features overlapping fields of view for full 360-degree coverage.
  • Waymo received its final Jaguar I-PACE delivery and plans to retrofit 2,000+ additional I-PACE vehicles through 2026 at the Magna Mesa factory.

Waymo Ojai / Zeekr RT (6th Generation Waymo Driver)

The purpose-built robotaxi developed with Geely's Zeekr brand represents the first mass-produced vehicle designed from the ground up for autonomous ride-hailing.

  • Official Name: Waymo Ojai (named after the California city; internally "Zeekr RT")
  • OEM Partner: Zeekr (majority owned by Geely, parent of Volvo Cars)
  • Design Origin: Designed in Sweden, manufactured in China (Zeekr), integrated in Mesa, AZ (Magna/Waymo)
  • Drivetrain: Rear-mounted single motor, 268 hp
  • Battery: 93 kWh lithium iron phosphate (LFP)
  • Architecture: 800V electrical architecture (enabling rapid DC fast charging for 24/7 fleet operation)
  • Doors: Three sliding doors (one driver-side conventional, two sliding passenger doors)
  • Interior: Purpose-built for passengers -- five seats, center tunnel table, no traditional driver controls in rider cabin
  • Sensor Hardware: 6th-gen Waymo Driver with 13 cameras, 4 LiDAR, 6 radar, external audio receivers (EARs)
  • Sensor Pod: Smaller, more integrated sensor package with miniature wipers for camera and LiDAR lens cleaning. Front and rear sensor pods contain cameras, radar, and LiDAR units distributed to maintain 360-degree overlap.
  • Distinguishing Feature: First mass-produced vehicle purpose-built for autonomous driving (per Zeekr CEO Andy An)

Chrysler Pacifica Hybrid (Retired)

  • Partnership: Began May 2016 with FCA (now Stellantis)
  • Fleet Size: Initially 100 units (Dec 2016), expanded to 600 by 2018, orders placed for up to 62,000
  • Actual Deployments: ~1,000 converted vehicles served riders before retirement in 2023
  • Modifications: Custom electrical, powertrain, chassis, and structural engineering for full autonomy
  • Reason for Retirement: Transition to all-electric fleet (Jaguar I-PACE)

3. Sensor Suite

Generation Comparison

Sensor Type | 5th Gen (I-PACE) | 6th Gen (Ojai) | Change
Cameras | 29 | 13 | -55%
LiDAR | 5 | 4 | -20%
Radar | 6 | 6 | 0%
External Audio Receivers | Yes | Yes | --
Max Detection Range | >500 m | >500 m | --
Field of View | 360 deg overlapping | 360 deg overlapping | --

LiDAR (In-House Design)

Waymo designs and manufactures its own LiDAR sensors, making it one of the few AV companies with fully vertically integrated LiDAR.

Types of LiDAR (3 distinct categories):

  1. Long-Range LiDAR (Roof-Mounted)

    • Custom "Laser Bear Honeycomb" design evolution
    • 360-degree rotation
    • Range: >300 meters
    • The 5th-gen version featured a 95-degree vertical field of view (VFOV)
    • 6th-gen version: reengineered illumination and internal data processing to better penetrate weather and reduce point cloud distortion near highly reflective signs
  2. Short-Range / Perimeter LiDAR

    • Positioned at four points around the vehicle sides (5th gen) or optimized placements (6th gen)
    • Provides uninterrupted surround view close to the vehicle body
    • Critical for detecting vulnerable road users (pedestrians, cyclists) at close range
    • Centimeter-scale range accuracy
  3. High-Resolution Forward LiDAR

    • First-of-its-kind zooming capability to dynamically focus on distant objects
    • Provides detailed point clouds at extended ranges

Cost Reduction: Waymo reduced LiDAR unit cost by more than 90% through in-house design (from $75,000+ per unit to an estimated sub-$7,500).

Core Components: Designed and built in California. Custom-designed chips and optical elements.

Cameras

5th Generation:

  • 29 cameras providing 360-degree coverage with overlapping fields of view
  • Capable of identifying stop signs at >500 meters

6th Generation:

  • 13 cameras (reduced count, higher capability per unit)
  • Next-generation 17-megapixel imager
  • Exceptional thermal stability across automotive temperature ranges
  • Day/night operation with overlapping fields of view up to 500 meters
  • Miniature wipers on front sensor pod to clear debris

Radar (In-House Imaging Radar)

Waymo developed one of the world's first automotive imaging radar systems.

  • Count: 6 radar units (both 5th and 6th gen)
  • Type: Custom in-house imaging radar
  • Capabilities:
    • Dense temporal maps tracking distance, velocity, and size of objects
    • Continuous 360-degree view
    • Tracks targets at >500 meters
    • Operates in all lighting and weather conditions (rain, fog, snow, direct sunlight)
    • Detects static, barely moving, and fully moving objects
    • Higher resolution and enhanced signal processing vs. conventional automotive radar
  • 6th-gen Improvements: New in-house algorithms for improved rain/snow performance; added dense temporal mapping for real-time object speed, size, and trajectory tracking

External Audio Receivers (EARs)

  • Array of microphones positioned around the vehicle exterior
  • Detects and localizes emergency vehicle sirens, honking, construction noise
  • Enables the Waymo Driver to respond to auditory cues that cameras and LiDAR cannot capture
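To make the auditory-cue idea concrete, here is a minimal far-field sketch of how a single microphone pair can estimate the direction a siren is coming from via time difference of arrival. This is a generic acoustics illustration, not Waymo's EAR implementation; all names and constants are invented.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def bearing_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Estimate angle of arrival (degrees) from the time difference of
    arrival between two microphones, under the far-field assumption.

    0 deg means the sound arrives broadside (equidistant from both mics);
    +/-90 deg means it arrives along the microphone axis.
    """
    # Path-length difference implied by the measured delay.
    path_diff = SPEED_OF_SOUND * delay_s
    # Clamp to the physically possible range before taking asin.
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing_m))
    return math.degrees(math.asin(ratio))

# A siren wavefront hitting one mic 0.5 ms before the other, mics 0.5 m apart:
angle = bearing_from_tdoa(0.0005, 0.5)   # roughly 20 degrees off broadside
```

An array of several such pairs around the vehicle lets the delays be triangulated into a full 360-degree bearing.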

4. Onboard Compute

Compute Architecture

Waymo designs its own onboard compute platform, treating it as a tightly integrated system with its sensor suite.

  • Primary Compute: Custom board combining server-grade CPUs and GPUs
  • Secondary Compute: Redundant backup computer running continuously in parallel
  • CPUs: Intel Xeon processors (historically confirmed)
  • GPUs: NVIDIA GPUs (likely A-series or successor for inference)
  • FPGAs: Intel Arria FPGAs for low-latency sensor preprocessing
  • Connectivity: Intel XMM LTE/5G modems; Intel Gigabit Ethernet for internal bus
  • Sensor Data Throughput: ~20 GB/s aggregate from all sensors
  • End-to-End Latency: ~3 ms (sensor input to actuation command)
  • Redundancy: Full dual-compute architecture; secondary system can bring the vehicle to a safe stop
  • Design: Entirely in-house by Waymo hardware engineers

Redundancy Philosophy

  • Dual compute boards: Primary and secondary systems run simultaneously; failover is seamless
  • Independent power supplies: Separate power paths for primary and backup systems
  • Watchdog monitoring: Hardware watchdogs detect compute failures within milliseconds
  • Graceful degradation: If primary compute fails, backup system executes a minimal risk condition (MRC) -- typically pulling over and stopping safely
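The heartbeat-and-failover pattern above can be sketched in a few lines. This is a toy model of watchdog-triggered failover in general, not Waymo's implementation; the timeout value, class, and constant names are invented.

```python
import time

NOMINAL = "primary_in_control"
MRC_PULL_OVER = "execute_minimal_risk_condition"

class ComputeWatchdog:
    """Toy watchdog: the primary compute must heartbeat every control
    cycle; the secondary polls check() and takes over on a missed beat."""

    def __init__(self, timeout_s=0.005):
        self.timeout_s = timeout_s              # a few milliseconds
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called by the primary compute on every control cycle."""
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        """Polled by the secondary compute. On a missed heartbeat it
        commands the minimal risk condition (pull over and stop)."""
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat > self.timeout_s:
            return MRC_PULL_OVER
        return NOMINAL

wd = ComputeWatchdog(timeout_s=0.005)
wd.heartbeat()
```

The key property is that the secondary needs no cooperation from a failed primary: silence alone is enough to trigger the takeover.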

5. Autonomy Software Stack

The Waymo Driver software is organized as a multi-stage pipeline, though recent developments are pushing toward end-to-end learned approaches.

Pipeline Architecture

Sensors --> Perception --> Prediction --> Planning --> Control --> Actuators
               |               |              |            |
               v               v              v            v
         3D Detection    Behavior        Trajectory    Steering/
         Tracking        Forecasting     Generation    Braking/
         Segmentation    Intent          Optimization  Throttle
         Classification  Recognition     Risk Eval
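The staged pipeline above can be sketched as a chain of narrow interfaces, where each stage consumes only the previous stage's output. All class, field, and function names here are illustrative placeholders, not Waymo's code; the stage bodies are trivial stand-ins so the chain runs end to end.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackedObject:                 # perception output
    track_id: int
    kind: str                        # "vehicle", "pedestrian", "cyclist", ...
    position: Tuple[float, float]    # (x, y) in the ego frame, meters

@dataclass
class PredictedTrajectory:           # prediction output
    track_id: int
    waypoints: List[Tuple[float, float]]
    probability: float

@dataclass
class PlannedTrajectory:             # planning output
    waypoints: List[Tuple[float, float]]

@dataclass
class ControlCommand:                # control output
    steering_rad: float
    accel_mps2: float

# Trivial stand-in stages so the pipeline is runnable.
def perceive(sensor_frame) -> List[TrackedObject]:
    return [TrackedObject(1, "vehicle", (10.0, 0.0))]

def predict(objects) -> List[PredictedTrajectory]:
    return [PredictedTrajectory(o.track_id, [o.position], 1.0) for o in objects]

def plan_trajectory(futures) -> PlannedTrajectory:
    return PlannedTrajectory([(0.0, 0.0), (1.0, 0.0)])

def control(plan) -> ControlCommand:
    return ControlCommand(steering_rad=0.0, accel_mps2=1.0)

def drive_once(sensor_frame) -> ControlCommand:
    """One tick: narrow interfaces make each stage testable and swappable."""
    return control(plan_trajectory(predict(perceive(sensor_frame))))

cmd = drive_once(sensor_frame=None)
```

The narrow interfaces are exactly what an end-to-end learned model collapses: a single network replaces the hand-defined intermediate types.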

5.1 Perception

The perception module fuses data from all sensor modalities (LiDAR, camera, radar, audio) to build a comprehensive understanding of the environment.

Key capabilities:

  • 3D Object Detection: Detects 100+ object classes including vehicles, pedestrians, cyclists, scooters, animals, traffic cones, construction equipment
  • Semantic Segmentation: Per-point LiDAR segmentation and per-pixel camera segmentation
  • Object Tracking: Temporal association of detected objects across frames with globally unique tracking IDs
  • Lane Detection & Road Graph: Real-time identification of lane boundaries, crosswalks, curbs
  • Traffic Signal Recognition: Detection and classification of traffic lights (including arrow signals), stop signs, speed limit signs
  • Free Space Estimation: Drivable area computation

Model architectures:

  • Transformer-based models (ViTs) for camera-based perception
  • BEV (Bird's Eye View) representations via BEVFormer-style architectures
  • PointPillars-derived architectures for LiDAR point cloud processing
  • Multi-modal fusion networks combining LiDAR + camera features
  • Neural Architecture Search (NAS) for optimized model design: 10,000+ architectures explored, yielding 10% lower error rate at same latency or 20-30% faster inference at same quality
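To illustrate one ingredient above, here is a stdlib-only sketch of the grouping step used by PointPillars-style LiDAR encoders: points are binned into vertical "pillars" on a ground-plane grid. Real encoders compute learned per-point features and feed the pillar grid into a 2D CNN; this sketch uses a tiny hand-written feature vector and is not Waymo's implementation.

```python
from collections import defaultdict

def pillarize(points, cell_size=0.5):
    """Group LiDAR points into vertical pillars on an (x, y) grid and
    compute a per-pillar feature: (count, mean height, max height).

    points: iterable of (x, y, z) in meters.
    Returns: {(ix, iy): (count, mean_z, max_z)}
    """
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        buckets[key].append(z)
    return {
        key: (len(zs), sum(zs) / len(zs), max(zs))
        for key, zs in buckets.items()
    }

cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 1.0), (5.0, 5.0, 0.5)]
pillars = pillarize(cloud, cell_size=0.5)   # two occupied pillars
```

Because the pillar grid is a dense 2D layout, the expensive downstream processing can use standard 2D convolutions, which is what makes this family of encoders fast.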

5.2 Prediction

The prediction module forecasts the future trajectories and behaviors of all detected agents.

Key capabilities:

  • Multi-agent trajectory prediction over 8-11 second horizons
  • Joint prediction of 100+ agents simultaneously
  • Intent recognition (lane change, turn, stop, yield)
  • Interaction modeling between agents (e.g., merging, negotiating intersections)

Model architectures:

  • Graph Neural Networks (GNNs) for modeling agent-to-agent and agent-to-road interactions (VectorNet)
  • MultiPath/MultiPath++: Fixed anchor trajectory hypotheses with Gaussian mixture outputs
  • Temporal graph networks and diffusion models for motion forecasting
  • Scaling laws applied: 10x data/compute yields 20-30% error reduction in forecasting
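The MultiPath-style output head mentioned above scores a fixed set of anchor trajectories. The sketch below shows just the anchor-probability part (softmax over per-anchor logits, ranked most-likely first), omitting the per-anchor Gaussian offsets; the numbers and names are invented for illustration.

```python
import math

def rank_anchor_trajectories(anchor_logits):
    """Turn per-anchor logits into a probability distribution over fixed
    anchor trajectories; returns (anchor_index, prob) pairs sorted
    most-likely first."""
    m = max(anchor_logits)
    exps = [math.exp(l - m) for l in anchor_logits]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return sorted(enumerate(probs), key=lambda p: -p[1])

# Three hypothetical anchors for a lead vehicle: go straight, turn left, stop.
ranked = rank_anchor_trajectories([2.0, 0.5, 0.1])
best_anchor, best_prob = ranked[0]
```

Keeping the full ranked distribution (rather than just the argmax) is what lets the planner reason about less likely but higher-risk futures.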

5.3 Planning

The planning module generates safe, comfortable, and efficient trajectories.

Key capabilities:

  • Trajectory generation with spatiotemporal optimization
  • Risk-aware decision making under uncertainty
  • Goal fulfillment (follow route, reach destination)
  • Negotiation behaviors (unprotected left turns, merges, yielding)
  • Comfort optimization (smooth acceleration, deceleration, steering)

Approach evolution:

  • Historical: Rule-based + optimization-based planning
  • Current: ML-driven motion planning using foundation models, shifting from hand-crafted rules to learned policies
  • Scaling laws demonstrate power-law gains in trajectory quality with more data and compute
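A minimal sample-and-score sketch of the trajectory selection described above: generate candidate trajectories, assign each a cost combining progress, comfort, and proximity risk, and pick the cheapest. The cost terms, weights, and 2 m safety radius are invented for illustration; production planners optimize far richer objectives.

```python
def plan(candidates, obstacles, comfort_w=1.0, risk_w=100.0, dt=0.5):
    """Pick the lowest-cost candidate trajectory.

    candidates: list of trajectories, each a list of (x, y) points at dt spacing.
    obstacles: list of (x, y) points to keep >2 m away from.
    """
    def cost(traj):
        progress = traj[-1][0] - traj[0][0]       # meters gained forward
        speeds = [(traj[i + 1][0] - traj[i][0]) / dt
                  for i in range(len(traj) - 1)]
        accels = [(speeds[i + 1] - speeds[i]) / dt
                  for i in range(len(speeds) - 1)]
        discomfort = sum(a * a for a in accels)   # penalize harsh accel changes
        risk = sum(1.0 for (x, y) in traj for (ox, oy) in obstacles
                   if (x - ox) ** 2 + (y - oy) ** 2 < 2.0 ** 2)
        return -progress + comfort_w * discomfort + risk_w * risk
    return min(candidates, key=cost)

straight = [(0, 0), (2, 0), (4, 0), (6, 0)]
swerve = [(0, 0), (2, 1), (4, 2), (6, 3)]
# A stopped vehicle at (4, 0) makes the straight path risky:
best = plan([straight, swerve], obstacles=[(4.0, 0.0)])
```

With the obstacle present the swerving candidate wins; with an empty road the penalty terms vanish and the paths tie on progress.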

5.4 Control

The control module translates planned trajectories into precise actuator commands.

Key capabilities:

  • Model Predictive Control (MPC) hybridized with neural policies
  • Reinforcement learning-derived control policies
  • Real-time adaptation to road surface conditions, wind, vehicle load
  • Smooth, human-like driving behavior

Waymo is actively developing end-to-end approaches that collapse the traditional pipeline:

  • EMMA (End-to-End Multimodal Model): Maps raw camera data directly to planner trajectories, perception outputs, and road graph elements using a single model built on Gemini
  • The production Waymo Driver now uses a foundation model trained end-to-end, similar in philosophy to approaches by Tesla and Wayve
  • Co-training across perception, detection, and planning tasks improves all three domains simultaneously

6. Machine Learning & AI

Foundation Models

EMMA (End-to-End Multimodal Model for Autonomous Driving)

  • Published: October 2024 (arXiv:2410.23262); updated September 2025
  • Base Model: Google Gemini (multimodal large language model)
  • Input: Raw camera sensor data, navigation instructions, ego vehicle status
  • Output: Planner trajectories, 3D object detections, road graph elements
  • Representation: All non-sensor inputs/outputs encoded as natural language text
  • Multi-task: Co-trained on planning, object detection, and road graph tasks
  • Performance: State-of-the-art motion planning on nuScenes; competitive on WOMD and WOD
  • Limitations: Processes limited image frames; no LiDAR/radar input; computationally expensive
  • Venue: ICLR 2025

Waymo Foundation Model (Production)

  • Single large foundation model powering the production Waymo Driver
  • Trained on Google TPU pods (same infrastructure used for Gemini)
  • "Generalizable" -- designed to transfer across vehicle platforms and geographies
  • Built on JAX framework (same as Google DeepMind's Gemini)

Training Infrastructure

  • Framework: JAX (primary), TensorFlow (legacy/supplementary)
  • Hardware: Google TPU pods (v4, v5e, likely v6/Trillium); NVIDIA GPUs for some workloads
  • TPU Efficiency: Up to 15x more efficient training vs. GPU baselines
  • Distributed Training: Custom libraries for job scheduling, resource management, data distribution, model synchronization
  • Training Data: Hundreds of millions of real-world driving miles + billions of simulated miles
  • Data Processing: Petabytes of complex sensor data processed through scalable pipelines
  • AutoML/NAS: Reinforcement learning and random search over 10,000+ architectures; final selection based on accuracy and inference cost
  • Shared Infrastructure: Optimizations from Gemini training directly portable to Waymo Driver training

Model Architecture Components

Architecture | Application | Details
Vision Transformers (ViTs) | Camera perception | Self-attention over image patches for object detection and segmentation
BEVFormer | Multi-camera 3D perception | Generates bird's-eye-view feature maps from surround cameras
PointPillars | LiDAR 3D detection | Point cloud encoded into vertical pillars, processed by 2D CNN; runs at 62 Hz
VectorNet | HD map & agent encoding | Hierarchical GNN operating on vectorized map and trajectory representations
MultiPath / MultiPath++ | Behavior prediction | Fixed anchor trajectories with Gaussian mixture outputs; state-of-the-art on WOMD
Diffusion Models | Motion forecasting & simulation | Used in SceneDiffuser for traffic simulation initialization and rollout
Gemini-based LLM | End-to-end driving (EMMA) | Multimodal LLM mapping sensor data to driving outputs via natural language

Waymo collaborates with Google Brain/DeepMind to apply AutoML to autonomous driving:

  • Process: Explore 10,000+ architectures using RL and random search; pre-select 100 candidates; choose 1 winner based on accuracy and inference cost
  • Applications: LiDAR semantic segmentation, traffic lane detection/localization
  • Results: Architectures with 10% lower error rate at same latency, or 20-30% faster at same quality
  • NAS Cells: Family of neural network building blocks (NAS cells) composed to build task-specific architectures
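The search-then-select flow above (sample thousands of architectures, shortlist, pick one winner by accuracy versus inference cost) can be sketched with plain random search. The architecture encoding and the scoring function here are invented placeholders, not Waymo's search space.

```python
import random

def random_search_nas(evaluate, n_sample=10_000, n_candidates=100, seed=0):
    """Sample many architectures, shortlist the best by score, pick a winner.
    An 'architecture' is just a tuple of hyperparameters in this sketch."""
    rng = random.Random(seed)
    archs = [(rng.choice([32, 64, 128]),   # channel width
              rng.choice([2, 4, 8]),       # depth
              rng.choice([1, 3, 5]))       # kernel size
             for _ in range(n_sample)]
    shortlist = sorted(archs, key=evaluate)[:n_candidates]  # pre-select
    return min(shortlist, key=evaluate)                     # final winner

def toy_score(arch):
    """Lower is better: pretend error falls with capacity while latency rises."""
    width, depth, kernel = arch
    error = 100.0 / (width * depth)
    latency = width * depth * kernel / 1000.0
    return error + latency

winner = random_search_nas(toy_score, n_sample=1000)
```

In a real search the shortlist would be scored by a cheap proxy (partial training) and only the candidates by full training, which is where most of the compute savings come from.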

7. Mapping & Localization

HD Map Creation

Waymo relies on pre-built High-Definition (HD) maps as a core component of its autonomy stack.

Map Creation Process:

  1. Manual Survey Drives: Sensor-equipped vehicles manually driven through every street in a new operational domain
  2. 3D LiDAR Reconstruction: Custom LiDAR paints a detailed 3D picture of the environment
  3. Annotation: Maps enriched with semantic information -- lane boundaries, traffic signals, stop signs, crosswalks, speed limits, curb heights, driveway locations
  4. Validation: Automated and manual quality checks ensure centimeter-level accuracy
  5. Updates: Maps continuously updated as the fleet detects changes in the environment

Map Specifications

  • Accuracy: Centimeter-level
  • Content: 3D geometry, lane-level topology, traffic control devices, crosswalks, speed zones, construction zones
  • Format: Proprietary vectorized representation
  • Coverage: All operational cities fully mapped before service launch
  • Update Frequency: Continuous, driven by fleet observations and dedicated mapping runs

Localization

  • Primary Method: LiDAR-based localization against pre-built HD point cloud maps
  • Supplementary: GPS/GNSS, IMU, wheel odometry
  • Accuracy: Centimeter-level position accuracy in 6-DOF (position + orientation)
  • Robustness: Multi-sensor fusion ensures localization even when individual sensors are degraded (e.g., GPS in urban canyons)
  • SLAM Integration: LiDAR SLAM used during map creation; real-time localization uses scan-matching against pre-built maps with misalignment correction
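The scan-matching idea above reduces, at its core, to finding the rigid transform that best aligns an observed scan with the prior map. Here is the closed-form 2D version for known point correspondences (the 2D analogue of the Kabsch solution); real LiDAR localization iterates this in 3D with data association, as in ICP. This is a generic textbook method, not Waymo's system.

```python
import math

def align_scan_to_map(scan_pts, map_pts):
    """Closed-form 2D rigid alignment (rotation + translation) between
    corresponding point pairs. Returns (theta_rad, tx, ty) mapping the
    scan frame into the map frame."""
    n = len(scan_pts)
    sx = sum(p[0] for p in scan_pts) / n
    sy = sum(p[1] for p in scan_pts) / n
    mx = sum(p[0] for p in map_pts) / n
    my = sum(p[1] for p in map_pts) / n
    # Centered cross- and dot-products yield the optimal rotation angle.
    num = den = 0.0
    for (ax, ay), (bx, by) in zip(scan_pts, map_pts):
        ax, ay, bx, by = ax - sx, ay - sy, bx - mx, by - my
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    # Translation carries the rotated scan centroid onto the map centroid.
    tx = mx - (sx * math.cos(theta) - sy * math.sin(theta))
    ty = my - (sx * math.sin(theta) + sy * math.cos(theta))
    return theta, tx, ty

# A scan rotated 90 degrees and shifted by (2, 3) relative to the map:
theta, tx, ty = align_scan_to_map(
    [(0, 0), (1, 0), (0, 1)], [(2, 3), (2, 4), (1, 3)])
```

The recovered transform is the vehicle's pose correction; fusing it with IMU and wheel odometry keeps the estimate smooth between LiDAR updates.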

8. Simulation Platform

Overview

Waymo's simulation infrastructure is among the most extensive in the AV industry, with over 20 billion simulated miles driven (vs. 200+ million real-world autonomous miles).

SimulationCity

Waymo's primary simulation platform, unveiled in July 2021.

  • Scale: Up to 25,000 virtual Waymo vehicles driving up to 10 million miles per day
  • Fidelity: Recreates raindrops on sensors, dimming light, solar glare, road surface variations
  • Data Sources: 20M+ real autonomous miles, NHTSA Crash Data Systems, Naturalistic Driving Study data
  • Scenario Types: Replay of real-world encounters, procedural generation, adversarial testing, regression testing
  • Capabilities: New vehicle platform bring-up, geographic validation, operational refinement
  • Statistical Modeling: Draws random outcomes from statistical distributions of real-world behaviors
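The "draws random outcomes from statistical distributions" idea can be illustrated with a tiny scenario sampler. Every distribution and field name below is a made-up placeholder; a real system fits these distributions to logged fleet data.

```python
import random

def sample_scenario(rng):
    """Draw one synthetic driving scenario from toy behavior distributions."""
    return {
        "lead_vehicle_speed_mps": max(0.0, rng.gauss(12.0, 3.0)),
        "pedestrian_crossing": rng.random() < 0.05,      # 5% of scenarios
        "rain": rng.random() < 0.10,
        "brake_reaction_time_s": rng.lognormvariate(0.0, 0.3),
    }

rng = random.Random(42)
scenarios = [sample_scenario(rng) for _ in range(10_000)]
crossing_rate = sum(s["pedestrian_crossing"] for s in scenarios) / len(scenarios)
```

Sampling thousands of such scenarios per virtual vehicle is what lets a simulated fleet cover rare combinations (rain plus a slow reactor plus a crossing pedestrian) far more often than the real world serves them up.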

SurfelGAN

  • Published: CVPR 2020
  • Purpose: Generate realistic sensor data for novel viewpoints from limited LiDAR + camera data
  • Method: Texture-mapped surfels for efficient scene reconstruction preserving 3D geometry, appearance, and scene conditions
  • Output: Photorealistic camera images for novel vehicle positions and orientations
  • Application: Data augmentation, counterfactual scenario generation

Waymo World Model (February 2026)

Waymo's most advanced simulation system, built on Google DeepMind's Genie 3.

  • Foundation: Google DeepMind's Genie 3 (general-purpose world model for photorealistic 3D environments)
  • Output: Multi-sensor -- generates both camera and LiDAR data simultaneously
  • Control Modes: (1) Driving action control, (2) Scene layout control, (3) Text prompt control
  • Rare Event Generation: Can simulate tornadoes, flooded streets, elephants on the road, and other near-impossible scenarios
  • Data Conversion: Converts ordinary dashcam or cell phone videos into full multimodal (camera + LiDAR) simulations
  • 3D Transfer: Pre-trained on a vast 2D video corpus, then post-trained to produce 3D LiDAR outputs matching Waymo's hardware
  • Use Case: Every dashcam incident on the internet becomes a potential training scenario

SceneDiffuser / SceneDiffuser++

  • SceneDiffuser: Scene-level diffusion model for traffic simulation initialization and rollout (NeurIPS 2024). Achieves 16x fewer inference steps via amortized denoising. Top open-loop and best closed-loop diffusion performance on Waymo Open Sim Agents Challenge.
  • SceneDiffuser++: City-scale traffic simulation via generative world model. First end-to-end generative world model capable of point A-to-B simulation at city scale.

Simulation Scale

  • Total Simulated Miles: >20 billion
  • Daily Simulated Miles: Up to 10 million
  • Virtual Fleet Size: Up to 25,000 simultaneous virtual vehicles
  • Closed-Course Scenarios: >40,000 unique scenarios
  • Simulated Driving Per Year: Billions of miles

9. Cloud & Data Infrastructure

Google Cloud Integration

As an Alphabet subsidiary, Waymo has deep integration with Google Cloud infrastructure.

  • Compute: Google TPU pods (v4, v5e, Trillium/v6) for training; GPU clusters for inference development
  • Storage: Google Cloud Storage (GCS) for petabyte-scale sensor data
  • Data Warehouse: BigQuery for structured analytics and metrics
  • ML Platform: Vertex AI and custom training orchestration
  • Networking: High-bandwidth interconnect between TPU pods (9.6 Tb/s inter-chip, per Ironwood specs)
  • HBM Per Superpod: Up to 1.77 PB shared High Bandwidth Memory (9,216-chip Ironwood superpod)

Data Pipeline

  • Data Volume: Many petabytes of complex sensor data (LiDAR point clouds, camera images, radar returns, audio)
  • Pipeline Function: Raw sensor data ingestion, preprocessing, annotation, training data generation, model evaluation
  • Auto-Labeling: High-accuracy 3D auto-labeling system generates 3D bounding boxes for road agents
  • Scale: Data volume "exploding" as Waymo scales to new cities and vehicle platforms
  • Annotation Tools: Combination of automated labeling (ML-based) and human annotation for ground truth

Data Flow Architecture

Fleet Vehicles (20 GB/s per vehicle)
    |
    v
Cellular Upload (prioritized data selection)
    |
    v
Google Cloud Storage (petabyte-scale raw data)
    |
    v
Data Processing Pipeline (preprocessing, auto-labeling, quality checks)
    |
    v
Training Data Store (curated datasets for ML training)
    |
    v
TPU Pods (distributed training)
    |
    v
Model Registry (versioned models, A/B testing)
    |
    v
OTA Deployment to Fleet
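The "prioritized data selection" step in the flow above is, at heart, a bandwidth-constrained selection problem: a vehicle cannot upload everything it records, so the most interesting events go first. Below is a toy greedy version; the event format, scores, and sizes are invented for illustration.

```python
import heapq

def select_for_upload(events, budget_gb):
    """Greedily upload the highest-interest events until the cellular
    budget is spent.

    events: list of (interest_score, size_gb, name) tuples.
    """
    # heapq is a min-heap, so negate scores to pop highest-interest first.
    heap = [(-score, size, name) for score, size, name in events]
    heapq.heapify(heap)
    chosen, used = [], 0.0
    while heap:
        _neg_score, size, name = heapq.heappop(heap)
        if used + size <= budget_gb:
            chosen.append(name)
            used += size
    return chosen

uploads = select_for_upload(
    [(0.9, 2.0, "near_miss"), (0.2, 5.0, "routine_cruise"),
     (0.7, 1.0, "construction_zone")],
    budget_gb=3.0,
)
```

Routine footage that misses the cut can still be summarized onboard or uploaded later over depot Wi-Fi, which is a common pattern in fleet telemetry systems.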

10. Programming Languages & Tools

Programming Languages

  • C++: Core autonomy stack (perception, planning, control), real-time onboard software, sensor drivers, low-latency inference
  • Python: ML model development, training scripts, data analysis, prototyping, tooling
  • Go: Backend services, fleet management, cloud infrastructure, APIs
  • Java: Some backend services (Alphabet ecosystem compatibility)
  • Starlark: Build configuration language (Python dialect for Bazel BUILD files)

Build System

  • Bazel: Primary build system (open-source version of Google's internal Blaze). Handles multi-language monorepo builds across C++, Python, Go, and Java. Hermetic builds ensure reproducibility.
  • Blaze: Google-internal predecessor to Bazel; Waymo likely uses Blaze internally given Alphabet's infrastructure
  • Key Benefits: Incremental builds, remote caching, remote execution, cross-platform compilation, deterministic outputs
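As a flavor of what a Starlark BUILD file in such a monorepo looks like, here is a hypothetical fragment mixing a C++ onboard library with a Python tool; every target, file, and package name is invented for illustration and uses only standard Bazel rules.

```starlark
# Hypothetical BUILD file; target and dependency names are made up.
cc_library(
    name = "lidar_driver",
    srcs = ["lidar_driver.cc"],
    hdrs = ["lidar_driver.h"],
    deps = ["//common:point_cloud"],
)

py_binary(
    name = "replay_tool",
    srcs = ["replay_tool.py"],
    deps = ["//ml:dataset_lib"],
)
```

Because each target declares its dependencies explicitly, Bazel can rebuild only what changed and cache or distribute everything else, which is what makes incremental monorepo builds tractable.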

Internal Tools and Frameworks

  • Custom ML Libraries: Extend TensorFlow and JAX for Waymo-specific scalability, reliability, and performance
  • Training Orchestration: Custom distributed training infrastructure for job scheduling, resource management, data distribution, and model synchronization
  • Simulation Framework: SimulationCity and the Waymo World Model toolchain
  • Data Labeling: Auto-labeling systems plus annotation management tools
  • Fleet Management: Proprietary vehicle dispatch, routing, monitoring, and remote assistance systems
  • Map Toolchain: HD map creation, validation, and update pipeline
  • CI/CD: Continuous integration and deployment for both onboard software and cloud services

ML Frameworks

  • JAX: Primary ML framework (entire training stack rebuilt on JAX); same framework as Google DeepMind's Gemini
  • TensorFlow: Legacy training pipelines and some production inference
  • Flax/Haiku: Neural network libraries on top of JAX
  • XLA: Compiler for optimizing TPU/GPU execution

11. Safety & Redundancy Architecture

Safety Methodology

Waymo employs a multi-layered safety approach organized across three technology layers.

Three Layers of Safety:

Layer | Scope | Methods
Hardware (Architecture) | Sensors, compute, actuators | Redundant sensors, dual compute, backup braking/steering, DFMEA, FTA
ADS Behavior (Software) | Perception, prediction, planning, control | STPA, scenario-based verification, simulation testing, ML robustness
Operations | Fleet management, remote assistance, maintenance | Operational Design Domain (ODD) constraints, geo-fencing, remote support

Hazard Analysis Methods:

  • STPA (Systems-Theoretic Process Analysis): Identifies hazards from control structure interactions
  • FTA (Fault Tree Analysis): Top-down deductive analysis of failure paths
  • DFMEA (Design Failure Modes and Effects Analysis): Systematic evaluation of component failure modes
  • Custom Software Safety Analysis: Waymo-specific safety analysis for ML-based systems

Hardware Redundancy

  • Compute: Dual onboard computers running simultaneously; backup can bring the vehicle to a safe stop
  • Braking: Redundant braking actuators (specified as necessary for driverless operation)
  • Steering: Redundant steering actuators
  • Power: Independent power paths for primary and backup systems
  • Sensors: Overlapping fields of view across LiDAR, cameras, and radar; any single sensor failure covered by remaining sensors
  • Communication: Multiple cellular connections for fleet management and remote assistance

Testing Portfolio

  • Simulation: Billions of miles; regression testing; rare event coverage; system-level testing
  • Closed-Course: >40,000 unique scenarios; controlled-environment testing; hardware-in-the-loop
  • Public Road: 200+ million autonomous miles; real-world validation
  • Unit Tests: Software component verification
  • Integration Tests: Subsystem interaction validation
  • Bench Tests: Hardware component testing
  • Hardware-in-the-Loop (HIL): Sensor and compute hardware tested with simulated inputs

Safety Readiness Determination

Waymo uses a formal Safety Readiness Determination (SRD) process before deploying in new domains:

  1. Hazard analysis and risk assessment
  2. Design and development with built-in robustness
  3. Scenario-based verification (simulation + closed-course + limited public road)
  4. Operational readiness review
  5. Phased deployment (safety drivers first, then driverless)

Reference: Waymo's Safety Methodologies and Safety Readiness Determinations (arXiv:2011.00054, published November 2020)


12. Fleet Operations & Ride-Hailing

Waymo One Service

Metrics as of March 2026:

  • Weekly Paid Rides: 450,000+ (Dec 2025); targeting 1M by end of 2026
  • Fleet Size: ~2,500 vehicles (Nov 2025); scaling to 3,500+ I-PACEs plus Ojai additions
  • Total Autonomous Miles: 200+ million (fully autonomous, public roads)
  • Rider-Only Miles: 96+ million (no human driver present)
  • App: Waymo One (iOS and Android); also available via the Uber app in select cities

Operational Cities (as of March 2026)

City | Launch Date | App | Fleet Vehicle | Notes
Phoenix, AZ | Oct 2020 (driverless) | Waymo One | Jaguar I-PACE | Largest and most mature market
San Francisco, CA | Mar 2022 (driverless) | Waymo One | Jaguar I-PACE | Major urban deployment
Los Angeles, CA | 2024 | Waymo One | Jaguar I-PACE | Rapidly expanding service area
Austin, TX | Early 2025 | Uber app | Jaguar I-PACE | Partnership with Uber for dispatch and fleet ops
Atlanta, GA | Jun 2025 | Uber app | Jaguar I-PACE | 65 sq mi initial area; Downtown to Buckhead
Miami, FL | Early 2026 (early access) | Waymo One | Jaguar I-PACE | Moove handles fleet operations
Houston, TX | 2026 (announced Feb) | TBD | TBD | Initial select riders
Dallas, TX | 2026 (announced Feb) | TBD | TBD | Rolling access
San Antonio, TX | 2026 (announced Feb) | TBD | TBD | Rolling access
Orlando, FL | 2026 (announced Feb) | TBD | TBD | Rolling access

Announced future cities: Washington D.C. (2026), Nashville, Detroit, Las Vegas, San Diego, Denver, and international targets including London and Tokyo.

Pricing Model

San Francisco:

  • Base fare: $9.52
  • Per mile: $1.66
  • Per minute: $0.30
  • Average ride cost: ~$20.43

Los Angeles:

  • Base fare: $5.37
  • Per mile: $2.50
  • Per minute: $0.32
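The rate cards above imply a simple linear fare, sketched below under the assumption of straight metering with no surge, minimum fare, or fees (none of which the rate card specifies).

```python
def waymo_fare(base, per_mile, per_minute, miles, minutes):
    """Fare = base + per-mile charge + per-minute charge."""
    return base + per_mile * miles + per_minute * minutes

# A hypothetical 5-mile, 18-minute ride on each rate card:
sf = waymo_fare(9.52, 1.66, 0.30, miles=5.0, minutes=18.0)   # about $23.22
la = waymo_fare(5.37, 2.50, 0.32, miles=5.0, minutes=18.0)   # about $23.63
```

Note how the two rate cards trade off: LA's lower base fare but higher per-mile rate means short rides are cheaper in LA while longer rides converge toward SF pricing.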

Price Comparison (April 2025 data):

Service | Average Ride Cost | Premium vs. Lyft
Waymo | $20.43 | +41%
Uber | $15.58 | +8%
Lyft | $14.44 | Baseline

Trend: The pricing gap is narrowing. By late 2025, Waymo's premium dropped to ~12.7% over Uber and ~27.3% over Lyft. Waymo's average cost dropped 3.62% while Uber and Lyft prices increased.

Subscription: $29.99/month subscription available, pays for itself in under five rides vs. standard fares.

Fleet Management Partners

Partner | Role | Markets
Waymo (direct) | Full fleet operations | Phoenix, SF, LA
Uber | Dispatch via Uber app, fleet management | Austin, Atlanta
Moove | Fleet operations, facilities, charging | Miami (planned), Phoenix
Avomo (formerly Moove Cars) | Fleet management for Uber-dispatched rides | Austin, Atlanta

13. Regulatory & Safety Record

Autonomous Miles and Crash Data

Metric | Value | Source
Total Autonomous Miles (public road) | 200+ million | Waymo (Feb 2026)
Rider-Only Miles | 96+ million | Waymo
NHTSA-Reported Incidents | 1,429 (Jul 2021 - Nov 2025) | NHTSA SGO data
Reported Injuries | 117 | NHTSA
Fatalities | 2 (Waymo vehicle involvement) | NHTSA

Peer-Reviewed Safety Comparisons

Study 1: 7.1 Million Rider-Only Miles (Kusano et al., 2024; published in Traffic Injury Prevention):

Crash Type | Waymo ADS (IPMM) | Human Benchmark (IPMM) | Reduction
Any Injury Reported | 0.41 | 2.80 | -85% (human rate 6.7x higher)
Airbag Deployment | Significant reduction | -- | Statistically significant

Study 2: 56.7 Million Rider-Only Miles (Kusano et al., 2025; published in Traffic Injury Prevention):

  • First statistically significant comparison at the Suspected Serious Injury+ (SSI+) level
  • Statistically significant reductions in: Any-Injury-Reported, Airbag Deployment, and SSI+ outcomes
  • Disaggregated into 11 crash types for granular analysis

California DMV Disengagement Reports

Period | Miles Driven (CA) | Fleet Size (CA) | Notes
Dec 2022 - Nov 2023 | 5,872K km (3,670K mi) | ~430 vehicles | Peak testing mileage
Dec 2023 - Nov 2024 | 3,823K km (2,390K mi) | 1,035 vehicles | 35% fewer miles; shift from testing to commercial ops
Dec 2024 - Nov 2025 | Part of 9M+ mi industry total | -- | Commercial operations dominate

Note: As Waymo transitions from testing to commercial operations, traditional disengagement reporting becomes less meaningful. Waymo's commercial driverless miles (rider-only) are reported through NHTSA's SGO framework rather than California DMV disengagement reports.

Regulatory Framework

  • Federal: Voluntary compliance with NHTSA Standing General Order (SGO) for incident reporting; no federal AV legislation as of March 2026
  • California: CPUC permit for commercial robotaxi operations; CA DMV autonomous vehicle testing permit
  • Arizona: Permissive regulatory environment; no specific AV legislation required for operation
  • Other jurisdictions: Regulatory engagement underway in Texas, Florida, Georgia, and Washington, D.C.

14. Key Partnerships & Suppliers

Vehicle Manufacturing Partners

| Partner | Relationship | Vehicle/Component |
| --- | --- | --- |
| Geely / Zeekr | OEM partner; designed and manufactures the Zeekr RT (Waymo Ojai) | 6th-gen robotaxi platform |
| Jaguar Land Rover | OEM partner (winding down) | Jaguar I-PACE base vehicle |
| Magna International | Manufacturing partner; co-operates Mesa, AZ integration facility | Sensor integration, assembly |
| Stellantis (FCA) | Former OEM partner | Chrysler Pacifica Hybrid (retired) |

Ride-Hailing & Fleet Partners

| Partner | Relationship | Details |
| --- | --- | --- |
| Uber | Ride-hailing platform integration | Waymo rides available via Uber app in Austin and Atlanta |
| Moove | Fleet management | Operations, facilities, and charging in Miami; fleet ops in Phoenix |
| Avomo | Fleet management | Depot operations for Uber-dispatched markets |

Autonomous Trucking Partners

| Partner | Relationship | Status |
| --- | --- | --- |
| Daimler Trucks | Strategic partnership for L4 autonomous trucks | Active; Waymo provides autonomy stack for Freightliner Cascadia |
| J.B. Hunt | Freight testing partner | Waymo Via closed (2023), but Daimler partnership continues |
| C.H. Robinson | 3PL testing partner | Testing completed |
| UPS | Freight trial partner | Trial runs in Texas completed |
| Uber Freight | Freight platform integration | Exploratory partnership |

Technology Partners

| Partner | Relationship |
| --- | --- |
| Google DeepMind | Genie 3 world model foundation; shared research |
| Google Cloud | TPU infrastructure, storage, compute |
| Google Brain (now DeepMind) | AutoML/NAS collaboration for architecture search |
| Intel | Historical: Xeon CPUs, Arria FPGAs, XMM modems, Gigabit Ethernet |

15. Research & Publications

Publication Statistics

Waymo maintains an active research program with publications at top-tier venues.

| Year | Paper Count |
| --- | --- |
| 2019 | ~8 |
| 2020 | ~15 |
| 2021 | ~12 |
| 2022 | ~14 |
| 2023 | 16 |
| 2024 | 13 |
| 2025 | 4+ |

Key Papers by Topic

Perception

| Paper | Venue | Year | Key Contribution |
| --- | --- | --- | --- |
| PointPillars: Fast Encoders for Object Detection from Point Clouds (Lang et al.) | CVPR | 2019 | Pillar-based LiDAR encoding; 62 Hz real-time; 2-4x faster than prior methods |
| STINet: Spatio-Temporal-Interactive Network for Pedestrian Detection and Trajectory Prediction (Zhang et al.) | CVPR | 2020 | Joint pedestrian detection + trajectory prediction |
| Scalability in Perception for Autonomous Driving: Waymo Open Dataset (Sun et al.) | CVPR | 2020 | Introduced Waymo Open Dataset; benchmark for 3D detection |
| Depth Estimation Matters Most | -- | 2020 | Per-object depth estimation for monocular 3D detection and tracking |
| DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection | CVPR | 2022 | Cross-modal feature fusion for 3D detection |

Behavior Prediction

| Paper | Venue | Year | Key Contribution |
| --- | --- | --- | --- |
| MultiPath: Multiple Probabilistic Anchor Trajectory Hypotheses for Behavior Prediction (Chai et al.) | CoRL | 2019 | Fixed anchor trajectories with Gaussian mixture; one-pass inference |
| VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation (Gao et al.) | CVPR | 2020 | Hierarchical GNN on vectorized map/trajectory inputs |
| MultiPath++: Efficient Information Fusion and Trajectory Aggregation for Behavior Prediction (Varadarajan et al.) | ICRA | 2022 | State-of-the-art on WOMD and Argoverse |
| Large Scale Interactive Motion Forecasting for Autonomous Driving: The Waymo Open Motion Dataset (Ettinger et al.) | ICCV | 2021 | Introduced WOMD; 103K segments |

End-to-End Driving

| Paper | Venue | Year | Key Contribution |
| --- | --- | --- | --- |
| EMMA: End-to-End Multimodal Model for Autonomous Driving (Hwang et al.) | ICLR | 2025 | Gemini-based model mapping raw cameras to trajectories via natural language |

Simulation

| Paper | Venue | Year | Key Contribution |
| --- | --- | --- | --- |
| SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving (Yang et al.) | CVPR | 2020 | Surfel-based scene reconstruction + GAN for novel view synthesis |
| SceneDiffuser: Efficient and Controllable Driving Simulation | NeurIPS | 2024 | Diffusion-based traffic simulation; 16x fewer inference steps |
| SceneDiffuser++: City-Scale Traffic Simulation via a Generative World Model | -- | 2025 | First end-to-end generative world model for city-scale simulation |

Safety

| Paper | Venue | Year | Key Contribution |
| --- | --- | --- | --- |
| Waymo's Safety Methodologies and Safety Readiness Determinations | arXiv:2011.00054 | 2020 | Formal safety framework: STPA, FTA, DFMEA, SRD process |
| Comparison of Waymo Rider-Only Crash Data to Human Benchmarks at 7.1M Miles (Kusano et al.) | Traffic Injury Prevention | 2024 | 85% crash reduction vs. human benchmark |
| Comparison of Waymo Rider-Only Crash Rates at 56.7M Miles (Kusano et al.) | Traffic Injury Prevention | 2025 | First statistically significant SSI+ comparison |

Conference Presence

Waymo regularly publishes at and sponsors:

  • CVPR (Computer Vision and Pattern Recognition)
  • NeurIPS (Neural Information Processing Systems)
  • ICRA (International Conference on Robotics and Automation)
  • ICLR (International Conference on Learning Representations)
  • CoRL (Conference on Robot Learning)
  • ICCV (International Conference on Computer Vision)
  • ECCV (European Conference on Computer Vision)
  • RSS (Robotics: Science and Systems)
  • IV (Intelligent Vehicles Symposium)

Patent Portfolio

| Metric | Value |
| --- | --- |
| Total Patents Globally | 3,476 |
| Unique Patent Families | 1,077 |
| USPTO Applications | 1,237 |
| USPTO Grants | 929 |
| Grant Rate | 97.07% |
| Active Patents | 3,199 |
| LiDAR Patents | 626 |
| Autopilot/Control Patents | 186 |
| Citation Impact | 1.6x higher citation rate than Toyota; 2.3x higher than GM |
| Core R&D Themes | Autonomous vehicle, LiDAR sensor, light detection, trajectory planning |

16. Engineering Organization

Leadership

| Name | Title | Focus |
| --- | --- | --- |
| Dmitri Dolgov | Co-CEO | Technology, engineering, and research. Former CTO and VP of Engineering. PhD in robotics; worked on Junior, Stanford's DARPA Urban Challenge vehicle. At the project since 2009. |
| Tekedra Mawakana | Co-CEO | Business operations, commercialization, public policy. Joined Waymo in 2017. |
| Dragomir (Drago) Anguelov | VP, Head of AI Foundations (Research) | Leads research team focused on ML for autonomous driving. Joined Waymo 2018. Distinguished Scientist. |

Historical Key Figures

| Name | Role | Contribution |
| --- | --- | --- |
| Sebastian Thrun | Founding Lead | Stanford professor; led Project Chauffeur at Google X (2009-2013); built Stanley, winner of the 2005 DARPA Grand Challenge |
| Chris Urmson | Engineering Lead | Led software development and headed the project's technical team; departed in 2016; co-founded Aurora |
| Anthony Levandowski | Co-founder | Built first self-driving test vehicle; departed 2016; founded Otto (acquired by Uber); central figure in the Waymo v. Uber trade secret lawsuit |
| John Krafcik | CEO (2015-2021) | First dedicated CEO; led transition from project to company; launched Waymo One commercial service |
| Nathaniel Fairfield | VP, Engineering | Long-tenured engineering leader |

Team Structure

Waymo's engineering organization is broadly organized into:

  • AI/ML Research (led by Drago Anguelov): Perception, prediction, planning, simulation research
  • Systems Engineering: Onboard software, hardware integration, real-time systems
  • Hardware: Sensor design (LiDAR, radar, cameras), compute platform, vehicle integration
  • Simulation & Testing: SimulationCity, Waymo World Model, closed-course testing
  • ML Infrastructure: Training infrastructure, data pipelines, model deployment
  • Fleet Operations Engineering: Remote assistance, dispatch optimization, fleet monitoring
  • Safety Engineering: System safety, hazard analysis, validation & verification
  • Mapping: HD map creation, localization, map maintenance

17. Competitive Differentiators

What Makes Waymo Unique

| Differentiator | Detail |
| --- | --- |
| Longest Track Record | 17+ years of continuous AV development (since 2009), more than any competitor |
| Most Autonomous Miles | 200+ million real-world autonomous miles + 20+ billion simulated miles |
| Vertical Integration of Sensors | In-house LiDAR, radar, and camera design -- not reliant on third-party sensor companies |
| Google/Alphabet Ecosystem | Access to TPU pods, DeepMind research, Google Cloud, JAX/Gemini infrastructure |
| Commercial Operations at Scale | Only AV company operating commercial rider-only service in multiple major US cities |
| Multi-Modal Sensor Fusion | LiDAR + cameras + radar + audio (vs. Tesla's camera-only approach) |
| HD Map + ML Hybrid | Combines pre-built HD maps with learned perception/planning (vs. purely map-free or purely map-dependent) |
| Foundation Model Architecture | Production system uses end-to-end trained foundation model, benefiting from Gemini ecosystem |
| Safety Record | Peer-reviewed studies showing 85%+ crash reduction vs. human drivers |
| Simulation Depth | Waymo World Model (Genie 3-based) can generate training scenarios from any video input |

Comparative Architecture: Waymo vs. Tesla

| Dimension | Waymo | Tesla |
| --- | --- | --- |
| Sensors | LiDAR + cameras + radar + audio | Cameras only (vision-only) |
| Maps | Pre-built HD maps (cm-level) | No pre-built maps |
| Deployment Model | Robotaxi fleet (no private ownership) | Personal vehicle upgrade |
| Training Data Source | Own fleet (200M+ miles) + simulation (20B+ miles) | Customer fleet (4M+ vehicles, billions of miles) |
| Hardware Cost Per Vehicle | Higher (LiDAR + sensor suite: est. $10-15K) | Lower (cameras only: ~$400) |
| Operational Domain | Geo-fenced cities with HD maps | Any road (no geo-fencing) |
| Autonomy Level | SAE Level 4 (fully driverless in ODD) | SAE Level 2+ (driver supervision required) |
| Revenue Model | Ride-hailing revenue per trip | Software subscription ($99-199/month FSD) |
| Compute (Training) | Google TPU pods | Custom Dojo + NVIDIA GPU clusters |
| Foundation Model | Gemini-derived (EMMA) | Custom transformer (FSD v12+) |

Key Advantages Over Competitors

  1. Vs. Cruise (GM): Cruise suspended operations in Oct 2023; Waymo has uninterrupted commercial service
  2. Vs. Tesla: Waymo operates fully driverless (no safety driver); Tesla FSD requires driver supervision
  3. Vs. Baidu Apollo: Waymo has US regulatory relationships and operates in more diverse US cities
  4. Vs. Zoox (Amazon): Waymo has larger fleet, more cities, more autonomous miles
  5. Vs. Mobileye: Waymo operates its own fleet rather than licensing technology to OEMs only

18. Open Source & Datasets

Waymo Open Dataset (WOD)

The Waymo Open Dataset is one of the largest and most diverse autonomous driving datasets publicly available.

License: Non-commercial use

Perception Dataset

| Attribute | Detail |
| --- | --- |
| Segments | 2,030 segments, each 20 seconds at 10 Hz |
| Total Frames | ~390,000 |
| Cameras | 5 high-resolution cameras (front, front-left, front-right, side-left, side-right) |
| LiDAR | Top LiDAR (100-200K points per sweep) |
| Annotations | 3D 7-DOF bounding boxes (vehicles, pedestrians, cyclists, signs) with globally unique tracking IDs |
| Additional Labels | 2D bounding boxes, panoptic segmentation, projected LiDAR on camera, per-pixel camera rays |
| Label Quality | Independent LiDAR and camera labels (not projections of each other) |
| Bounding Box Constraints | Zero pitch, zero roll |
| Calibration | Full intrinsic and extrinsic camera calibration provided |

Motion Dataset (WOMD)

| Attribute | Detail |
| --- | --- |
| Segments | 103,354 segments, each 20 seconds at 10 Hz |
| Duration | 570+ hours of unique data |
| Road Coverage | 1,750+ km of roadways |
| Geographic Coverage | 6 US cities |
| Temporal Windows | 9-second windows (1s history + 8s future) with varying overlap |
| Auto-Labeling | High-accuracy 3D auto-labeling system for all road agents |
| HD Maps | Corresponding high-definition 3D maps for each scene |
| Data Format | Sharded TFRecord files containing protocol buffer data |
| Split | 70% training, 15% testing, 15% validation |
| v1.2.0 Addition | LiDAR points for first 1 second of each 9-second window |
| v1.3.1 (Oct 2025) | Added sdc_paths feature indicating valid future routes for the SDC |
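The duration figure is consistent with the stated segment count and length; a quick check (frame count is derived here from the 10 Hz rate, not quoted by Waymo):

```python
# WOMD size check: 103,354 twenty-second segments sampled at 10 Hz.
segments = 103_354
seg_seconds = 20

hours = segments * seg_seconds / 3600   # total unique driving time
frames = segments * seg_seconds * 10    # frames at 10 Hz (derived, not official)

print(f"{hours:.0f} hours")             # 574 hours, matching "570+ hours"
print(f"{frames:,} frames")
```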

End-to-End Driving Dataset (WOD-E2E)

| Attribute | Detail |
| --- | --- |
| Segments | 4,021 segments, each 20 seconds |
| Duration | ~12 hours |
| Focus | Challenging long-tail scenarios (occurrence frequency <0.03%) |
| Cameras | 8 JPEG cameras (front, front-left, front-right, side-left, side-right, rear, rear-left, rear-right) at 10 Hz |
| Resolution | ~1920x1280 native; typically downsampled to 768x768 |
| Camera FOV | 70-90 degrees horizontal per camera; full 360-degree coverage |
| Includes | High-level routing information, ego states, 360-degree camera views |
| Evaluation Metric | Rater Feedback Score (RFS) -- measures match to human rater trajectory preference labels |
| 2025 Challenge | WOD-E2E Challenge using held-out test set rater labels |

Open Challenges

Waymo hosts annual challenges on its datasets:

| Challenge | Dataset | Task |
| --- | --- | --- |
| 3D Detection | WOD Perception | 3D object detection from LiDAR and/or camera |
| 3D Tracking | WOD Perception | Multi-object tracking in 3D |
| Motion Prediction | WOMD | Multi-agent trajectory forecasting |
| Sim Agents | WOMD | Simulating realistic agent behavior |
| Occupancy & Flow Prediction | WOMD | Predicting future occupancy grids and flow fields |
| End-to-End Driving | WOD-E2E | Vision-based end-to-end driving in long-tail scenarios |

GitHub Repositories

| Repository | Description |
| --- | --- |
| waymo-research/waymo-open-dataset | Official dataset tools, tutorials, and evaluation code |

Access

  • Download: waymo.com/open/download
  • Storage: Google Cloud Storage buckets for programmatic access
  • API: Python/TensorFlow API with tutorial notebooks
  • Dataset Format: TFRecord with protocol buffers
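The TFRecord container named above is a simple framed binary format (each record is a length-prefixed payload with CRC32C checksums). The supported readers are TensorFlow's `tf.data.TFRecordDataset` and the tools in waymo-research/waymo-open-dataset; purely as an illustration of the framing, here is a minimal stdlib-only sketch that skips checksum verification (`read_tfrecords` is not part of any Waymo API):

```python
import struct

def read_tfrecords(path):
    """Yield raw record payloads (serialized protos) from a TFRecord file.

    Framing per record: uint64 length (little-endian), uint32 masked CRC32C
    of the length, `length` bytes of data, uint32 masked CRC32C of the data.
    This sketch skips checksum verification for brevity.
    """
    with open(path, "rb") as f:
        while True:
            header = f.read(12)          # 8-byte length + 4-byte length CRC
            if len(header) < 12:
                return                   # clean end of file
            (length,) = struct.unpack("<Q", header[:8])
            payload = f.read(length)
            f.read(4)                    # skip the trailing data CRC
            yield payload
```

Each payload would then be decoded with the protocol buffer schemas shipped in the official repository.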

Sources

Research notes compiled from publicly available sources.