Mobileye: Comprehensive AV/ADAS Technology Stack

Last updated: March 2026


Table of Contents

  1. Company Overview
  2. Product Lines
  3. Chip Architecture — EyeQ Lineage
  4. Sensor Strategy — True Redundancy
  5. Autonomy Software Stack
  6. Machine Learning & AI
  7. Mapping — REM (Road Experience Management)
  8. Simulation & Virtual Validation
  9. Cloud & Data Infrastructure
  10. Safety Architecture — RSS
  11. OEM Partnerships
  12. Fleet Operations & Robotaxi Deployments
  13. Regulatory & Certification
  14. Key Publications & Patents
  15. Competitive Position

1. Company Overview

Founding & History

Mobileye Global Inc. was founded in 1999 by Prof. Amnon Shashua and Ziv Aviram in Jerusalem, Israel. Shashua, a professor of computer science at the Hebrew University of Jerusalem, built the company on the then-radical premise that a single camera, powered by computer vision algorithms, could enable safe, scalable driver assistance — eliminating the need for expensive active sensors for basic ADAS functions.

Corporate Milestones

| Year | Milestone |
| --- | --- |
| 1999 | Founded by Amnon Shashua and Ziv Aviram |
| 2004 | First EyeQ1 silicon sampled |
| 2007 | Partnership with BMW for series production |
| 2008 | EyeQ1 commercial launch |
| 2014 | IPO on NYSE — raised $890M, largest Israeli IPO in U.S. history at the time |
| 2017 | Intel acquisition for $15.3 billion — largest-ever Israeli tech acquisition |
| 2017 | RSS (Responsibility-Sensitive Safety) model published |
| 2020 | REM mapping wins PACE Award |
| 2021 | First AV test drives in New York City and Tokyo |
| 2022 | Re-IPO on Nasdaq (ticker: MBLY) at $21/share, valued at ~$17B |
| 2024 | 200 millionth EyeQ chip shipped; ended internal FMCW LiDAR development |
| 2025 | VW Group 10M-unit order; Intel reduces stake to ~80% |
| 2026 | Acquisition of Mentee Robotics for $900M; 9M-chip deal with major U.S. OEM; robotaxi deployments commence |

Key Leadership

| Name | Role | Background |
| --- | --- | --- |
| Prof. Amnon Shashua | President & CEO | Co-founder; Hebrew University professor; 160+ publications, 94+ patents |
| Prof. Shai Shalev-Shwartz | Chief Technology Officer | Machine learning theorist; leads ADAS/AV/RSS/REM development |
| Moran Shemesh Rojansky | Chief Financial Officer | |
| Johann Jungwirth | SVP, Autonomous Vehicles | Former VW Group CDO |
| Liz Cohen-Yerushalmi | General Counsel | Head of Legal |
| Diane Be'ery | VP, Marketing | |

Corporate Facts

  • Headquarters: Jerusalem, Israel
  • Employees: ~3,800 (post-December 2025 layoff of ~200, representing 5% of workforce)
  • Stock: Nasdaq: MBLY (Intel retains ~80% ownership as of mid-2025)
  • 2024 Revenue: $1.7 billion
  • 2025 Revenue Guidance: $1.845B–$1.885B (12–14% YoY growth)
  • 2024 EyeQ Shipments: ~29 million units
  • Cumulative EyeQ Shipments: 200+ million (as of 2024)
  • Installed Base: 150+ million vehicles worldwide built with Mobileye technology

2. Product Lines

Mobileye offers a modular product portfolio spanning L0/L1 ADAS through full L4 driverless operation. All three advanced platforms share a common ECU form factor and software core, enabling OEMs to plan upgrade paths across their vehicle lineups.

Product Hierarchy

| Product | Automation Level | Key Capability | Chip(s) | Sensors |
| --- | --- | --- | --- | --- |
| EyeQ-based ADAS | L0–L2 | FCW, LDW, AEB, TSR | 1x EyeQ4/EyeQ6L | 1 front camera (+optional radar) |
| Surround ADAS | L2+ | 360-degree perception, integrated parking, hands-off/eyes-on highway | 1x EyeQ6H | Multiple cameras + radars |
| SuperVision | L2++ | Hands-off highway navigation, lane changes, overtaking; eyes-on | 2x EyeQ5 or 2x EyeQ6H | 11 cameras (seven 8MP) + radar |
| Chauffeur | L2++/L3 | Eyes-off driving at up to 130 km/h on highways; geographically scalable | 3x EyeQ6H | 11+ cameras + surround imaging radar + front LiDAR |
| Drive | L4 | Fully driverless MaaS/robotaxi in geofenced domains | 4x EyeQ6H | Up to 13 cameras + imaging radars + LiDARs (27 sensors total on ID.Buzz AD) |

SuperVision

SuperVision is the most widely deployed advanced platform, currently in production with Zeekr (Geely Group) and slated for Porsche and other VW Group brands. It provides:

  • 360-degree surround perception via 11 cameras (including seven 8-megapixel units)
  • Hands-off highway navigation with automatic lane changes and overtaking
  • REM crowdsourced mapping integration for localization
  • RSS driving policy for safety-critical decisions
  • OTA (over-the-air) update capability for continuous feature rollout

Chauffeur

Chauffeur extends SuperVision with a redundant active-sensor channel (imaging radar + front LiDAR) to enable eyes-off operation. A secondary computing board with an additional EyeQ6H chip provides hardware/software redundancy. Designed for speeds up to 80 mph (130 km/h) on all regular road types.

Drive

Drive is the full Level 4 platform for driverless Mobility-as-a-Service. It powers the VW ID.Buzz AD and is the basis for Lyft/Marubeni robotaxi deployments. The ECU contains 4x EyeQ6H chips connected to up to 13 cameras, multiple imaging radars, and LiDARs. Moovit (an Intel subsidiary) provides the rider-facing platform, fleet management tools, and tele-operations.

ECU Series Architecture

All three advanced platforms (SuperVision, Chauffeur, Drive) share:

  • A common primary board with 2x EyeQ6H + integrated MCU
  • Identical form factor, interface, and connectivity
  • Common software core

OEMs can move between platforms by adding or removing a secondary computing board and sensor modules, dramatically reducing engineering effort for product-line planning.


3. Chip Architecture — EyeQ Lineage

Mobileye has designed and shipped six generations of EyeQ system-on-chip (SoC) processors, with over 200 million units shipped cumulatively. The chips are purpose-built for vision processing and deep learning inference in automotive-grade environments.

EyeQ Generation Summary

| Generation | Year | Process | Performance | Power | CPU Cores | Key Features |
| --- | --- | --- | --- | --- | --- | --- |
| EyeQ1 | 2008 | 180 nm | | | | First SoC; LDW, FCW, TSR, AHC |
| EyeQ2 | 2010 | | 6x EyeQ1 | | | Pedestrian detection, full AEB |
| EyeQ3 | 2014 | | 6x EyeQ2 | | | L2 capability, higher resolution |
| EyeQ4 | 2018 | 28 nm FD-SOI (STMicro) | 2.5 TOPS | ~4.5 W | 4 MIPS cores | Multi-sensor fusion (up to 8 sensors), 10x EyeQ3 |
| EyeQ5 | 2021 | 7 nm FinFET (TSMC) | 24 TOPS (peak) | <5 W (typical) | 8 CPU cores | 18 vision processor cores, 4 accelerator classes, up to 20 sensors |
| EyeQ6L | 2024 | 7 nm | ~11 TOPS | Similar to EyeQ4M | | 4.5x EyeQ4M compute at half the die area; optimized for front-camera ADAS |
| EyeQ6H | 2025 | 7 nm | 3x EyeQ5H | ~6.25 W | | 1,000+ FPS on pixel-labeling NNs; built-in ISP, GPU, video encoder |
| EyeQ Ultra | 2025+ | 5 nm | 176 TOPS | <100 W | 12 RISC-V (dual-threaded) | L4 single-chip AV; 16 CNN accelerators, 64 total cores |

EyeQ4 — The Mass-Market Workhorse

  • Process: STMicroelectronics 28 nm FD-SOI
  • Performance: 2.5 TOPS
  • Power: ~4.5 W (EyeQ4H)
  • Architecture: 4 MIPS CPU cores + proprietary vision processing accelerators
  • Sensors: Fuses up to 8 sensors
  • Deployment: Tens of millions shipped; powers basic ADAS across dozens of OEMs

EyeQ5 — The SuperVision Backbone

  • Process: TSMC 7 nm FinFET (15 metal layers)
  • Performance: Up to 24 TOPS deep learning; 12 TOPS in typical workloads
  • Power: <5 W (typical operating)
  • Architecture:
    • 8 CPU cores
    • 18 vision processor cores
    • 4 classes of proprietary accelerators:
      • XNN — Dedicated deep learning / CNN accelerator
      • PMA — Programmable Macro Array (CGRA)
      • VMP — Vector Microcode Processor (SIMD VLIW)
      • MPC — Multi-thread Processor Cluster (barrel-threaded CPU cores)
  • Sensors: Fuses up to 20 sensors
  • Variants: EyeQ5 Mid (4.6 DL TOPS int8) and EyeQ5 High

EyeQ6L — Cost-Optimized ADAS

  • Process: 7 nm
  • Performance: 4.5x the compute of EyeQ4M at roughly half the physical footprint
  • Power: Similar to EyeQ4M
  • Target: L1–L2 front-camera ADAS (replacement for EyeQ4 in new designs)
  • Camera Support: 8 MP camera with 120-degree lateral FOV (20-degree improvement over EyeQ4M)
  • Status: In production as of April 2024

EyeQ6H — High-Performance ADAS/AV

  • Process: 7 nm
  • Performance: 3x the compute power of EyeQ5H, consuming only 25% more power
  • Built-in blocks: Dedicated ISP, GPU, video encoder
  • Real-world benchmark: >1,000 FPS on pixel-labeling neural networks
  • Status: Launched early 2025; basis for new ECU series

EyeQ Ultra — The L4 AV-on-a-Chip

  • Process: 5 nm (fabricated by TSMC on Intel's behalf)
  • Performance: 176 TOPS — equivalent to 10x EyeQ5
  • Power: <100 W
  • CPU: 12 dual-threaded cores on RISC-V ISA (first EyeQ to abandon MIPS)
  • Accelerators: 64 total cores including:
    • 16 CNN accelerators (XNN class)
    • PMA, VMP, MPC classes retained
  • GPU: Arm GPU at up to 256 GFLOPS
  • Additional blocks: Vision Processing Unit (VPU), Image Signal Processor (ISP), H.264/H.265 video encoding cores
  • Target: Single-chip solution for full L4 autonomous driving
  • Status: First silicon late 2023; automotive-grade production targeted for 2025

4. Sensor Strategy — True Redundancy

Philosophy

Mobileye's sensor architecture is built on a principle called True Redundancy, which is fundamentally different from the industry-standard approach of sensor fusion.

| Approach | How It Works | Validation Burden |
| --- | --- | --- |
| Sensor Fusion (industry standard) | Multiple sensors are fused into a single world model; sensors are complementary | Very high — millions of hours needed to validate the single fused pipeline |
| True Redundancy (Mobileye) | Two independent subsystems each build a complete world model; either can achieve safety alone | Much lower — tens of thousands of hours per channel suffice |

Two Independent Subsystems

  1. Camera-Only Subsystem: Processes data from surround cameras (up to 13) using deep learning perception on EyeQ chips. This subsystem must independently achieve safety-level perception — detecting all road users, lanes, drivable paths, traffic signs, and signals from cameras alone.

  2. Radar/LiDAR Subsystem: Processes data from imaging radar and (optionally) LiDAR sensors. This subsystem must also independently achieve safety-level perception, with no camera input.

In production, the camera subsystem serves as the primary backbone, while the radar/LiDAR subsystem provides a diversified, redundant safety backup. Mobileye operates two separate developmental AV fleets: one camera-only, one radar/LiDAR-only.
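The validation advantage follows from basic independence arithmetic: if each channel fails rarely and the channels fail independently, the probability of both failing at once is the product of their individual rates. A toy illustration (the numbers are illustrative, not Mobileye's):

```python
# Illustrative only: independent-channel failure arithmetic behind
# the "True Redundancy" validation argument.
def combined_failure_rate(p_camera: float, p_radar_lidar: float) -> float:
    """Per-hour probability that BOTH independent perception channels
    fail simultaneously, assuming statistical independence."""
    return p_camera * p_radar_lidar

# Example: each channel validated to ~1 failure per 10,000 hours.
p_each = 1e-4
both = combined_failure_rate(p_each, p_each)
# roughly 1e-8: about one joint failure per 100 million hours
print(both)
```

This is why validating each channel to tens of thousands of hours can, under the independence assumption, yield a combined system far safer than either channel alone.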

Camera Specifications

  • SuperVision: 11 cameras (seven 8 MP, four 2 MP), 360-degree surround coverage
  • Chauffeur: 11+ cameras with surround radar and front LiDAR overlay
  • Drive (ID.Buzz AD): 13 cameras + 9 LiDARs + 5 radars = 27 total sensors
  • Sensor type: High-dynamic-range CMOS; operational in low light and high-glare conditions
  • Resolution: Up to 8 megapixels per camera

Imaging Radar

Mobileye developed a proprietary software-defined imaging radar SoC:

  • Configuration: 2,304 virtual channels (48 Tx x 48 Rx)
  • Range: Detection of vehicles, pedestrians, and objects at up to 1,000 feet (~300 m)
  • Capability: Detects motorcycles beyond 200 m; old tire on road at 140 m; low-profile hazards
  • Processing: Custom SoC with proprietary algorithms; 12x resolution increase without proportional compute increase
  • Production partner: WNC (manufacturing collaboration)
  • Status: Meeting performance specifications on B-samples; on track for production

LiDAR Strategy

Mobileye had been developing a proprietary FMCW (Frequency-Modulated Continuous-Wave) LiDAR-on-a-chip:

  • Specification: 4D velocity + position measurements up to 300 m range; 600 points per degree; 2 million laser pulses per second
  • Advantage: FMCW measures velocity directly (unlike ToF LiDAR); compact chip-scale form factor

Strategic pivot (September 2024): Mobileye ended internal FMCW LiDAR development, citing that advances in camera perception (EyeQ6-based) and imaging radar performance made proprietary FMCW LiDAR "less essential" to the eyes-off roadmap. The ~100-person LiDAR division was shut down.

Current approach: Mobileye partnered with Innoviz Technologies to supply third-party LiDARs for the Drive (L4) platform, with SOP in 2026. The imaging radar remains a proprietary, in-house development.


5. Autonomy Software Stack

Architecture Overview

Mobileye's autonomy software is organized into three main layers — Perception, Planning/Policy, and Actuation — each developed and refined independently for modularity. Two cross-cutting frameworks (RSS and REM) are woven through the stack.

+---------------------------------------------------------------+
|                     Cloud Services                             |
|  (REM Roadbook, VLSA slow-think models, OTA updates)          |
+---------------------------------------------------------------+
        |                    |                    |
+---------------+   +-----------------+   +----------------+
|  Perception   |   |   Planning /    |   |   Actuation    |
|               |   |   Driving       |   |   Control      |
| - Camera CNN  |   |   Policy        |   |                |
| - Radar proc  |   |                 |   | - Longitudinal |
| - LiDAR proc  |   | - RSS safety    |   | - Lateral      |
| - ViDAR       |   |   layer         |   | - Comfort      |
| - Fusion      |   | - Path planning |   |   constraints  |
| - Free space  |   | - REM map       |   |                |
| - Semantic    |   |   localization  |   |                |
|   labeling    |   |                 |   |                |
+---------------+   +-----------------+   +----------------+

Perception Pipeline

The perception system runs at 10 Hz and produces:

  • Object detection & classification: Vehicles, pedestrians, cyclists, animals, debris
  • Lane detection & road geometry: Lane markings, road edges, drivable paths
  • Traffic infrastructure: Signs, signals, construction zones
  • Free-space estimation: Drivable area mapping
  • Semantic pixel labeling: Dense per-pixel scene understanding (>1,000 FPS on EyeQ6H)
  • Depth estimation (ViDAR): Camera-based 3D point cloud generation ("Visual LiDAR")

Under True Redundancy, the camera and radar/LiDAR channels each produce their own independent world model. These are compared/combined only at the planning layer.

ViDAR — Vision as Virtual LiDAR

ViDAR is Mobileye's technique for generating LiDAR-like 3D point clouds from camera data alone. Using deep neural networks trained on paired camera + LiDAR data, the system learns to predict dense depth maps and 3D structure from monocular or multi-camera images. This enables the camera-only subsystem to reason about 3D geometry without any active sensors.
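A ViDAR-style depth map can be consumed like a LiDAR point cloud by back-projecting each pixel through the camera model. A minimal sketch using the standard pinhole equations (not Mobileye's implementation; intrinsics are placeholders):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (meters) into camera-frame 3D points
    via the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 10 m ahead, seen as a 4x4 depth map.
pts = depth_to_point_cloud(np.full((4, 4), 10.0), fx=500, fy=500, cx=2, cy=2)
print(pts.shape)  # (16, 3); every point has z == 10.0
```

Once in this point-cloud form, the camera channel's 3D output can feed the same downstream geometric reasoning that an active sensor would.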

Driving Policy — RSS Integration

The planning layer uses the RSS (Responsibility-Sensitive Safety) model as a formal safety envelope. The RSS module:

  1. Takes the perception output (object list with positions, velocities, classifications)
  2. Creates "constellations" — pairwise relationships between the ego vehicle and each detected object
  3. Computes safe following distances (longitudinal and lateral) per constellation
  4. Determines if the current state is "dangerous" (distance < minimum safe distance)
  5. Calculates a "proper response" — acceleration/deceleration limits that will restore safety
  6. Combines all per-object responses into a single actuation constraint

The planner is free to optimize for comfort, efficiency, and passenger experience within the safety envelope defined by RSS.

AV 2.0 / Compound AI Architecture

At CES 2025 and 2026, Mobileye unveiled its next-generation software architecture, called Compound AI, which replaces the monolithic end-to-end approach with a modular, multi-model system:

Fast-Think / Slow-Think Split

| Layer | Frequency | Function | Compute Location |
| --- | --- | --- | --- |
| Fast-Think | High (~10 Hz) | Reflexive safety decisions; RSS enforcement; immediate obstacle avoidance | On-vehicle (EyeQ chips) |
| Slow-Think | Low (~1 Hz) | Scene-level reasoning; complex semantic understanding; edge-case resolution | On-vehicle + cloud (VLMs) |

VLSA (Vision-Language-Semantic-Action) Model

The slow-think layer uses a Vision-Language-Semantic-Action model that:

  • Processes deep scene semantics using vision-language foundations
  • Provides structured semantic guidance to the planner (not direct vehicle control)
  • Does NOT sit in the safety loop — safety remains in the fast-think system governed by RSS
  • Can run partly in the cloud, calling powerful VLMs at lower frequency
  • In many cases, can replace human remote operators for edge-case resolution

This architecture improves mean time between interventions while keeping safety-critical control deterministic and formally verified.
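The key structural property — the slow layer advises while the fast layer retains final, bounded authority — can be sketched in a few lines. This is a hypothetical illustration; the function names, rates, and envelope values are invented, not Mobileye's API:

```python
# Hypothetical two-rate "fast-think / slow-think" loop (all names and
# numbers illustrative): slow-think emits guidance, fast-think clamps
# every suggestion into a safety envelope before actuation.

def fast_think(suggested_accel, envelope=(-8.0, 2.0)):
    """~10 Hz reflexive layer: clip any suggested acceleration (m/s^2)
    into the RSS-derived safe envelope; safety always wins."""
    lo, hi = envelope
    return min(max(suggested_accel, lo), hi)

def slow_think(scene):
    """~1 Hz semantic layer: returns guidance, never direct control."""
    if scene.get("construction_zone"):
        return -1.0   # advise a gentle slowdown
    return 0.5

cmd = fast_think(slow_think({"construction_zone": False}))
print(cmd)              # 0.5 -- within the envelope, passed through
print(fast_think(9.0))  # 2.0 -- unsafe suggestion clipped
```

Because the clamp is a deterministic final stage, a wrong or late slow-think suggestion can degrade comfort or efficiency but cannot violate the safety envelope.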


6. Machine Learning & AI

Training Infrastructure

  • Data scale: 200+ petabytes of driving data collected from production vehicles
  • Processing: 500,000 peak CPU cores on AWS; processes 50 million datasets/month (~100 PB/month, equivalent to ~500,000 hours of driving)
  • Training compute: Uses Amazon EC2 DL1 instances (Habana Gaudi accelerators) for deep learning training, reducing costs compared to GPU-based training
  • Hard mining: State-of-the-art computer vision coupled with natural language models enables mining of the 200 PB dataset for rare scenarios and edge cases

Neural Network Architecture

Mobileye employs multiple specialized neural network architectures rather than a single monolithic model:

  1. Object detection networks — Multi-class detection and tracking
  2. Semantic segmentation networks — Per-pixel scene labeling (>1,000 FPS on EyeQ6H)
  3. Lane and road geometry networks — Structured output for lane topology
  4. Depth estimation / ViDAR networks — Monocular and multi-view depth prediction
  5. VLSA networks — Vision-language models for scene-level semantic reasoning

All inference runs on the proprietary EyeQ accelerator stack (XNN, PMA, VMP, MPC), which provides extreme power efficiency compared to general-purpose GPUs.

Compound AI vs. End-to-End

Mobileye explicitly rejects pure end-to-end learning for safety-critical AV operation. Their Compound AI approach:

  • Combines purpose-built, verifiable algorithms (RSS, geometric reasoning) with learned components
  • Keeps safety-critical decisions in deterministic, formally provable layers
  • Uses learned perception for flexibility and generalization
  • Avoids placing generative AI models in the safety loop
  • Achieves explainability — each module's contribution to a driving decision can be traced

Bias-Variance Tradeoff in Autonomy

Mobileye frames the AV challenge as a bias-variance tradeoff:

  • High-bias (rule-based) systems are safe but inflexible
  • High-variance (pure ML) systems are flexible but unpredictable
  • Compound AI balances both by using formal safety rules as constraints on learned behavior

7. Mapping — REM (Road Experience Management)

Overview

REM is Mobileye's proprietary crowdsourced, continuously updated HD mapping system. Unlike traditional HD mapping (which uses dedicated LiDAR survey vehicles), REM harvests map data from millions of production vehicles already on the road equipped with Mobileye cameras and EyeQ chips.

How It Works

Production Vehicle               Cloud                    Consumer
(EyeQ + Camera)                 (AWS)                    (Roadbook)
     |                            |                          |
     | 1. Drives normally         |                          |
     | 2. EyeQ extracts           |                          |
     |    landmarks, geometry     |                          |
     | 3. Compresses to           |                          |
     |    ~10 KB/km               |                          |
     |--------------------------->|                          |
     |    Upload anonymized       | 4. Aggregates millions   |
     |    data packets            |    of drives             |
     |                            | 5. Builds/updates        |
     |                            |    Roadbook              |
     |                            |------------------------->|
     |                            |    Distribute updated    |
     |                            |    maps via OTA          |

Key Technical Specifications

| Parameter | Value |
| --- | --- |
| Data footprint | ~10 KB per kilometer |
| Accuracy | Centimeter-level for vehicle localization and surrounding objects |
| Coverage | Mapped all of Japan (25,000 km of roads) in 24 hours; 400 MB total |
| Update frequency | Continuous — map freshness measured in hours/days, not months |
| Data source | 150+ million vehicles with Mobileye technology |
| Processing | AWS (Amazon EKS, Amazon S3, Apache Spark) |
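The ~10 KB/km footprint is plausible once you note how little a sparse landmark record needs. A purely illustrative sketch (REM's actual packet format is proprietary and not public; the record layout below is invented):

```python
import struct

# Invented record layout for illustration: each landmark packed as
# (type_id: 1 byte, x_offset_m: float32, y_offset_m: float32, attr: int16).
RECORD = struct.Struct("<Bffh")   # 11 bytes per landmark, little-endian

def encode_segment(landmarks):
    """Pack a list of (type_id, x_m, y_m, attr) landmark tuples."""
    return b"".join(RECORD.pack(*lm) for lm in landmarks)

# A few hundred landmarks per km at 11 bytes each stays well inside the
# ~10 KB/km budget the table above cites, before any entropy coding.
segment = encode_segment([(1, float(i), 0.0, 0) for i in range(300)])
print(len(segment))  # 3300 bytes
```

The point of the sketch: transmitting sparse semantic landmarks instead of raw sensor data is what makes crowdsourcing from cellular-connected production vehicles economical.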

Roadbook

The Roadbook is the output of the REM pipeline — a compressed, highly precise HD map database containing:

  • Lane-level geometry and topology
  • Road signs and markings (text, colors, positions)
  • Traffic signal locations and types
  • Guardrails, barriers, and road boundaries
  • Landmarks for centimeter-accurate localization
  • Traffic patterns and historical driving behavior

Applications

  1. Autonomous Vehicles: Primary localization and planning map for SuperVision, Chauffeur, and Drive
  2. Cloud-Enhanced ADAS: Even non-autonomous vehicles can receive REM data for enhanced warnings (e.g., curve speed warnings based on crowdsourced data, construction zone alerts)
  3. Infrastructure monitoring: Road authorities can leverage REM data for pothole detection, sign condition, and traffic pattern analysis

8. Simulation & Virtual Validation

Approach

Mobileye's validation philosophy combines three pillars:

  1. Formal safety proofs (RSS): Mathematical guarantees that the driving policy is safe, independent of simulation or real-world testing
  2. Massive simulation: Generative models of human driving behavior (including reckless drivers) using GAN-like techniques to create realistic traffic scenarios
  3. Real-world validation: Dedicated test fleets operating in multiple geographies

Simulation Technology

  • Sensor emulation: Multi-layered, real-time image-sensor emulation models that pass ECU validity checks and produce video streams eliciting perception-layer behavior closely matching real-world driving
  • Hardware-in-the-Loop (HiL): SuperVision's 11-camera system requires HiL setups capable of scaling to accommodate simultaneous simulated sensors while maintaining real-time performance
  • Scenario generation: Uses RSS parameters to define the space of dangerous scenarios, then generates edge cases within that space
  • Coverage: True Redundancy reduces the validation burden — each independent channel needs only tens of thousands of hours of validation (vs. millions for a fused system)

Safety Validation Tools

  • Synopsys: Mobileye adopted Synopsys automotive functional safety verification solutions for ISO 26262 compliance of next-generation ADAS SoCs
  • RSS on NHTSA scenarios: Mobileye has published an analysis implementing RSS on all 37 NHTSA pre-crash scenario types, demonstrating that RSS-compliant driving avoids at-fault collisions in every category

9. Cloud & Data Infrastructure

AWS Partnership

Mobileye selected Amazon Web Services (AWS) as its preferred public cloud provider in November 2018. The infrastructure underpins REM mapping, ML training, simulation, and OTA updates.

Scale

| Metric | Value |
| --- | --- |
| Peak compute | 500,000 CPU cores (via Amazon EKS) |
| Concurrent instances | 400,000+ vCPUs on thousands of EC2 instances |
| Monthly processing | 50 million datasets (~100 PB/month) |
| Total data store | 200+ petabytes |
| Driving hours processed/month | ~500,000 |
| Contributing vehicles | 150+ million with Mobileye technology installed |

Technology Stack

| Component | AWS Service | Purpose |
| --- | --- | --- |
| Container orchestration | Amazon EKS (Elastic Kubernetes Service) | Core compute orchestration |
| Auto-scaling | Karpenter | Intelligent node provisioning |
| Storage | Amazon S3 | Data lake for hundreds of PB of sensor data |
| ML training | Amazon EC2 DL1 (Habana Gaudi) | Deep learning model training |
| ML inference (REM) | AWS Graviton (Arm-based) + Triton | Cost-optimized REM map inference |
| Big data processing | Apache Spark on Amazon EKS | HD map creation pipeline |

Efficiency Gains

  • 50% reduction in developer overhead after migrating to Amazon EKS
  • Graviton-based inference for REM achieved significant cost reduction vs. x86
  • Spark-on-EKS architecture enabled scalable, containerized HD map processing

10. Safety Architecture — RSS

Responsibility-Sensitive Safety (RSS)

RSS is a mathematically formal safety model for multi-agent driving, published by Shashua and Shalev-Shwartz in 2017. It translates human "common sense" driving rules into rigorous mathematical formulas that are transparent, verifiable, and provably safe.

Design Goals

  1. Soundness: The interpretation of "safe driving" must align with how humans interpret traffic law (formalized from Tort law's "Duty of Care")
  2. Usefulness: The model must produce agile driving behavior, not overly defensive driving
  3. Scalability: Safety guarantees must hold regardless of geography or traffic density

The Five Rules

| Rule | Name | Description |
| --- | --- | --- |
| 1 | Do not hit the car in front | Mathematical formulation of the "two-second rule"; defines minimum longitudinal safe following distance |
| 2 | Do not cut in recklessly | Applies the same safe-distance principle laterally; defines minimum lateral clearance |
| 3 | Right of way is given, not taken | Even when the AV has right-of-way, it must account for other drivers who may not yield |
| 4 | Be cautious with limited visibility | Absence of sensor detection does not mean the path is clear; assume potential hazards in occluded areas |
| 5 | If you can avoid a crash without causing another, you must | Ultimate collision-avoidance obligation; permits rule violations (e.g., crossing a lane marking) if necessary to prevent a collision |

Mathematical Formulation

Longitudinal Safe Distance (Rule 1):

The minimum safe following distance d_min between a following vehicle (ego) and a leading vehicle is defined such that even if the leading vehicle applies maximum braking force instantaneously, the ego vehicle — after a bounded reaction time rho — can apply its own maximum braking and come to a stop without collision.

Key parameters:

  • v_r — velocity of rear (ego) vehicle
  • v_f — velocity of front vehicle
  • rho — response time
  • a_max_accel — maximum acceleration during response time
  • a_min_brake — minimum braking deceleration of rear vehicle
  • a_max_brake — maximum braking deceleration of front vehicle
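Combining these parameters, the RSS paper's minimum longitudinal gap works out to: the distance the ego covers while reacting at worst-case acceleration, plus its stopping distance under minimum braking, minus the front car's stopping distance under maximum braking, floored at zero. A direct transcription in Python (the example values are illustrative, not prescribed by RSS):

```python
def rss_longitudinal_d_min(v_r, v_f, rho, a_max_accel, a_min_brake, a_max_brake):
    """RSS minimum safe longitudinal gap (m): assume the rear (ego) car
    accelerates at a_max_accel for rho seconds, then brakes at only
    a_min_brake, while the front car brakes at a_max_brake immediately."""
    v_r_after = v_r + rho * a_max_accel          # ego speed after reaction time
    d = (v_r * rho
         + 0.5 * a_max_accel * rho**2            # distance covered while reacting
         + v_r_after**2 / (2 * a_min_brake)      # ego stopping distance
         - v_f**2 / (2 * a_max_brake))           # front car stopping distance
    return max(d, 0.0)                           # gap can never be negative

# Illustrative highway case: both cars at 30 m/s, 1 s response time,
# ego may accelerate at 2 m/s^2, must brake at >= 4; front brakes at 8.
print(round(rss_longitudinal_d_min(30, 30, 1.0, 2.0, 4.0, 8.0), 2))  # 102.75
```

Note the asymmetry: the ego is charged with its *weakest* braking and *strongest* acceleration, while the front car is credited with its *strongest* braking — so any real trajectory is at least as safe as the worst case the formula assumes.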

Lateral Safe Distance (Rule 2):

An analogous formula defines the minimum lateral gap, accounting for lateral velocities, response times, and lateral braking capabilities.

Proper Response:

When the distance between vehicles drops below d_min, the AV must execute the proper response: braking (longitudinally or laterally) until a safe following distance is restored or the vehicle comes to a complete stop. The proper response is expressed as acceleration limits: [a_min, a_max] for both longitudinal and lateral dimensions.

Implementation

  • Input: Object list from perception (position, velocity, classification of all detected objects)
  • Per-object processing: For each object, RSS creates a "constellation" (geometric relationship with ego) and computes safe distances
  • Aggregation: Individual proper responses are combined into a single actuation constraint (longitudinal and lateral acceleration bounds)
  • Output: The planner must operate within these bounds; any trajectory satisfying the constraints is provably safe
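The aggregation step can be sketched as intersecting the per-object acceleration intervals: the tightest lower bound and tightest upper bound across all constellations. This is an illustrative sketch, not the production implementation; the conflict fallback is an assumption:

```python
def aggregate_responses(per_object_bounds):
    """Illustrative: intersect per-constellation proper responses, each an
    allowed longitudinal acceleration interval (a_min, a_max) in m/s^2,
    into a single actuation constraint."""
    a_min = max(b[0] for b in per_object_bounds)  # tightest lower bound
    a_max = min(b[1] for b in per_object_bounds)  # tightest upper bound
    if a_min > a_max:
        # No feasible interval (assumed fallback): hardest braking on offer.
        hardest = min(b[0] for b in per_object_bounds)
        return (hardest, hardest)
    return (a_min, a_max)

# Lead car requires decelerating by at least 1 m/s^2; a merging car
# would tolerate up to +0.5. The intersection keeps both safe.
print(aggregate_responses([(-8.0, -1.0), (-8.0, 0.5)]))  # (-8.0, -1.0)
```

Any trajectory the planner chooses inside the returned interval simultaneously satisfies every per-object constraint, which is what makes the combined response provably safe.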

Standards Adoption

  • IEEE P2846: Technology-neutral standard for AV safety assumptions, developed with significant Mobileye/RSS influence
  • Open-source implementation: intel/ad-rss-lib on GitHub
  • RSS has gained traction with regulatory bodies in the EU, China, and Japan

11. OEM Partnerships

Major OEM Relationships

| OEM / Group | Products | Details | Volume / Timeline |
| --- | --- | --- | --- |
| Volkswagen Group | Surround ADAS, SuperVision, Drive | 10M-unit EyeQ order (March 2025); VW MQB platform; ID.Buzz AD robotaxi | 10M units; series production 2026–2027 |
| Porsche | SuperVision | SuperVision for future models (announced May 2023); EyeQ6H-based; brand-tuned integration | Production TBD |
| Audi, Bentley, Lamborghini | SuperVision (via VW Group) | Mobileye SuperVision available as platform solution within VW Group | |
| Geely Group / Zeekr | SuperVision | Zeekr 001 (110,000 vehicles updated via OTA), Zeekr 009; 3 additional Geely brands (incl. Polestar) | In production; expanding |
| Ford | ADAS | Driver-assist technology supply agreement (2020) | |
| NIO | AV development | Partnership for consumer AV development in China and other markets (2019) | Development stage |
| Mahindra | SuperVision + Surround ADAS | Selected for at least 6 future vehicle models | SOP 2027 |
| Major U.S. OEM (likely GM) | EyeQ6H ADAS | 9 million chip deal announced at CES 2026 | ~9M chips |
| FAW Group | ADAS | Chinese OEM partnership | In production |
| smart | SuperVision | Advanced driving automation for smart brand | Announced |
| BMW | ADAS | Long-standing partnership; early EyeQ adopter | In production |

Market Reach

  • Mobileye technology is in vehicles from 50+ OEMs worldwide
  • ~29 million EyeQ units shipped in 2024; 32–34 million expected in 2025
  • Cumulative: 200+ million EyeQ chips shipped through 2024

12. Fleet Operations & Robotaxi Deployments

Active Test & Deployment Locations

| Location | Status | Partner | Vehicle |
| --- | --- | --- | --- |
| Munich, Germany | Testing | Sixt / Moovit | |
| Hamburg, Germany | Commercial launch 2026 | VW / MOIA | ID.Buzz AD |
| Berlin, Germany | Testing (BVG) | VW / MOIA | ID.Buzz AD (near-series prototypes) |
| Dallas, Texas | Commercial launch 2026 | Lyft / Marubeni | TBD |
| Los Angeles, California | Commercial launch 2026 | Uber / VW | ID.Buzz AD |
| Austin, Texas | R&D testing | Mobileye | Development fleet |
| Detroit, Michigan | R&D testing | Mobileye | Development fleet |
| New York City | Testing | Mobileye | Development fleet |
| Tokyo, Japan | Testing | Mobileye | Development fleet |

Key Robotaxi Partnerships

Lyft / Marubeni (U.S.)

  • Announced: February 2025
  • Launch: Dallas, TX — as soon as 2026
  • Structure: Marubeni (Japanese conglomerate; 900,000+ vehicle fleet globally) owns and finances the vehicles; Lyft provides the ride-hailing platform and fleet management via Flexdrive
  • Scale: Plan to expand to thousands of vehicles across multiple U.S. cities after Dallas debut

Volkswagen / MOIA (Europe & U.S.)

  • Vehicle: ID.Buzz AD — SAE Level 4; 27 sensors (13 cameras, 9 LiDARs, 5 radars)
  • Hamburg launch: 500+ vehicles during 2026 (initially with safety drivers)
  • LA launch: Partnership with Uber for 2026
  • Expansion target: 6 cities by end of 2027; 100,000+ self-driving vehicles by 2033
  • Pre-series production: VW Commercial Vehicles ramped up in March 2026

Moovit Integration

Moovit (Intel subsidiary) provides the full robotaxi service layer:

  • Rider-facing mobile app
  • Fleet management tools
  • Tele-operations system
  • Mobility intelligence for route optimization and deployment planning
  • Rider-experience services

Customers can order rides through Sixt and Moovit mobile apps.


13. Regulatory & Certification

Safety Methodology

Mobileye published the industry's first formal safety model for AVs (RSS) in 2017 and has built a comprehensive safety methodology around it:

  1. RSS formal safety layer — Mathematical guarantees of collision avoidance
  2. True Redundancy — Independent sensor channels each meeting safety independently
  3. SDoV (Safety of the Driving Vehicle) — Mobileye's internal safety framework ensuring the AV meets global standards and is engineered to be safer than human drivers
  4. Comprehensive validation — Combination of formal proofs, simulation, and real-world testing

Standards & Certifications

| Standard | Relevance |
| --- | --- |
| IEEE P2846 | Mobileye-influenced standard defining minimum reasonable assumptions for AV safety models |
| ISO 26262 | Functional safety for automotive electronics; EyeQ chips designed to ASIL-B/D compliance |
| EU GSR (General Safety Regulation) | Mobileye launched the world's first vision-only Intelligent Speed Assist (ISA) certified across all 27 EU countries + Norway, Switzerland, Turkey |
| UN R157 | Automated Lane Keeping Systems regulation (relevant to Chauffeur L3 operation) |
| NHTSA pre-crash scenarios | Mobileye published RSS implementation analysis across all 37 NHTSA pre-crash types |

Regulatory Engagement

  • Active participant in IEEE, SAE, and ISO working groups
  • Engaged with regulators in the EU, U.S., China, Japan, and Israel
  • Proponent of performance-based (not prescriptive) AV regulation
  • RSS open-sourced to encourage industry-wide adoption as a safety baseline

14. Key Publications & Patents

Amnon Shashua — Academic Output

  • 160+ peer-reviewed papers in computer vision and machine learning
  • 94+ patents in computer vision, ADAS, and autonomous driving
  • European Inventor Award finalist (2019, European Patent Office)
  • Automotive Hall of Fame inductee

Foundational Papers

| Paper | Authors | Year | Venue | Significance |
|---|---|---|---|---|
| On a Formal Model of Safe and Scalable Self-driving Cars | Shalev-Shwartz, Shammah, Shashua | 2017 | arXiv:1708.06374 | Introduces RSS; foundational safety model |
| Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving | Shalev-Shwartz, Shammah, Shashua | 2016 | arXiv:1610.03295 | Multi-agent RL framework for AV safety |
| Implementing the RSS Model on NHTSA Pre-Crash Scenarios | Mobileye | | Technical report | Validates RSS against all 37 NHTSA scenario types |
| Responsibility-Sensitive Safety (extended) | Mobileye | 2022 | arXiv:2206.03418 | Comprehensive RSS technical specification |
| A Safety Architecture for Self-Driving Systems | Mobileye | | Technical whitepaper | Full SDoV safety architecture |

Open-Source Projects

  • intel/ad-rss-lib — C++ library implementing the RSS model for autonomous vehicles; includes situation analysis, response computation, and integration examples

Key Patent Areas

  • Monocular and multi-camera depth estimation
  • Crowdsourced HD map construction from sparse visual landmarks
  • Formal safety envelope computation for multi-agent driving
  • Low-power CNN accelerator architectures
  • Camera-based free-space estimation
  • Real-time semantic segmentation on custom SoCs

15. Competitive Position

ADAS Market Dominance

Mobileye commands an estimated 65–70% market share in vehicles equipped with ADAS vision systems, making it the single largest supplier of production ADAS technology globally. With 200+ million chips shipped and ~29 million units in 2024 alone, no competitor matches its installed-base scale.

Revenue Model

Unlike Waymo (fleet-based) or Tesla (consumer product), Mobileye operates a licensing/royalty model:

  • Sells EyeQ chips and perception software to OEMs
  • Earns revenue per vehicle shipped
  • No capital-intensive fleet ownership burden
  • Scales with global vehicle production

Competitive Comparison

| Dimension | Mobileye | Waymo | Tesla | Qualcomm (Snapdragon Ride) | NVIDIA (Drive Orin/Thor) |
|---|---|---|---|---|---|
| Business model | Chip + SW supplier to OEMs | Fleet operator (robotaxi) | Vertically integrated OEM | Chip supplier to OEMs | Chip supplier to OEMs |
| ADAS market share | ~65–70% | N/A (no ADAS product) | In-house only (Tesla vehicles) | Growing | Growing |
| AV approach | True Redundancy; RSS formal safety | Sensor fusion; simulation-heavy | Vision-only; end-to-end ML | Platform-agnostic SoC | Platform-agnostic SoC |
| Sensor philosophy | Camera primary + radar/LiDAR redundancy | Heavy LiDAR + camera + radar fusion | Camera-only | OEM choice | OEM choice |
| Safety model | RSS (formal, mathematical) | Internal metrics + simulation | No published formal model | None proprietary | None proprietary |
| Mapping | REM (crowdsourced, 150M+ vehicles) | Proprietary survey mapping | None (real-time only) | Depends on OEM | Depends on OEM |
| Custom silicon | Yes (EyeQ, 6 generations) | No (uses commercial HW) | Yes (FSD chip, HW3/HW4) | Yes (Snapdragon Ride) | Yes (Orin, Thor) |
| OEM partners | 50+ | None (fleet only) | None (Tesla only) | BMW, Hyundai, others | Mercedes, JLR, BYD, others |
| Robotaxi fleet | Via partners (Lyft, Uber, MOIA) | Owned & operated | Planned | N/A | N/A |

Strengths

  • Unmatched scale: 200M+ chips shipped; 150M+ vehicles contributing REM data
  • Formal safety framework: RSS provides regulatory and liability clarity
  • Full-stack from chip to cloud: Custom silicon + software + mapping + safety model
  • OEM diversity: Not dependent on any single automaker
  • Cost efficiency: EyeQ chips deliver high performance at low power and cost

Challenges

  • OEM in-housing: Some major automakers developing proprietary ADAS (Tesla, Chinese OEMs)
  • Competitive silicon: NVIDIA Drive Thor and Qualcomm Snapdragon Ride gaining OEM traction
  • Robotaxi timing: Behind Waymo in commercial driverless operations
  • Revenue concentration: Dependent on the automotive cycle; 2024 saw a ~22% volume decline (37.4M to 29.0M units) due to OEM inventory corrections
  • Stock performance: Trading below 2022 IPO price as of mid-2025

Recent Strategic Moves

  • Mentee Robotics acquisition ($900M, January 2026): Expands into humanoid robotics; leverages AV AI stack for physical AI. MenteeBot uses camera-only sensing, Sim2Real learning, and proprietary electric motors. Initial commercial deployment targeted for 2028.
  • 9M-chip U.S. OEM deal (CES 2026): Major validation of EyeQ6H platform
  • VW Group 10M-unit order (2025): Largest single ADAS order; spans MQB platform
  • Lyft/Marubeni robotaxi partnership: Asset-light model for U.S. robotaxi expansion
  • Compound AI / VLSA architecture: Positions Mobileye to reduce reliance on human tele-operators

Appendix: Financial Summary

| Metric | 2023 | 2024 | 2025 (Guidance) |
|---|---|---|---|
| Revenue | ~$2.1B | $1.7B | $1.845B–$1.885B |
| EyeQ shipments | 37.4M | 29.0M | 32M–34M |
| Cumulative EyeQ shipped | ~170M | 200M+ | 232M–234M |
| Headcount | ~4,000 | ~3,800 | ~3,600 (est.) |
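As a sanity check on the figures above, dividing reported revenue by EyeQ shipments gives an implied average revenue per shipped unit. This is a rough upper bound on chip ASP, since total revenue also includes non-chip lines; the snippet below just reproduces that arithmetic from the table's own numbers.

```python
# Implied revenue per shipped EyeQ unit, computed from the appendix table.
# Rough figure: total revenue includes non-chip lines, so this overstates
# the pure chip ASP.
revenue_usd = {"2023": 2.1e9, "2024": 1.7e9}   # ~$2.1B, $1.7B
units       = {"2023": 37.4e6, "2024": 29.0e6} # 37.4M, 29.0M chips

per_unit = {year: revenue_usd[year] / units[year] for year in revenue_usd}
for year, value in sorted(per_unit.items()):
    print(f"{year}: ${value:.2f} per unit")
# 2023: $56.15 per unit
# 2024: $58.62 per unit
```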

Sources: Mobileye corporate disclosures, SEC filings, CES 2025/2026 presentations, IEEE publications, arXiv papers, AWS case studies, OEM press releases, and industry analyses.

Research notes compiled from publicly available sources.