Waymo, once hailed as the gold standard in autonomous driving, faces intense scrutiny after a string of incidents in Austin, Texas. In January 2026, a robotaxi illegally passed a stopped school bus that was boarding children after a remote operator's error; Austin ISD has documented more than two dozen similar violations since September.
Just last Sunday, another Waymo blocked an ambulance rushing to a mass-shooting scene, forcing police to intervene amid the chaos. NHTSA and the NTSB have launched formal probes, spotlighting failures in high-stakes urban scenarios. Austin is integral to Waymo's expansion, with its service area growing from 37 to 140 square miles by early 2026 through the company's Uber partnership. In contrast, San Francisco operations, ongoing since 2023, have a mixed record: stellar aggregate statistics show 80-90% fewer injury crashes than human drivers, backed by millions of autonomous miles.
A 2025 blackout exposed vulnerabilities, with stalled vehicles obstructing traffic and emergency access. Waymo insists its safety record outperforms human drivers, citing thousands of school-bus encounters navigated weekly without collision and software recalls filed to push corrective updates. Critics counter that urban complexities, such as erratic traffic, pedestrians, and emergencies, still challenge AV stacks.
These real-world misses have dented the "gold standard" image despite expansion to cities like Dallas and Miami in 2026. Software fixes address coding flaws but not the broader confidence gap in unpredictable environments. Regulators face a dilemma: throttle growth and risk stifling innovation, or demand ironclad proof amid life-or-death stakes. Robotaxis must evolve beyond recalls toward robust risk interpretation in crowded, chaotic streets.
SAFETY FIRST OR STALLED REVOLUTION?
