Tesla has published an in-depth analysis detailing the operational safety and comparative performance of its advanced driver assistance software, Full Self-Driving (Supervised). The release arrives just weeks after Tekedra Mawakana, co-CEO of Waymo, publicly called for greater data disclosure from companies developing autonomous driving technologies, in remarks delivered at a prominent industry conference. The move by the electric vehicle pioneer signals a notable shift toward transparency in a rapidly evolving sector often criticized for opaque safety reporting practices.
The Quest for Autonomous Safety and Transparency
The development of autonomous vehicles (AVs) and advanced driver assistance systems (ADAS) has long been championed as a potential revolution for road safety, aiming to significantly reduce the human error responsible for the vast majority of traffic accidents. However, the path to widespread adoption is fraught with technical complexities, regulatory challenges, and a persistent need to build public trust. Early iterations of ADAS, such as adaptive cruise control and lane-keeping assistance, have gradually evolved into more sophisticated Level 2 systems like Tesla’s Autopilot and, more recently, Full Self-Driving (Supervised). These systems, while highly capable, still demand constant human supervision and intervention, distinguishing them from truly autonomous vehicles that operate without human oversight, whether within defined conditions (Level 4) or universally (Level 5).
The debate over how to accurately measure and report the safety performance of these systems has intensified as their capabilities grow. Critics argue that comparing driver-assist features, which still rely on human vigilance, directly against aggregate human driving statistics can be misleading. Companies like Waymo, which deploy fully autonomous robotaxis in geo-fenced areas without safety drivers, have typically led the way in publishing extensive safety metrics, including detailed incident reports and collision rates per million miles driven. This approach sets a high bar for accountability and transparency, creating pressure on other industry players to follow suit. Mawakana’s recent remarks underscored this sentiment, emphasizing that companies have a responsibility to be transparent about their vehicle fleets’ performance, especially when removing or minimizing human drivers. "If you are not being transparent," she stated, without directly naming Tesla, "then it is my view that you are not doing what is necessary in order to actually earn the right to make the road safer."
Tesla’s New Data: A Deeper Dive
In response to this industry-wide call for enhanced data, Tesla has dedicated a new section of its corporate website to present a more granular view of its FSD (Supervised) safety statistics. The company asserts that, within North America, vehicles utilizing its Full Self-Driving (Supervised) software are covering approximately 2.9 million miles for every major collision recorded. For minor collisions, the reported figure stands at around 986,000 miles per incident. These numbers are presented in stark contrast to aggregated national averages derived from data provided by the National Highway Traffic Safety Administration (NHTSA). According to Tesla’s interpretation of NHTSA statistics, all drivers collectively experience a major collision approximately every 505,000 miles and a minor collision every 178,000 miles.
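Tesla's headline comparison boils down to a miles-per-collision ratio between its FSD (Supervised) fleet and the NHTSA-derived national averages. A minimal sketch of that arithmetic, using the figures reported above (the variable names are illustrative, not Tesla's):

```python
# Miles-per-collision figures as reported by Tesla; names are illustrative.
FSD_MAJOR = 2_900_000   # miles per major collision with FSD (Supervised)
FSD_MINOR = 986_000     # miles per minor collision with FSD (Supervised)
US_MAJOR = 505_000      # NHTSA-derived national average, major collisions
US_MINOR = 178_000      # NHTSA-derived national average, minor collisions

# Higher miles-per-collision means fewer collisions per mile driven.
major_ratio = FSD_MAJOR / US_MAJOR
minor_ratio = FSD_MINOR / US_MINOR

print(f"Major collisions: {major_ratio:.1f}x the national miles-per-collision")
print(f"Minor collisions: {minor_ratio:.1f}x the national miles-per-collision")
```

On these numbers, FSD (Supervised) logs roughly 5.7 times more miles between major collisions, and about 5.5 times more between minor ones, than the aggregate national figures Tesla cites, though the comparability caveats discussed later in this piece apply.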
This marks a significant departure from Tesla’s previous quarterly "vehicle safety reports," which had been widely criticized as inadequate. Earlier reports focused mainly on Autopilot, a less advanced ADAS designed primarily for highway use, where accident rates are generally lower owing to simpler driving environments. The lack of specific data for FSD (Supervised), despite its wider operational domain encompassing city streets and complex scenarios, fueled a narrative of insufficient disclosure. Furthermore, Tesla had offered minimal information regarding its Robotaxi pilot program in Austin, Texas, which still involves human safety operators. The current release, by specifically segmenting FSD (Supervised) data, aims to address these long-standing criticisms and provide a clearer picture of its most advanced driver-assistance system’s performance.
Defining the Metrics: What the Numbers Mean
A crucial aspect of Tesla’s latest data release is the explicit articulation of its methodologies and definitions for collision events, a level of detail previously absent. The company clarifies that it aligns with Federal Motor Vehicle Safety Standards, specifically 49 C.F.R. § 563.5, for its reporting framework. "Major collisions" are defined as incidents involving higher-severity impacts, characterized by the deployment of a vehicle’s airbags or other non-reversible pyrotechnic restraints. This objective criterion offers a standardized measure of collision severity, mitigating subjective interpretations.
Crucially, Tesla states that if FSD (Supervised) was active at any point within a five-second window immediately preceding a collision event, that incident is included in the reported dataset. This approach is intended to capture a comprehensive range of scenarios. As Tesla explains, "This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact." The company also commits to updating this data quarterly, presenting a rolling twelve-month aggregation of miles and collisions to maintain relevance to recent trends and ongoing progress. However, Tesla acknowledges that it will not release other metrics, such as injury rates, explaining that its automated data collection focuses on objective, programmatic indicators like collision frequency and airbag deployment, which serve as a reliable proxy for severity.
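The two reporting rules Tesla describes, the five-second attribution window and the rolling twelve-month aggregation, can be sketched as simple predicates. This is a hypothetical illustration of the stated policy, not Tesla's internal implementation; the function names and timestamp handling are assumptions:

```python
from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(seconds=5)  # per Tesla's stated methodology
ROLLING_PERIOD = timedelta(days=365)       # rolling twelve-month aggregation

def fsd_attributed(collision_time: datetime, last_fsd_active: datetime) -> bool:
    """A collision counts against FSD (Supervised) if the system was active at
    any point in the five seconds before impact, which captures cases where the
    driver disengaged or the system aborted shortly before the crash."""
    return collision_time - last_fsd_active <= ATTRIBUTION_WINDOW

def in_reporting_window(collision_time: datetime, report_date: datetime) -> bool:
    """Quarterly updates aggregate only the trailing twelve months of data."""
    return report_date - ROLLING_PERIOD <= collision_time <= report_date
```

Under this rule, a crash two seconds after a driver takes over would still be attributed to FSD (Supervised), while one that occurred a minute after disengagement would not, and incidents older than twelve months age out of each quarterly update.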
Industry Scrutiny and the Transparency Imperative
The push for greater transparency in autonomous driving safety data reflects a broader societal demand for accountability in advanced technological deployments. The autonomous vehicle industry operates under intense public and regulatory scrutiny, especially following high-profile incidents involving both fully autonomous vehicles and ADAS-equipped cars. While Waymo has consistently published detailed reports, including white papers on its safety methodology and real-world operational statistics, other players have been less forthcoming. This disparity has fueled calls for standardized reporting frameworks that allow for meaningful comparisons across different technologies and operational design domains.
Regulators, notably NHTSA, have been actively monitoring the safety performance of ADAS and AVs. While specific federal regulations for fully autonomous vehicles are still under development, NHTSA collects data on crashes involving these systems and has opened numerous investigations into incidents involving ADAS, particularly Tesla’s Autopilot. The agency’s role is critical in fostering a data-driven approach to safety oversight, but it often faces challenges due to proprietary data and varying reporting methodologies among manufacturers. Consumer advocacy groups and safety organizations have also consistently pressed for more comprehensive and accessible data, arguing that public understanding and trust are paramount for the successful integration of these technologies.
Challenges in Comparative Safety Analysis
While Tesla’s latest data release is a step towards greater transparency, comparing its FSD (Supervised) statistics directly with general human driving data or fully autonomous vehicle metrics presents inherent complexities. A primary challenge lies in the fundamental difference between a Level 2 ADAS, which requires constant human supervision, and Level 4/5 autonomous systems. In a Level 2 system like FSD (Supervised), the human driver remains ultimately responsible for the vehicle’s operation, monitoring the environment, and being prepared to intervene at any moment. This dynamic introduces a confounding variable: the interaction and performance of the human supervisor.
Critics often point out that "miles driven with FSD engaged" is not equivalent to "miles driven by a fully autonomous system." The quality of human supervision, driver attentiveness, and the types of roads or conditions where drivers choose to engage FSD can significantly influence collision rates. For instance, drivers might be more inclined to activate FSD in less challenging environments, or conversely, rely on it in situations where they feel less confident, potentially skewing the data. Furthermore, the national average for human driving encompasses all drivers, regardless of age, experience, vehicle type, or road conditions, making it a broad benchmark that may not perfectly reflect the specific operational contexts where FSD (Supervised) is typically used.
Another point of analytical contention revolves around the definition of "collision" and the level of detail provided. While airbag deployment offers an objective measure of severity, it doesn’t account for all types of incidents, such as minor fender-benders that don’t trigger restraints but still represent a safety event. Moreover, the absence of data on injury rates, property damage, or specific circumstances (e.g., road type, weather, time of day) limits the depth of insight that can be gleaned from the reported figures. A truly holistic safety picture would ideally incorporate these additional dimensions, allowing researchers, regulators, and the public to understand not just how often incidents occur, but also their nature and consequences.
Looking Ahead: The Future of Autonomous Reporting
Tesla’s decision to provide more detailed safety data for its FSD (Supervised) system reflects an evolving landscape where transparency is increasingly seen as a cornerstone of public acceptance and regulatory trust. This move, potentially influenced by Waymo’s consistent advocacy for data disclosure, could catalyze other ADAS and AV developers to enhance their own reporting standards. As the technology continues to advance and more autonomous features become commonplace, the demand for clear, comprehensive, and standardized safety metrics will only grow.
The industry’s collective challenge is to move beyond mere comparison with human driving statistics to establish a robust framework for assessing the unique safety profiles of diverse autonomous and semi-autonomous systems. This will likely involve collaborative efforts between manufacturers, regulators, and independent researchers to define common terminology, establish consistent data collection methods, and agree upon a set of universally accepted safety performance indicators. The goal is not just to prove that these systems are safer than human drivers in certain contexts, but to understand precisely how and why they perform, identifying areas for continuous improvement and ultimately paving the way for a future where autonomous technology truly enhances road safety for everyone.