The Road Ahead: Tech Giants Defend Self-Driving Cars as Congress Weighs New Safety Rules
Industry Leaders Make Their Case for Autonomous Vehicles
In a crucial hearing on Wednesday, executives from two of America’s leading autonomous vehicle companies stood before the Senate Committee on Commerce, Science, and Transportation to defend their technology and make a bold claim: their self-driving cars are safer than human drivers. Mauricio Peña, chief safety officer at Waymo, and Lars Moravy, Tesla’s vice president of vehicle engineering, faced pointed questions from senators about recent troubling incidents involving their vehicles. The hearing comes at a pivotal moment, as Congress contemplates creating the first comprehensive federal framework to regulate self-driving vehicles, which are rapidly becoming a fixture in major American cities. The United States currently has no uniform regulations for autonomous vehicles: roughly half of the states maintain their own differing laws while others have no rules at all, a patchwork regulatory landscape that many say fails to adequately protect the public or give manufacturers clear guidelines.
Recent Incidents Raise Serious Safety Concerns
The urgency of the hearing was underscored by several recent incidents that have set off alarm bells about autonomous vehicle safety. Last month, the National Transportation Safety Board launched an investigation into multiple Waymo robotaxis that passed school buses in Austin, Texas, without yielding as required by law, behavior that Senator Ted Cruz, the committee chairman, flatly called “unacceptable.” Even more concerning was an incident earlier this month in Santa Monica, California, where a Waymo vehicle struck a child near her elementary school. According to the company, the child darted out from behind another parked vehicle and fortunately suffered only minor injuries. Waymo acknowledged identifying a software issue and said it released an update in November to address the problem, yet multiple violations have been reported since then. Meanwhile, Tesla, which recently began rolling out its robotaxi service in Austin, faces scrutiny over a report analyzing National Highway Traffic Safety Administration data that suggests its vehicles may have had a crash rate worse than that of human drivers last year.
Companies Defend Their Safety Records and Technology
Despite these incidents, both companies mounted a vigorous defense of their technology’s safety record. When pressed by Senator Cruz about what safeguards have been implemented following the Austin and Santa Monica incidents, Peña emphasized that “safety is our top priority, especially the safety of children and pedestrians.” He explained that Waymo is evaluating each incident and developing fixes, with many software changes already incorporated to dramatically improve performance. The company is also working directly with the Austin Independent School District to collect data on different lighting patterns and conditions, incorporating those lessons into its systems. Peña pointed out that Waymo vehicles “safely navigate thousands of school bus encounters every single week” and that the company is “continuously learning and improving because our work on safety is never done.” Regarding the Santa Monica incident, Peña asserted that Waymo’s analysis found its vehicle “would have responded faster than our models of an attentive human driver,” suggesting that the autonomous system actually mitigated harm compared to what a human driver might have done in the same situation.
The safety statistics presented by Waymo paint an impressive picture, at least according to the company’s own data. Peña told the committee that over more than 100 million miles driven, Waymo vehicles are ten times less likely than human drivers to be involved in a serious-injury collision in the cities where they operate. Even more striking, the data shows Waymo vehicles are twelve times less likely to be involved in a pedestrian-injury collision in those same cities, leading Peña to conclude that “we’re making a difference already.” Tesla’s Moravy took a broader historical perspective, noting that while the National Highway Traffic Safety Administration has a “legacy of safety” and the auto industry made tremendous progress reducing vehicle crashes after the 1970s, that progress has essentially stalled over the last two decades. With approximately 40,000 Americans dying each year in vehicle crashes, Moravy made an impassioned case for autonomous technology: “I can tell you without a shadow of a doubt that the next big jump we have in reducing that number from 40,000 to hopefully a day where it’s zero is autonomous driving. Simply put, an autonomous driver, the system or the computer that operates it, doesn’t sleep, doesn’t blink, and doesn’t get tired.”
Senators Express Both Hope and Concern
The senators questioning the executives walked a delicate line between enthusiasm for the life-saving potential of autonomous vehicles and deep concern about accountability and safety. Ranking member Senator Maria Cantwell of Washington captured this tension perfectly when she stated, “Fully autonomous vehicles offer the potential to reduce crashes on roads, but we have seen the risk of letting companies beta test on our roads with no guardrails.” This concern that American streets are becoming testing grounds for unproven technology without adequate oversight resonated throughout the hearing. When Republican Senator Bernie Moreno raised the critical question of liability—asking who accepts responsibility for a collision resulting from a software or hardware failure—both companies provided clear answers. Moravy stated that “in the unlikely event that a software error occurred in our autonomous driving system, we would take liability for that event, much in the same way that a driver takes liability in our current legal system if they make an error.” Peña echoed this commitment, saying simply, “Likewise.”
Push for Federal Standards and Greater Transparency
The hearing revealed a rare area of bipartisan agreement: the need for uniform federal standards to govern autonomous vehicles, though Republicans and Democrats may envision different approaches. Senator Cruz advocated for a light regulatory touch, arguing, “If we want to save lives and avoid tragedy for almost 40,000 families each year, we don’t need lawmakers saddling automakers with expensive junk mandates that make little to no real difference. Instead, we should follow the data, follow the evidence, which increasingly shows advanced AVs reduce crashes and prevent serious injuries. We need a consistent federal framework to ensure uniform safety standards, liability clarity and consumer confidence.” Not everyone at the hearing, however, shared Cruz’s relatively hands-off philosophy. Bryant Walker Smith, an associate professor of law at the University of South Carolina, testified that greater oversight is essential, emphasizing a crucial point: “There are no self-driving or driverless cars. The companies that develop and deploy AVs are the drivers. This means that an AV is only as safe as the companies responsible for it. We can and should proactively assess their trustworthiness.”
Democrats on the committee are pushing for more robust regulation and transparency. Senator Ed Markey of Massachusetts sent letters on Tuesday to seven major autonomous vehicle companies requesting detailed information about their remote assistance operators (RAOs), the individuals who intervene when an autonomous vehicle encounters a situation it cannot handle on its own. Markey warned that “without proper safeguards, the AV industry’s reliance on RAOs could create serious safety, national security, and privacy risks.” His letters ask pointed questions about whether remote operators ever “tele-drive a vehicle” and how frequently such remote assistance sessions occur. Markey is championing two pieces of legislation aimed at increasing industry transparency. The first, the AV Safety Data Act, would require NHTSA to mandate that autonomous vehicle companies report data including miles traveled, injuries involving human drivers, pedestrians, and bicyclists, and unplanned stoppages. “We need more honesty from the industry so that there is in fact transparency in everything that they know that the American public should know as well,” Markey said. The second, the “Stay in Your Lane Act,” introduced with Connecticut Senator Richard Blumenthal, would require autonomous vehicle manufacturers to clearly define the roads and driving conditions in which their systems are safe and designed to operate, and would prohibit their vehicles from operating outside those parameters. Blumenthal envisions this working much like the way airplane manufacturers certify that a plane meets FAA standards, telling the committee, “Right now we have the Wild West. I want to see some rules of the road so that cars stay within their lanes, so to speak.”
The path forward for autonomous vehicle regulation remains uncertain, but Wednesday’s hearing made clear that the days of self-regulation are numbered. As these vehicles become more common on American streets, the tension between innovation and safety, between letting the technology develop and protecting the public, will only intensify. The coming months will reveal whether Congress can craft legislation that allows this potentially life-saving technology to flourish while ensuring adequate safeguards for the pedestrians, cyclists, and other drivers who share the road with these computer-controlled vehicles.