Are self-driving cars already safer than human drivers?
    I learned a lot by reading dozens of Waymo and Cruise crash reports.

    August was an eventful month for driverless taxis in San Francisco. On August 10, the California Public Utilities Commission voted to allow Google’s Waymo and GM’s Cruise to begin charging customers for driverless taxi rides across the city. A week later, Cruise vehicles were involved in two serious crashes within hours of one another. The next day, the California Department of Motor Vehicles demanded that Cruise cut its driverless taxi fleet in half while these crashes were investigated.

     

    A few days later, New York Times reporter Cade Metz appeared on the Times’s flagship podcast, The Daily, to discuss these developments and the state of the self-driving industry.

     

    Metz argued that in recent weeks, it has become “more and more clear to the people riding the cars, and to other citizens in the city, that they are flawed, that they do make mistakes, that they can gum up traffic, that they can cause accidents.”

     

    Of course self-driving cars are flawed—all technologies are. The important question is whether self-driving cars are safer than human-driven cars. And here Metz proclaimed ignorance.

     

    “We don't know yet whether it's safer than a human driver,” he said.

     

    But we actually do know a fair amount about the safety of driverless taxis. Waymo and Cruise have driven a combined total of 8 million driverless miles (a Waymo spokeswoman told me the company has completed more than 4 million driverless miles, and Cruise has said the same). That includes more than 4 million in San Francisco since the start of 2023. And because California law requires self-driving companies to report every significant crash, we know a lot about how they’ve performed.

     

    For this story, I read through every crash report Waymo and Cruise filed in California this year, as well as reports each company filed about the performance of their driverless vehicles (with no safety drivers) prior to 2023. In total, the two companies reported 102 crashes involving driverless vehicles. That may sound like a lot, but they happened over roughly 6 million miles of driving. That works out to one crash for every 60,000 miles, which is about five years of driving for a typical human motorist.
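That arithmetic is easy to check. A quick sketch, assuming (this figure is not from the crash reports) that a typical US driver logs roughly 13,500 miles per year:

```python
# Back-of-envelope check of the crash rate described above.
total_crashes = 102
total_miles = 6_000_000  # roughly, per the California reports

miles_per_crash = total_miles / total_crashes
print(f"One crash per {miles_per_crash:,.0f} miles")  # ~59,000

# Federal highway data puts the average US driver at roughly
# 13,500 miles per year (an assumption, not a figure from the reports).
avg_annual_miles = 13_500
years_per_crash = miles_per_crash / avg_annual_miles
print(f"About {years_per_crash:.1f} years of typical driving")
```

The result is one crash per roughly 59,000 miles, or a bit over four years of typical driving, which rounds to the "about five years" figure above.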

     

    These were overwhelmingly low-speed collisions that did not pose a serious safety risk. A large majority appeared to be the fault of the other driver. This was particularly true for Waymo, whose biggest driving errors included side-swiping an abandoned shopping cart and clipping a parked car’s bumper while pulling over to the curb.

     

Cruise’s record is not as impressive as Waymo’s, but there’s still reason to think its technology is on par with—and perhaps better than—a human driver.

     

Human beings drive close to 100 million miles between fatal crashes, so it will take hundreds of millions of driverless miles to answer this question definitively. But the evidence for better-than-human performance is starting to pile up, especially for Waymo. It’s important for policymakers to allow this experiment to continue because, at scale, safer-than-human driving technology would save a lot of lives.

    Waymo’s impressive safety record


     

    Back in February, Waymo released a report celebrating its first million miles of fully driverless operation, which mostly occurred in the suburbs of Phoenix. Waymo’s autonomous vehicles (AVs) experienced 20 crashes during those first million miles. Here are some representative examples:

     

    • “A passenger car backed out of a parking space and made contact with the Waymo AV.”
    • “An SUV backed out of a driveway and made contact with the Waymo AV.”
    • “The vehicle that had been previously stopped behind the Waymo proceeded forward, making contact with the rear bumper of the Waymo AV.”
    • “A passenger car that had been stopped behind the Waymo AV passed the Waymo AV on the left. The passenger car’s rear passenger side door made contact with the driver side rear of the Waymo AV.”

     

In short, these were mostly low-speed collisions initiated by the other driver.

     

    There were only two cases where a Waymo ran into another vehicle. In one, a motorcyclist in the next lane lost control and fell off their bike. The driverless Waymo slammed on its brakes but couldn’t avoid hitting the now-riderless motorcycle at 8 miles per hour. In the other case, another vehicle cut in front of the Waymo, and the AV braked hard but couldn’t avoid a collision.

     

    There were two crashes that Waymo thought were serious enough for inclusion in a federal crash database. The more serious of these was when another driver rear-ended a Waymo while looking at their phone.

     

    One of Waymo’s biggest challenges during its first million miles was avoiding inanimate objects. Waymo vehicles bumped into a construction pylon, a parking lot barrier arm, and a shopping cart—all at speeds of between 8 and 13 miles per hour. Clearly, Waymo needs to do a better job of recognizing irregularly shaped objects like these. But when it comes to interacting with other vehicles, Waymo had a basically spotless driving record over those first million miles.

     

    Now let’s look at how Waymo has done in San Francisco since the start of 2023. Waymo is still struggling to avoid inanimate objects. Its vehicles collided with cardboard road debris and a chain connecting a sign to a temporary pole. A Waymo also drove into a pothole that was big enough to puncture a tire. And there were two incidents where Waymos scraped parked vehicles. That’s a total of five crashes where the Waymo vehicle was clearly at fault.

     

    The rest of Waymo’s driverless crashes in San Francisco during 2023 do not seem to have been Waymo’s fault. I count 11 low-speed crashes where another vehicle rear-ended a Waymo, backed into a stopped Waymo, or scraped a stopped Waymo while trying to squeeze by. There was also an incident where a Waymo got sideswiped by another vehicle changing lanes.

     

    Waymo had two more serious crashes in San Francisco this year:

     

    • A driverless Waymo was trying to turn left, but another car “proceeded into the intersection from the left and made contact with the left side of the Waymo AV.”
    • An SUV rear-ended a Waymo hard enough that the passenger in the Waymo reported injuries.

     

    I should also mention the Waymo crash that killed a dog back in May. I didn’t mention this earlier because I’ve been focusing on driverless vehicles and the Waymo that hit the dog had a safety driver behind the wheel. But this crash is worth mentioning since it’s one of the most serious ones Waymo has experienced.

     

    In an emailed statement, Waymo said that it “reviewed the event from many different perspectives” and concluded there was no way either Waymo’s software or a human driver could have avoided hitting the dog. Waymo hasn’t provided the public with enough information to verify this claim, but I hope California regulators check Waymo’s work if they haven’t done so already.

    We don’t have great data on the safety of human drivers

    To sum up, Waymo’s driverless fleet has experienced:

     

    • 17 low-speed collisions where another vehicle hit a stationary Waymo
    • 9 collisions where another vehicle rear-ended a Waymo
    • 2 collisions where a Waymo got sideswiped by another vehicle
    • 2 collisions where a Waymo got cut off and wasn’t able to brake quickly enough
    • 2 low-speed collisions with stationary vehicles
    • 7 low-speed collisions with inanimate objects like shopping carts and potholes

     

    There are two things to notice about this list. First, other vehicles ran into Waymos 28 times, compared to just four times a Waymo ran into another vehicle (and Waymo says its vehicle got cut off in two of these cases). Second, Waymo was only involved in three or four “serious” crashes, and none of them appear to have been Waymo’s fault.

     

This is impressive because these statistics reflect more than 2 million miles of driving (a Waymo spokeswoman told me the company has logged more than 1 million miles in San Francisco since the start of 2023). The National Highway Traffic Safety Administration estimates that there are around 6 million car crashes reported to the police each year. Americans drive around 3 trillion miles per year, so, roughly speaking, a police-reported crash occurs once every 500,000 miles of driving.

     

Most crashes involve two vehicles, which means the average vehicle is involved in a police-reported crash roughly once every 250,000 miles. So if Waymo’s vehicles drove as well as a typical human driver, you’d expect them to be involved in around eight serious crashes over 2 million miles of driving.
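The expected-crash math in the last two paragraphs, made explicit (these are the rough national figures cited above, not exact statistics):

```python
# Rough national figures cited above (approximations).
police_reported_crashes_per_year = 6_000_000
us_vehicle_miles_per_year = 3_000_000_000_000  # ~3 trillion

miles_per_crash = us_vehicle_miles_per_year / police_reported_crashes_per_year
print(f"One police-reported crash per {miles_per_crash:,.0f} road miles")  # 500,000

# Most crashes involve two vehicles, so each vehicle is *involved* in a
# crash about twice as often as crashes occur per mile of road travel.
vehicles_per_crash = 2
miles_per_involvement = miles_per_crash / vehicles_per_crash  # 250,000

waymo_driverless_miles = 2_000_000
expected_involvements = waymo_driverless_miles / miles_per_involvement
print(f"Expected crash involvements at human rates: {expected_involvements:.0f}")  # 8
```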

     

    It’s important to emphasize that there’s a lot of uncertainty about these figures.

     

    “We know very little about the safety of our roads,” the legal scholar Bryant Walker Smith told me. “If we're looking at just crashes, given how little information is carefully collected and studied, we don't have any sense of the circumstances of these low-level crashes.”

     

    Not all crashes—even serious ones—are reported to the police.

     

    Moreover, Smith said, “these companies are not driving a representative sample of miles.”

     

    Both Waymo and Cruise have their driverless cars avoid freeways, which tend to have fewer crashes per mile of driving. Both companies are active in San Francisco, which has more chaotic streets than most US cities.

     

    On the other hand, a small minority of drivers—including teenagers, elderly people, and drunk drivers—account for a disproportionate share of crashes. An alert and experienced driver gets into crashes at a rate well below the national average. So if we want AVs to drive as well as an alert and experienced driver, we'll want to set the bar higher than the national average.

     

    With all that said, it seems that Waymo cars get into serious crashes at a significantly lower rate than human-driven cars. I’ll have more to say about this after we look at Cruise’s safety record.

    Cruise has room for improvement


     

    Cruise released a report back in April about its first million driverless miles. The company reported 36 crashes, compared to 20 for Waymo’s first million driverless miles. I wouldn’t put too much stock into that difference, since Cruise was operating mainly in San Francisco, a more chaotic driving environment than the Phoenix suburbs where Waymo started out.

     

    So far in 2023, Cruise has filed an additional 27 crash reports related to its fully driverless cars. What follows is a summary of all 63 crashes Cruise reported through August 25. I’ll also count a widely publicized August 17 crash with a fire truck even though there’s still no report on this crash on the website of the California Department of Motor Vehicles.

     

    Like Waymo, Cruise has had trouble with its vehicles hitting inanimate objects. Two Cruise vehicles ran into downed power cables. Cruise vehicles also ran into a scooter (without someone on it), a tow dolly on the back of a double-parked truck, a “motorized articulating boom lift,” and a pothole. The pothole punctured a tire, causing the Cruise AV to swerve into a parked car.

     

    Cruise has also experienced a large number of low-speed crashes where another vehicle (including a scooter in one case and a skateboarder in another) either rear-ended a Cruise AV, backed into one at low speeds, or scraped the side of a Cruise while trying to pass it.

     

    There were also a few rare situations:

     

• A Cruise vehicle was “stuck in a sideshow event and stationary with vehicles driving around it on either side.” (A sideshow is an illegal late-night gathering where young people perform donuts and other stunts in an intersection.) One of the other cars ran into the Cruise AV.
• An Infiniti Q50 was “performing donuts” in an intersection before crashing into a Cruise vehicle.
    • A driver drove the wrong way down a one-way street while staring at a phone. The car hit a stopped Cruise vehicle facing the right way.

     

    There were about a dozen side-swipe events where another vehicle either ran into the Cruise AV from the side during a lane change or tried to make a turn from a middle lane, crossing the path of the Cruise AV. Most of these crashes occurred during Cruise’s first million miles, so Cruise may be getting better at handling these situations.

     

    It’s important to note that Cruise has logged more than four million miles in San Francisco, so Cruise’s crash reports represent roughly twice as many miles as Waymo’s. Once you adjust for that, Waymo and Cruise seem to have been involved in low-stakes crashes at similar rates.

     

    For example, Cruise vehicles got rear-ended 17 times over about 4 million miles, while Waymo vehicles got rear-ended seven times over roughly 2 million miles. That makes sense given that Cruise drove twice as many miles and that Waymo logged almost half of its miles in the tame Phoenix suburbs.
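Sketching that per-mile comparison, using the approximate mileage totals given above:

```python
# Approximate rear-end collision rates, per the figures in this article.
cruise_rear_ended, cruise_miles = 17, 4_000_000
waymo_rear_ended, waymo_miles = 7, 2_000_000

cruise_miles_per_event = cruise_miles / cruise_rear_ended
waymo_miles_per_event = waymo_miles / waymo_rear_ended

print(f"Cruise: rear-ended once per {cruise_miles_per_event:,.0f} miles")  # ~235,000
print(f"Waymo:  rear-ended once per {waymo_miles_per_event:,.0f} miles")  # ~286,000
```

The two rates differ by only about 20 percent per mile, consistent with the "similar rates" claim, especially given the companies' different mixes of driving environments.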

     

    But even taking those differences into account, there are a couple areas where Cruise’s performance does not seem to be on par with Waymo.

     

    One is significant crashes where Cruise was clearly at fault. I saw three examples of this:

     

    • A Cruise AV mistakenly thought the vehicle ahead of it was starting to turn left. The Cruise ran into the other vehicle when it turned right instead.
    • A Cruise AV changed lanes when there wasn’t enough space to do so, cutting off another vehicle and leading to a crash.
    • A Cruise AV ran into the back of a city bus. Cruise subsequently determined that its software got confused because it was an articulated bus (the kind with an accordion joint in the middle) and Cruise’s software couldn’t handle two parts of a vehicle moving in slightly different directions.

     

    Each of these mistakes strikes me as more serious than any of Waymo’s mistakes (recall that all of Waymo’s clearly at-fault crashes were low-speed collisions with inanimate objects or parked vehicles).

    Cruise might have a problem with intersections


     

Cruise’s other trouble spot is intersections. Cruise says two bicyclists have run stop signs and crashed into Cruise vehicles. Five vehicles have also run red lights and crashed into Cruise vehicles, most recently the fire truck that collided with a Cruise AV on August 17.

A passenger in that Cruise AV was taken to the hospital; Cruise described their injuries as “non-severe.”

     

    Perhaps all of these crashes (with the possible exception of the fire truck) were the fault of the other drivers (and cyclists). Still, it’s interesting that over two million miles of driverless operation, no Waymo AVs got hit by cars running red lights or bicycles running stop signs.

     

    Again, this may be partly because Cruise has driven more miles—and especially more miles in San Francisco. Also, Cruise has largely operated at night, when there might be more impaired drivers on the road.

     

    But I think there might be something else going on here.

     

    A couple of years ago, Waymo published research exploring the potential for self-driving cars to prevent crashes by anticipating the reckless behavior of other drivers. Waymo researchers obtained detailed records about fatal crashes that occurred in and around the Phoenix suburb of Chandler (where Waymo launched its first driverless taxi service). Waymo then hired an independent engineering firm to create detailed digital reconstructions of these crashes. Then the company loaded this data into its simulator to explore how Waymo’s self-driving software would have reacted in the seconds preceding each crash.

     

    Waymo found its software could prevent every crash if it took the role of the “initiator,” the vehicle whose erratic behavior set the crash in motion. More surprisingly, Waymo also found its software could prevent 82 percent of crashes playing the role of the other driver.

     

    The most common setting for fatal crashes in this data set was intersections—including a number of vehicles running red lights. Waymo found that when its software played the role of the “other driver,” it was able to avoid crashes in 81 percent of scenarios at intersections.

     

    In the wake of the Cruise collision with a fire truck on August 17, Waymo told industry analyst Brad Templeton that its vehicles would have handled the situation better than Cruise did:

     

    When we hear sirens, our vehicle will slow and then depending on how the situation develops, we will either pull over or stop ahead of intersections where there might be crossing emergency vehicles, even if we have a green light. The system is designed to remain cautious and not enter an intersection if it is still reasoning whether the emergency vehicle is approaching the intersection based on what it is sensing.

     

    I think technology like this may explain why Waymo has been successful at avoiding major crashes at intersections. Not only do Waymo’s vehicles follow the letter of the law (like stopping at red lights), they may also try to anticipate and avoid dangerous situations (like vehicles running red lights).

     

    Cruise vehicles do not seem especially cautious about intersections. For example, a Reddit user posted a video from August 22 showing a Cruise vehicle crossing an intersection several seconds after the opposing traffic got a green light. Cruise says its vehicle was already in the intersection when its light turned red so the vehicle didn’t break the law. Maybe that’s technically true—I’m not an expert on California traffic law. But I’m pretty sure it would have been safer for the car to stay where it was and wait for the next green light.

    Cruise’s technology is pretty good, but Waymo’s is better


    Waymo tested its technology for more than 20 million miles before launching a driverless service.

     

    The bottom line is that I’m convinced that Waymo vehicles drive more safely than Cruise vehicles. This isn’t surprising; Waymo started its life as the Google self-driving project several years before Cruise was founded. Back in 2020, Waymo announced it had completed 20 million miles of on-road testing (almost all of them with safety drivers). The same year, Cruise reached 2 million miles.

     

    In short, Waymo has invested more time and resources into its technology. It would be surprising if all that extra work didn’t yield superior performance. With that said, I don’t want to be too negative about Cruise. Because while the company’s technology doesn’t seem to be as good as Waymo’s, it’s still pretty good.

     

    Earlier, I discussed why it’s so difficult to develop a good benchmark for human driving performance. We only know about crashes that get reported to the police or other authorities, giving us a patchy understanding of how many crashes really occur.

     

    Cruise tried to address this problem by hiring a team of prominent academic researchers to study the driving behavior of ride-hail drivers in San Francisco. The researchers examined 5.6 million miles of data and concluded that collisions involving San Francisco ride-hail drivers occur about once every 20,000 miles. That includes a lot of minor crashes that wouldn’t be reported to police.

     

    Based on this data, Cruise claimed that over its first million miles, its vehicles crashed 56 percent less often per mile than a typical human driver. Moreover, Cruise estimated that its cars were 73 percent less likely to be in a crash with a risk of a serious injury and 93 percent less likely to be the “primary contributor” to a crash.
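A back-of-the-envelope version of that comparison, using only the raw figures quoted in this article. Note that the commissioned study also weighted crash severity and fault, so its headline numbers come out larger than this naive per-mile calculation:

```python
# Naive per-mile comparison of Cruise's first million driverless miles
# against the ride-hail benchmark described above. Cruder than the
# commissioned study, which also accounted for severity and fault.
human_miles_per_crash = 20_000  # ride-hail benchmark from the study
cruise_crashes, cruise_miles = 36, 1_000_000

expected_at_human_rate = cruise_miles / human_miles_per_crash  # 50
reduction = 1 - cruise_crashes / expected_at_human_rate

print(f"Expected crashes at human rates: {expected_at_human_rate:.0f}")
print(f"Naive per-mile reduction: {reduction:.0%}")  # 28%
```

Even this crude calculation has Cruise crashing noticeably less often than the ride-hail benchmark, though well short of the 56 percent figure, which depends on the study’s more detailed methodology.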

     

    One should take these conclusions with a grain of salt given that the research was commissioned by Cruise. But they don’t seem crazy. Cruise vehicles really do seem to crash into other vehicles much less often than vice versa. So I wouldn’t be surprised if Cruise vehicles already drive more safely than the average human driver.

    The need for real-world testing

    The big question for policymakers is whether to allow Waymo and Cruise to continue and even expand their services. This should be an easy call with respect to Waymo, which seems to be safer than a human driver already. The faster Waymo scales up, the more crashes can be prevented.

     

    I think Cruise’s tech is probably safer than a human driver too, but it’s a closer call. I could imagine changing my mind in the coming months as more data comes in.

     

    Still, it’s important to remember that access to public roads is essential for testing and improving self-driving technology. This is not a technology Waymo or Cruise can meaningfully test “in the lab.” The companies need exposure to the full complexity of real public streets in order to make progress. And given that both companies are likely to eventually develop products that are much safer than human drivers, slowing down the development of the technology could easily cost more lives than it saves.

     

    So while the DMV’s decision to cut the size of Cruise’s fleet in the wake of the August 17 crashes was understandable, I hope the decision is short-lived. Ultimately the only way for Cruise to improve its technology is by testing it on public roads. And we’ll all benefit from the widespread availability of self-driving cars that are dramatically safer than human drivers.

     

One easy way for policymakers to improve safety—or at least accountability—would be to require self-driving companies to be even more transparent about their safety records. This story relied heavily on California’s excellent website that publishes all of the Waymo and Cruise crash reports. I’d love for the California Department of Motor Vehicles to go a step further and require self-driving companies to submit video footage of the seconds before and after each crash. That way, members of the public could evaluate whether companies’ descriptions of crashes are accurate.

     

    It would also be very helpful for regulators in other states—or perhaps federal officials—to require the same kind of crash reporting that they have in California. For example, Waymo is running a substantial driverless taxi service in Phoenix, but we know very little about how well Waymo's AVs have performed there in recent months. More transparency here and in other states could help to build public trust.

     

    Tim Lee was on staff at Ars from 2017 to 2021. He recently launched a new newsletter, Understanding AI. It explores how AI works and how it's changing our world. You can subscribe to his newsletter here.

     
