A pedestrian killed by a self-driving Uber in Tempe, Ariz. shows that the legal implications of autonomous cars are as important as, if not more so than, the technology.
On Sunday night, a self-driving car operated by Uber struck and killed a pedestrian, 49-year-old Elaine Herzberg, on North Mill Avenue in Tempe, Arizona. It appears to be the first time an automobile driven by a computer has killed a human being by force of impact. The car was traveling at 38 miles per hour.
An initial investigation by Tempe police indicated that the pedestrian might have been at fault. According to that report, Herzberg appears to have come “from the shadows,” stepping off the median into the roadway and ending up in the path of the car while jaywalking across the street. The National Transportation Safety Board has also opened an investigation. At this point, it’s hard to know exactly what took place without some speculation.
Likewise, it’s difficult to evaluate what this accident means for the future of autonomous cars. Crashes, injuries, and fatalities were a certainty as driverless vehicles began moving from experiment to reality. In 2016, a Tesla operating in its “autopilot” mode in Florida crashed into a tractor-trailer that made a left turn in front of the vehicle, killing the Tesla’s driver. It was the first known fatality involving a self-driving vehicle—but at the time of the accident, the car had apparently been warning its driver to disengage the autopilot mode and take control of the vehicle.
Advocates of autonomy tend to cite overall improvements to road safety in a future of self-driving cars. Ninety-four percent of car crashes are caused by driver error, and both fully and partially autonomous cars could improve that number substantially—particularly by reducing injury and death from speeding and drunk driving. Even so, crashes, injuries, and fatalities will hardly disappear when and if self-driving cars are ubiquitous. Robocars will crash into one another occasionally and, as the incident in Tempe illustrates, they will collide with pedestrians and bicyclists, too. Overall, eventually, those figures will likely number far fewer than the 37,461 people who were killed in car crashes in America in 2016.
The problem is, that result won’t be accomplished all at once, but in spurts as autonomous technology rolls out. During that period, which could last decades, the social and legal status of robocar safety will rub up against existing standards, practices, and sentiments. A fatality like the one in Tempe this week seems different because it is different. Instead of a vehicle operator failing to see and respond to a pedestrian in the road, a machine operating the vehicle failed to interpret the signals its sensors received and process them in a way that averted the collision. It’s useful to understand, and even to question, the mechanical operation of these vehicles, but the Tempe fatality might show that their legal consequences are more significant than their technical ones.
Arizona Governor Doug Ducey has turned the state into a proving ground for autonomous cars. Ducey, a businessman who was the CEO of the franchised ice-cream shop Cold Stone Creamery before entering politics, signed an executive order in 2015 instructing state agencies to undertake “any necessary steps to support the testing and operation of self-driving cars on public roads within Arizona.” While safety gets a mention, the order cites economic development as its primary rationale. Since then, Uber, Waymo, Lyft, Intel, GM, and others have set up shop there, testing self-driving cars in real-world conditions—a necessity for eventually integrating them into cities.
The 2015 order outlines a pilot program, in which operators are required to “direct the vehicle’s movement if necessary.” On March 1, 2018, Ducey issued an updated order, which allowed fully autonomous operation on public roads without an operator, provided those vehicles meet a “minimal risk condition.” For the purposes of the order, that means that the vehicle must achieve a “reasonably safe state ... upon experiencing a failure” in the vehicle’s autonomous systems. The new order also requires fully autonomous vehicles to comply with registration and insurance requirements, and to meet any applicable federal laws. Furthermore, it requires the state Departments of Transportation and Public Safety, along with all other pertinent state agencies, to take steps to support fully autonomous vehicles. In this case, “fully autonomous” means a Level 4 or 5 system by the SAE standard—one that a human need not operate at all, but which can be taken over by a human driver if needed.
If Uber’s vehicles are considered to operate at SAE Level 4, the company would be required to file notice, within 60 days of the issuance of the new Arizona executive order, that the vehicle and driver meet certain conditions, including licensure, insurance, and the achievement of the “minimal risk condition” described, in order to be allowed on the road. But the order was issued March 1, so that 60-day window had not yet closed—making Uber’s compliance with it irrelevant for the March 18 pedestrian fatality. Given the preliminary police report that appears to exonerate Uber and its driver from culpability, combined with Arizona’s industry-friendly policies toward autonomous vehicles, the likelihood of any kind of state intervention into the incident seems low. Worse, Tempe police speculate that Herzberg was homeless, lowering the chance that her death will earn substantial scrutiny. Uber has ceased testing of its autonomous fleet in all cities, but soon enough, those tests will likely resume, just as they did after the company’s last accident, and the preparations for a driverless future will return to the desert.
Arizona residents might not be satisfied with that outcome. After all, they’re the ones who have to live in a robot-car test facility. Even if Uber’s self-driving apparatus did not “fail” under the definition of the order that permits its vehicles to operate on public roads, this feels like a different accident—and a different death—than the hundreds of others that occur every day.
When I reported on Uber’s Tempe autonomous test fleet last November, I observed how different the cars made the city feel. Standing on a median not unlike the one Herzberg allegedly stepped off into the path of Uber’s Volvo SUV, I felt the entire relationship between humans and automobiles shift—the very texture of urban life is altered when a person cannot look a driver in the eye to gauge their intentions, or when a two-ton machine is run by an array of sensors and computers whose decisions are foreign to human reasoning.
And that’s under normal operation, absent pedestrian death or its threat. “Will a self-driving car recognize the micro-drama of an unsupervised toddler, or a professor lost in his smartphone?” Ed Finn, a professor of arts and media at Arizona State University, wonders when I ask him about the Uber fatality. An autonomous-vehicle crash feels different, and maybe worse, than a human-caused one partly because of the tangled relationship between driving, liability, and human frailty. When people get into car crashes with one another, vehicular negligence is typically the cause. Determining which party is negligent, and therefore at fault, is central to the common understanding of automotive risk. Negligence means liability, and liability translates the human failing of a vehicle operator into financial compensation—or, in some cases, criminal consequence.
Overall, there’s recognition that self-driving cars implicate the manufacturer of the vehicle more than its driver or operator. That has different implications for a company like GM, which manufactures and sells cars, than Google, which has indicated that it doesn’t have plans to make cars, only the technology that runs them. The legal scholar Bryant Walker Smith has argued that autonomous vehicles represent a shift from vehicular negligence to product liability. The latter legal doctrine covers claims against companies who manufacture and sell a defective or dangerous product. On today’s roads, product liability claims arise in cases like the failure of Bridgestone/Firestone tires in the late 1990s, and the violent rupture of Takata airbags in the late aughts. These situations represent fairly traditional examples of product liability: A company designed, manufactured, or marketed a product that didn’t do what it promised, and harmed people as a result.
But these situations are different from a self-driving car. In Herzberg’s case, at least according to the initial police report, a defective sensor or computer doesn’t appear to have caused the car or its operator to lose control or otherwise cause the crash. Instead, the operator’s judgment and response were replaced by the machine’s. If it’s true that no driver, human or robot, could have prevented the Tempe crash and death, then it offers a particularly intriguing test of Smith’s theory. Normally, the driver and his or her insurance provider would be the ones facing litigation over the accident—that’s vehicular negligence. But the fact that an autonomous vehicle caused the outcome might be enough to shift the liability and compensation to product liability on the part of Uber.
The problem is, Smith’s argument is based on a future in which self-driving cars are sold or leased for hire, such that individuals who choose to drive or ride in them have initiated a change in vehicular usage that would entail the shift in legal responsibility and blame. But that’s not the case today, with Uber or anyone. Uber is running tests of its technology, a feat allowed by Arizona’s liberal regulatory policy on autonomous cars. It’s possible that, upon review, the Herzberg death might neither be construed as vehicular negligence, because a person both is and isn’t driving, nor as product liability, because there is no product being leased or sold.
Smith, for his part, holds that negligence still probably covers the kind of situation at issue in the Tempe collision this week, and similar ones that will come up in the future. The test, he says, is whether a natural or legal person had a duty that was violated by acting unreasonably, in a way that causes harm. “If the safety driver were negligent, then Uber could be vicariously negligent as an employer, but other decisions might be evaluated as well, including the decision to test or deploy the vehicle and the training provided to the operator,” Smith explains.
What about the state? Could Arizona be held accountable for allowing autonomous cars to roam public streets without sufficient oversight, including legal guidance for inevitable situations like pedestrian deaths? Smith doubts it. “In general, states are not liable for policy determinations. A state might be liable for not properly maintaining a road, but not for deciding whether or not to build the road.”
Still, since the law develops through precedents established by legal action, other interpretations of self-driving liability are possible. A different interpretation might compare operating autonomous test cars to taking dangerous or experimental equipment on city roads. There’s an argument to be made that a pedestrian death at the hands of an autonomous car, even one that would have been unavoidable, is no different from one caused by a human-driven car with a new, experimental combustion engine that malfunctions and blows up on a city road or interstate.
Meanwhile, the letter of the Arizona executive order seems to suggest that the human operator is on the hook for any traffic infractions while he or she is in the vehicle, even if it’s in fully autonomous mode. That means that an operator could, in theory, be charged with vehicular manslaughter—although the courts would inevitably have to adjudicate such a matter were the state to bring the charge. The whole situation is muddy and confused, and it might be impossible to understand it in the abstract, before legal precedent is set.
Furthermore, since the autonomous Ubers can and do, at times, pick up ordinary Uber passengers during their transit of Tempe’s streets, their autonomous vehicles might also be subject to the common carrier doctrine—a legal doctrine that holds common carriers, like buses and taxis, but also hoteliers, insurers, and others (including, more recently, internet-service providers), to a higher standard of care than ordinary operators. But there’s confusion about what a “higher standard of care” means—the Arizona Supreme Court has even held that common carriers in the state are only subject to “reasonable” care anyway, the same as any other agent. Worse, it’s not clear if ride-hail services even count as common carriers in Arizona or elsewhere.
Ducey’s executive order was written to encourage self-driving technology and manufacturing companies to move jobs and commerce to the state. As such, it sacrifices some of Arizona’s citizens’ rights to safety in the present in exchange for economic development, and the possibility of safer roads in the future, when and if autonomous cars become ubiquitous.
That said, an executive order doesn’t really do very much. The governor can direct the state and its agencies to do things, but statutory laws are made by the legislature—the regulation of self-driving vehicles in Arizona can’t really go beyond existing law anyway. Ducey is taking the implicit position that existing law is consistent with automated driving. Should the Herzberg collision, or another situation, result in litigation, then the interpretation of the law can proceed as usual. “Tragedies have a way of bringing legal issues to a head,” Smith concludes.
There are signs that new laws might eventually intervene, reducing liability for self-driving vehicle operators. New federal legislation under consideration could push complaints arising from autonomous-vehicle collisions or injuries into private arbitration. In addition to reducing citizens’ rights, that move could prevent courts from hearing some cases that might produce new legal precedents.
In the past, I’ve argued that autonomous cars could erode citizens’ rights to the public streets. Given sufficient economic incentive to pursue public-private partnerships between municipalities and technology companies, cities, counties, and states might choose to adopt industry-friendly regulatory policy in exchange for changes to the urban environment. Eventually, should autonomous cars become widespread, it might become more expedient just to close certain roads to pedestrians, bicyclists, and human drivers so that computer cars can operate at maximum efficiency. It would be a step too far to conclude that the fatality in Tempe rings the death knell for pedestrian and human-driver access to the roads. But it’s happened before: Jaywalking laws were essentially invented to transform streets into places for cars. Uber, Google, and other wealthy companies with big aspirations for autonomous driving might see this fatality as a sign that it’s time to get more serious about legal protection for their interests.
Ian Bogost is a contributing editor at The Atlantic. He is the Ivan Allen College Distinguished Chair in media studies and a professor of interactive computing at the Georgia Institute of Technology.