crumple zones
In 2018, Rafaela Vasquez was working as a "safety driver" for Uber in Arizona. Employed to sit in a "self-driving car", and seize control if something went wrong, she was behind the wheel when the car, a modified Volvo, hit and killed a pedestrian.
The details, as always, are messy. The car had been altered to disable Volvo's own automatic braking function, so as to test Uber's machine learning system. The pedestrian was crossing the road outside of a designated spot. Arizona had passed wildly permissive laws allowing the testing of self-driving vehicles with minimal oversight, in an effort to lure valuable engineering jobs from companies like Google and Uber. And Vasquez, at the time of the collision, was watching TV.
On Wednesday, she was charged with negligent homicide. Her former employer, Uber, was cleared of all criminal liability in 2019, despite a finding from the National Transportation Safety Board that criticised Uber for "inadequate safety risk assessment procedures," and highlighted the problem of "automation complacency" amongst the safety drivers.
That's a nice phrase for a problem which was already well known at the time of the crash: if you ask someone to monitor a system which almost never needs their intervention, they will stop monitoring it, even if their job, or their life, depends on catching the moment it fails.
This is trivial to see outside the world of technology. If your toaster breaks, it could burn down your house, killing you and your family. It takes minutes to toast bread. And yet I would be prepared to bet that you have never once sat staring at the toaster as it toasts, just in case a short circuit sparks a fire.
It's also a lesson well learned in the world of aviation, where autopilots have bred the same complacency amongst pilots, sometimes with deadly effect. There, a number of safeguards have been adopted to limit the problem: commercial pilots fly with co-pilots, their shifts are strictly limited, and, of course, they operate in a sphere where, for all the risks inherent in flying, there is usually a long gap between awareness of a problem and the point of no return.
People do not tend to step out from behind a tree into the path of an oncoming plane.
"Automation complacency" was well-known amongst self-driving car manufacturers by 2018 as well. In 2016, I went to visit Waymo – then still known as the Google X Self Driving Car Project – and discussed that problem with the company's engineers.
In the earliest days of the project, Google had offered its prototypes to employees to test and train. The cars were only able to do the bare minimum – keep in lane on the Bay Area's vast highways, and brake if something unexpected happened – and so employees needed to drive them onto and off the freeway, and were expected to stay alert and aware throughout the journey.
These were not low-paid freelancers, but Google staff members, and they took the rides in the full knowledge that every little movement would be recorded, by both the cameras pointing at them and the self-driving system itself. And yet they goofed off anyway.
Waymo now says that experience was crucial in shaping how it approached self-driving cars. Rather than aiming for so-called "level 3" autonomy, where the car can mostly drive itself but a human needs to take over in emergencies, the company decided to jump straight to levels 4 and 5 – where a human driver is never needed. Its experience was that human drivers simply weren't capable of serving as a back-up to a nearly-but-not-entirely infallible robot.
Uber took a different view.
The company's long-term goal officially remains level 5 autonomy, because the story it has sold its shareholders is that achieving that milestone will allow it to create a fully robotic taxi service, eliminating human drivers once and for all, and improving its profit margins accordingly.
In practice, however, it was, as of 2018, far more aggressively pushing the intermediate approach – autonomous driving with a human safety driver as back-up – on the streets of Arizona. The company's desperate push for self-driving competency generated headlines on several fronts, from a lawsuit over stolen Waymo trade secrets to controversy over its effective poaching of an entire university's robotics department.
The Arizona crash was the culmination of that push. And yet it is Vasquez, not Uber, who faces the penalty.
In 2019, Madeleine Clare Elish published a paper formally defining a term she had coined in 2016: the "moral crumple zone". A moral crumple zone is the human, employed in a situation where they have limited control over the behaviour of an automated or autonomous system, who nonetheless gets assigned responsibility for its (mis)behaviour.
"Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions," Elish writes.
We – humans – are not only bad at monitoring complex systems which mostly work. We are also bad at assigning blame when they don't. We look for the nearest human who could have prevented the problem, or the nearest human who was doing something they shouldn't have, and assign them the responsibility. Vasquez should not have been watching TV when performing her mind-numbingly rote and repetitive job. And so she faces the criminal charges.
Uber continues to publicly test its self-driving cars. In March, just before coronavirus shut everything down, the company launched a limited test in San Francisco, with two cars operating in daylight hours only. It is still relying on safety drivers.