July 17, 2041.1 The two sat in a marked cruiser at the base of a hill. Nineteen years ago, Baley had spent an eternity in that exact spot, radar gun in hand. “Police work, like war, consists of long periods of boredom punctuated by short bursts of excitement,” he explained. “But it used to be you had more control. Traffic enforcement was never glamorous, but it was about more than traffic. It was a tool we used to make a difference. You’d see someone who didn’t belong. You could know in your gut they were up to no good, but the regs said you couldn’t do anything without ‘reasonable suspicion’ or ‘probable cause.’ So you’d wait, and you’d watch.” Noting Danielle’s youth, Baley added, “People were really bad drivers. If I needed a reason to pull someone over, all I had to do was wait.”2
Danielle nodded, understanding for the first time why Baley had volunteered for this assignment. Baley still drove a car with a steering wheel.3 His job required it. Moments ago he had taken manual control to maneuver the cruiser off-road. Autonomous cars had not taken his job, but they had changed it.4 Baley was hoping Danielle’s new tool would help restore a measure of his autonomy. He was looking for the opportunity to exercise professional judgment, to exercise discretion, to save lives.
Danielle remembered a time when her parents drove, but they shared the roads with autonomous taxis and long-haul trucks.5 By the time she was old enough to get her driver’s license, owning a manual car didn’t make financial sense. She didn’t need one to get around, and insurance for human drivers was astronomical.6 There was no question: the world had become a better place. Pre-automation, tens of thousands died every year in what people euphemistically called accidents. Distracted, tired, or otherwise impaired drivers caused those accidents.7 Today, annual traffic fatalities numbered in the hundreds.8
Eighteen years Danielle’s senior, Baley loved the open road. For him learning to drive had meant freedom. Professionally, the openings provided by human fallibility in the form of pretextual stops made his job possible. He’d see someone who clearly didn’t belong. He’d know they were a dealer, but his gut wasn’t enough to justify the stop. So he’d wait for their tires to touch the yellow line or a lane change without a signal. After that, it was easy. Either he’d talk drivers into a consensual search, or they’d do something that piqued his suspicion enough for him to invoke the magic words “reasonable suspicion.” Pretextual stops were like cheat codes for police procedure. As long as there was a violation to hang the stop on, the underlying motivation didn’t matter.
Twenty years ago the majority of Baley’s drug cases began their life as traffic stops. The property seized in such cases helped fill gaps in the department’s budget.9 This revenue and the window these infractions opened into criminal activity had saved lives. Baley was sure of it.10 He welcomed the prospect of never again conducting a field sobriety test, and he really did appreciate safer roads. But he had lost a powerful tool in the fight.
Much like encryption, driverless cars were allowing criminals to go dark.11 The fact that fewer criminals came to the attention of law enforcement didn’t mean there was a need for less law enforcement. On the contrary, it was a sign they needed new tools in their arsenal. Law enforcement had to adapt.12
As a safety measure, every autonomous vehicle was required to file a “flight plan” with local municipalities. This allowed for efficient traffic routing. When paired with passenger information, these “flight plans” removed the anonymity that would have otherwise made a shared autonomous vehicle the perfect dead drop, a space where one passenger could hide contraband only to have it retrieved by another, the two never meeting.13 They also allowed law enforcement to identify and track down wanted individuals with ease.14 The problem was that even this wasn’t enough to fill the hole left by the elimination of pretextual stops. That is where Danielle came in.
Danielle handed a small device to Baley. “Hopefully, this will help.” It resembled an old-time radar gun, only smaller. Danielle had gone over the targeting system with Baley earlier. This was the obligatory field test before full-fledged beta testing.
“That guy,” Baley exclaimed, pointing the device at an approaching car. “I’ve got a feeling.” He pulled the trigger. A number immediately appeared on the screen: 98%. Baley smiled and tapped a small button adjacent to the trigger. The cruiser’s lights and sirens engaged, and the car Baley had targeted pulled over to the side of the road.
The device was more policy innovation than tech. The Commonwealth had created a new felony modeled on the Computer Fraud and Abuse Act, an obscure 20th-century federal law.15 It criminalized unauthorized use of any computer, an admittedly vague framing that in its federal incarnation had been interpreted to include activities as far-reaching as violating a website’s terms of service.16 The device scanned a passenger’s online personas for signs of criminal activity, and when it couldn’t find a smoking gun (such as pictures of contraband or discussion of criminal activity), the CFAA gave Baley additional bites at the apple.17 Lying about your height in a dating profile was often a violation of a site’s terms. This and a constellation of similar infractions would become the 21st-century equivalent of touching the yellow lines.18
The number displayed after targeting was a measure of how confident the officer could be that the target had committed a crime, and the department’s counsel had determined that anything above 90% was sufficient for a stop.19 Danielle was the technician tasked with this first field test.
The cruiser stopped behind the car, and Baley exited to speak with the car’s occupant. At first, Danielle watched the interaction. Then, out of curiosity, she aimed the device at a random car. The number came back: 96%.
She tried another, 98%, and another, 99%, and another, 96%, and another, 99%…
Welcome to the footnotes. I’m trying an experiment with this post—a heavily footnoted piece of flash fiction. And, yes, I’ve seen the recent New York Magazine article imagining a potential city-wide hack of New York City in all its fiction-annotation splendor. For the record, I pitched this story well before that was published. The real question is, “Have you listened to Rose Eveleth’s Flash Forward podcast?” My humble suggestion is that you read the story without consulting the footnotes. Then, if you want, scan the footnotes as a kind of author’s commentary. ↩
The use of pretextual traffic stops has been upheld by the Supreme Court on multiple occasions, and the practice is common enough that officers openly talk about its use. See, e.g., Bridging the Legal Gap between the Traffic Stop and Criminal Investigation (in which the author cites Devenpeck v. Alford for the proposition that “[A]n arresting officer’s state of mind (except for the facts that he knows) is irrelevant to the existence of probable cause …. That is to say, his subjective reason for making the arrest need not be the criminal offense as to which the known facts provide probable cause ….”). Translation: as long as you can find probable cause for an infraction, it doesn’t matter why you chose to stop someone. ↩
This is perhaps the most overlooked aspect of the driverless cars discussion. We have structured our lives around the automobile, making it part of a vast interconnected system. The consequences of change are sure to be numerous and far-reaching, and they will not stop with drivers. Many have been discussed before. For example, who will be liable in an accident? When faced with an unavoidable collision and forced to choose who should die, how does a car choose? What happens to organ donation given that accidents currently provide nearly 20% of viable organs? That last one is pretty easy. You’re still saving lives. Then, of course, there’s the loss of revenue that follows from the drop in fines associated with traffic and parking violations. But where does it end? ↩
I got the idea for this story one day as I sat in court and realized that more than half of the cases on that day involved motor vehicles. I’m not talking just about the obvious stuff like traffic violations, DWIs, or personal injury. I’m talking about the vast number of criminal cases that initiate with a traffic stop. Of course, the proportions will differ across jurisdictions, esp. urban v. rural, but there’s no denying cars are a major part of our lives and a convenient starting point for criminal investigation. Just imagine if all of those cases never existed. It might sound odd, but the technology poised to most disrupt the legal profession, including criminal practice, might be the driverless car. ↩
The more quickly people adopt driverless cars, the more dramatic the consequences. So what could actually drive rapid adoption, assuming that there is no legal mandate to go autonomous? Money. If driverless cars really are safer than human drivers, it stands to reason that human drivers will assume the costs associated with their behavior. To be clear, insurance rates for human drivers probably aren’t going up so much as they will go down for users of cars with more safety features (like self-driving). In fact, if driverless cars work as promised, rates will probably go down for everyone. Also, it seems likely that damages in accidents caused by driverless cars will fall under some form of product liability. ↩
Before it was announced that the National Highway Traffic Safety Administration was investigating a fatality involving Tesla’s autopilot feature, this footnote read simply, “Cars don’t kill people. People (and bad luck) kill people.” The ambivalence embodied in that statement is finding voice as many argue over where to place blame. One can guess that the absence of Lidar range finders (those spinning bits on the top of Google’s driverless cars) contributed to a giant blind spot. But it seems that the big discussion is one of expectations. What is the point of a self-driving feature that requires a car occupant to be just as alert as when driving themselves? See, e.g., FN 3 and Is Autopilot a Bad Idea? Why Ford, Google, Volvo and others think Tesla is wrong about automation. It is clear that the collision was tragic, but it is also clear that self-driving cars are coming. ↩
Some estimate that Self-Driving Cars Could Save 300,000 Lives Per Decade in America. ↩
Laugh or cry: if there’s a preponderance of evidence to suggest that property was or could be used in a crime, chances are law enforcement can take it, sell it, and spend the proceeds as they see fit. ↩
The “going dark” narrative seems largely predicated on the assumption that warrantless spaces are something new, but such an argument forgets that for the majority of human history most conversations between people took place in warrantless spaces, due primarily to the ephemeral nature of the spoken word. ↩
This becomes particularly problematic when looked at from the perspective of the war on drugs. Drug crimes, unlike the property and violent crimes that tend to follow in their wake, are often by their nature private. There’s rarely a vocally aggrieved party saying “look over here, a crime is being committed.” Consequently, law enforcement must find ways into private spaces or accept their inability to prosecute all offenders. So we should think such enforcement through very carefully. Consider the fact that all racial groups use illegal drugs at similar rates. See U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics, Health, United States, 2015: With Special Feature on Racial and Ethnic Health Disparities, Table 50, p. 192. Now square that with this observation from Michelle Alexander’s excellent book The New Jim Crow:
From the outset, the drug war could have been waged primarily in overwhelmingly white suburbs or on college campuses. SWAT teams could have rappelled from helicopters in gated suburban communities and raided the homes of high school lacrosse players known for hosting coke and ecstasy parties after their games. Suburban homemakers could have been placed under surveillance and subjected to undercover operations designed to catch them violating laws regulating the use and sale of prescription ‘uppers.’ All of this could have happened as a matter of routine in white communities, but it did not. Instead, when police go looking for drugs, they look in the ‘hood. ↩
Dead drops are an age-old espionage tool, and you can see a few creative examples over at the Crypto Museum. Today, there are already some hoping to blend in among a sea of Uber users. Currently, however, drivers are caught in the middle. What will happen when they are removed from the equation? ↩
The Electronic Frontier Foundation has a collection of critical writings on the CFAA. For examples of prosecutors getting “creative” with the CFAA’s prohibition against accessing computers “without authority,” see United States v. Drew, United States v. David Nosal, and, perhaps most famously, the prosecution of Aaron Swartz. ↩
Just last month, U.S. Customs and Border Protection suggested that social media profiles could “provide DHS greater clarity and visibility to possible nefarious activity and connections by providing an additional tool set which analysts and investigators may use.” Not to mention, the NSA is probably already reading everything. ↩
The Computer Fraud and Abuse Act is a real law, and it is wicked scary. What happens when an overzealous prosecutor exacts revenge on a bad date they met online by getting them arrested for lying about their height or age? Of course, prosecutors tell us they would never use the law that way, but when the only thing standing between most of the populace and prosecution is the goodwill of prosecutors, the rule of law takes on a whole new meaning. Also, we’re not talking about a civil infraction; we’re talking about a felony. ↩
The criminal justice system is already using secret (proprietary) algorithms to make similar determinations. ProPublica looked at a statistical risk model used in some courts to help inform questions of bail and sentencing, and their findings aren’t pretty. Algorithms aren’t bias-free. They are the products of human design and as such come burdened with their authors’ assumptions and blind spots. Worse yet, even when they work as planned, that’s often not good enough. Consider, for example, the false positive paradox. Basically, when the behavior you’re screening for is rare, even a highly accurate test will flag far more innocent people than guilty ones. Normally, that’s a problem, but what happens to the incentives when those false positives can be leveraged into something “useful”? ↩
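The false positive paradox above comes straight from Bayes’ rule, and a quick back-of-the-envelope calculation makes it concrete. The numbers below are purely hypothetical (not drawn from the story or any real detector): a screening tool that catches 99% of actual offenders and wrongly flags only 5% of innocent people, applied to a population where just 1% of those scanned have committed the crime in question.

```python
# Hypothetical detector: high accuracy, rare underlying behavior.
base_rate = 0.01        # P(offender) -- 1% of people scanned are offenders
sensitivity = 0.99      # P(flagged | offender)
false_positive = 0.05   # P(flagged | innocent)

# Bayes' rule: P(offender | flagged)
p_flagged = sensitivity * base_rate + false_positive * (1 - base_rate)
p_offender_given_flag = sensitivity * base_rate / p_flagged

print(f"{p_offender_given_flag:.1%}")  # prints "16.7%"
```

Even with a detector that sounds “99% accurate,” roughly five of every six people flagged are innocent. That’s the arithmetic behind the story’s ending, where pointing the device at random cars returns 96%, 98%, 99%.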