This past week, a spectacular (in many senses of the word) act of violence in Dallas, Texas, ended when the police, according to…the police, concluded the standoff with the alleged sniper, Micah Johnson, by detonating some kind of device delivered to the suspect by a robot (or what's called a "slamhound" in the fictional universe of William Gibson).
The "targeted killing" of a suspect on "U.S. soil," as opposed to extraterritorial declared or undeclared war zones, where this operation has clear precedents, has captivated the attention of scholars and the public. Partly because of the nature of policing close up while 'at a distance,' to borrow Laura Kurgan's terminology, the event raises many issues about the rules of engagement and the constitutional rights of a suspect, issues that the Dallas police obviously skirted and do not seem too willing to discuss in the aftermath.
There is also the fact that the robot the manufacturers sold to the police does not seem to have been designed for this particular use, which raises all kinds of issues regarding the entanglements between design and policing, something else I've been interested in exploring, before and after Dallas. Who are the designers of this particular technology, after all: the police themselves? And should designers even work for the police? (Something I've addressed elsewhere.) It is also interesting how the robot moves, with such simple ease, between the operability of decoding suspicion (as in removing a suspicious package) and that of killing a suspect.
The use of a robot not simply to capture but to kill a suspect brings to mind all kinds of fears of automated assassination, and of the potential dangers to "innocent bystanders" in such engagements. We should be cognizant, meanwhile, that this was a killing performed extrajudicially, after Johnson had been identified as the suspect in the killing of five officers; even that identification, I would add, has not been well clarified as of yet, all while there is a well-documented history of "friendly fire" incidents (including, it appears, victims at the Pulse nightclub in Orlando).
But not to digress. How "targeted" is targeting, really? All of this performative concern over the roboticized future has itself been spectacular, in ways that happen to render the concern quite hollow. Perhaps more concern has been shown over a hypothetical police power yet to come, and over future alterations to "our" rights, than over the dead at the hands of police or police-inspired killers (such as Omar Mateen in Orlando, or George Zimmerman, also in an Orlando suburb, invoking stand-your-ground laws), as has been proven time and again in recent and not-so-recent memory. Nor should it be forgotten that the Johnson case in Dallas presents another troubling example of someone who kills after his time in a military institution (the Army, in this case).
Separately, but not unrelated to the previous point, I am especially troubled by the persistent discourse that automated warfare brings us a safe, clean, and precise police robot that makes no targeting errors (false, in any case). This popular discourse omits how such automated warfare, somehow, also happens to keep producing such socially abhorred and feared figures as the trigger-happy veteran/killer in the vein of Micah Johnson. This is not to say that I know what moved Johnson to do what he did; only to point out a central tension in this problematic promotion of robotically enhanced warfare.
Much more will need to be studied in the weeks, months, and years ahead. For now, I want to touch on the question of how "unprecedented" this case was, given how often the words "first" and "unprecedented" are being thrown around. Anyone familiar with the MOVE bombing in Philadelphia should not be so surprised by this supposed "first." More recently, a standoff with Chris Dorner, a Black former police officer, ended with a robot shooting smoke bombs that burned down the cabin Dorner was hiding in. So, since the case was not, in effect, unprecedented, how come 'we' (what "we"?) are caught by surprise, playing catch-up with the ethics and capabilities of the police? Perhaps this raises more questions about a culture around policing that lacks critical memory than about the policing itself.
In the immediate hours after the news about the already-heroic robot (sarcasm), I started to do a bit of digging. One of the first instances I could find of shooting at a target with a robot turns up in the pages of (where else?) Wikipedia, which cites The Hunt for the Engineer: How Israeli Agents Tracked the Hamas Master Bomber by Samuel M. Katz. According to this page, in 1992 Israeli police remotely shot at a fuse to deactivate a car bomb (and instead set off a massive explosion).
But perhaps the most interesting document I came across in my search was a research study conducted by the Navy Systems Center in San Diego and published in 2000, which surveyed law enforcement personnel on their perceived needs for robots: "Robotics for Law Enforcement: Beyond Explosive Ordnance Disposal" (pdf), by H. G. Nguyen and J. P. Bott. What most caught my eye about this report is that the respondents mostly did not perceive a need for robots that would shoot weapons (or deliver, say, an explosive). To be more precise, they did not imagine robotic weapons being used very frequently, although that does not mean they wouldn't want to have them around, just in case. By contrast, respondents wanted to use the robot far more frequently to "see" (with cameras or infrared), as the graphs below show:
I'm curious about these two goals for the robot: one as a 'seeing' entity, the other as a killing machine. These separate endeavours, anticipated more than a decade and a half ago, bring up many questions about the nature of identification and violence. As a relative of mine put it, they did not send a robot to capture or kill, for example, the white supremacist Dylann Roof, the suspect in the mass killing inside a Black South Carolina church. Thinking here of the writing of Simone Browne, in the very context of the Black Lives Matter protests going on in Dallas in the wake of more police killings over the past couple of weeks, it is impossible to separate who becomes targeted by automated or semi-automated killing machines from who is taken alive, and how the visual regimes of each sort of operation are organized.
*Thanks to my collaborator, Bryan Finoki @subtopes, for several links referenced above.