Things You Should Know But Don’t: AI Police Reports

Posted September 30, 2024

AI has aided police in many ways, from forensics and surveillance to crime forecasting. Recently, two new uses for AI have debuted in police departments nationwide.

The first is the use of AI to review bodycam footage. With bodycams now standard in most police departments, thousands of hours of footage are generated daily; without AI assistance, reviewing it all would be impossible.

Of course, not all the footage needs to be reviewed closely—but some want to tap into the data it provides anyway. For psychological researchers, using AI to review the footage can lead to a better understanding of how police stops unfold and, more importantly, why they might escalate. According to Jennifer Eberhardt, a psychology professor at Stanford, her research team can “look at the first 27 seconds of the stop, the first roughly 45 words that the officer spoke, and we could use this model to predict whether that driver was going to be handcuffed, searched or arrested by the end of the stop.”

For police departments, monitoring bodycam footage with AI can offer insight into individual officers, help correct behavior that may cause future issues, and flag behaviors that lead to positive interactions. Critics counter, however, that AI alone won’t produce meaningful change in police culture.

The second new way police use AI goes hand-in-hand with bodycam analysis. An AI program called “Draft One” can reportedly take a bodycam recording of an incident and turn it into a draft police report in just a few seconds. With report writing making up a large part of an officer’s everyday work, the aim is to free up more time for fieldwork and less for the desk. As with most uses of AI, there are concerns about generating police reports this way. Some argue that relying on AI may cause gross oversights in policing; others point to how often chatbots like ChatGPT produce false information; and there is, of course, the ongoing concern about bias in AI-driven tools.

With the prevalence of AI, it’s easy to imagine more new uses being implemented. The question for the future is how to regulate it and make it safe—if possible.
