Things You Should Know But Don’t: Using Social Media to Predict Crime

Posted February 19, 2024

Can AI predict crime?

In past blogs, I’ve written about crime mapping, a technique where police data is used to help predict criminal hotspots. AI can also be used in many other ways for policing. What if AI could be trained on specific types of data to predict crime, such as social media posts?
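To make the hotspot idea concrete, here is a minimal sketch of what grid-based crime mapping can look like. Everything in it is hypothetical: the incident coordinates, the cell size, and the scoring are illustrative assumptions, not any department's actual system.

```python
# A minimal sketch of grid-based hotspot mapping: bucket historical
# incident locations into grid cells and rank the busiest cells.
# All coordinates and the cell size are hypothetical.
from collections import Counter

incidents = [
    (34.052, -118.244),  # (latitude, longitude) of past incidents
    (34.053, -118.245),
    (34.052, -118.243),
    (34.101, -118.300),
]

CELL = 0.01  # grid cell size in degrees (roughly 1 km); an assumed tuning choice

def cell_of(lat, lon):
    """Map a coordinate to its grid cell."""
    return (round(lat / CELL), round(lon / CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# "Hotspots" here are simply the cells with the most historical incidents.
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} incidents")
```

Real systems layer statistical forecasting on top of this, but the core input is the same: where crimes were recorded before.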

In 2021 the LAPD was considering a contract with a tech company that claimed its algorithms could help identify future criminals based on their social media posts and patterns. The company, Voyager Labs, alleged that “its artificial intelligence could discern people’s motives and beliefs… And it suggested its tools could allow agencies to conduct undercover monitoring using fake social media profiles.” Following the LAPD’s trial period with Voyager Labs, officers were instructed to ask every civilian they stopped and questioned for their social media information, whether or not the civilian was involved in a crime. Some have questioned whether this violates privacy and freedom of speech rights. The NYPD had also signed a contract with Voyager Labs back in the late fall of 2018. While the NYPD has stated that it doesn’t use any of the AI or predictive features that Voyager Labs offers, the LAPD has, and claims to have identified new targets to monitor.

Voyager Labs’ predictive process essentially works by targeting a single known person of interest and archiving all of their social media data (posts, likes, followers, and even private messages). From this data it can not only assess “the strength of people’s ideological beliefs and the level of ‘passion’ they feel by looking at their social media posts,” but also identify other possible suspects. By analyzing friend lists, Voyager Labs determines who interacts with the person of interest’s profile the most, and goes beyond that to find connections between friends of friends. To collect more sensitive data such as private messages without requiring a warrant, Voyager Labs offers tools that help police create fake social media profiles, a practice that led Meta to sue the company and ban it from Meta’s platforms in January 2023. Voyager Labs has stated that it intends for its software to be used with rigorous human oversight, and describes the software as a tool that can identify extremist communities and individuals who are prone to violence. Meredith Broussard, a New York University data journalism professor and author of Artificial Unintelligence: How Computers Misunderstand the World, has called this a “guilt by association” system. It has been noted in the past that AI algorithms, especially when applied to policing, need to be used with extreme care, as they are often rife with bias.
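To show the general shape of that kind of network analysis, here is a minimal “guilt by association” sketch in Python. It is not Voyager Labs’ actual method; the names and interaction data are hypothetical, and the logic is simply frequency counting over an interaction graph plus a second-degree (friends-of-friends) expansion.

```python
# A minimal "guilt by association" sketch: rank a person of interest's
# contacts by interaction frequency, then expand to friends of friends.
# The interaction pairs and names below are entirely hypothetical.
from collections import Counter, defaultdict

interactions = [
    ("alice", "poi"), ("alice", "poi"), ("bob", "poi"),
    ("carol", "alice"), ("dave", "carol"), ("bob", "alice"),
]

# Build an undirected interaction graph whose edge weights are frequencies.
graph = defaultdict(Counter)
for a, b in interactions:
    graph[a][b] += 1
    graph[b][a] += 1

def ranked_contacts(person):
    """Direct contacts, ranked by how often they interact with `person`."""
    return graph[person].most_common()

def friends_of_friends(person):
    """Second-degree contacts: people who interact with the person's contacts."""
    direct = set(graph[person])
    second = set()
    for friend in direct:
        second |= set(graph[friend])
    return second - direct - {person}

print(ranked_contacts("poi"))     # [('alice', 2), ('bob', 1)]
print(friends_of_friends("poi"))  # {'carol'}
```

Even this toy version makes Broussard’s criticism easy to see: “carol” gets flagged without ever touching the person of interest’s profile, purely by association.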

We’re still years, if not decades, away from being able to use AI to effectively predict and prevent crime. While we may never reach the point depicted in the 2002 film Minority Report, the fiction portrayed in films all too often becomes reality. AI has made great strides in recent years, but it is still made by people and carries the biases of those behind the code. The trend, however, seems to be that we’ll continue to use this technology, flaws and all.
