Things You Should Know But Don’t: Facial Recognition & Law Enforcement

Posted July 7, 2020

Facial recognition technology isn’t new. In fact, since 2017, most people who use an iPhone have been able to set up their phones to unlock automatically upon recognizing their faces. The technology itself has been around even longer. So why is the conversation around facial recognition getting so much attention in the media?

Understanding the fundamentals of how AI takes facial data and recognizes it is key to understanding the current controversy around its use. The process itself is similar to fingerprint identification. Using a picture of a face, a computer analyzes roughly 80 unique nodal points on the face, then converts that analysis into a code that can be stored in a database. This code is compared against those in the database until a match is found and the face is, hopefully, identified.
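The matching step above can be sketched in a few lines of code. This is a simplified illustration, not any vendor's actual algorithm: it assumes each face has already been reduced to a short numeric "faceprint" (real systems derive these from the nodal-point measurements, or from learned embeddings with hundreds of dimensions), and it uses plain Euclidean distance with a made-up threshold to decide whether a match is close enough.

```python
import math

# Hypothetical pre-computed "faceprints": each face reduced to a short
# feature vector (real systems use far more dimensions).
database = {
    "alice": [0.12, 0.85, 0.33, 0.47],
    "bob":   [0.91, 0.14, 0.62, 0.05],
}

def euclidean_distance(a, b):
    """Distance between two feature vectors; smaller means more alike."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, db, threshold=0.25):
    """Return the closest database identity within the match threshold,
    or None if nothing is close enough. The threshold is where false
    positives and false negatives trade off against each other."""
    best_name, best_dist = None, float("inf")
    for name, faceprint in db.items():
        dist = euclidean_distance(probe, faceprint)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

probe = [0.11, 0.83, 0.35, 0.46]  # feature vector from a new photo
print(identify(probe, database))  # prints "alice"
```

Notice that the system never returns "certain"; it returns the nearest entry under a distance cutoff. Loosen the threshold and you get more false matches; tighten it and you miss real ones, which is exactly the error-rate tension discussed below.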

There are many concerns with using facial recognition technology, including its error rate, worries about privacy and data leaks, and the general misuse of data.

One of the major uses for facial recognition in recent years has been in law enforcement, whether for finding and identifying suspects or for tracking people with criminal backgrounds. Further advancement of facial recognition in criminology, however, creates concerns. In May, a study from Harrisburg University claimed the institution had developed software that could accurately predict what the faces of people likely to become criminals looked like. The press release was removed and the study dropped after backlash, but it has since been archived on the web. It’s reminiscent of the 2002 movie Minority Report, in which a special police unit uses predictive technology to arrest criminals before they commit their crimes.

Obviously, using facial recognition technology in law enforcement raises legitimate concerns. The algorithms that run AI are programmed by people, and one cannot help but wonder whether the personal prejudices of programmers will find their way into those algorithms. AI is not at the point where it can objectively think for itself, and the moment one introduces factors such as facial structure, there will most assuredly be false hits that reinforce inherited biases. This can be devastating for those wrongfully accused. Most major companies in this field have already pulled their support for developing facial recognition technology in criminology or have, like Amazon, at least delayed allowing its use by law enforcement.

If facial recognition technology is prone to false positives and hurts those who may already be most vulnerable, should it be used at all? Is the convenience it represents worth the risk? When you unlock your phone screen with your face or allow apps to gather this data for camera filters, are you thinking about the fact that your face ends up in one of those databases? Are you considering that your face has now become your password? If not, now may be a good time to look at who you’ve allowed to access your faceprint and how to keep your privacy safe.
