Things You Should Know But Don’t: The Dawn of AI Regulation

Posted April 15, 2024

If you follow tech news, it’s impossible to escape how thoroughly AI has taken over the conversation.  Every company wants to capitalize on the AI trend, no matter how frivolous the result; even Spotify recently announced an AI-powered playlist generator.  While AI can be used for harmless fun, I tend to focus on its more serious and, by extension, more potentially dangerous uses.

While companies scramble to work AI into their products and workflows, not every community has embraced it.  Artists in particular have been rallying against AI-generated content since it first entered the mainstream, and as the use of AI has expanded, its flaws have become harder to ignore.  The New York Times, for instance, recently filed a lawsuit against OpenAI for copyright infringement.  It’s a potential landmark case, and its outcome may determine the future of AI writing.  More broadly, it underscores the need to regulate AI systems across every field.

For ChatGPT and other large language models, the biggest issue has been the uncredited, non-consensual use of other people’s work.  If you’ve put your work on the internet, written or visual, there’s a chance it’s being used to train AI.  Google recently announced that it would start training its AI on publicly shared Google Docs.  In my last blog, I mentioned that even short voice or video clips on social media can be used to train AI; OpenAI now boasts that its voice-cloning model needs only a 15-second clip to mimic your voice.  Without regulations around AI, there are obvious concerns about scams, IP infringement, and privacy violations.

Perhaps even more concerning is the implementation of AI within government agencies and the pitfalls that come with it.  Only a few weeks ago, leaked documents revealed that police in California used AI to create a composite sketch of a murder suspect in a cold case, which a detective then requested be run through a facial recognition database.  As Wired reported, “It emphasizes the ways that, without oversight, law enforcement can mix and match technologies in unintended ways.”

The government has embraced AI in other ways as well: in October 2023, New York City announced the “New York City Artificial Intelligence Action Plan,” which aims to provide guidelines for properly integrating AI into government practices.  Part of the plan included an AI chatbot that could answer questions from NYC-based business owners and landlords.

“The problem, however, is that the city’s chatbot is telling businesses to break the law,” writes The Markup.  In its testing, The Markup found the chatbot gave blatantly false information about rent and business practices: it told landlords they could discriminate against tenants based on source of income, and told business owners they could take workers’ tips.  While one hopes that no one would take legal advice from a chatbot alone, it’s easy to imagine happening in this age of misinformation.

If the government wants to adopt AI, it clearly needs more regulation, just as companies do.  The New York Times lawsuit isn’t the only push for change: several government initiatives aim to increase oversight of both corporations and federal agencies.  At the time of writing, a new bill has been proposed that would require AI companies to disclose the data sources used to train their models.

Additionally, Vice President Kamala Harris announced in March that every US federal agency must appoint a chief AI officer to oversee the AI systems it uses.  Agencies must also file annual reports outlining the AI systems in use, the risks they pose, and plans to mitigate those risks.

So it seems regulation is finally starting to catch up with the rise of AI.  The question now is whether it will be too little, too late.
