Despite AI’s recent popularity, most people would still agree that some aspects of it can be eerie or even dystopian, such as deepfakes and voice clones. While celebrities and politicians worry about scandals, everyday people worry about scams. And though deepfakes aren’t new, the rapid expansion of AI across the tech world has made them more realistic and easier to create, and thus more dangerous.
In May, OpenAI released a new, AI-powered personal assistant with a voice named “Sky.” Soon after, the company came under fire from actress Scarlett Johansson, who, along with many others, noticed that the simulated voice sounded similar to hers. Before the official announcement, OpenAI CEO Sam Altman had teased the release by referencing a sci-fi movie in which Johansson voiced an AI assistant. Johansson also said she had declined an offer to license her voice to OpenAI nine months prior. OpenAI denied the connection, and an NPR forensic study found the voice wasn’t a perfect match for Johansson’s, but the company ultimately pulled “Sky” from the voice options and issued an apology. At the very least, it’s likely that “Sky” was inspired by Johansson, who was understandably upset by the resemblance.
This incident came right on the heels of one of the first AI regulation laws to be introduced and passed in the US. In March, Tennessee passed the ELVIS (Ensuring Likeness Voice and Image Security) Act, designed to “protect musicians from unauthorized artificial intelligence deep fakes and voice clones.” It is the first legislation of its kind, and many hoped that new federal laws would follow.
Luckily, it does look like some headway is being made at the federal level. In late July, two US senators introduced an updated version of a previously proposed bill called the No Fakes Act, an acronym for “Nurture Originals, Foster Art and Keep Entertainment Safe.” The most significant difference is that this legislation would protect more than just artists from AI deepfakes, extending coverage to everyday people. Meanwhile, the US Copyright Office released a report on digital replicas around the same time, concluding that current IP laws have too many gaps to be effective against deepfakes and calling for an overhaul as concerns about AI regulation grow.
It is comforting to see that action is being taken against the misuse of AI technology. But laws can take months, if not years, to pass. So, as always, protect yourself and be careful about what you post online. You never know who might want to use it.