When it comes to AI safety, the laws being introduced often focus on regulations addressing copyright and IP infringement, misinformation, and data privacy. While these are important issues to tackle in the ongoing age of AI, other pressing safety concerns are being brought to the table each day.
The most recent concern? How children and young users are interacting with AI, and what the consequences might be. It’s no secret that the newest generations, Gen Z and now Gen Alpha, have been entrenched in modern technology since birth. Gen Z members are known as “digital natives,” spending an average of over 7 hours a day on their phones. Gen Alpha, meanwhile, has been nicknamed “iPad Kids,” a nod to how early phones and tablets are typically introduced to them: 43% have access to a tablet before the age of 6, and 58% have their own smartphone by age 10.
Even between the two generations, the internet has changed drastically. While Gen Z was inundated with social media woes, Gen Alpha has been dealing with the AI “takeover” of the web.
Just this past October, a lawsuit was filed against Character.AI (a platform of AI-powered chatbots that imitate fictional characters) and Google for the wrongful death of a teenager. According to the lawsuit, the teen’s final interaction with the chatbot happened just moments before he took his own life, an act the bot seemingly encouraged.
Young children can have a difficult time distinguishing reality from fiction. This is part of the reason we have ratings systems in place for TV shows, movies, and games. The internet, and by extension the AI programs it hosts, is not subject to these ratings in the same way. The developers of Character.AI are accused of creating “generative AI systems with anthropomorphic qualities to obfuscate between reality and fiction… these developers rapidly began launching their systems without adequate safety features, and with the knowledge of potential dangers.” Character.AI even hosts chatbots that pose as therapists, despite having no credentials to dispense mental health advice.
While it is ultimately up to a child’s guardian to monitor their internet use, is it so outrageous to believe that AI companies should be regulated or held accountable? Last September, the National Association of Attorneys General issued a letter urging Congress to study AI and the lasting, harmful effects it may have on children, and to create legislation to protect them.
Thankfully, Character.AI has taken steps to make its platform safer for minors. Hopefully, the growing call for federal regulation of AI will also help prevent more tragedies in the future.