After recently promising new safety measures for teens, OpenAI introduced new parental controls for ChatGPT. The settings allow parents to monitor their teen’s account, as well as restrict certain types of use, like voice chat, memory, and image generation.
The changes debuted a month after two bereaved parents sued OpenAI for the wrongful death of their son, Adam Raine, who died earlier this year. The lawsuit alleges that ChatGPT conversed with their son about his suicidal feelings and behavior, provided explicit instructions for how to take his own life, and discouraged him from disclosing his plans to others.
The complaint also argues that ChatGPT’s design features, including its sycophantic tone and anthropomorphic mannerisms, effectively work to “replace human relationships with an artificial confidant” that never refuses a request.
In a blog post about the new parental controls, OpenAI said that it worked with experts, advocacy groups, and policy makers to develop the safeguards.
To use the settings, parents must invite their teen to connect accounts. Teen users must accept the invitation, and they can also make the same request of their parent. The adult will be notified if the teen unlinks their account in the future.
Once the accounts are connected, automatic protections are applied to the teen’s account. These content restrictions include reduced exposure to graphic material, extreme beauty ideals, and sexual, romantic, or violent roleplay. While parents can turn off these restrictions, teens can’t make those changes.
Parents will also be able to make specific choices for their teen’s use, such as designating quiet hours during which ChatGPT can’t be accessed; turning off memory and voice mode; and removing image generation capabilities. Parents can’t see or access their teen’s chat logs.
Importantly, OpenAI still includes teen accounts in model training by default. Parents must opt out of that setting if they don’t want OpenAI to use their teen’s interactions with ChatGPT to further train and improve its product.
When it comes to sensitive situations in which teens talk to ChatGPT about their mental health, OpenAI has created a notification system so that parents can learn if something may be “seriously wrong.”
Though OpenAI did not describe the technical features of this system in its blog post, the company said that it will recognize potential signs that a teen is thinking about harming themselves. If the system detects that intention, a team of “specially trained people” reviews the circumstances. OpenAI will contact parents by their method of choice (email, text message, or push alert) if there are signs of acute distress.
“We are working with mental health and teen experts to design this because we want to get it right,” OpenAI said in its post. “No system is perfect, and we know we might sometimes raise an alarm when there isn’t real danger, but we think it’s better to act and alert a parent so they can step in than to stay silent.”
OpenAI noted that it’s developing protocols for contacting law enforcement and emergency services in cases where a parent can’t be reached, or if there’s an imminent threat to a teen’s life.
Robbie Torney, senior director of AI Programs at Common Sense Media, said in the blog post that the controls were a “good starting point.”
Torney recently testified in a Senate hearing on the dangers of AI chatbots. At the time, he referenced the Raine lawsuit and noted that ChatGPT continued to engage Adam Raine in discussion about suicide, rather than trying to redirect the conversation.
“Despite Adam using the paid version of ChatGPT — meaning OpenAI had his payment information and could have implemented systems to identify concerning patterns and contact his family during mental health crises — the company had no such intervention mechanisms in place,” Torney said in his testimony.
At the same hearing, Dr. Mitch Prinstein, chief of psychology at the American Psychological Association, testified that Congress should require AI systems accessible by children and adolescents to undergo “rigorous, independent, pre-deployment testing for potential harms to users’ psychological and social development.”
Prinstein also called for limiting manipulative or persuasive design features that maximize chatbot engagement.