OpenAI said a few weeks ago that it would add parental controls to ChatGPT, after a teenager died by suicide, allegedly with assistance from the chatbot. The family sued the AI firm following the tragic event, and the case is ongoing. In the wake of the incident, OpenAI is rolling out its new parental control features this month. The company said in a blog post that teen safety will be prioritized over user privacy and freedom.
Among the new safety features coming to ChatGPT soon, one AI capability might stir some controversy: OpenAI will let ChatGPT predict a user’s age to determine how it should respond to their prompts. The age prediction feature is similar to what Google announced in late July; Google is also employing AI to guess the age of YouTube users and Google account holders.
OpenAI CEO Sam Altman addressed the upcoming parental controls and mentioned the new AI age prediction system in a separate blog post. Altman said, “For example, ChatGPT will be trained not to do […] flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting.” The new parental controls will complement other safety features, such as the mental health guardrails for ChatGPT that apply to users of all ages.
Parental controls in ChatGPT
Altman’s blog post explains OpenAI’s principles on privacy and freedom in addition to teen safety, and what the company is doing to address recent developments. Altman argues that conversations with ChatGPT should be as private as those with doctors and lawyers. OpenAI is building advanced tools to ensure user data can’t be seen by employees, but it’s also adding safety features that flag risky scenarios for possible human review, including “threats to someone’s life, plans to harm others, or societal-scale harm like a potential massive cybersecurity incident.” As for freedom, Altman says ChatGPT should remain the same for adult users, extending their freedom as long as they don’t pose risks to others.
OpenAI said that parental controls are coming to the ChatGPT experience by the end of September and will help families decide how ChatGPT functions for their kids. With the controls in place, parents will be able to link their account with their teen’s account (minimum age of 13) and choose how the chatbot responds, based on teen-specific model behavior rules. Parents can also customize which features to disable, including memory and chat history. As a safety measure, OpenAI will send notifications to parents when the system detects their teen is in a moment of acute distress. If the company can’t reach a parent in a rare emergency, it may involve law enforcement as a next step. Moreover, parents will be able to set blackout hours during which a teen cannot use ChatGPT.
How ChatGPT’s age prediction system will work
It’s unclear exactly when the age prediction features will roll out. OpenAI’s blog post explains how the feature will work in general terms, without providing specifics about how the AI determines a user’s age. When the system determines a user is under 18, the ChatGPT experience will change according to the new policies, which include blocking graphic sexual content and informing the authorities under extreme circumstances. If ChatGPT can’t determine a user’s age, it will default to the under-18 experience, while giving adults ways to prove their age.
Altman said that OpenAI will ask for an ID to verify a user’s age when there’s doubt. The ID-check feature will be implemented in only some cases and countries, though the CEO did not detail which scenarios or countries will require ID checks. He also acknowledged the privacy compromise these new policies may bring for adults, but said it’s a worthy trade-off to safeguard teens.