OpenAI Introduces Parental Controls For ChatGPT Amid Safety Concerns

OpenAI, the company behind the widely used AI chatbot ChatGPT, has launched a new suite of parental controls for its web and mobile platforms. The move comes in response to growing concerns over the safety of AI interactions for younger users and follows a lawsuit filed by the parents of a teenager who died by suicide, allegedly after the chatbot provided guidance on methods of self-harm.

How The New Controls Work

The new features allow parents and teenagers to link their accounts through a consent-based invitation system. Once linked, parents can apply stronger safeguards without accessing their teen's private chat history. The controls enable parents to reduce exposure to sensitive topics, decide whether conversations can be used to train OpenAI's models, and choose whether ChatGPT remembers past chats. Additional features include the ability to set "quiet hours" that block access during specific times and to disable functionalities such as voice mode and image generation. In rare cases where the system detects a serious safety risk, parents may be notified with the information needed to support the teen.

A Proactive Step Towards Age Verification

Alongside these immediate controls, OpenAI announced that it is developing an age prediction system. This technology aims to automatically identify users who are likely under 18 and apply teen-appropriate safety settings by default. This forward-looking measure signals a broader industry trend towards creating more age-aware AI environments, as other major tech companies like Meta have also recently announced enhanced safeguards for minors interacting with their AI products.

Relevance To The MENA Region

This development is highly significant for the MENA region, which has one of the world's youngest and most digitally native populations. As tools like ChatGPT become increasingly integrated into education and daily life, these parental controls offer a crucial layer of safety for families across the Middle East and North Africa. Furthermore, with regional governments and startups pursuing AI development as part of national strategies such as Saudi Vision 2030 and the UAE's National Strategy for Artificial Intelligence, OpenAI's move sets an important precedent. It highlights the growing need for responsible AI frameworks and will likely influence the safety and ethical standards adopted by emerging local AI companies looking to build trust with users in the region.

About OpenAI

OpenAI is an artificial intelligence research and deployment company dedicated to ensuring that artificial general intelligence (AGI) benefits all of humanity. It is renowned for developing large-scale AI models that have pushed the boundaries of natural language processing and image generation, including the popular GPT series of models that power ChatGPT and the DALL-E text-to-image system.

Source: Zawya
