Artificial Intelligence Companies Take Steps to Identify Underage Users
As the global debate over chatbot and technology use among under-18s continues to grow, major artificial intelligence companies such as OpenAI and Anthropic are testing measures to identify underage users. The move comes on the heels of Australia’s recent crackdown on social media, which highlighted the need for stricter regulations and guidelines for minors on online platforms. According to a report by The Verge, OpenAI, the company behind ChatGPT, has updated its “model spec” guidelines to include four new principles designed to prioritize teen safety.
New Guidelines for Underage Users
Dubbed ‘U18’, these principles aim to promote real-world support, encourage offline relationships and trusted resources, and treat teenagers as teenagers, neither condescending to them nor treating them as adults. On its blog, OpenAI outlines ChatGPT’s new course, which puts teen safety first even when that goal conflicts with the pursuit of maximum intellectual freedom. The shift in approach signals the company’s commitment to a safer and more responsible online environment for minors.
Anthropic’s Approach to Underage Users
Meanwhile, rival Anthropic is also taking steps to address underage use. Unlike OpenAI, Anthropic expressly prohibits minors from using its AI and is developing a system to detect “conversational cues that indicate when a user may be underage.” Once a user under 18 is identified, the company will block the conversation and place the registration email on a blocklist. As Anthropic states, “Anyone using our tools must represent that they are at least 18 years old. If anyone lies, we will report and disable accounts.” This strict approach underscores the company’s commitment to the responsible use of its AI tools.
Conclusion and Future Developments
The moves by OpenAI and Anthropic to identify and address underage users mark a significant step in the ongoing debate over chatbot and technology use among minors. As the AI landscape continues to evolve, companies will need to prioritize teen safety and develop effective measures to prevent underage users from accessing their platforms.