Australia Considers Blocking AI Platforms Over Age Verification Failures
Following a recent ban on social media for children under 16, Australia is now preparing to impose stricter regulations on artificial intelligence platforms, according to a Reuters report. The country's internet regulator has issued warnings that search engines and app stores could be instructed to block AI services that do not adequately verify users' ages.
New Age-Restriction Rules Set to Take Effect
The move comes after a Reuters review revealed that more than half of popular AI platforms had not publicly disclosed steps to comply with new age-restriction rules ahead of a deadline next week. Australia's eSafety Commissioner announced that starting March 9, internet services, including AI chat tools like OpenAI's ChatGPT, must prevent users under 18 from accessing content related to pornography, extreme violence, self-harm, and eating disorders. Companies that fail to comply could face fines of up to A$49.5 million (approximately $35 million).
The regulator emphasized that enforcement actions could extend beyond AI platforms to include "gatekeeper services" such as search engines and app stores that provide access to these tools.
Limited Compliance Among AI Platforms
A Reuters analysis of the 50 most popular text-based AI products found that only nine had introduced or announced age-verification systems. Another 11 platforms either applied blanket content filters or planned to block Australian users entirely. However, 30 platforms showed no clear signs of implementing measures to meet the new rules.
Major AI services, including OpenAI's ChatGPT, Replika, and Anthropic's Claude, have begun rolling out age controls or enhanced filters. Some companion chatbot providers have committed to compliance, while others have not published clear policies regarding the regulations.
Growing Global Scrutiny on AI Safety
This crackdown reflects increasing global concerns that AI chatbots may expose young users to harmful content or encourage risky behavior. OpenAI and other AI firms have faced lawsuits abroad over claims related to harmful interactions with minors.
Although Australia has not reported incidents of chatbot-linked violence, officials noted that children as young as 10 are spending hours daily on AI platforms. The regulator expressed worries that some AI tools might employ emotional engagement techniques that promote excessive use.
With this initiative, Australia appears poised to extend its youth online safety regime from social media to artificial intelligence platforms, making it one of the first countries to apply age-verification requirements directly to AI services.