In a significant move for its tech future, the Australian government has officially launched a comprehensive strategy to accelerate the use of artificial intelligence (AI) throughout its economy. However, in a shift from earlier considerations, the plan will not introduce new, stringent laws for high-risk AI applications, choosing instead to lean on the nation's existing legal frameworks.
Australia's National AI Plan: Key Pillars and Regulatory Stance
The National AI Plan, made public on Tuesday, December 2, outlines the centre-left Labor government's vision. The strategy rests on three core objectives: attracting major investment in advanced data centres, developing a skilled workforce that both supports and safeguards jobs, and ensuring public safety as AI tools become more commonplace in everyday life.
A notable aspect of the plan is its regulatory approach. The government has decided against crafting specific AI legislation for now. Instead, it stated that Australia's "robust existing legal and regulatory frameworks" will serve as the foundation for tackling AI-related risks. This means various agencies and sectoral regulators will be tasked with identifying and managing potential harms from AI within their own domains.
Balancing Innovation with Risk Management
This roadmap follows the government's announcement last month that it will establish an AI Safety Institute by 2026. The body is intended to help authorities monitor emerging threats and respond to dangers posed by advanced AI systems.
The decision comes at a time when global regulators are increasingly worried about issues like misinformation, especially from powerful generative AI platforms such as ChatGPT from Microsoft-backed OpenAI and Google's Gemini. Federal Industry Minister Tim Ayres emphasized the plan's goal is to ensure Australians reap the benefits of new technology. "As the technology continues to evolve, we will continue to refine and strengthen this plan to seize new opportunities and act decisively to keep Australians safe," Ayres said.
Expert Warns of Critical Gaps in the Strategy
While the government's plan has been framed as balanced, it has drawn criticism from some academic quarters. Niusha Shafiabady, an Associate Professor at Australian Catholic University, identified substantial shortcomings in the roadmap.
Shafiabady argued that the plan, while ambitious in unlocking data and boosting productivity, leaves critical gaps in areas like accountability, sovereignty, sustainability, and democratic oversight. "Without addressing these unexplored areas, Australia risks building an AI economy that is efficient but not equitable or trusted," she cautioned. This critique highlights the ongoing global debate about whether existing laws are sufficient to govern the rapid and profound changes brought by artificial intelligence.