By Joseph Dalessandro
On 2 December 2025, the Australian government released the National AI Plan (NAP). NAP arrives at a pivotal moment, with artificial intelligence (AI) the technology pathway of choice for organisations of every kind, touted as rapidly reshaping economic structures, labour markets and critical digital infrastructure.
NAP is ambitious in scope: expand AI’s economic opportunity, ensure its benefits are widely distributed, and keep Australians safe as the technology becomes embedded in daily life, essential services, and banking. NAP frames AI not merely as a tool for productivity, but as a democratising national capability requiring coordinated investment in skills, compute, public-sector transformation, and international alignment (though, notably, without accompanying laws and regulations).
IIS welcomes the key measures the Plan introduces.
1) What is it?
The core architecture of NAP is built around three pillars:
Capture the opportunity – Increase Australia’s AI capability through sovereign compute access, industry investment, research support, and a workforce strategy that emphasises inclusion and long-term adaptability.
Spread the benefits – Ensure AI adoption occurs not just in major corporations and government agencies but across regions, small businesses, social sectors, and public services. The Plan closely links AI growth to social equity, union negotiation, and regional skills pipelines. However, we are already seeing AI-driven redundancies and sector-wide fear; see some examples here.
Keep Australians safe – Establish the Australian AI Safety Institute, enhance standards alignment, and build frameworks for responsible, trustworthy AI across public and private sectors.
This structure mirrors the strategies of peer nations such as the UK, Singapore, and Canada, albeit with some notable omissions. It does provide unity: a national vision that integrates economic development with safety, fairness, and social wellbeing.
2) Socio-technical benefits
a) National coordination
Australia has struggled with fragmented digital and AI policy, spread across departments, agencies, and states. NAP moves toward a unified national architecture. This could reduce duplication and create a reference point for regulators, industry, and research institutions.
b) Investment in sovereign AI capability
By emphasising compute infrastructure, cloud capacity, and research ecosystems, NAP begins shifting Australia from AI consumer to AI contributor. This infrastructure matters: without sovereign compute access, Australia risks dependence on foreign technology decisions and data-handling practices, and on third-party vendors (with attendant concentration risk).
c) Worker protections and social equity
Few national AI strategies foreground labour and social outcomes as explicitly as NAP. It integrates unions, worker transition programs, and protections for vulnerable groups. This ensures AI adoption considers societal impacts, not solely economic metrics. As noted above, we have already seen missteps in this area, and fear is very much front of mind for workers in several sectors.
By targeting small businesses, local councils and not-for-profits, NAP attempts to democratise AI adoption [1], reducing the risk of AI-driven inequality between large and small organisations. This will be challenging given that Australians are cautious and doubtful about AI.
d) Public sector modernisation
NAP emphasises AI-enabled public services such as health, education, welfare, and transport. When deployed safely, AI can increase accessibility, reduce administrative burden, and improve service delivery in remote and underserviced communities. This assumes, however, a level of accountability and testing we did not see in Robodebt [2], and privacy concerns will persist, as we saw with Harrison.AI.
3) Socio-technical gaps
Despite its strengths, NAP contains structural weaknesses that carry real risk. The most significant dangers correspond to gaps in regulation, governance, and implementation.
a) Legal obligations and assurance
Unlike the EU AI Act or the US frameworks that mandate safety testing, reporting, and restrictions, NAP contains no enforceable legal obligations for high-risk AI systems. The Australian AI Safety Institute is promising but undefined. Without standards, authority, or enforcement powers, Australia risks deploying AI in financial services, healthcare, policing, and welfare without adequate safeguards.
Assurance is another area of potential harm for individuals. Globally, AI assurance (the independent evaluation of robustness, bias, safety, and regulatory compliance) is becoming essential and, in some cases, mandated by law. NAP does not define:
Assurance requirements
AI audit processes (or appropriate depth)
Documentation requirements
Pre-deployment testing
Model lifecycle controls
Ongoing monitoring
Evaluation methods for generative AI.
Without an assurance regime, high-risk AI may be deployed in opaque, untested, or unsafe ways. A sketch of one missing element on this list, a pre-deployment check, follows.
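To make "assurance" concrete, here is a minimal, hypothetical sketch in Python (standard library only) of a pre-deployment gate that checks accuracy and a simple group-fairness metric before release. The function name, thresholds, and metric choices are our illustrative assumptions; NAP specifies none of them.

```python
# A minimal, illustrative pre-deployment assurance gate (stdlib only).
# Thresholds, metrics, and the report format are assumptions for
# illustration; NAP does not prescribe any of this.
from dataclasses import dataclass, field

@dataclass
class AssuranceReport:
    accuracy: float
    parity_gap: float  # demographic parity difference across groups
    passed: bool
    findings: list = field(default_factory=list)

def predeployment_check(predictions, labels, groups,
                        min_accuracy=0.90, max_parity_gap=0.05):
    """Gate a model release on accuracy and a simple fairness metric."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)

    # Positive-prediction rate per protected group.
    counts = {}
    for p, g in zip(predictions, groups):
        hits, total = counts.get(g, (0, 0))
        counts[g] = (hits + (p == 1), total + 1)
    group_rates = {g: h / t for g, (h, t) in counts.items()}
    parity_gap = max(group_rates.values()) - min(group_rates.values())

    findings = []
    if accuracy < min_accuracy:
        findings.append(f"accuracy {accuracy:.3f} below floor {min_accuracy}")
    if parity_gap > max_parity_gap:
        findings.append(f"parity gap {parity_gap:.3f} exceeds {max_parity_gap}")
    return AssuranceReport(accuracy, parity_gap, not findings, findings)

# Example: a model that under-approves one group fails the gate.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
labels = [1, 1, 1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
report = predeployment_check(preds, labels, groups)
print(report.passed, report.findings)
```

A real assurance regime would go much further (documentation, lifecycle controls, generative-AI evaluation), but even this toy gate shows the kind of explicit, testable criteria NAP leaves undefined.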
b) Risk identification and treatment
NAP does not specify which AI systems should be considered ‘high risk’ in banking, payments, energy, digital identity, critical infrastructure, healthcare, legal, national security, or property systems.
Other nations treat critical infrastructure AI as a national security concern requiring heightened controls. Australia does not. The result could be AI-driven failures or exploitation in systems foundational to economic stability and social trust.
Government procurement is one of the most powerful levers for enforcing safe AI. The US and UK require impact assessments and supplier compliance with AI safety principles. NAP includes none of this. Australia may inadvertently purchase unsafe or non-compliant systems, embedding bias, discrimination, or other human harms within essential public functions.
NAP does not specify:
Which agency oversees AI risks in each sector
How regulators coordinate
How compliance will be enforced
Incident reporting for AI failures
Enforcement authority.
This creates a governance vacuum. In high-stakes, high-risk domains, unclear jurisdiction leads to slow response, regulatory drift, and systemic risk. A sketch of one missing piece, an incident-report schema, follows.
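As a concrete illustration of the missing machinery, consider incident reporting. Below is a hypothetical, minimal incident-report schema (in Python) of the kind a sector regulator might mandate. Every field name and severity level here is our assumption; NAP defines no such schema.

```python
# A hypothetical, minimal schema for reporting AI incidents, of the kind
# a sector regulator might require. Fields and severity levels are
# illustrative assumptions; NAP defines no such schema.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    LOW = "low"            # degraded output, no individual harm
    HIGH = "high"          # wrong decisions affecting individuals
    CRITICAL = "critical"  # harm in essential services or infrastructure

@dataclass(frozen=True)
class AIIncidentReport:
    system_name: str      # deployed system identifier
    operator: str         # agency or company responsible
    sector: str           # e.g. "banking", "welfare", "health"
    severity: Severity
    description: str      # what failed and who was affected
    detected_at: datetime
    humans_in_loop: bool  # was a human reviewing the decision?
    remediation: str      # immediate action taken

report = AIIncidentReport(
    system_name="benefits-eligibility-model-v3",
    operator="Example Agency",
    sector="welfare",
    severity=Severity.HIGH,
    description="Model systematically denied claims for one cohort.",
    detected_at=datetime.now(timezone.utc),
    humans_in_loop=False,
    remediation="Model rolled back; affected claims re-assessed manually.",
)
print(report.severity.value, report.system_name)
```

Even a schema this small forces answers to the questions NAP leaves open: who reports, to whom, at what severity, and with what remediation.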
c) Possible privacy concerns
NAP touches privacy indirectly. Potential gaps remain:
No new privacy protections tailored to AI-enabled data processing.
No guidance on training models with personal or derived data, or on consent to such data use.
No restrictions on biometric surveillance, emotional analytics, or behavioural prediction.
No provisions for transparency, contestability, opt-out, or rights when AI makes or influences decisions.
This leaves individuals exposed, particularly in welfare, policing, employment, and health contexts, where Australia already has a history of algorithmic harm.
d) Defence and national-security AI
Lastly, NAP is ‘civilian oriented’: Australia lacks a publicly articulated framework for military, defence, dual-use, or national-security AI governance, even though peer nations (the US, UK, EU, and Singapore) explicitly integrate defence considerations or maintain separate defence AI strategies. This is worrisome.
4) Conclusion
NAP is a credible and coherent strategic document with substantial socio-technical benefits: national coordination, sovereign capability, worker-centred policy, public-sector uplift, and inclusive AI diffusion. It positions Australia to participate more actively in the global AI landscape.
NAP also leaves dangerous gaps. The absence of enforceable safety rules, AI assurance infrastructure, sector-specific oversight, procurement standards, enforcement authority, clear government roles and responsibilities, and privacy safeguards creates systemic risk.
NAP nods toward safety without building the machinery necessary to enforce it. NAP is aspirational; on its own it does not ensure safety or build resilience. Australia will still need the regulatory, technical, and institutional backbone that transforms NAP from vision into real protection.
[1] In broad strokes, ‘democratise’ in an AI context means that everyone and every organisation, regardless of socio-economic status or technical skill, and including companies and organisations without specialised or extensive IT capability, can have the same access to AI tools, workflows, and benefits.
[2] While Robodebt was not an AI making autonomous decisions, it was a flawed algorithm relied upon without proper testing, safety, or human-in-the-loop controls. See this article for a detailed explanation.