Australia’s National AI Plan: Big Vision, Missing Guardrails

By Mike Trovato

On 2 December 2025, the Australian government released the National AI Plan (NAP). NAP arrives at a pivotal moment, with artificial intelligence (AI) now the technology priority for organisations of every kind, touted as rapidly reshaping economic structures, labour markets and critical digital infrastructure.

NAP is ambitious in scope: expand AI’s economic opportunity, ensure its benefits are widely distributed, and keep Australians safe as the technology becomes embedded in daily life, essential services, and banking. NAP frames AI not merely as a tool for productivity, but as a democratising national capability requiring coordinated investment in skills, compute, public-sector transformation, and international alignment (though notably without the accompanying laws and regulations).

But there are legitimate concerns and questions about it. John Pane, Electronic Frontiers Australia Chair, said in a recent blog post, “We need strong EU-style ex ante AI laws for Australia, not a repeat of Australia’s disastrous ‘light touch’ private sector privacy regime introduced in 2000. We need to also resist the significant geo-political pressure being brought to bear on Australia and others by the Trump administration, forcing sovereign nations to adopt US technology ‘or else’.”

Most importantly from an IIS perspective, NAP puts additional pressure on already stretched regulators such as the Office of the Australian Information Commissioner (OAIC), which will bear the brunt of the enforcement burden without a commensurate increase in funding.

What is it?

The core architecture of NAP is built around three pillars:

  1. Capture the opportunity – Increase Australia’s AI capability through sovereign compute access, industry investment, research support, and a workforce strategy that emphasises inclusion and long-term adaptability.

  2. Spread the benefits – Ensure AI adoption occurs not just in major corporations and government agencies but across regions, small businesses, social sectors, and public services. The Plan closely links AI growth to social equity, union negotiation, and regional skills pipelines. [1]

  3. Keep Australians safe – Establish the Australian AI Safety Institute, enhance standards alignment, and build frameworks for responsible, trustworthy AI across public and private sectors.

This structure mirrors the strategies of peer nations such as the UK, Singapore, and Canada, albeit with some notable omissions. It does provide unity: a national vision that integrates economic development with safety, fairness, and social wellbeing.

Socio-technical benefits

National coordination

Australia has struggled with fragmented digital and AI policy, spread across departments, agencies, and states. NAP moves toward a unified national architecture. This could reduce duplication and create a reference point for regulators, industry, and research institutions.

Investment in sovereign AI capability

By emphasising compute infrastructure, cloud capacity, and research ecosystems, NAP begins shifting Australia from AI consumer to AI contributor. This infrastructure matters: without sovereign compute access, Australia risks dependency on foreign technology decisions, third-party vendors (with concentration risk) and data-handling practices.

Worker protections and social equity

Few national AI strategies foreground labour and social outcomes as explicitly as NAP. It integrates unions, worker transition programs, and protections for vulnerable groups. This ensures AI adoption considers societal impacts, not solely economic metrics. That said, as noted above, we have already seen missteps in this area [1], and job-loss fear is very much front of mind for workers in several sectors.

By targeting small businesses, local councils and not-for-profits, NAP attempts to democratise AI adoption [2], reducing the risk of AI-driven inequality between large and small organisations. This will be challenging given the trust issues many Australians have with AI, and given community attitudes to privacy.

Public sector modernisation

NAP emphasises AI-enabled public services such as health, education, welfare, and transport. When deployed safely, AI can increase accessibility, reduce administrative burden, and improve service delivery in remote and underserviced communities. This does, however, assume a level of accountability and testing we did not see in Robodebt [3], and privacy concerns will arise, as we saw with Harrison.AI.

Socio-technical gaps

Despite its strengths, NAP contains structural weaknesses that carry real risk. The most significant dangers correspond to gaps in regulation, governance, and implementation.

Legal obligations and assurance

Unlike the EU AI Act or the US frameworks that mandate safety testing, reporting, and restrictions, NAP contains no enforceable legal obligations for high-risk AI systems. The Australian AI Safety Institute is promising but undefined. Without standards, authority, or enforcement powers, Australia risks deploying AI in financial services, healthcare, policing, and welfare without adequate safeguards.

The absence of assurance is another source of potential harm for individuals. Globally, AI assurance (independent evaluation of robustness, bias, safety, and regulatory compliance) is becoming essential and, in some cases, mandated by law. NAP does not define:

  • Assurance requirements

  • AI audit processes (or appropriate depth)

  • Documentation requirements

  • Pre-deployment testing

  • Model lifecycle controls

  • Ongoing monitoring

  • Evaluation methods for generative AI.

Without an assurance regime, high-risk AI may be deployed in opaque, untested, or unsafe ways.
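
To make this gap concrete, here is a minimal sketch of the kind of pre-deployment assurance gate NAP leaves undefined. It is illustrative only: the thresholds, field names and checks below are our own assumptions, not requirements drawn from NAP or any Australian standard.

```python
# Illustrative pre-deployment assurance gate. All thresholds and field
# names are hypothetical assumptions for this sketch; NAP defines none.
from dataclasses import dataclass, field

@dataclass
class AssuranceReport:
    accuracy: float               # overall task performance on held-out tests
    subgroup_gap: float           # worst-case performance gap between cohorts
    documentation_complete: bool  # model card / data statement present
    monitoring_plan: bool         # ongoing monitoring defined
    failures: list = field(default_factory=list)

def assurance_gate(report: AssuranceReport,
                   min_accuracy: float = 0.90,
                   max_subgroup_gap: float = 0.05) -> bool:
    """Deny deployment unless every check passes, recording each failure."""
    if report.accuracy < min_accuracy:
        report.failures.append(f"accuracy {report.accuracy:.2f} below {min_accuracy}")
    if report.subgroup_gap > max_subgroup_gap:
        report.failures.append(f"subgroup gap {report.subgroup_gap:.2f} exceeds {max_subgroup_gap}")
    if not report.documentation_complete:
        report.failures.append("model documentation missing")
    if not report.monitoring_plan:
        report.failures.append("no ongoing monitoring plan")
    return not report.failures

# A model that performs well overall but fails subgroup parity and lacks
# documentation would be blocked before deployment.
report = AssuranceReport(accuracy=0.93, subgroup_gap=0.11,
                         documentation_complete=False, monitoring_plan=True)
if not assurance_gate(report):
    print("Deployment blocked:", "; ".join(report.failures))
```

Even this toy gate shows why the list above matters: someone must set the thresholds, someone must verify the inputs, and someone must have the authority to block deployment.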

Risk identification and treatment

NAP does not specify which AI systems should be considered ‘high risk’ in banking, payments, energy, digital identity, critical infrastructure, healthcare, legal, national security or property systems.

Other nations treat critical infrastructure AI as a national security concern requiring heightened controls. Australia does not. The result could be AI-driven failures or exploitation in systems foundational to economic stability and social trust.

Government procurement is one of the most powerful levers for enforcing safe AI. The US and UK require impact assessments and supplier compliance with AI safety principles. NAP includes none of this. Australia may inadvertently purchase unsafe or non-compliant systems, embedding risks such as bias and discrimination, or allowing human harm, within essential public functions.

NAP does not specify:

  • Which agency oversees AI risks in each sector

  • How regulators coordinate

  • How compliance will be enforced

  • Incident reporting for AI failures

  • Enforcement authority.

This creates a governance vacuum. In high-stakes, high-risk domains, unclear jurisdiction leads to slow response, regulatory drift, and systemic risk.

Possible privacy concerns

NAP touches privacy indirectly. Potential gaps remain:

  • No new privacy protections tailored to AI-enabled data processing.

  • No guidance on training models with personal or derived data, or on consent for such data use.

  • No restrictions on biometric surveillance, emotional analytics, or behavioural prediction.

  • No provisions for transparency, contestability, opt-out, or rights when AI makes or influences decisions.

This leaves individuals exposed, particularly in welfare, policing, employment, and health contexts where Australia already has a history of algorithmic harm.
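
Purely as an illustration of what transparency and contestability provisions could require, the sketch below records the minimum metadata an affected individual would need to understand and challenge an AI-influenced decision. Every field name here is a hypothetical of ours; NAP currently mandates nothing of the kind.

```python
# Illustrative record of an AI-influenced decision. Field names are
# hypothetical; no Australian law currently requires this metadata.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class DecisionRecord:
    subject_id: str                # affected individual (pseudonymised)
    model_version: str             # which model influenced the decision
    decision: str                  # outcome communicated to the individual
    rationale: str                 # plain-language explanation (transparency)
    human_reviewer: Optional[str]  # who can re-examine it (contestability)
    review_channel: str            # how the individual can contest it
    timestamp: str                 # when the decision was made

def record_decision(subject_id: str, model_version: str, decision: str,
                    rationale: str, human_reviewer: Optional[str],
                    review_channel: str) -> DecisionRecord:
    """Capture the minimum needed to explain and contest a decision."""
    return DecisionRecord(subject_id, model_version, decision, rationale,
                          human_reviewer, review_channel,
                          datetime.now(timezone.utc).isoformat())

# Example: a welfare eligibility decision an individual could contest.
rec = record_decision("subject-1042", "eligibility-model-v3", "declined",
                      "Reported income exceeded the threshold for the period.",
                      "case.officer@agency.example", "internal review form IR-7")
print(rec.rationale, "| contest via:", rec.review_channel)
```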

It also puts additional pressure on already stretched regulators such as the OAIC.

Defence and national-security AI

Lastly, NAP is ‘civilian oriented’: Australia lacks a publicly articulated framework for military, defence, dual-use, or national-security AI governance, even though peer nations (US, UK, EU, Singapore) explicitly integrate defence considerations or maintain separate defence AI strategies. This is worrisome.

Conclusion

NAP is a credible and coherent strategic document with substantial socio-technical benefits: national coordination, sovereign capability, worker-centred policy, public-sector uplift, and inclusive AI diffusion. It positions Australia to participate more actively in the global AI landscape.

NAP also leaves dangerous gaps. The absence of enforceable safety rules, AI assurance infrastructure, sector-specific oversight, procurement standards, enforcement authority and privacy safeguards, together with unclear government roles and responsibilities, creates systemic risk.

NAP nods toward safety without building the machinery necessary to enforce it. NAP is aspirational; it does not by itself ensure or build resilience. Australia will still need the regulatory, technical, and institutional backbone that transforms NAP from vision to real protection.

[1] However, we already see AI redundancies and sectoral fears. For example, CBA revealed in July that it would make 45 roles in its customer call centres redundant because of a new bot system it had introduced, then reversed the decision after deciding it needed the humans to cope with its growing workloads.

[2] In broad strokes, to ‘democratise’ AI is the notion that everyone and every organisation (regardless of socio-economic status or technical skill, and including companies and organisations without specialised or extensive IT) can have the same access to AI tools, workflows, and benefits.

[3] While Robodebt was not an AI system making autonomous decisions, it relied on algorithmic bias without proper testing, safety, or human-in-the-loop controls. See the Royal Commission into the Robodebt Scheme.

Top five picks for making privacy a priority (PAW 2021)

By Lisa Hooper and Chong Shao

The 2021 Privacy Awareness Week (PAW) is scheduled for 3-9 May. The OAIC’s PAW theme is Make Privacy a Priority. Its recent survey of Australian attitudes towards key privacy issues revealed that most Australians have a clear understanding of why they should protect their personal information (85% agree) but almost half say they don’t know how (49% agree).

This year, the OAIC has published its privacy tips for the home and the workplace. In this post, we have compiled our top picks that align with the OAIC’s message for workplaces, along with our own commentary.

IIS top five picks

1. Making privacy a priority starts from the top

  • OAIC message: A strong leadership commitment to a culture of privacy is reflected in good privacy governance 

  • IIS view: Privacy needs to be front-of-mind for boards 

Good privacy governance enables innovative and trustworthy uses of personal information. This in turn promotes both performance (e.g., improving productivity, offering new digitally enabled products and services) and compliance (e.g., meeting local and global privacy requirements, reducing and responding to privacy risks).

IIS believes that privacy should be a key consideration for boards, and that boards should provide clear direction for the executive team to implement a privacy management framework that sets out the organisation’s privacy governance.

It is important for organisations in this rapidly changing environment to adopt a forward-looking posture. Organisations should actively monitor the latest privacy developments – including in technology, law and policy – and consider how they may be impacted. Privacy performance and developments should also be reported back up to the board level. 

This is discussed extensively in the book The New Governance of Data and Privacy: Moving beyond compliance to performance, co-authored by Mike Trovato (Managing Director) and Malcolm Crompton AM (Lead Privacy Advisor) and published by the Australian Institute of Company Directors (AICD).

2. Reduce the risks of data breaches caused by human error and prepare a data breach response plan 

  • OAIC message:

    • Reduce the risks of a human error data breach by educating staff and putting controls in place

    • Ensure that your organisation is prepared and equipped for a data breach 

  • IIS view:

    • Educate, prepare, rehearse and assess

    • Your data breach response plan needs to be up to date and fit for purpose, including coverage of the third-party providers that would play a key role in the response.

Over the past year, data breaches have continued to increase around the world. Many cyber-attacks are occurring in the context of wider disruptions caused by COVID-19. As large portions of the professional workforce transitioned to working remotely, there was a significant reliance on personal devices and home networks to conduct work tasks, increasing organisations’ exposure to security threats.

It is vital for staff to continue to be educated on proper data handling and to receive privacy training for their respective roles. How an organisation handles a data breach can significantly impact its reputation; ensuring that staff are well equipped to recognise and report a breach is an important factor when implementing action plans for handling one.

IIS believes that now more than ever, organisations must ensure that staff are adequately trained, controls are regularly assessed and that data breach response plans are up to date and fit for purpose. Like other safety drills, the plan should be rehearsed periodically so that the organisation can respond efficiently and effectively if/when the real thing happens. It is better to be proactive than reactive. Is your organisation data breach ready?

3. Build in privacy by design (PbD)

  • OAIC message: Adopt a PbD approach to minimise, manage or eliminate privacy risks 

  • IIS view: Embedding PbD from the very start helps organisations with both privacy compliance and performance

IIS has been a strong advocate for PbD. We believe that implementing PbD strategically helps organisations to achieve their objectives while maintaining a high level of privacy protection. This saves the time and costs of “bolting on” measures down the track. Furthermore, PbD helps organisations to focus on user-centric practices that are key to building trust with customers and reducing privacy risk over the long run.

PbD should be prioritised in contexts where the value of the data and the associated privacy risks are high, for example: linking and matching big datasets, mobile location analytics, biometrics, and customer loyalty programs.

4. Put secure systems in place

  • OAIC message: Having strong and secure systems in place helps to protect personal information from misuse, loss or unauthorised access or disclosure

  • IIS view: Ensuring secure systems and appropriate controls are in place is one of the top priorities for preventing privacy breaches.

Cyber security and privacy should not be “set and forget”. Rather, organisations must regularly monitor the operation and effectiveness of their ICT security measures to ensure that they remain responsive to changing threats and vulnerabilities and other issues that may impact the security of personal information. This means cyber security is not just an IT issue, but a business and risk issue.

Having policy and procedure documents in place is necessary but not sufficient. Overall, organisations should focus on the basics: know your data assets, manage identity and least-privilege access, protect endpoints, train staff, and prepare for incidents. The sketch below illustrates the least-privilege principle in miniature.
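
As a minimal sketch of the least-privilege basic mentioned above (the roles, resources, and policy table are entirely hypothetical), access to personal information is denied unless it has been explicitly granted:

```python
# Illustrative deny-by-default access check. Roles, resources, and the
# policy table are hypothetical; real systems enforce this in an identity
# and access management layer, not in application code like this.
ACCESS_POLICY = {
    # (role, resource) pairs that are explicitly permitted
    ("support_agent", "customer_contact_details"): {"read"},
    ("billing_officer", "payment_records"): {"read", "update"},
    ("privacy_officer", "breach_register"): {"read", "update"},
}

def is_permitted(role: str, resource: str, action: str) -> bool:
    """Least privilege: deny unless (role, resource, action) is explicitly allowed."""
    return action in ACCESS_POLICY.get((role, resource), set())

# A support agent can read contact details but cannot touch payment records.
assert is_permitted("support_agent", "customer_contact_details", "read")
assert not is_permitted("support_agent", "payment_records", "read")
```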

5. Undertake a PIA

  • OAIC message: A PIA is an essential tool for protecting privacy, identifying solutions and building trust 

  • IIS view: A PIA is an essential component of the organisation’s risk management process

A privacy impact assessment (PIA) is an assessment of new or changing technologies (e.g., adopting a new CRM system), products (e.g., introducing a location-based customer service) and/or operational processes (e.g., revising the data governance policy) that might have an impact on the privacy of individuals.

When the organisation proposes to introduce a new project that involves (or could involve) the handling of personal information, it could lead to both anticipated benefits and unanticipated consequences. Conducting a PIA prior to and during the project – as part of the project’s overall risk management processes – can ensure that privacy risks are considered and that the potential impacts are mitigated.

In IIS’s experience, a PIA is more than a compliance check. Conducting a PIA can provide organisations with a wider view on privacy throughout the business, which in turn can help organisations improve their privacy practices beyond the single project under review.

Participating in PAW 2021

IIS will once again be proudly supporting PAW this year. We have previously partnered with our clients during PAW to deliver presentations and participate in live Q&A sessions. If you have been considering taking steps to raise privacy awareness and/or strengthen your organisation’s privacy practices, participating in PAW is an excellent starting point.

Sign up today with the OAIC or contact IIS to help you and your organisation make privacy a priority.