
Australia’s National AI Plan: Big Vision, Missing Guardrails


By Mike Trovato

On 2 December 2025, the Australian government released the National AI Plan (NAP). NAP arrives at a pivotal moment, when artificial intelligence (AI) is the technology pathway of choice for organisations of all kinds, touted as rapidly reshaping economic structures, labour markets and critical digital infrastructure.

NAP is ambitious in scope: expand AI’s economic opportunity, ensure its benefits are widely distributed, and keep Australians safe as the technology becomes embedded in daily life, essential services and banking. NAP frames AI not merely as a tool for productivity, but as a democratising national capability requiring coordinated investment in skills, compute, public-sector transformation and international alignment (albeit without accompanying laws and regulations).

But there are legitimate concerns and questions about it. John Pane, Electronic Frontiers Australia Chair, said in a recent blog post, “We need strong EU-style ex ante AI laws for Australia, not a repeat of Australia’s disastrous ‘light touch’ private sector privacy regime introduced in 2000. We need to also resist the significant geo-political pressure being brought to bear on Australia and others by the Trump administration, forcing sovereign nations to adopt US technology ‘or else’.”

Most importantly from an IIS perspective, NAP puts additional pressure on already stretched regulators such as the Office of the Australian Information Commissioner (OAIC), which will bear the brunt of the enforcement burden without a commensurate increase in funding.

What is it?

The core architecture of NAP is built around three pillars:

  1. Capture the opportunity – Increase Australia’s AI capability through sovereign compute access, industry investment, research support, and a workforce strategy that emphasises inclusion and long-term adaptability.

  2. Spread the benefits – Ensure AI adoption occurs not just in major corporations and government agencies but across regions, small businesses, social sectors, and public services. The Plan closely links AI growth to social equity, union negotiation, and regional skills pipelines. [1]

  3. Keep Australians safe – Establish the Australian AI Safety Institute, enhance standards alignment, and build frameworks for responsible, trustworthy AI across public and private sectors.

This structure mirrors the strategies of peer nations such as the UK, Singapore and Canada, with some notable omissions. It does provide unity: a national vision that integrates economic development with safety, fairness and social wellbeing.

Socio-technical benefits

National coordination

Australia has struggled with fragmented digital and AI policy, spread across departments, agencies, and states. NAP moves toward a unified national architecture. This could reduce duplication and create a reference point for regulators, industry, and research institutions.

Investment in sovereign AI capability

By emphasising compute infrastructure, cloud capacity, and research ecosystems, NAP begins shifting Australia from AI consumer to AI contributor. This infrastructure matters: without sovereign compute access, Australia risks dependency on foreign technology decisions, third-party vendors (with concentration risk) and data-handling practices.

Worker protections and social equity

Few national AI strategies foreground labour and social outcomes as explicitly as NAP. It integrates unions, worker transition programs, and protections for vulnerable groups. This ensures AI adoption considers societal impacts, not solely economic metrics. That said, as noted above, we have already seen missteps in this area, and fear is very much front of mind for workers in several sectors.

By targeting small businesses, local councils and not-for-profits, NAP attempts to democratise AI adoption [2], reducing the risk of AI-driven inequality between large and small organisations. This will be challenging given the trust issues many Australians have with AI, and community attitudes to privacy.

Public sector modernisation

NAP emphasises AI-enabled public services such as health, education, welfare, and transport. When deployed safely, AI can increase accessibility, reduce administrative burden, and improve service delivery in remote and underserviced communities. Yes, this assumes a level of accountability and testing we did not see in Robodebt [3], and yes, there will be privacy concerns, as we saw with Harrison.AI.

Socio-technical gaps

Despite its strengths, NAP contains structural weaknesses that carry real risk. The most significant dangers correspond to gaps in regulation, governance, and implementation.

Legal obligations and assurance

Unlike the EU AI Act or the US frameworks that mandate safety testing, reporting, and restrictions, NAP contains no enforceable legal obligations for high-risk AI systems. The Australian AI Safety Institute is promising but undefined. Without standards, authority, or enforcement powers, Australia risks deploying AI in financial services, healthcare, policing, and welfare without adequate safeguards.

Assurance is another area of potential harm for individuals. Globally, AI assurance (the independent evaluation of robustness, bias, safety and regulatory compliance) is becoming essential and, in some cases, mandated by law. NAP does not define:

  • Assurance requirements

  • AI audit processes (or appropriate depth)

  • Documentation requirements

  • Pre-deployment testing

  • Model lifecycle controls

  • Ongoing continuous monitoring

  • Evaluation methods for generative AI.

Without an assurance regime, high-risk AI may be deployed in opaque, untested, or unsafe ways.

Risk identification and treatment

NAP does not specify which AI systems should be considered ‘high risk’ in banking, payments, energy, digital identity, critical infrastructure, healthcare, legal, national security or property systems.

Other nations treat critical infrastructure AI as a national security concern requiring heightened controls. Australia does not. The result could be AI-driven failures or exploitation in systems foundational to economic stability and social trust.

Government procurement is one of the most powerful levers for enforcing safe AI. The US and UK require impact assessments and supplier compliance with AI safety principles. NAP includes none of this. Australia may inadvertently purchase unsafe or non-compliant systems, embedding risks such as bias, discrimination or human harm within essential public functions.

NAP does not specify:

  • Which agency oversees AI risks in each sector

  • How regulators coordinate

  • How compliance will be enforced

  • Incident reporting for AI failures

  • Enforcement authority.

This creates a governance vacuum. In high-stakes, high-risk domains, unclear jurisdiction leads to slow response, regulatory drift, and systemic risk.

Possible privacy concerns

NAP touches privacy indirectly. Potential gaps remain:

  • No new privacy protections tailored to AI-enabled data processing.

  • No guidance on model training using personal or derived data, or on data use and consent.

  • No restrictions on biometric surveillance, emotional analytics, or behavioural prediction.

  • No provisions for transparency, contestability, opt-out, or rights when AI makes or influences decisions.

This leaves individuals exposed, particularly in welfare, policing, employment, and health contexts where Australia already has a history of algorithmic harm.

It also puts additional pressure on already stretched regulators such as the OAIC.

Defence and national-security AI

Lastly, NAP is ‘civilian oriented’: Australia lacks a publicly articulated framework for military, defence, dual-use, or national-security AI governance, even though peer nations (US, UK, EU, Singapore) explicitly integrate defence considerations or maintain separate defence AI strategies. This is worrisome.

Conclusion

NAP is a credible and coherent strategic document with substantial socio-technical benefits: national coordination, sovereign capability, worker-centred policy, public-sector uplift, and inclusive AI diffusion. It positions Australia to participate more actively in the global AI landscape.

NAP also leaves dangerous gaps. The absence of enforceable safety rules, AI assurance infrastructure, sector-specific oversight, procurement standards, enforcement authority and privacy safeguards, together with unclear government roles and responsibilities, creates systemic risk.

NAP nods toward safety without building the machinery necessary to enforce it. NAP is aspirational: it does not, on its own, ensure safety or build resilience. Australia will still need the regulatory, technical, and institutional backbone that transforms NAP from vision to real protection.

[1] However, we are already seeing AI redundancies and sectoral fears. For example, CBA revealed in July that it would make 45 roles in its customer call centres redundant because of a new bot system it had introduced, then reversed the decision after concluding it needed the humans to cope with its growing workloads.

[2] In broad strokes, to ‘democratise’ AI equates to the notion that everyone and every organisation, regardless of socio-economic status or technical skill, and including companies and organisations without specialised or extensive IT, can have the same access to AI tools, workflows, and benefits.

[3] While Robodebt was not an AI making autonomous decisions, it relied on a biased algorithm without proper testing, safety, or human-in-the-loop controls. See the Royal Commission into the Robodebt Scheme.


What’s next for Australian privacy regulation – Reflections on PAW 2025


By Chong Shao

On Monday, 16 June 2025, IIS joined other IAPP members in Sydney for the launch of Privacy Awareness Week. Together we heard an address from, and fireside conversation with, Privacy Commissioner Carly Kind.

The past 12 months have been eventful for Commissioner Kind and the Office of the Australian Information Commissioner (OAIC).

At the Sydney PAW launch, Commissioner Kind gave further remarks about her office’s regulatory approach, given the current technological landscape and the uncertain timeframe of further privacy legislative reform.

This post summarises the key themes from those remarks, along with some practical takeaways to help you navigate both privacy compliance and good practice today.

1. The Commissioner takes a holistic view of privacy that emphasises organisational accountability and power imbalances

Throughout her remarks, Commissioner Kind highlighted the need for a broader conception of privacy than simply the protection of personal information.

This broader notion of privacy – autonomy to make decisions, free from interference and intrusion – is more important than ever in a world that is marked by technology that is always-on, collects data passively, subtly conditions our thoughts and behaviours, and removes friction from all manner of experiences.

Commissioner Kind noted that the problem is not that people aren’t aware of the importance of privacy these days, but that they feel helpless, fatalistic and disempowered. In pushing back against overreliance on individual responsibility for privacy, she memorably invoked a climate change analogy – ‘privacy settings are the plastic straws of the privacy world’.

Instead, Commissioner Kind wants entities to take accountability for doing the right thing in the first place, and for various groups and associations in our society to leverage their power as a counterbalance and advocate for more privacy-friendly approaches.

She noted that the scale of technological impact is a novel problem in our era, and that this informs her thinking with respect to regulatory priorities. In particular, she foreshadowed that her office will be looking at spaces where there are power disparities between individuals and organisations. As examples, she listed credit reporting, data brokerage and emerging technologies (such as AI and biometrics).

Practical takeaways:

  • Take accountability as an organisation to embed privacy into your culture and practice:

    • Set privacy culture from the top through strong messaging and financial investment in privacy; and

    • Limit over-collection of data and destroy what you don’t need.

  • Undertake a privacy review to identify potential gaps and opportunities to improve practice.

2. The Commissioner is committed to using the full toolkit of her regulatory powers

On the topic of enforcement, Commissioner Kind gave some additional thoughts on the powers now available to her office.

She noted that the power to issue infringement notices is limited to a relatively narrow set of APPs (e.g., Privacy Policy deficiencies, failure to offer direct marketing opt-out). However, it could potentially be used as part of a ‘compliance scan’ of a particular sector or market in relation to those privacy practices. This is similar to what the Australian Competition and Consumer Commission (ACCC) and the UK’s Information Commissioner’s Office (ICO) have done in the past.

Commissioner Kind reiterated that her office will prioritise enforcement action for violations that are persistent, egregious and/or manifest in real-life harms, as well as in places where intervention is likely to change market practices or help clarify aspects of policy or law.

She flagged that her office will be conducting more investigations and making more determinations this year, as well as taking more enforcement actions in a similar vein to the recent Australian Clinical Labs (ACL) and Medibank cases.

In response to an audience member question about what more can be done to get the C-suite to appreciate the importance of privacy, Commissioner Kind recognised the power of fines to highlight the risk of not taking sufficient action. She remarked matter-of-factly that her office is seeking to extract the largest fines possible.

Practical takeaways:

  • Review your Privacy Policy to ensure it is compliant, up-to-date and fit for purpose.

  • Revisit your organisation’s privacy risk appetite and posture (including raising this at the Board level), in light of the large fines now available under the Privacy Act and the OAIC’s more proactive enforcement stance.

3. The Commissioner recognises the importance of regulatory certainty and is willing to go to court to obtain it

One of the more interesting threads was what Commissioner Kind thought about her office’s role in providing regulatory certainty. She recognised that regulatory certainty is important because it helps entities know how to comply with the law and to innovate confidently.

Compliance can be challenging without guidance, examples and ultimately court cases that provide a firm interpretation and application of the law.

To this end, Commissioner Kind indicated that she not only wants to develop clear guidance and make regulatory decisions, but she also wants to actively pursue court cases (including inviting challenges to her investigations and determinations) that will either endorse or repudiate the OAIC’s position.

In taking this approach, it appears that she considers court cases to be a ‘win-win’ scenario – even if the court rejects the OAIC’s interpretation, this still moves the ball forward in terms of clarifying the law for everyone.

Commissioner Kind pointed to current cases on foot in the Federal Court (ACL, Medibank) that could bring more clarity on what is considered reasonable security steps under APP 11.1.

She also flagged other areas where regulatory and judicial interpretation is desirable:

  • Definition of personal information and what is ‘de-identification’ – especially in the relatively unchecked practices of data tracking and profiling where she is keen to establish clearer ‘red lines’ for that industry.

  • Definition of ‘reasonable expectations’ in the context of APP 6.2, which permits the use or disclosure of personal information for a secondary purpose where it is related to the primary purpose of collection, and it is reasonably expected by the individual.

Practical takeaways:

  • Keep watching this space for potential clarifications and (re-)interpretations of the current law, especially during a time when privacy law reforms are on a slow burn. [1]

4. The Commissioner is interested in fresh interpretations of current principles in the Privacy Act to keep pace with today’s privacy challenges

Speaking of the current law, the most significant insights from Commissioner Kind came when she was reflecting on how to make the most of the Privacy Act that we have, given the slow pace of legislative reform.

Commissioner Kind noted that many of the terms in the Act and the APPs are flexible in nature. She considered that they should be subject to a ‘purposive interpretation’ to keep pace with modern privacy risks and harms.

The key examples she gave come from APP 3, the collection principle:

  • APPs 3.1 (for agencies) and 3.2 (for organisations) posit that collection must be reasonably necessary for the entity’s functions or activities; and

  • APP 3.5 states that collection must take place via lawful and fair means.

Commissioner Kind noted that the language of ‘reasonably necessary’ and ‘lawful and fair’ approximates the ‘fair and reasonable’ test that has been proposed by the Privacy Act Review.

To consider what is reasonably necessary is to engage in an exercise of gauging reasonableness, proportionality and necessity. To consider what is fair is to incorporate notions of community values that evolve over time and adapt to changing circumstances.

Commissioner Kind gave an example of what (un)fairness could look like in the digital era – the scraping of publicly available information, bringing it together for profiling, and supporting predatory business practices. Assessing fairness should go beyond the technical means of collection and extend to the purposes for which the collection takes place.

The Commissioner’s views cut against a legalistic and ‘minimum compliance’ reading of the current Privacy Act. Instead, she has laid down the challenge for organisations to take a ‘commonsense’ and proportionate approach to personal information collection and handling.

Practical takeaways:

  • Use commonsense and apply the ‘pub test’ to assess whether a proposed collection of personal information is reasonably necessary and fair.

  • With any personal information handling activity, ask ‘should we do this’ and not just ‘can we do this’.

  • Map how personal information collection leads to downstream uses and disclosures.

If you have any questions on the Privacy Act and its impact on your organisation, or would like assistance with any of the practical takeaways, please contact us. You can also subscribe to our newsletter to receive updates on the latest privacy developments, including law reform changes, further guidance and new interpretations.


[1] Asked about the status of the Tranche 2 reforms, Commissioner Kind observed that the timing was a matter for the Attorney-General’s Department. She did note that she had met with the new Attorney-General, the Hon Michelle Rowland MP, and was encouraged by her background and interest in privacy and digital regulation.


Key takeaways from the Privacy Amendment Bill 2024


By Chong Shao

The Australian Government has introduced the Privacy and Other Legislation Amendment Bill 2024, as part of the first tranche of its long-awaited response to the Privacy Act Review. We knew that progress would be measured in years, and so far this is proving out.

The headline changes touted by the government include:

  • A new statutory tort to address serious invasions of privacy.

  • Development of a Children’s Online Privacy Code to better protect children from online harms (accompanied by further funding to support the OAIC in developing the code).

  • Greater transparency for individuals regarding automated decisions that affect them.

  • Streamlined and protected sharing of personal information (PI) in situations of a declared emergency or eligible data breach.

  • Stronger enforcement powers for the Australian Information Commissioner.

  • A new criminal offence to outlaw doxxing (i.e., the malicious release of personal data online that could enable individuals to be identified, contacted, or located).

For many, these reforms are modest and therefore disappointing, given the scope and duration of the Privacy Act Review.

Notably missing from the Bill are:

  • Any update to the definition of PI.

  • Inclusion in the Bill of the four elements, along EU GDPR lines, that make consent valid.

  • The introduction of a ‘fair and reasonable test’ for the handling of PI.

  • A requirement for APP entities to conduct a Privacy Impact Assessment for activities with high privacy risks.

  • The right for individuals to request erasure of their PI.

Also missing is one of the more contentious recommendations, the gradual removal of the small business exemption.

On the other hand, the changes represent a moderate progression from the status quo; they need to be monitored closely and will likely have bigger implications over time.

Some key takeaways:

1. Privacy as a major intersection point

The Bill confirms that privacy sits at the intersection of the major technological and societal issues of our time.

For example:

  • The statutory tort introduces a cause of action for individuals against another person or organisation where there is a serious invasion of privacy – organisations should be aware of this provision (no small business exemption here!), although it should not be an issue if they are focused on “doing the right thing”.

  • A Children’s Online Privacy Code will be developed alongside other initiatives in the online safety space, including Online Safety Codes and the eSafety Commissioner’s research and work on age assurance.

  • Greater transparency regarding automated decision-making comes as part of a broader push by the government around promoting safe and responsible AI.

  • The streamlining of PI sharing in emergency and eligible data breach scenarios is a welcome move but will have to be considered alongside notification requirements in other laws and schemes such as the Security of Critical Infrastructure Act 2018, Data Availability and Transparency Act 2022, and APRA’s Prudential Standard CPS 234 Information Security.

The Bill is a microcosm of the complex privacy, cyber security, and digital regulatory landscape that is taking shape in Australia. The picture is getting (understandably!) complicated, and the Bill contributes to this.

2. Enforcement will matter more

The government’s touting of ‘stronger enforcement powers’ for the Australian Information Commissioner is a bigger deal than it appears on the surface.

On closer inspection, the Bill provides a series of changes that enable more flexible and effective enforcement of the Privacy Act:

  • A civil penalty provision for interference with privacy of individuals (not just ‘serious’ interference).

  • Separately, the civil penalty for serious interference with privacy of individuals is retained, with better elaboration of factors that may be considered in determining if the interference is serious.

  • The Commissioner may seek civil penalty orders and issue infringement notices for breaches of certain Privacy Act provisions and certain Australian Privacy Principles (APPs).

  • Additional monitoring and investigation powers.

One of the biggest issues with compliance and enforcement of the Privacy Act has been the relative lack of flexibility with the existing law, where there is a (recently strengthened) civil penalty provision for ‘serious and repeated interferences with privacy’. OAIC enforcement actions have been few and far between, typically reserved for ‘high profile’ cases such as Meta (Facebook), Medibank, and Australian Clinical Labs.

These changes to the Privacy Act, especially in relation to civil penalty orders and infringement notices, provide the OAIC with a bigger ‘toolkit’ to enforce breaches of the Privacy Act and the APPs.

Privacy Commissioner Carly Kind, in a Privacy Awareness Week Sydney event earlier this year, spoke of the ‘exciting opportunity for the OAIC to become a more enforcement-based regulator’. During the Q&A, she noted that for the first time in a decade there are three dedicated commissioners, and that they would be thinking a lot more about how to conduct proactive and proportionate enforcement.

This was confirmed by the OAIC’s Corporate plan 2024-25, which commits the OAIC to a ‘risk-based, education and enforcement-focused’ posture.

The true effectiveness of the regulator will depend on the extent to which it is sufficiently resourced. We have been advocating for greater funding for the OAIC for over a decade in speeches, forums and submissions. We eagerly await the next budget to see whether the government will put its money where its mouth is and show that it is indeed serious about ‘ensuring the Privacy Act works for all Australians and is fit for purpose in the digital age’.

Nevertheless, the Bill and the OAIC’s recently publicised posture demonstrate a clear intent and capability for the regulator to conduct more enforcement. Organisations should take note.

3. Keep sticking to the basics

The Privacy Act Review was flagged five years ago, as part of the ACCC’s 2019 Digital platforms inquiry. In the meantime, organisations are facing an increasingly challenging environment:

  • Cyber security incidents (including data breaches) continue to increase in size and scale, and bad actors grow more sophisticated.

  • The growing data economy and technologies like AI heighten business pressures to collect and use personal information, while exposing organisations to greater data governance risks.

  • Australians care more than ever about privacy – according to the OAIC’s Australian Community Attitudes to Privacy Survey 2023, 82% of respondents care enough about protecting their PI to do something about it, and 84% want more control and choice over the collection and use of their PI.

It has been a slow and winding journey to reach the first tranche of changes to the Privacy Act. 

Our key takeaway is not to get over-excited, nor complacent. Not over-excited, because in many ways these are modest changes that will take time to realise their full effects. Not complacent, because the Bill heralds a new era of enforcement for the OAIC, including compliance with the existing Privacy Act and its APPs.

Instead, we think it is best to keep calm and stick to the basics. This means:

  • Assess your privacy practices against the existing APPs, with a focus on PI collection and handling practices, and ensure you are taking ‘reasonable steps’ (including technical and organisational measures) to secure and protect personal information. [1]

  • Know what PI (including sensitive information) you have now, where it is, whether you should still have it and the ways in which you are using it.

  • Assess cyber security risks and controls and consider certification against relevant standards.

  • Establish an improvement and remediation plan based on the findings from the first three points.

Putting the foundations in place now will give you a simpler path to compliance and good practice for both the current legislative requirements and the new requirements to come, including whatever Tranche Two will bring.

IIS can help

IIS and our subsidiary TrustWorks 360 can help you:

  • Navigate the complexity of the privacy, cyber security, and digital regulatory landscape.

  • Get the basics right and help you comply with current and incoming requirements, to satisfy customer expectations and to avoid regulator scrutiny and enforcement.

  • Move beyond compliance to performance and resilience that builds trust and achieves business objectives in a fast-changing world.

Why? Because as we have said at IIS for two decades, “It is just good business.”

Please contact us if you have any questions about the Privacy Act reforms and how they may affect your organisation. You can also subscribe to receive regular updates from us about key developments in the privacy and security space.


[1] In a separate interview, Commissioner Kind discussed the OAIC’s enforcement action against Medibank, for activities leading up to the data breach. The OAIC is making the case that Medibank didn’t take ‘reasonable steps’ to protect the personal information it collected and held. Reasonable steps are described as:

  • State of the art security

  • Good governance

  • Organisational responsibilities.
