
From ‘Black Box’ to Transparency: the Privacy Act’s New ADM Rules


By John Pane

The Privacy and Other Legislation Amendment Act 2024 (POLA Act) marked the first stage of the federal government’s phased approach to implementing the changes to the Privacy Act 1988 (Cth) (Privacy Act) proposed in the Government’s response to the Privacy Act Review Report.

One of the deceptively ‘simple’ changes to the Privacy Act features in Part 15 of Schedule 1 to the POLA Act, which introduced new transparency requirements for Automated Decision-Making (ADM) undertaken by APP entities and government agencies. These reforms, which come into effect on 10 December 2026, specifically:

  • target how organisations use algorithms, AI and computer programs to make decisions that impact individuals;

  • seek to prevent ‘black box’ decision-making by forcing organisations to be open about when and how they use these technologies; and

  • require organisations to make all necessary updates to their privacy policies.

What does Part 15 of the POLA Act say?

Part 15 of the POLA Act provides that organisational privacy policies must be amended for greater transparency under APP 1 where all of the following apply (sketched as simple logic after this list):

  • the organisation uses a ‘computer program’ to either:

    • perform decision-making functions in a fully automated manner (without a human decision-maker); or

    • substantially and directly assist human staff to make decisions;

  • the computer program uses personal information about that individual to perform its function (i.e. to either make the decision, or to assist the human decision-maker); and

  • the decision could ‘reasonably be expected to significantly affect the rights or interests of an individual’.
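
Read as logic, the trigger is a conjunction of three limbs. The Python sketch below is illustrative only and not a legal test: every field and function name is our own assumption, and the ‘significant effect’ limb is taken as a given input here, with its assessment discussed in the next section.

from dataclasses import dataclass

@dataclass
class AdmUseCase:
    """Illustrative record of a single ADM use case. All field names are
    our own assumptions; the Privacy Act does not prescribe this structure."""
    uses_computer_program: bool        # any automation, not only AI or LLMs
    fully_automated: bool              # decision made without a human decision-maker
    substantially_assists_human: bool  # 'substantially and directly' assists a human
    uses_personal_information: bool    # personal information feeds the function
    significant_effect: bool           # materiality threshold; see the next section

def requires_app1_disclosure(uc: AdmUseCase) -> bool:
    """Rough reading of the Part 15 trigger: all three limbs must hold."""
    limb_one = uc.uses_computer_program and (
        uc.fully_automated or uc.substantially_assists_human)
    limb_two = uc.uses_personal_information
    limb_three = uc.significant_effect
    return limb_one and limb_two and limb_three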

Defining the scope: What is a ‘computer program’?

The term ‘computer program’ is not restricted to emerging technologies like Generative AI or Large Language Models (LLMs). Instead, the POLA 2024 Bill Explanatory Memorandum clarifies that it encompasses a wide spectrum of automation, ranging from sophisticated machine learning to basic rule-based logic.

 While these reforms have obvious implications for emerging technologies such as autonomous AI agents, they also have the potential to capture a broad range of simpler automation use cases that are already widely used, such as:

  • software that assesses input data against pre-defined objective criteria and then applies business rules based on those criteria (e.g. whether to approve or reject an application);

  • software that processes data to generate evaluative ratings or scorecards, which are then used by human decision makers (e.g. predictive analytics); and

  • robotic process automation (which uses software to replace human operators for simple and repetitive rule-based tasks, such as data entry, data extraction and form filling).

If a tool (such as predictive analytics, algorithmic sorting or legacy macros) leverages personal information to either make a final determination or provide ‘substantial and direct assistance’ to a human decision-maker, it may fall within the scope of Part 15.

The materiality threshold: Assessing ‘significant effect’

Not every automated process requires disclosure in the organisational privacy policy. The obligation is triggered when a decision could ‘reasonably be expected to significantly affect the rights or interests of an individual.’

The POLA Act emphasises that the impact must be more than trivial and have the potential to significantly influence an individual’s circumstances. Management must conduct case-by-case assessments to determine whether their use cases meet this threshold. The legislation provides a non-exhaustive list of high-impact domains:

  • Legal and Statutory Benefits: Decisions regarding the granting or revocation of benefits under law.

  • Contractual Rights: Decisions affecting insurance policies, credit facilities, or service agreements.

  • Access to Essential Services: Impacting an individual’s ability to access healthcare, education, or significant social supports.

For organisations operating in regulated sectors – such as financial services, healthcare, and law enforcement – the majority of automated processes are likely to meet this threshold. However, general corporate functions, such as automated fraud detection or facial recognition for security, must also be evaluated under the same materiality lens.
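
As a rough illustration of that case-by-case assessment, the sketch below screens a use case against the high-impact domains listed above. The domain labels and the more-than-trivial impact flag are our own simplifications of the statutory language, and a positive result signals the need for a proper legal assessment rather than a conclusion.

# Non-exhaustive high-impact domains drawn from the examples above;
# the short labels are our own, not statutory terms.
HIGH_IMPACT_DOMAINS = {
    "legal_or_statutory_benefits",  # granting or revoking benefits under law
    "contractual_rights",           # insurance, credit, service agreements
    "essential_services",           # healthcare, education, social supports
}

def warrants_significant_effect_review(domain: str,
                                       impact_more_than_trivial: bool) -> bool:
    """Crude screening heuristic, not legal advice: a listed high-impact
    domain, or any more-than-trivial impact on an individual's
    circumstances, should go to a proper case-by-case assessment."""
    return domain in HIGH_IMPACT_DOMAINS or impact_more_than_trivial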

Mandatory updates to privacy policies

The most immediate operational requirement arising from the ADM reforms is ensuring your APP Privacy Policy is ready for the new APP 1.7-1.9 obligations, which commence on 10 December 2026. From that date, an APP entity must include additional information in its privacy policy if it has arranged for a computer program to use personal information to make (or substantially and directly support) decisions that could reasonably be expected to significantly affect an individual’s rights or interests.

1. What must be included in the privacy policy

Where the threshold is met, the privacy policy must describe the kinds of:

  • Data Inputs: What categories of personal information are fed into the relevant computer programs?

  • Fully Automated Decisions: Which processes are handled entirely by technology without human intervention?

  • Assisted Decision-Making: Which processes involve human-in-the-loop models where a computer program provides the primary evaluative data?

2. Practical drafting tip

The Office of the Australian Information Commissioner’s (OAIC’s) framing in its APP 1 Guidelines is deliberately ‘kinds of’ rather than ‘every single model/rule’. For many organisations, the cleanest approach (sketched in code after this list) is to add an ‘Automated decisions’ (or similar) subsection in the privacy policy that:

  • lists the key decision areas that meet the ‘significant effect’ threshold (e.g. onboarding/eligibility, credit/insurance decisions, fraud outcomes that affect access, service access/termination), and

  • for each, summarises the kinds of personal information used and whether it is fully automated or substantially and directly assists a human decision-maker.
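
One way to keep such a subsection consistent with an internal ADM register is to generate the draft text from structured entries. The sketch below is a minimal illustration under our own assumed structure; the field names and wording are hypothetical, not OAIC-prescribed.

from dataclasses import dataclass

@dataclass
class AdmDisclosure:
    """One entry in a hypothetical internal ADM register."""
    decision_area: str               # e.g. "credit applications"
    kinds_of_information: list[str]  # kinds of personal information, not every field
    fully_automated: bool            # False = substantially and directly assists a human

def render_automated_decisions_section(entries: list[AdmDisclosure]) -> str:
    """Draft the 'Automated decisions' subsection from the register."""
    lines = ["Automated decisions", ""]
    for e in entries:
        mode = ("are made by a fully automated process"
                if e.fully_automated else
                "are made by our staff with substantial and direct "
                "assistance from a computer program")
        kinds = ", ".join(e.kinds_of_information)
        lines.append(f"- Decisions about {e.decision_area} {mode}, using the "
                     f"following kinds of personal information: {kinds}.")
    return "\n".join(lines)

# Example usage with illustrative entries:
print(render_automated_decisions_section([
    AdmDisclosure("credit applications", ["identity details", "credit history"], True),
    AdmDisclosure("insurance claims", ["claim details", "health information"], False),
]))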

Failure to update organisational privacy policies carries direct legal, financial and reputational risk. The OAIC has the authority to issue infringement notices for non-compliant policies.

Strengthening internal procedures and controls

Beyond reviewing and uplifting their privacy policies, impacted APP entities (including government agencies) should ensure their internal procedures and controls align with these new obligations. By way of example, this may include the following activities:

1. Automation mapping and inventory

Organisations should conduct a comprehensive audit of their ‘automation footprint.’ This involves identifying all instances where computer programs interact with personal information to influence outcomes. This inventory should distinguish between ‘back-office’ efficiencies (like data entry) and ‘evaluative’ functions (like credit scoring, tenancy applications, talent acquisition screening) that meet the materiality threshold.
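
A lightweight way to structure that inventory, sketched below with entirely hypothetical field names and entries, is a register that records each system’s function and surfaces the evaluative entries that use personal information and meet the materiality threshold.

from dataclasses import dataclass

@dataclass
class AutomationRecord:
    """One row of a hypothetical automation inventory."""
    name: str
    function: str            # "back_office" or "evaluative"
    uses_personal_info: bool
    meets_materiality: bool  # result of the case-by-case assessment

register = [
    AutomationRecord("invoice data entry bot", "back_office", False, False),
    AutomationRecord("credit scoring model", "evaluative", True, True),
    AutomationRecord("tenancy application screener", "evaluative", True, True),
]

# Evaluative systems that use personal information and meet the
# materiality threshold are the candidates for APP 1 disclosure.
for record in register:
    if (record.function == "evaluative"
            and record.uses_personal_info
            and record.meets_materiality):
        print(record.name)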

2. Risk Impact Assessments

In line with existing practices for conducting a Privacy Impact Assessment (PIA), organisations should incorporate automated decision-making into their analysis. This would evaluate the logic of the program, the quality of the data inputs, and the potential for biased or inaccurate outputs, particularly for vulnerable groups such as children or individuals with disabilities.

3. Human-in-the-loop verification

Where computer programs/applications provide ‘substantial assistance’ to humans, internal controls must verify that the human involvement is meaningful rather than a ‘rubber-stamping’ exercise. Documenting the level of human oversight is essential for demonstrating compliance with the assisted decision-making disclosure requirements.
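
One way to evidence meaningful involvement, sketched below with hypothetical fields and thresholds, is to log each human review and screen for patterns consistent with rubber-stamping, such as uniformly fast reviews that never overturn the program’s recommendation.

from dataclasses import dataclass

@dataclass
class ReviewEvent:
    """Audit record for one human review of a program's recommendation."""
    reviewer: str
    seconds_spent: float
    overrode_recommendation: bool

def looks_like_rubber_stamping(events: list[ReviewEvent],
                               min_typical_seconds: float = 30.0) -> bool:
    """Crude screening signal, not a compliance finding: flag review
    patterns that are uniformly fast and never overturn the program."""
    if not events:
        return False
    times = sorted(e.seconds_spent for e in events)
    midpoint = times[len(times) // 2]  # rough median review time
    never_overrides = not any(e.overrode_recommendation for e in events)
    return midpoint < min_typical_seconds and never_overrides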

Navigating ambiguities and future privacy reforms

While the POLA Act provides a clearer transparency framework, certain areas remain subject to interpretation. The ‘second tranche’ of Privacy Act reforms may further impact these more recent changes. For instance, the current exemption for employee records and the evolving definition of ‘personal information’ (regarding metadata, disambiguated data or biometric templates) may impact how ADM rules apply to internal workplace monitoring.

Furthermore, while the POLA Act does not currently mandate changes to APP 5 Collection Notices, the existing requirements of APP 5.2 may already necessitate disclosures if personal information is shared with third-party AI platforms or automation vendors. Management should therefore take a holistic view of their privacy compliance obligations across all organisational touchpoints of the personal information lifecycle.

IIS can help

IIS can help you with compliance and best practice in the ADM space. We can:

  • Undertake automation mapping and inventory activities.

  • Assess whether ADM processes meet the materiality threshold.

  • Conduct PIAs that assess ADM-related projects.

  • Prepare updates to privacy policies to enhance ADM transparency.

More broadly, we help organisations:

  • Navigate the complexity of the privacy, cyber security, and digital regulatory landscape.

  • Get the basics right and comply with current and incoming requirements, to satisfy customer expectations and to avoid regulator scrutiny and enforcement.

  • Move beyond compliance to performance and resilience that builds trust and achieves business objectives in a fast-changing world.

Why? Because as we have said at IIS for two decades, ‘It is just good business.’

Please contact us if you have any questions about the Privacy Act reforms and how they may affect your organisation. You can also subscribe to receive regular updates from us about key developments in the privacy and security space.


Australia’s National AI Plan: Big Vision, Missing Guardrails


By Mike Trovato

On 2 December 2025, the Australian government released the National AI Plan (NAP). NAP arrives at a pivotal moment: artificial intelligence (AI) is the hot technology pathway for organisations of all kinds, touted as rapidly reshaping economic structures, labour markets and critical digital infrastructure.

NAP is ambitious in scope: expand AI’s economic opportunity, ensure its benefits are widely distributed, and keep Australians safe as the technology becomes embedded in daily life, essential services, and banking. NAP frames AI not merely as a tool for productivity, but as a democratising national capability requiring coordinated investment in skills, compute, public-sector transformation, and international alignment (though notably without accompanying laws and regulations).

But there are legitimate concerns and questions about it. John Pane, Electronic Frontiers Australia Chair, said in a recent blog post, “We need strong EU style ex ante AI laws for Australia, not a repeat of Australia’s disastrous ‘light touch’ private sector privacy regime introduced in 2000. We need to also resist the significant geo-political pressure being brought to bear on Australia and others by the Trump administration, forcing sovereign nations to adopt US technology ‘or else’.”

Most importantly from an IIS perspective, NAP puts additional pressure on already stretched regulators such as the Office of the Australian Information Commissioner (OAIC), which will bear the brunt of the enforcement burden without a commensurate increase in funding.

What is it?

The core architecture of NAP is built around three pillars:

  1. Capture the opportunity – Increase Australia’s AI capability through sovereign compute access, industry investment, research support, and a workforce strategy that emphasises inclusion and long-term adaptability.

  2. Spread the benefits – Ensure AI adoption occurs not just in major corporations and government agencies but across regions, small businesses, social sectors, and public services. The Plan closely links AI growth to social equity, union negotiation, and regional skills pipelines. [1]

  3. Keep Australians safe – Establish the Australian AI Safety Institute, enhance standards alignment, and build frameworks for responsible, trustworthy AI across public and private sectors.

This structure mirrors the strategies of peer nations such as the UK, Singapore, and Canada, albeit with some notable omissions. It does provide unity: a national vision that integrates economic development with safety, fairness, and social wellbeing.

Socio-technical benefits

National coordination

Australia has struggled with fragmented digital and AI policy, spread across departments, agencies, and states. NAP moves toward a unified national architecture. This could reduce duplication and create a reference point for regulators, industry, and research institutions.

Investment in sovereign AI capability

By emphasising compute infrastructure, cloud capacity, and research ecosystems, NAP begins shifting Australia from AI consumer to AI contributor. This infrastructure matters: without sovereign compute access, Australia risks dependency on foreign technology decisions, third party vendors (with concentration risk) and data-handling practices.

Worker protections and social equity

Few national AI strategies foreground labour and social outcomes as explicitly as NAP. It integrates unions, worker transition programs, and protections for vulnerable groups. This ensures AI adoption considers societal impacts, not solely economic metrics. That said, as noted above, we have already seen missteps in this area, and fear is very much front of mind for workers in several sectors.

By targeting small businesses, local councils and not-for-profits, NAP attempts to democratise AI adoption [2], reducing the risk of AI-driven inequality between large and small organisations. This will be challenging given the trust issues many Australians have with AI and their attitudes to privacy.

Public sector modernisation

NAP emphasises AI-enabled public services such as health, education, welfare, and transport. When deployed safely, AI can increase accessibility, reduce administrative burden, and improve service delivery in remote and underserviced communities. Yes, this does assume a level of accountability and testing we did not see in Robodebt [3], and yes, privacy concerns will remain, as we saw with Harrison.AI.

Socio-technical gaps

Despite its strengths, NAP contains structural weaknesses that carry real risk. The most significant dangers correspond to gaps in regulation, governance, and implementation.

Legal obligations and assurance

Unlike the EU AI Act or the US frameworks that mandate safety testing, reporting, and restrictions, NAP contains no enforceable legal obligations for high-risk AI systems. The Australian AI Safety Institute is promising but undefined. Without standards, authority, or enforcement powers, Australia risks deploying AI in financial services, healthcare, policing, and welfare without adequate safeguards.

Assurance is another area of potential harm for individuals. Globally, AI assurance – the independent evaluation of robustness, bias, safety, and regulatory compliance – is becoming essential and, in some cases, mandated by law. NAP does not define:

  • Assurance requirements

  • AI audit processes (or appropriate depth)

  • Documentation requirements

  • Pre-deployment testing

  • Model lifecycle controls

  • Ongoing monitoring

  • Evaluation methods for generative AI.

Without an assurance regime, high-risk AI may be deployed in opaque, untested, or unsafe ways.
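
For contrast, the sketch below illustrates what even a basic pre-deployment assurance gate could look like in practice; the checks are our own illustration of the elements listed above, not anything NAP prescribes or mandates.

from dataclasses import dataclass

@dataclass
class AssuranceEvidence:
    """Hypothetical evidence bundle an assurance regime might require."""
    bias_evaluation_done: bool
    robustness_tested: bool
    documentation_complete: bool
    monitoring_plan_in_place: bool

def may_deploy(evidence: AssuranceEvidence) -> bool:
    """Illustrative gate: deployment proceeds only when every control
    is evidenced; anything missing blocks release."""
    return all([evidence.bias_evaluation_done,
                evidence.robustness_tested,
                evidence.documentation_complete,
                evidence.monitoring_plan_in_place])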

Risk identification and treatment

NAP does not specify which AI systems should be considered ‘high risk’ in banking, payments, energy, digital identity, critical infrastructure, healthcare, legal, national security or property systems.

Other nations treat critical infrastructure AI as a national security concern requiring heightened controls. Australia does not. The result could be AI-driven failures or exploitation in systems foundational to economic stability and social trust.

Government procurement is one of the most powerful levers for enforcing safe AI. The US and UK require impact assessments and supplier compliance with AI safety principles. NAP includes none of this. Australia may inadvertently purchase unsafe or non-compliant systems, embedding risks such as bias and discrimination, or allowing human harm, within essential public functions.

NAP does not specify:

  • Which agency oversees AI risks in each sector

  • How regulators coordinate

  • How compliance will be enforced

  • Incident reporting for AI failures

  • Enforcement authority.

This creates a governance vacuum. In high-stakes, high-risk domains, unclear jurisdiction leads to slow response, regulatory drift, and systemic risk.

Possible privacy concerns

NAP touches privacy indirectly. Potential gaps remain:

  • No new privacy protections tailored to AI-enabled data processing.

  • No guidance on model training using personal or derived data, or on data use and consent.

  • No restrictions on biometric surveillance, emotional analytics, or behavioural prediction.

  • No provisions for transparency, contestability, opt-out, or rights when AI makes or influences decisions.

This leaves individuals exposed, particularly in welfare, policing, employment, and health contexts where Australia already has a history of algorithmic harm.

It also puts additional pressure on already stretched regulators such as the OAIC.

Defence and national security

Lastly, NAP is ‘civilian oriented’: Australia lacks a publicly articulated framework for military, defence, dual-use, or national-security AI governance, even though peer nations (US, UK, EU, Singapore) explicitly integrate defence considerations or maintain separate defence AI strategies. This is worrisome.

Conclusion

NAP is a credible and coherent strategic document with substantial socio-technical benefits: national coordination, sovereign capability, worker-centred policy, public-sector uplift, and inclusive AI diffusion. It positions Australia to participate more actively in the global AI landscape.

NAP also leaves dangerous gaps. The absence of enforceable safety rules, AI assurance infrastructure, sector-specific oversight, procurement standards, enforcement authority and privacy safeguards, together with unclear government roles and responsibilities, creates systemic risk.

NAP nods toward safety without building the machinery necessary to enforce it. It is aspirational and does not of itself ensure or build resilience; Australia will still need the regulatory, technical, and institutional backbone that transforms NAP from vision to real protection.

[1] However, we already see AI redundancies and sectoral fears. For example, CBA revealed in July that it would make 45 roles in its customer call centres redundant because of a new bot system it had introduced, then reversed the decision after deciding it needed the humans to cope with its growing workloads.

[2] In broad strokes, to ‘democratise’ AI equates to the notion that everyone and every organisation – regardless of socio-economic status or technical skill or acumen, and including companies and organisations without specialised or extensive IT – can have the same access to AI tools, workflows, and benefits.

[3] While Robodebt was not an AI making autonomous decisions, it relied on a biased algorithm without proper testing, safety, or human-in-the-loop controls. See the Royal Commission into the Robodebt Scheme.
