Comment

The OAIC’s new approach: An enforcement memo in complaints clothing

By Chong Shao and Malcolm Crompton

On 2 March 2026, Privacy Commissioner Carly Kind published a post announcing a ‘new approach’ to how the Office of the Australian Information Commissioner (OAIC) will handle individual privacy complaints. At one level, the post is about complaint handling. IIS reads this as an enforcement memo in complaints clothing.

Commissioner Kind has made a statement about regulatory priorities: in an environment of growing privacy risks, rising complaint volumes, and constrained public resources, the OAIC intends to focus its effort where it can have the greatest impact. The complaints-handling changes flow from that. They are a consequence of the strategy, not the story.

What the OAIC has announced

Four elements of the announcement are worth noting, all of which point in the same direction:

  1. Enforcement focus is now the headline. The OAIC describes an intentional shift over the past 12 months toward a greater focus on enforcement, citing deterrent and educative benefits, and a desire for ‘maximum impact’ across sectors. The results are already tangible: a $5.8 million civil penalty against Australian Clinical Labs, civil penalty proceedings filed against Optus and Medibank, and a $50 million settlement from Meta Platforms.

  2. Complaint handling will be more selective and threshold-driven. Not all complaints will be taken through to investigation. The OAIC will conduct a ‘strategic assessment’ and may decide not to investigate after considering all circumstances, including regulatory priorities.

  3. Complainants are being coached to bring better-formed complaints. The OAIC has published checklists, templates, and is clear about what information is required from the outset (including what happened, when, and the impact).

  4. Timing expectations are being reset. As of February 2026, new validly lodged complaints are unlikely to be substantially progressed for 6-12 months. That is a frank admission, and a deliberate signal.

It’s rare for a regulator to be this candid about the trade-offs it is making. The OAIC isn’t just explaining process – it is publicly setting out why individual casework is being deprioritised in favour of enforcement.

So what for organisations?

IIS advises four things with respect to this shift in focus:

1. Don’t confuse ‘slower complaint handling’ with ‘lower risk’

The OAIC is concentrating its effort, not retreating from the field. Organisations whose practices generate repeated complaints or patterns of non-compliance are now more likely to attract attention, not less.

The relevant question isn’t whether your next complaint gets processed in three months or twelve. It’s whether your privacy practices are the kind the OAIC will decide are worth pursuing at scale.

2. Complaints will increasingly function as signals, not just casework

The OAIC is deliberately narrowing the front door. Complainants are being directed to raise matters with organisations first, to use alternative pathways where available, and to understand that even a well-formed complaint may not be investigated.

The practical effect is that organisations become the primary forum for resolution. The complaints that do reach the OAIC will increasingly arrive as signals of something worth looking at, not as individual grievances to be managed. Treat your complaint themes accordingly. A pattern of similar issues across customers or channels is exactly what an enforcement-focused regulator scans for.

3. This is consistent with the direction the OAIC has been signalling

None of this is a surprise. IIS’ reflections on Privacy Awareness Week 2025 highlighted Commissioner Kind’s emphasis on organisational accountability, systemic power imbalances, and a more proactive regulatory posture. The March 2026 post is another milestone on that same trajectory: greater willingness to use the regulator’s full toolkit, and a clearer focus on shaping organisational behaviour and resilience at scale.

The direction of travel is clear: privacy compliance is increasingly about governance and accountability, not just documentation and process.

4. Privacy complaint handling still matters

Finally, and straightforwardly, make sure your privacy complaint handling process is in good shape. The OAIC requires complainants to raise matters with the organisation first and allow 30 days for a response. That makes the organisation the first and most important forum for resolution. The process does not need to be elaborate – but it does need to reach the right people, produce a genuine response, and generate enough of a record to identify repeat issues. Pattern detection, at even a basic level, is now a governance capability.
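As an illustration of what even basic pattern detection can look like, the sketch below groups complaint records by theme and flags recurring issues. It is a hypothetical example (the record fields and thresholds are our assumptions, not an OAIC requirement), but it shows how little machinery is needed to start spotting the patterns an enforcement-focused regulator scans for:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Complaint:
    received: str   # date the complaint was received, e.g. "2026-02-14"
    channel: str    # e.g. "email", "call centre", "web form"
    theme: str      # e.g. "marketing consent", "access request", "data breach"

def recurring_themes(complaints, threshold=3):
    """Return complaint themes that recur at or above the threshold.

    A theme appearing repeatedly across customers or channels is the
    kind of signal worth escalating to privacy governance.
    """
    counts = Counter(c.theme for c in complaints)
    return {theme: n for theme, n in counts.items() if n >= threshold}

# Hypothetical complaint register entries
register = [
    Complaint("2026-01-10", "web form", "marketing consent"),
    Complaint("2026-01-22", "email", "marketing consent"),
    Complaint("2026-02-03", "call centre", "access request"),
    Complaint("2026-02-14", "email", "marketing consent"),
]

print(recurring_themes(register))  # {'marketing consent': 3}
```

The point is not the tooling but the discipline: keeping complaints in one structured register, tagged consistently, is what makes this kind of query possible at all.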

The way forward

Don’t read the Privacy Commissioner’s post as ‘complaints will take longer to process, so we can relax’. Read plainly, it signals the opposite: the OAIC is being explicit that it will deploy its resources toward enforcement and systemic impact. It will apply more robust thresholds to individual complaints to make that shift possible.

For organisations, the practical response has two dimensions. The first is operational: ensure privacy complaint handling is genuinely effective and allows for pattern detection over time. The second is strategic: treat complaint patterns as an early warning system for the kinds of systemic issues and market practices that the OAIC is now most focused on. That is where the real regulatory risk sits, and where board and executive attention should be directed.

IIS can help – if you would like assistance with this or any other privacy or data protection matters, please contact us.

Comment

FIIG and beyond: How regulators are converging on the same cyber standard

By Chong Shao

On 9 February 2026, the Australian Securities and Investments Commission (ASIC) announced that the Federal Court ordered FIIG Securities Limited to pay $2.5 million in pecuniary penalties, following ASIC action over cyber security failures spanning more than four years.

This is the first time the Federal Court has imposed civil penalties for cyber security failures under the general Australian Financial Services (AFS) licence obligations. ASIC didn’t treat this as a one-off IT mistake. Its message was simple: cyber resilience is now part of doing business.

Whether or not you are in financial services, this case is significant. Across Australia’s regulatory ecosystem, we are seeing a steady convergence towards a practical, outcomes-focused cyber security standard, often described as ‘reasonable steps’.

FIIG in brief and why this outcome matters

ASIC’s media release sets out the core narrative clearly:

  • FIIG’s failures related to protecting thousands of clients from cyber security threats over a sustained period.

  • A 2023 cyber-attack resulted in around 385GB of confidential information being stolen, with highly sensitive client data leaked online (including identity documents and financial identifiers).

  • FIIG notified around 18,000 clients that their personal information may have been compromised. 

  • FIIG admitted that adequate measures suited to a firm of its size and the sensitivity of the data would likely have enabled earlier detection and response, and that complying with its own policies may have prevented some or all of the client information from being downloaded.

There are two takeaways from the FIIG case. Firstly, cyber security hygiene is being treated as a matter of ongoing governance, not just technology. Secondly, regulators and courts are increasingly interested in whether controls are operationalised – that is, implemented, monitored, tested and evidenced – not merely documented.

That shift is not unique to ASIC. It’s part of a broader move (including in privacy regulation) from policy compliance to demonstrable protection.

The cyber hygiene checklist: what regulators now expect as basics

ASIC was unusually specific about what FIIG did not have in place. This gives organisations a simple and helpful prompt: Are we covering the basics, and can we prove it? 

Here’s a practical checklist, using the categories ASIC highlighted:

  • Identity and access

    • Multi-factor authentication for remote access users

    • Strong passwords

    • Access controls for privileged accounts

  • Network and endpoint protection

    • Appropriate configuration of firewalls and security software

  • Testing and scanning

    • Regular penetration testing and vulnerability scanning

  • Patching and updates

    • A structured plan to ensure key software systems were updated to address security vulnerabilities

  • Monitoring

    • Qualified IT personnel monitoring threat alerts to identify and respond to cyber-attacks

  • Training

    • Mandatory cyber security awareness training for staff

  • Incident readiness

    • An appropriate cyber incident response plan, tested at least annually.

A key subtext in the FIIG outcome is that the ‘what’ is only half the story. The other half is whether controls are actually in place and operating day-to-day.

Most organisations can point to policies. Fewer can answer simple operational questions like:

  • Do we have multi-factor authentication in place for remote access users, and is it consistently enforced?

  • When did we last run penetration testing and vulnerability scanning, and what did we do about the findings?

  • When did we last test our incident response plan, and what changed as a result?

Operationalising the controls is how ‘reasonable steps’ become real.
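One practical way to answer those operational questions is a simple control evidence register. The sketch below is illustrative only (the field names, dates and one-year review cycle are our assumptions, not a prescribed format), but it captures the idea that each baseline control should carry a record of when it was last verified:

```python
from datetime import date

# Hypothetical evidence register for the baseline controls ASIC highlighted.
controls = {
    "mfa_remote_access":      {"in_place": True,  "last_verified": date(2026, 1, 20)},
    "penetration_testing":    {"in_place": True,  "last_verified": date(2025, 6, 30)},
    "patching_plan":          {"in_place": True,  "last_verified": date(2026, 2, 1)},
    "incident_response_test": {"in_place": False, "last_verified": None},
}

def stale_or_missing(register, today, max_age_days=365):
    """Flag controls that are not in place, have no verification evidence,
    or whose evidence is older than max_age_days."""
    flagged = []
    for name, rec in register.items():
        if not rec["in_place"] or rec["last_verified"] is None:
            flagged.append(name)
        elif (today - rec["last_verified"]).days > max_age_days:
            flagged.append(name)
    return flagged

print(stale_or_missing(controls, today=date(2026, 3, 1)))
# ['incident_response_test']
```

A register like this, however it is implemented, is exactly the kind of clear, simple record that links governance decisions to the controls you actually operate.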

The bigger shift: ‘reasonable steps’ is becoming the common standard

It’s tempting to read FIIG as a financial services story: AFSL obligations, ASIC enforcement, court-ordered penalties. But the more important trend is cross-regime.

A similar ‘reasonable steps’ story has been playing out under privacy law. The OAIC has been increasingly explicit about its enforcement posture, including civil penalty proceedings anchored in APP 11.1 (security) and the expectation of ‘reasonable steps’ to protect personal information. 

In Australian Clinical Labs, the Federal Court imposed $5.8 million in civil penalties, including $4.2 million for failing to take reasonable steps under APP 11.1 to protect personal information held on Medlab Pathology’s IT systems. The Court’s analysis focused on concrete security shortcomings – such as weak authentication, inadequate logging, lack of file encryption, unsupported systems and limitations in antivirus controls – reinforcing the same core message as FIIG: principles-based obligations are now being tested against real-world cyber hygiene.

When you put FIIG alongside recent privacy enforcement, a clear pattern emerges. Through different regulators and different statutes, there is a shared test: do you have security controls that match your data and risk profile, and can you demonstrate that in practice?

The shared test also points to why silos don’t work. You can’t assess whether controls are proportionate without understanding what data you hold, why you hold it, how long you keep it, and what expectations you’ve set with customers. In practice, cyber hygiene, data governance and privacy compliance end up being assessed together – because together they explain whether your safeguards are reasonable for your context.

Regulators are rarely interested in the elegance of any single framework. They’re interested in whether your organisation:

  • invested appropriately (people, process, technology) 

  • operated controls consistently over time to manage data risk

  • learned and improved

  • can demonstrate that through clear records.

Turning checklists into confidence: a practical next step

For many organisations, the right response to FIIG is not a massive multi-year program. It’s a practical sequence:

  1. Start with the data – confirm what sensitive data you hold, where it is held and who can access it; then check that you have the FIIG ‘baseline’ controls in place for that environment.

  2. Validate the controls work in practice – and that they’re prioritised around your highest-risk data and systems.

  3. Make it easy to demonstrate – keep clear, simple records that link your data and governance decisions to the controls you operate.

How IIS can help

We help organisations translate ‘reasonable steps’ into something practical. Depending on where you are starting from, that can include:

  • A short, targeted review of your current cyber hygiene controls, focusing on the gaps that matter most and what you can readily demonstrate.

  • Bringing privacy, security and data governance together so you have one joined-up view of what data you hold, how it's protected, and who is accountable.

  • Sharper governance and reporting for executives and boards – clear ownership, a realistic view of risk, and a sensible uplift plan rather than a long list of ‘to-dos’.

  • Practical incident response exercises that test how things work under pressure and result in concrete improvements.

Please contact us if you have any questions or would like assistance.

Comment

IIS and CBPR: A long view on building interoperable data transfer frameworks

By Malcolm Crompton

Cross-border data transfers remain one of the hardest practical problems in privacy compliance. One framework IIS has worked on since its earliest days is the Cross-Border Privacy Rules (CBPR) system – first within the Asia-Pacific Economic Cooperation forum (APEC), and now through the Global CBPR Forum. This short reflection was prompted by a recent IAPP article on the Global CBPR Forum.

For the first years of its development (2003-2016), IIS was deeply involved in the creation of the APEC privacy framework and the CBPR system. Much of that process was led by officials from the Australian Attorney-General’s Department, especially Peter Ford and Colin Minihan. IIS led the design of the APEC Data Privacy Pathfinder in 2007, when Australia hosted APEC. After that, we collaborated closely with participants and officials from other economies including the United States, Japan and Korea, to agree on a workable CBPR system.

Much of our written contribution is collected on the Cross-Border Data Flows and International Regulation page of the IIS website. In particular, our short article ‘East meets West: striving to interoperable frameworks?’ (2014) compares CBPR with EU approaches to cross-border data flows, and explains why CBPR has practical advantages over mechanisms such as adequacy and Binding Corporate Rules. Later papers on that page include our work on benefits realisation from CBPR.

Over time, it became clear that the design of CBPR meant it did not have to be limited to APEC economies. That insight helped drive the creation of the Global CBPR Forum (established by participating jurisdictions in 2022) to transition CBPR and related certifications to a global framework.

In the words of the Global CBPR Forum website, Global CBPR certifications “allow organizations to demonstrate their compliance to internationally-recognised data protection and privacy standards developed and supported by participating jurisdictions. Accountability is a key feature of the Global CBPR and Global PRP Systems. Companies that seek Global CBPR or Global PRP certification must have their data protection and privacy policies and practices verified by a third-party certification entity known as an Accountability Agent.”

In practical terms, the Global CBPR System is designed to ensure that when a certified organisation moves personal information across borders, it is protected to the standards prescribed by the Global CBPR Framework. Importantly, participating jurisdictions nominate an enforcement “backstop” (a privacy enforcement authority/regulator) to support compliance and provide redress for individuals should matters reach that point.

Participation in Global CBPR has grown only slowly – not least because of stout resistance from European interests. But grow it does.

The recent article in IAPP News echoes everything IIS has been saying for the last 15-20 years. It is usefully titled “What makes the Global CBPR Forum an attractive data transfer framework to implement?” If your organisation is reassessing cross-border transfer mechanisms in 2026, Global CBPR is worth putting on the shortlist for serious evaluation.

Comment

From ‘Black Box’ to Transparency: the Privacy Act’s New ADM Rules

By John Pane

The Privacy and Other Legislation Amendment Act 2024 (POLA Act) represented the first stage of the federal government’s phased approach to implementing the changes to the Privacy Act 1988 (Cth) (Privacy Act) proposed in the Government’s response to the Privacy Act Review Report.

One of the deceptively ‘simple’ changes to the Privacy Act appears in Part 15 of Schedule 1 to the POLA Act, which introduced new transparency requirements for Automated Decision-Making (ADM) undertaken by APP entities and government agencies. These reforms, which come into effect on 10 December 2026, specifically:

  • target how organisations use algorithms, AI and computer programs to make decisions that impact individuals;

  • seek to prevent ‘black box’ decision-making by forcing organisations to be open about when and how they use these technologies; and

  • require organisational privacy policies to be uplifted with the necessary disclosures.

What does Part 15 of the POLA Act say?

Part 15 of the POLA Act states organisational privacy policies must be amended for greater transparency under APP 1 where:

  • the organisation uses a ‘computer program’ to either:

    • perform decision-making functions in a fully automated manner (without a human decision-maker); or

    • substantially and directly assist human staff to make decisions;

  • the computer program uses personal information about that individual to perform its function (i.e. to either make the decision, or to assist the human decision-maker); and

  • the decision could ‘reasonably be expected to significantly affect the rights or interests of an individual’.

Defining the scope: What is a ‘computer program’?

The term ‘computer program’ is not restricted to emerging technologies like Generative AI or Large Language Models (LLMs). Instead, the POLA 2024 Bill Explanatory Memorandum clarifies that it encompasses a wide spectrum of automation, ranging from sophisticated machine learning to basic rule-based logic.

While these reforms have obvious implications for emerging technologies such as autonomous AI agents, they also have the potential to capture a broad range of simpler automation use cases that are already widely used, such as:

  • software that assesses input data against pre-defined objective criteria and then applies business rules based on those criteria (e.g. whether to approve or reject an application);

  • software that processes data to generate evaluative ratings or scorecards, which are then used by human decision makers (e.g. predictive analytics); and

  • robotic process automation (which uses software to replace human operators for simple and repetitive rule-based tasks, such as data entry, data extraction and form filling).

If a tool (such as predictive analytics, algorithmic sorting or legacy macros) leverages personal information to either make a final determination or provide ‘substantial and direct assistance’ to a human decision-maker, it may fall within the scope of Part 15.

The materiality threshold: Assessing ‘significant effect’

Not every automated process requires disclosure in the organisational privacy policy. The obligation is triggered when a decision could ‘reasonably be expected to significantly affect the rights or interests of an individual.’

The POLA Act emphasises that the impact must be more than trivial and have the potential to significantly influence an individual’s circumstances. Management must conduct case-by-case assessments to determine if their use cases meet this threshold. The legislation provides a non-exhaustive list of high-impact domains:

  • Legal and Statutory Benefits: Decisions regarding the granting or revocation of benefits under law.

  • Contractual Rights: Decisions affecting insurance policies, credit facilities, or service agreements.

  • Access to Essential Services: Impacting an individual’s ability to access healthcare, education, or significant social supports.

For organisations operating in regulated sectors – such as financial services, healthcare, and law enforcement – the majority of automated processes are likely to meet this threshold. However, general corporate functions, such as automated fraud detection or facial recognition for security, must also be evaluated under the same materiality lens.

Mandatory updates to privacy policies

The most immediate operational requirement arising from the ADM reforms is ensuring your APP Privacy Policy is ready for the new APP 1.7-1.9 obligations, which commence on 10 December 2026. From that date, an APP entity must include additional information in its privacy policy if it has arranged for a computer program to use personal information to make (or substantially and directly support) decisions that could reasonably be expected to significantly affect an individual’s rights or interests.

1. What must be included in the privacy policy

Where the threshold is met, the privacy policy must describe the kinds of:

  • Data Inputs: What categories of personal information are fed into the relevant computer programs?

  • Fully Automated Decisions: Which processes are handled entirely by technology without human intervention?

  • Assisted Decision-Making: Which processes involve human-in-the-loop models where a computer program provides the primary evaluative data?

2. Practical drafting tip

The Office of the Australian Information Commissioner’s (OAIC’s) framing in its APP 1 Guidelines is deliberately ‘kinds of’ rather than ‘every single model/rule’. For many organisations, the cleanest approach is to add an ‘Automated decisions’ (or similar) subsection in the privacy policy that:

  • lists the key decision areas that meet the ‘significant effect’ threshold (e.g. onboarding/eligibility, credit/insurance decisions, fraud outcomes that affect access, service access/termination), and

  • for each, summarises the kinds of personal information used and whether it is fully automated or substantially and directly assists a human decision-maker.

Failure to update organisational privacy policies carries direct legal, financial and reputational risk. The OAIC has the authority to issue infringement notices for non-compliant policies.

Strengthening internal procedures and controls

Besides reviewing and uplifting organisational privacy policies, impacted APP entities (including government agencies) should ensure their internal procedures and controls align with these new obligations. As a non-exhaustive example, this may include the following activities:

1. Automation mapping and inventory

Organisations should conduct a comprehensive audit of their ‘automation footprint.’ This involves identifying all instances where computer programs interact with personal information to influence outcomes. This inventory should distinguish between ‘back-office’ efficiencies (like data entry) and ‘evaluative’ functions (like credit scoring, tenancy applications, talent acquisition screening) that meet the materiality threshold.
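An inventory of this kind can be captured in a simple structured record. The sketch below is an illustrative assumption about what fields might be useful (the Act does not prescribe a format, and the field names and examples are hypothetical); it also encodes the three Part 15 conditions as a rough screening test:

```python
from dataclasses import dataclass, field

@dataclass
class ADMRecord:
    system: str                         # e.g. "tenancy application scorecard"
    personal_info_kinds: list = field(default_factory=list)
    fully_automated: bool = False       # no human decision-maker involved
    assists_human: bool = False         # substantially and directly assists a human
    significant_effect: bool = False    # could reasonably be expected to significantly
                                        # affect an individual's rights or interests

def disclosure_required(record):
    """Rough screen modelled on the Part 15 conditions: the program uses
    personal information, makes or substantially assists a decision, and
    that decision could significantly affect rights or interests."""
    uses_pi = bool(record.personal_info_kinds)
    makes_or_assists = record.fully_automated or record.assists_human
    return uses_pi and makes_or_assists and record.significant_effect

# Hypothetical inventory entries
back_office = ADMRecord("invoice data entry bot")
scorecard = ADMRecord("tenancy application scorecard",
                      personal_info_kinds=["income", "rental history"],
                      assists_human=True,
                      significant_effect=True)

print(disclosure_required(back_office), disclosure_required(scorecard))
# False True
```

A screen like this is a starting point for triage, not a substitute for the case-by-case materiality assessments the legislation expects.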

2. Risk Impact Assessments

In line with existing practices for conducting a Privacy Impact Assessment (PIA), organisations should incorporate automated decision-making into their analysis. This would evaluate the logic of the program, the quality of the data inputs, and the potential for biased or inaccurate outputs, particularly for vulnerable groups such as children or individuals with disabilities.

3. Human-in-the-loop verification

Where computer programs/applications provide ‘substantial assistance’ to humans, internal controls must verify that the human involvement is meaningful rather than a ‘rubber-stamping’ exercise. Documenting the level of human oversight is essential for demonstrating compliance with the assisted decision-making disclosure requirements.

Navigating ambiguities and future privacy reforms

While the POLA Act provides a clearer transparency framework, certain areas remain subject to interpretation. The ‘second tranche’ of Privacy Act reforms may further impact these more recent changes. For instance, the current exemption for employee records and the evolving definition of ‘personal information’ (regarding metadata, disambiguated data or biometric templates) may impact how ADM rules apply to internal workplace monitoring.

Furthermore, while the POLA Act does not currently mandate changes to APP 5 Collection Notices, the existing requirements of APP 5.2 may already necessitate disclosures if personal information is shared with third-party AI platforms or automation vendors. Management should therefore take a holistic view of their privacy compliance obligations across all organisational touchpoints of the personal information lifecycle.

IIS can help

IIS can help you with compliance and best practice in the ADM space, including:

  • Undertake automation mapping and inventory activities.

  • Assess whether ADM processes meet the materiality threshold.

  • Conduct PIAs that assess ADM-related projects.

  • Prepare updates to privacy policies to enhance ADM transparency.

More broadly, we help organisations:

  • Navigate the complexity of the privacy, cyber security, and digital regulatory landscape.

  • Get the basics right and help you comply with current and incoming requirements, to satisfy customer expectations and to avoid regulator scrutiny and enforcement.

  • Move beyond compliance to performance and resilience that builds trust and achieves business objectives in a fast-changing world.

Why? Because as we have said at IIS for two decades, ‘It is just good business.’

Please contact us if you have any questions about the Privacy Act reforms and how they may affect your organisation. You can also subscribe to receive regular updates from us about key developments in the privacy and security space.

Comment

Australia’s National AI Plan: Big Vision, Missing Guardrails

By Mike Trovato

On 2 December 2025, the Australian government released the National AI Plan (NAP). The NAP arrives at a pivotal moment: artificial intelligence (AI) is the technology pathway of the moment for organisations of all kinds, touted as rapidly shaping economic structures, labour markets and critical digital infrastructure.

NAP is ambitious in scope: expand AI’s economic opportunity, ensure its benefits are widely distributed, and keep Australians safe as the technology becomes embedded in daily life, essential services, and banking. NAP frames AI not merely as a tool for productivity, but as a democratising national capability requiring coordinated investment in skills, compute, public-sector transformation, and international alignment – notably without accompanying laws and regulations.

But there are legitimate concerns and questions about it. John Pane, Electronic Frontiers Australia Chair, said in a recent blog post, “We need strong EU style ex ante AI laws for Australia, not a repeat of Australia’s disastrous ‘light touch’ private sector privacy regime introduced in 2000. We need to also resist the significant geo-political pressure being brought to bear on Australia and others by the Trump administration, forcing sovereign nations to adopt US technology ‘or else’.”

Most importantly from an IIS perspective, the NAP puts additional pressure on already stretched regulators such as the Office of the Australian Information Commissioner (OAIC), which will bear the brunt of the enforcement burden without a commensurate increase in funding.

What is it?

The core architecture of NAP is built around three pillars:

  1. Capture the opportunity – Increase Australia’s AI capability through sovereign compute access, industry investment, research support, and a workforce strategy that emphasises inclusion and long-term adaptability.

  2. Spread the benefits – Ensure AI adoption occurs not just in major corporations and government agencies but across regions, small businesses, social sectors, and public services. The Plan closely links AI growth to social equity, union negotiation, and regional skills pipelines. [1]

  3. Keep Australians safe – Establish the Australian AI Safety Institute, enhance standards alignment, and build frameworks for responsible, trustworthy AI across public and private sectors.

This structure mirrors the strategies of peer nations such as the UK, Singapore, and Canada, with some notable omissions. It does provide unity: a national vision that integrates economic development with safety, fairness, and social wellbeing.

Socio-technical benefits

National coordination

Australia has struggled with fragmented digital and AI policy, spread across departments, agencies, and states. NAP moves toward a unified national architecture. This could reduce duplication and create a reference point for regulators, industry, and research institutions.

Investment in sovereign AI capability

By emphasising compute infrastructure, cloud capacity, and research ecosystems, NAP begins shifting Australia from AI consumer to AI contributor. This infrastructure matters: without sovereign compute access, Australia risks dependency on foreign technology decisions, third party vendors (with concentration risk) and data-handling practices.

Worker protections and social equity

Few national AI strategies foreground labour and social outcomes as explicitly as NAP. It integrates unions, worker transition programs, and protections for vulnerable groups. This ensures AI adoption considers societal impacts, not solely economic metrics. That said, as noted above, we have already seen some missteps in this area, and fear is very much front of mind for workers in several sectors.

By targeting small businesses, local councils and not-for-profits, NAP attempts to democratise AI adoption [2], reducing the risk of AI-driven inequality between large and small organisations. This will be challenging given the trust issues many Australians have with AI and with how their privacy is handled, as community attitudes surveys show.

Public sector modernisation

NAP emphasises AI-enabled public services such as health, education, welfare, and transport. When deployed safely, AI can increase accessibility, reduce administrative burden, and improve service delivery in remote and underserviced communities. Yes, this does assume a level of accountability and testing we did not see in Robodebt [3], and yes, we will have privacy concerns, as we saw with Harrison.AI.

Socio-technical gaps

Despite its strengths, NAP contains structural weaknesses that carry real risk. The most significant dangers correspond to gaps in regulation, governance, and implementation.

Legal obligations and assurance

Unlike the EU AI Act or the US frameworks that mandate safety testing, reporting, and restrictions, NAP contains no enforceable legal obligations for high-risk AI systems. The Australian AI Safety Institute is promising but undefined. Without standards, authority, or enforcement powers, Australia risks deploying AI in financial services, healthcare, policing, and welfare without adequate safeguards.

Assurance is another area of potential harm for individuals. Globally, AI assurance – the independent evaluation of robustness, bias, safety, and regulatory compliance – is becoming essential and, in some cases, mandated by law. NAP does not define:

  • Assurance requirements

  • AI audit processes (and their appropriate depth)

  • Documentation requirements

  • Pre-deployment testing

  • Model lifecycle controls

  • Ongoing monitoring

  • Evaluation methods for generative AI.

Without an assurance regime, high-risk AI may be deployed in opaque, untested, or unsafe ways.

Risk identification and treatment

NAP does not specify which AI systems should be considered ‘high risk’ in banking, payments, energy, digital identity, critical infrastructure, healthcare, legal, national security or property systems.

Other nations treat critical infrastructure AI as a national security concern requiring heightened controls. Australia does not. The result could be AI-driven failures or exploitation in systems foundational to economic stability and social trust.

Government procurement is one of the most powerful levers for enforcing safe AI. The US and UK require impact assessments and supplier compliance with AI safety principles. NAP includes none of this. Australia may inadvertently purchase unsafe or non-compliant systems, embedding risks such as bias and discrimination, or allowing human harm within essential public functions.

NAP does not specify:

  • Which agency oversees AI risks in each sector

  • How regulators coordinate

  • How compliance will be enforced

  • Incident reporting for AI failures

  • Enforcement authority.

This creates a governance vacuum. In high-stakes, high-risk domains, unclear jurisdiction leads to slow response, regulatory drift, and systemic risk.

Possible privacy concerns

NAP touches privacy indirectly. Potential gaps remain:

  • No new privacy protections tailored to AI-enabled data processing.

  • No guidance on model training using personal or derived data, or on consent for such data use.

  • No restrictions on biometric surveillance, emotional analytics, or behavioural prediction.

  • No provisions for transparency, contestability, opt-out, or rights when AI makes or influences decisions.

This leaves individuals exposed, particularly in welfare, policing, employment, and health contexts where Australia already has a history of algorithmic harm.

It also puts additional pressure on already stretched regulators such as the OAIC.

Defence and national security

Lastly, NAP is ‘civilian oriented’: Australia lacks a publicly articulated framework for military, defence, dual-use, or national-security AI governance, even though peer nations (US, UK, EU, Singapore) explicitly integrate defence considerations or maintain separate defence AI strategies. This is worrisome.

Conclusion

NAP is a credible and coherent strategic document with substantial socio-technical benefits: national coordination, sovereign capability, worker-centred policy, public-sector uplift, and inclusive AI diffusion. It positions Australia to participate more actively in the global AI landscape.

NAP also leaves dangerous gaps. The absence of enforceable safety rules, AI assurance infrastructure, sector-specific oversight, procurement standards, enforcement authority, unclear government roles and responsibilities, and privacy safeguards creates systemic risk.

NAP nods toward safety without building the machinery necessary to enforce it. NAP is aspirational: it does not, of itself, ensure or build resilience. Australia will still need the regulatory, technical, and institutional backbone that transforms NAP from vision to real protection.

[1] However, we already see AI redundancies and sectoral fears. For example, CBA revealed in July that it would make 45 roles in its customer call centres redundant because of a new bot system it had introduced – then reversed the decision after deciding it needed the humans to cope with its growing workloads.

[2] In broad strokes, ‘democratise’ in an AI context equates to the notion that everyone and every organisation, regardless of socio-economic status, and regardless of technical skill or acumen or for companies and organisations without specialised or extensive IT, can have the same access to AI tools, workflows, and benefits.

[3] While Robodebt was not an AI system making autonomous decisions, it relied on a biased algorithm without proper testing, safety, or human-in-the-loop controls. See the Royal Commission into the Robodebt Scheme.

Comment

What’s next for Australian privacy regulation – Reflections on PAW 2025

By Chong Shao

On Monday, 16 June 2025, IIS joined other IAPP members in Sydney for the launch of Privacy Awareness Week. Together we heard an address from, and fireside conversation with, Privacy Commissioner Carly Kind.

The past 12 months have been eventful for Commissioner Kind and the Office of the Australian Information Commissioner (OAIC).

At the Sydney PAW launch, Commissioner Kind gave further remarks about her office’s regulatory approach, given the current technological landscape and the uncertain timeframe of further privacy legislative reform.

This post summarises the key themes from those remarks, along with some practical takeaways to help you navigate both privacy compliance and good practice today.

1. The Commissioner takes a holistic view of privacy that emphasises organisational accountability and power imbalances

Throughout her remarks, Commissioner Kind highlighted the need for a broader conception of privacy than simply the protection of personal information.

This broader notion of privacy – autonomy to make decisions, free from interference and intrusion – is more important than ever in a world that is marked by technology that is always-on, collects data passively, subtly conditions our thoughts and behaviours, and removes friction from all manner of experiences.

Commissioner Kind noted that the problem is not that people aren’t aware of the importance of privacy these days, but that they feel helpless, fatalistic and disempowered. In pushing back against overreliance on individual responsibility for privacy, she memorably invoked a climate change analogy – ‘privacy settings are the plastic straws of the privacy world’.

Instead, Commissioner Kind wants entities to take accountability for doing the right thing in the first place, and for various groups and associations in our society to leverage their power as a counterbalance and advocate for more privacy-friendly approaches.

She noted that the scale of technological impact is a novel problem in our era, and that this informs her thinking with respect to regulatory priorities. In particular, she foreshadowed that her office will be looking at spaces where there are power disparities between individuals and organisations. As examples, she listed credit reporting, data brokerage and emerging technologies (such as AI and biometrics).

Practical takeaways:

  • Take accountability as an organisation to embed privacy into your culture and practice:

    • Set privacy culture from the top through strong messaging and financial investment in privacy; and

    • Limit over-collection of data and destroy what you don’t need.

  • Undertake a privacy review to identify potential gaps and opportunities to improve practice.

2. The Commissioner is committed to using the full toolkit of her regulatory powers

On the topic of enforcement, Commissioner Kind gave some additional thoughts on the powers now available to her office.

She noted that the power to issue infringement notices is limited to a relatively narrow set of APPs (e.g., Privacy Policy deficiencies, failure to offer direct marketing opt-out). However, it could potentially be used as part of a ‘compliance scan’ of a particular sector or market in relation to those privacy practices. This is similar to what the Australian Competition and Consumer Commission (ACCC) and the UK’s Information Commissioner’s Office (ICO) have done in the past.

Commissioner Kind reiterated that her office will prioritise enforcement action for violations that are persistent, egregious and/or manifest in real-life harms, as well as in places where intervention is likely to change market practices or help clarify aspects of policy or law.

She flagged that her office will be conducting more investigations and making more determinations this year, as well as taking more enforcement actions in a similar vein to the recent Australian Clinical Labs (ACL) and Medibank cases.

In response to an audience member question about what more can be done to get the C-suite to appreciate the importance of privacy, Commissioner Kind recognised the power of fines to highlight the risk of not taking sufficient action. She remarked matter-of-factly that her office is seeking to extract the largest fines possible.

Practical takeaways:

  • Review your Privacy Policy to ensure it is compliant, up-to-date and fit for purpose.

  • Revisit your organisation’s privacy risk appetite and posture (including raising this at the Board level), in light of the large fines now available under the Privacy Act and the OAIC’s more proactive enforcement stance.

3. The Commissioner recognises the importance of regulatory certainty and is willing to go to court to obtain it

One of the more interesting threads was what Commissioner Kind thought about her office’s role in providing regulatory certainty. She recognised that regulatory certainty is important because it helps entities know how to comply with the law and to innovate confidently.

Compliance can be challenging without guidance, examples and ultimately court cases that provide a firm interpretation and application of the law.

To this end, Commissioner Kind indicated that she not only wants to develop clear guidance and make regulatory decisions, but she also wants to actively pursue court cases (including inviting challenges to her investigations and determinations) that will either endorse or repudiate the OAIC’s position.

In taking this approach, it appears that she considers court cases to be a ‘win-win’ scenario – even if the court rejects the OAIC’s interpretation, this still moves the ball forward in terms of clarifying the law for everyone.

Commissioner Kind pointed to current cases on foot in the Federal Court (ACL, Medibank) that could bring more clarity on what is considered reasonable security steps under APP 11.1.

She also flagged other areas where regulatory and judicial interpretation is desirable:

  • The definition of personal information and what constitutes ‘de-identification’ – especially in the relatively unchecked practices of data tracking and profiling, where she is keen to establish clearer ‘red lines’ for that industry.

  • The definition of ‘reasonable expectations’ in the context of APP 6.2, which permits the use or disclosure of personal information for a secondary purpose where it is related to the primary purpose of collection and reasonably expected by the individual.

Practical takeaways:

  • Keep watching this space for potential clarifications and (re-)interpretations of the current law, especially during a time when privacy law reforms are on a slow burn. [1]

4. The Commissioner is interested in fresh interpretations of current principles in the Privacy Act to keep pace with today’s privacy challenges

Speaking of the current law, the most significant insights from Commissioner Kind came when she was reflecting on how to make the most of the Privacy Act that we have, given the slow pace of legislative reform.

Commissioner Kind noted that many of the terms in the Act and the APPs are flexible in nature. She considered that they should be subject to a ‘purposive interpretation’ to keep pace with modern privacy risks and harms.

The key examples she gave come from APP 3, the collection principle:

  • APPs 3.1 (for agencies) and 3.2 (for organisations) provide that collection must be reasonably necessary for the entity’s functions or activities; and

  • APP 3.5 states that collection must take place via lawful and fair means.

Commissioner Kind noted that the language of ‘reasonably necessary’ and ‘lawful and fair’ approximate the ‘fair and reasonable’ test that has been proposed by the Privacy Act Review.

To consider what is reasonably necessary is to engage in an exercise of gauging reasonableness, proportionality and necessity. To consider what is fair is to incorporate notions of community values that evolve over time and adapt to changing circumstances.

Commissioner Kind gave an example of what (un)fairness could look like in the digital era – the scraping of publicly available information, bringing it together for profiling, and supporting predatory business practices. Assessing fairness should extend beyond the technical means of collection and extend to the purposes for which the collection takes place.

The Commissioner’s views cut against a legalistic and ‘minimum compliance’ reading of the current Privacy Act. Instead, she has laid down the challenge for organisations to take a ‘commonsense’ and proportionate approach to personal information collection and handling.

Practical takeaways:

  • Use commonsense and apply the ‘pub test’ to assess whether a proposed collection of personal information is reasonably necessary and fair.

  • With any personal information handling activity, ask ‘should we do this’ and not just ‘can we do this’.

  • Map how personal information collection leads to downstream uses and disclosures.

If you have any questions on the Privacy Act and its impact on your organisation, or would like assistance with any of the practical takeaways, please contact us. You can also subscribe to our newsletter to receive updates on the latest privacy developments, including law reform changes, further guidance and new interpretations.


[1] Asked about the status of the Tranche 2 reforms, Commissioner Kind observed that the timing was a matter for the Attorney-General’s Department. She did note that she had met with the new Attorney-General, the Hon Michelle Rowland MP, and was encouraged by her background and interest in privacy and digital regulation.

Comment

Australian Government introduces Cyber Security Legislative Package: Are you ready?

By Simon Liu and Sascha Hess

On 2 October 2024, the Australian Government announced its first standalone Cyber Security Bill as part of a package of reforms in critical infrastructure and national security to bring Australia in line with international best practice on new and emerging cyber security threats. The Cyber Security Legislative Package includes the Cyber Security Bill 2024 as well as amendments to the Intelligence Services Act 2001 and the Security of Critical Infrastructure Act 2018 (SOCI Act).

The proposed regulatory framework forms part of the government’s vision of becoming a world leader in cyber security by 2030, according to its 2023-2030 Australian Cyber Security Strategy, and specifically to build the government’s awareness of the ransomware threat, which continues to grow and raise risk for all organisations.

IIS welcomes the four key measures this bill introduces.

Set up a response and learning framework for cyber incidents

Three initiatives work together to systematically enhance government’s and industry’s ability to respond to, and learn from, cyber security incidents:

  • Providing data

  • Lowering barriers to information sharing with the Government, and

  • Creating a ‘no-fault’ cyber incident review board.

These efforts align with existing industry practices and common sense – sharing data fosters an informed, coordinated response, while conducting blameless post-mortems helps embed lessons for future incidents.

The bill does this by:

1. Introducing mandatory ransomware payment reporting for certain businesses

Introducing a mandatory obligation for entities affected by a cyber security incident to report ransomware payments within 72 hours of making the payment or becoming aware that the payment has been made. The two categories of entities that have ransomware reporting obligations are:

  • Category 1

    • Entities that carry on business in Australia with an annual turnover for the previous financial year that exceeds the turnover threshold (which is likely to be $3 million, yet to be confirmed);

    • Not a Commonwealth body or a State body; and

    • Not defined as a responsible entity for critical infrastructure asset under the SOCI Act.

  • Category 2

    • Responsible entities for a critical infrastructure asset to which the SOCI Act applies. In other words, all responsible entities will have ransomware reporting obligations even where their annual turnover does not exceed the turnover threshold (which is likely to be $3 million, yet to be confirmed), or where they are a Commonwealth or State body.

2. Introducing a ‘limited use’ obligation for the National Cyber Security Coordinator and the Australian Signals Directorate (ASD)

Introducing a ‘limited use’ obligation that restricts how cyber security incident information provided to the National Cyber Security Coordinator during a cyber security incident can be used and shared with other government agencies, including regulators.

3. Establishing a Cyber Incident Review Board

Establishing a Cyber Incident Review Board to conduct post-incident reviews into significant cyber security incidents.

Set up a minimum security baseline for ‘smart devices’

Smart devices are becoming a common feature in Australian homes and businesses. From home security systems and video doorbells to keyless entries and voice assistants, who doesn’t enjoy the added convenience and peace of mind? However, like any software, internet-connected devices have security vulnerabilities that require proper hardening and regular patching.

4. Introducing a minimum set of cyber security practices for smart devices

The bill marks the first step in establishing a minimum security baseline in Australia, following the lead of the UK, whose equivalent consumer smart-device regime commenced in April 2024.

Ready, Steady, Go

The legislation, if enacted, will become Australia’s first standalone cyber security legislation, strengthening protections for businesses against the increase in cybercrime and introducing new enforcement measures.

Businesses will need to adapt to stricter security standards for smart devices and embed their new reporting requirements into their incident response plans.

Please contact IIS to have a confidential chat on how we can support your business to become compliance ready.

If you are interested in understanding the impacts of a real major cyber security incident and a serious data breach, see our whitepaper on “What businesses need to know about the Optus 2022 cyber attack and lessons learned from the Service NSW 2020 Data Breach”.

Comment

Key takeaways from the Privacy Amendment Bill 2024

By Chong Shao

The Australian Government has introduced the Privacy and Other Legislation Amendment Bill 2024, as part of the first tranche of its long-awaited response to the Privacy Act Review. We knew that progress would be measured in years, and so far this is proving out.

The headline changes touted by the government include:

  • A new statutory tort to address serious invasions of privacy.

  • Development of a Children’s Online Privacy Code to better protect children from online harms (accompanied by further funding to support the OAIC in developing the code).

  • Greater transparency for individuals regarding automated decisions that affect them.

  • Streamlined and protected sharing of personal information (PI) in situations of a declared emergency or eligible data breach.

  • Stronger enforcement powers for the Australian Information Commissioner.

  • A new criminal offence to outlaw doxxing (i.e., the malicious release of personal data online that could enable individuals to be identified, contacted, or located).

For many, these reforms are modest and therefore disappointing, given the scope and duration of the Privacy Act Review.

Notably missing from the Bill is:

  • Any update to the definition of PI.

  • Inclusion of the four elements, along EU GDPR lines, that make a consent valid.

  • The introduction of a ‘fair and reasonable test’ for the handling of PI.

  • A requirement for APP entities to conduct a Privacy Impact Assessment for activities with high privacy risks.

  • The right for individuals to request erasure of their PI.

Also missing is one of the more contentious recommendations, the gradual removal of the small business exemption.

On the other hand, the changes represent a moderate progression from the status quo, which needs to be monitored closely and will likely have bigger implications over time.

Some key takeaways:

1. Privacy as a major intersection point

The Bill confirms that privacy sits at the intersection of the major technological and societal issues of our time.

For example:

  • The statutory tort introduces a cause of action for individuals against another person or organisation where there is a serious invasion of privacy – organisations should be aware of this provision (no small business exemption here!); although it should not be an issue if they are focused on “doing the right thing”.

  • A Children’s Online Privacy Code will be developed alongside other initiatives in the online safety space, including Online Safety Codes and the eSafety Commissioner’s research and work on age assurance.

  • Greater transparency regarding automated decision-making comes as part of a broader push by the government around promoting safe and responsible AI.

  • The streamlining of PI sharing in emergency and eligible data breach scenarios is a welcome move but will have to be considered alongside notification requirements in other laws and schemes such as the Security of Critical Infrastructure Act 2018, Data Availability and Transparency Act 2022, and APRA’s Prudential Standard CPS 234 Information Security.

The Bill is a microcosm of the complex privacy, cyber security, and digital regulatory landscape that is taking shape in Australia. The picture is getting (understandably!) complicated, and the Bill contributes to this.

2. Enforcement will matter more

The government’s touting of ‘stronger enforcement powers’ for the Australian Information Commissioner is a bigger deal than it appears on the surface.

On closer inspection, the Bill provides a series of changes that enable more flexible and effective enforcement of the Privacy Act:

  • A civil penalty provision for interference with privacy of individuals (not just ‘serious’ interference).

  • Separately, the civil penalty for serious interference with privacy of individuals is retained, with better elaboration of factors that may be considered in determining if the interference is serious.

  • The Commissioner may seek civil penalty orders and issue infringement notices for breaches of certain Privacy Act provisions and certain Australian Privacy Principles (APPs).

  • Additional monitoring and investigation powers.

One of the biggest issues with compliance and enforcement of the Privacy Act has been the relative lack of flexibility with the existing law, where there is a (recently strengthened) civil penalty provision for ‘serious and repeated interferences with privacy’. OAIC enforcement actions have been few and far between, typically reserved for ‘high profile’ cases such as Meta (Facebook), Medibank, and Australian Clinical Labs.

These changes to the Privacy Act, especially in relation to civil penalty orders and infringement notices, provide the OAIC with a bigger ‘toolkit’ to enforce breaches of the Privacy Act and the APPs.

Privacy Commissioner Carly Kind, in a Privacy Awareness Week Sydney event earlier this year, spoke of the ‘exciting opportunity for the OAIC to become a more enforcement-based regulator’. During the Q&A, she noted that for the first time in a decade there are three dedicated commissioners, and that they would be thinking a lot more about how to conduct proactive and proportionate enforcement.

This was confirmed by the OAIC’s Corporate plan 2024-25, which commits the OAIC to a ‘risk-based, education and enforcement-focused’ posture.

The true effectiveness of the regulator will depend on the extent to which it is sufficiently resourced. We have been advocating for greater funding for the OAIC for over a decade in speeches, forums and submissions. We eagerly await the next budget to see whether the government will put its money where its mouth is and show that it is indeed serious about ‘ensuring the Privacy Act works for all Australians and is fit for purpose in the digital age’.

Nevertheless, the Bill and the OAIC’s recently publicised posture demonstrate a clear intent and capability for the regulator to conduct more enforcement. Organisations should take note.

3. Keep sticking to the basics

The Privacy Act Review was flagged five years ago, as part of the ACCC’s 2019 Digital platforms inquiry. In the meantime, organisations are facing an increasingly challenging environment:

  • Cyber security incidents (including data breaches and the sophistication of bad actors) continue to increase in size and scale.

  • The growing data economy and technologies like AI heighten business pressures to collect and use personal information, while exposing organisations to greater data governance risks.

  • Australians care more than ever about privacy – according to the OAIC’s Australian Community Attitudes to Privacy Survey 2023, 82% of respondents care enough about protecting their PI to do something about it, and 84% want more control and choice over the collection and use of their PI.

It has been a slow and winding journey to reach the first tranche of changes to the Privacy Act. 

Our key takeaway is not to get over-excited, nor complacent. Not over-excited, because in many ways these are modest changes that will take time to realise their full effects. Not complacent, because the Bill heralds a new era of enforcement for the OAIC, including compliance with the existing Privacy Act and its APPs.

Instead, we think it is best to keep calm and stick to the basics. This means:

  • Assess your privacy practices against the existing APPs with a focus on PI collection and handling practices, and ensure you are taking ‘reasonable steps’ (including technical and organisational measures) to secure and protect personal information. [1]

  • Know what PI (including sensitive information) you have now, where it is, whether you should still have it and the ways in which you are using it.

  • Assess cyber security risks and controls and consider certification against relevant standards.

  • Establish an improvement and remediation plan based on the findings of the three steps above.

Putting the foundations in place now will give you a simpler path to compliance and good practice for both the current legislative requirements and the new requirements to come, including whatever Tranche Two will bring.

IIS can help

IIS and our subsidiary TrustWorks 360 can help you:

  • Navigate the complexity of the privacy, cyber security, and digital regulatory landscape.

  • Get the basics right and help you comply with current and incoming requirements, to satisfy customer expectations and to avoid regulator scrutiny and enforcement.

  • Move beyond compliance to performance and resilience that builds trust and achieves business objectives in a fast-changing world.

Why? Because as we have said at IIS for two decades, “It is just good business.”

Please contact us if you have any questions about the Privacy Act reforms and how it may affect your organisation. You can also subscribe to receive regular updates from us about key developments in the privacy and security space.


[1] In a separate interview, Commissioner Kind discussed the OAIC’s enforcement action against Medibank, for activities leading up to the data breach. The OAIC is making the case that Medibank didn’t take ‘reasonable steps’ to protect the personal information they collected and held. Reasonable steps are described as:

  • State of the art security

  • Good governance

  • Organisational responsibilities.

Comment

November 2023 ASD Essential Eight Maturity Model changes

By Sascha Hess

The Australian Signals Directorate (ASD) updated its Essential Eight Maturity Model this November. Since 2017 the model has been updated regularly, supporting the implementation of the Essential Eight.

The Essential Eight can be considered a prioritised minimum security control baseline, referred to as mitigation strategies in the guide. The model comprises three maturity levels which can be considered ‘threat profiles’. Insights for refining the model are derived from various cyber-related fields, such as security testing, cyber threat intelligence, and learnings from responding to incidents.

This year, notable changes include:

  • Introduction of “patches assessed as critical by vendors” as an additional prioritisation criterion. Patches for critical security vulnerabilities for internet-facing systems are now required to be applied within 48 hours, even in the absence of a known exploit. Tighter time frames are also established for patching applications processing untrusted content from the internet (e.g., browser, PDF reader).

  • Enhancements to the use of multi-factor authentication (MFA) across the board, such as expanding the use of phishing-resistant MFA.

  • In response to attacks on individuals who continue to rely on passwords alone, online access to an organisation’s sensitive data now requires multi-factor authentication from Maturity Level One.

  • A significant tightening of privileged account management practices around validation for requesting accounts, periodic revalidation, accounts with internet access and break-glass accounts.

As adding is typically favoured over removing in standards, it is good to see that the review also resulted in removing or easing a couple of requirements (i.e., macro execution event logging and patching for less important devices).

For a comprehensive list of changes, please visit the dedicated page on cyber.gov.au. IIS has compiled a marked-up table of the Essential Eight Maturity Model which highlights the November 2023 changes for easier reference.

The increased focus on timely patching, robust multi-factor authentication and tighter control of administrative access accounts helps organisations better defend against threat actors’ common attacks. IIS recommends that all organisations review their practices now in light of these changes.

IIS can help you review and uplift your current security practices and capabilities. Please contact us if you require assistance.

Queensland passes privacy reforms: Snapshot of key changes

By Susan Shanley and Jacky Zeng

Queensland government agencies will be subject to new Privacy Principles as state parliament passes privacy reform.

Key points up front

  • The Information Privacy and Other Legislation Amendment Act 2023 was passed on 29 November 2023.

  • The information privacy reforms include:

    • consolidation of the existing Information Privacy Principles (IPPs) and National Privacy Principles (NPPs) into a single set of privacy principles: Queensland Privacy Principles (QPPs),

    • introduction of a mandatory data breach notification (MDBN) scheme, and

    • enhanced powers for the Information Commissioner to respond to privacy breaches including an own-motion power to investigate an act or practice without receiving a complaint.

  • The amendments commence on a day to be fixed by proclamation.

  • It is currently expected that the reforms to the Information Privacy Act 2009 (IP Act), including the new QPPs, will begin on 1 July 2025. This means all agencies, including local government, would transition to the new QPPs on 1 July 2025. The MDBN scheme will likewise commence for all agencies except local government at that time.

  • A phased commencement of the MDBN scheme includes an additional 12-month delay, to 1 July 2026, for local government only.

Queensland Privacy Principles

The reforms to the IP Act include adopting a single set of privacy principles, referred to as the QPPs, based on the Australian Privacy Principles (APPs) in the Privacy Act 1988 (Cth) (Privacy Act). The QPPs replace the NPPs for health agencies and the IPPs for all other agencies.

The new Schedule 3 in the IP Act sets out the QPPs which generally align with the APPs in the Privacy Act. There are some adaptations for Queensland agencies. Furthermore, some APPs and specific APP provisions which are not relevant to the Queensland government context have not been adopted in the QPPs.

IIS has undertaken a detailed comparative analysis of the IPPs/NPPs and the new and/or changed requirements under the QPPs, including what steps agencies and contractors can take now to prepare for the changes when they commence.

A snapshot of IIS’s comparative analysis is provided by reference to five questions and answers on the QPPs:

Question 1:

If a bound contracted service provider has an existing contract with a Queensland agency, does the contractor need to comply with the new QPPs once they commence?

Answer 1:

No, the QPPs do not apply to existing contracts and will only apply to new contracts entered into after commencement, unless there is agreement to a variation. This means the IPPs or NPPs will continue to apply to existing contracts.

The QPPs do not extend to subcontractors. However, contracted service providers should take steps to ensure any subcontractors supporting them in relation to Queensland government contracts have sufficient ability to manage privacy obligations. 

While the QPPs will not apply to existing contracts, IIS strongly recommends all businesses contracted to, or intending to, provide services to Queensland government agencies start the process of familiarising themselves with the revised requirements under the QPPs. 

This is particularly important because small businesses are currently largely exempt from the operation of the Privacy Act and are unlikely to be familiar with the APPs; the QPPs, which are largely modelled on the APPs, may therefore be unfamiliar to them. Small businesses (and other contractors) will need to update their existing privacy arrangements for any new contracts entered into after commencement.

Unlike the Privacy Act, the QPPs of the IP Act will apply to all bound contracted service providers and there is no exemption for small business providers.

Question 2:

There doesn’t appear to be a QPP equivalent of APP 8 – cross-border disclosure of personal information. What requirements apply to agencies and bound contracted service providers disclosing personal information outside Australia?

Answer 2:

While the Privacy Act includes a privacy principle about cross-border disclosure of personal information (APP 8) there is no equivalent QPP.

Under the Privacy Act, APP 8 and section 16C generally require an APP entity to ensure that an overseas recipient will handle an individual’s personal information in accordance with the APPs and make the APP entity accountable if the overseas recipient mishandles the information (see Chapter 8: APP 8 Cross-border disclosure of personal information).

Section 33 of the IP Act is retained as the preferred method for regulating overseas disclosures of personal information rather than adopting an equivalent QPP 8. The term ‘transfer’ has been replaced with ‘disclosure’ in section 33 of the IP Act.

This means agencies (and contracted services providers where relevant) will continue to comply with section 33 of the IP Act. 

A note at QPP 8 states that ‘there is no equivalent QPP for APP 8’.

Question 3:

There is no detail provided under QPP 7, QPP 8 and QPP 9. What does this mean? How does an agency comply with these QPPs?

Answer 3:

The QPPs generally align with the APPs in the Privacy Act, with some adaptations for Queensland agencies. Some APPs that apply to organisations, specific Commonwealth agencies and Commonwealth functions have not been adopted.

APPs 7, 8 and 9 have not been adopted in the QPPs as they are not relevant to the handling of information by Queensland public sector agencies. APP 7 regulates direct marketing, APP 8 regulates cross-border disclosure of personal information (see previous question and answer) and APP 9 regulates the adoption, use or disclosure of government related identifiers (for example, Medicare numbers and driver licence numbers).

This doesn’t mean there are no requirements for Queensland agencies in those areas. For example, the disclosure requirements in QPP 6 apply to the use of personal information for direct marketing, and, as noted, section 33 of the IP Act governs cross-border disclosures.

Where an APP (or a provision of an APP) has not been adopted in the QPPs, the QPPs include a note referring to the relevant APP or provision. For example:

The editor’s note to QPP 7 (direct marketing) states:

The Privacy Act 1988 (Cwlth), schedule 1 includes a privacy principle prohibiting direct marketing by certain private sector entities (see APP 7).

There is no equivalent QPP for APP 7.

Note—QPP 6 is relevant to the use or disclosure of personal information for the purpose of direct marketing.

Question 4:

What is a QPP code and how is this different to the QPPs? Do agencies bound by a QPP have to comply with it?

Answer 4:

A QPP code is a written code of practice about information privacy, approved by regulation, which states how one or more of the QPPs are to be applied or complied with by agencies that are bound by it. 

A QPP code may also impose additional requirements to those imposed by a QPP, to the extent that they are not inconsistent with a QPP. 

The purpose of the QPP code is to provide individuals with transparency about how their information will be handled. 

Once the amendments commence, agencies bound by a QPP code will be required to comply with the code and must not do an act or engage in a practice that contravenes a QPP code.

An example of a Code can be found under the Privacy Act. An APP Code is in force which sets out specific requirements and key practical steps Australian Government agencies must take as part of complying with APP 1.2. This includes requirements such as:

  • having a privacy management plan,

  • appointing a Privacy Officer, or Privacy Officers, and ensuring that particular Privacy Officer functions are undertaken,

  • appointing a senior official as a Privacy Champion to provide cultural leadership and promote the value of personal information and ensure Privacy Champion functions are undertaken, and

  • undertaking a written privacy impact assessment (PIA) for all ‘high privacy risk’ projects or initiatives involving new or changed ways of handling personal information.

Question 5:

Do the QPPs impose requirements on agencies to have a privacy policy?

Answer 5:

Yes, QPP 1.3 requires an agency to have a clearly expressed and up-to-date privacy policy about the management of personal information by the agency.

Other requirements placed on agencies under QPP 1 regarding privacy policies include:

  • ensuring the privacy policy contains the required information, and

  • taking reasonable steps to make its privacy policy available to the public free of charge and in an appropriate form. For example, an agency may do this by publishing its privacy policy on the agency’s website. 

IIS strongly recommends all agencies have a clearly expressed and up-to-date privacy policy in the interest of best privacy practice and openness and transparency about the handling of personal information.

Need assistance?

The above snapshot represents only a small sample of the changes Queensland agencies (and the businesses that support them) will need to make to ensure they are compliant with the QPPs once they commence.

It is important to be ready for the coming changes! As a leading Australian privacy consultancy, and a trusted service provider to the Queensland government, IIS can help. We can assist with your readiness assessment and we offer comprehensive privacy training, governance support, MDBN scheme preparedness and many other services to support your agency in addressing these important reforms.

Please contact IIS to find out more.