International Update: Privacy, AI, and Compliance Trends

South Korea 

The Personal Information Protection Commission and the Ministry of Science
and ICT will comprehensively strengthen the effectiveness of information
protection and personal information protection management system
certification.

    • Comprehensive system improvement, including strengthening inspections
      centered on key items and strengthening post-certification management
    • Strengthened follow-up management, including special on-site inspections
      of certified companies that have experienced a data leak

As hacking incidents at certified companies and large-scale personal information
leaks continue to recur, the Personal Information Protection Commission chaired a
meeting of relevant ministries on improving the certification system, attended by
the 2nd Vice Minister of Science and ICT, the President of the Korea Internet &
Security Agency, and others, and announced a comprehensive reform plan to
strengthen the effectiveness of certification.

Australia

Australia’s National AI Plan 2025, released in December 2025, sets a roadmap for AI
innovation, adoption, and governance, focusing on infrastructure, skills,
responsible use, and safety. It establishes an AI Safety Institute and aims to use
existing laws for risk management while boosting public sector AI use and attracting
investment, but it faces calls for clearer accountability and privacy measures.

Voluntary AI Generated Content Notice:

    • Guidance explaining why and how to tell people AI has been used in content
    • Ensures AI generated content is clearly identifiable
    • Tells users when they are engaging with AI
    • Tells users when they may be impacted by AI-enabled decisions
    • Tells users when content is human generated
    • Manages copyright or other intellectual property implications of AI-generated
      content
    • Uses AI-generated content detection mechanisms
    • This guidance is for all businesses that use AI to generate or modify content
      and all businesses that build, design, train, adapt or combine AI models or
      systems that can generate or modify content
    • Transparency mechanisms include labelling, watermarking or metadata
      recording applied to AI-generated content (a simple metadata example
      follows below)
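The metadata-recording mechanism can be as simple as attaching a machine-readable
provenance record to each piece of generated content. The Python sketch below is a
minimal illustration of that idea only; the field names, the model identifier and
the use of a content hash are assumptions, not a schema taken from the guidance.

    import hashlib
    import json
    from datetime import datetime, timezone

    def label_ai_content(text: str, model_name: str) -> dict:
        """Attach a simple provenance record to a piece of AI-generated text.

        The field names are illustrative assumptions, not a standard schema.
        """
        return {
            "content": text,
            "metadata": {
                "ai_generated": True,           # explicit AI-generated flag
                "model": model_name,            # which system produced the content
                "created_at": datetime.now(timezone.utc).isoformat(),
                # The hash binds the label to this exact text, so later edits are detectable.
                "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
            },
        }

    if __name__ == "__main__":
        record = label_ai_content("Draft product description ...", model_name="example-model")
        print(json.dumps(record, indent=2))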

Workplace Compliance for Australian Employers:

The compliance landscape extends well beyond contracts and payroll. It now
encompasses:

    • Police and background checks, ensuring suitability for sensitive or
      safety critical roles
    • Credential and license management, particularly for sectors governed
      by AHPRA, ASIC, or education boards
    • Privacy and data protection obligations under the Australian Privacy
      Principles
    • Ethical employment practices, from classification and award
      adherence to fair recruitment
    • Work health and safety documentation and reporting

Publishing policy updates, sharing training completion metrics, or
demonstrating proactive police-check renewal programs all help reinforce
trust.

In sectors such as healthcare, education and employment services, where
compliance is tied to accreditation or funding, this transparency is existential.

India 

India’s Digital Personal Data Protection (DPDP) Act – This framework aims to provide
a functional privacy law, balancing innovation with data protection, though specific
rules on data localization and other aspects are still unfolding.

    • Consent-Based Processing: Data processing requires clear, specific,
      informed and unambiguous consent, with provisions for withdrawing
      consent
    •  Data Principal Rights: Individuals (Data Principals) have rights to access,
      correct, erase, and nominate someone to exercise rights on their behalf, with
      mechanisms for redressal
    •  Data Protection Board: A digital-first body to enforce the law, receive
      complaints, conduct inquiries, and impose penalties.
    • Consent Managers: Intermediaries to help users manage their consent, with
      specific eligibility criteria
    • Significant Data Fiduciaries: Entities handling large volumes of sensitive data
      face stricter rules, including annual Data Protection Impact Assessments
      and audits
    • Data Deletion: Data Fiduciaries must delete data after inactivity, giving a 48-
      hour warning to the Data Principal
    • Implementation Timeline: Nov 2025: DPB constituted, rules notified; Nov
      2026: Consent Manager registration process begins; May 2027: main
      compliance duties for Fiduciaries, including breach notifications, security
      protocols and SDF obligations.
    • Businesses will need to revamp systems to manage consent records (a
      minimal sketch follows after this list)
    • Expect a digital-first approach for filings and tracking
    • Higher compliance burdens for large data handlers
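Because consent must be specific and withdrawable, and inactive data must be erased
after a 48-hour warning, consent records will need to capture these states explicitly.
The Python sketch below is a minimal illustration only: the field names, the one-year
inactivity window and the record structure are assumptions, and the real retention
period and format will depend on the notified DPDP Rules and the fiduciary's context.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from typing import Optional

    # Assumed value for illustration; the actual inactivity period comes from the
    # notified DPDP Rules, not from this sketch.
    INACTIVITY_PERIOD = timedelta(days=365)
    ERASURE_NOTICE = timedelta(hours=48)   # the 48-hour warning noted in the summary above

    @dataclass
    class ConsentRecord:
        principal_id: str                   # the Data Principal the consent belongs to
        purpose: str                        # the specific, informed purpose consented to
        granted_at: datetime
        withdrawn_at: Optional[datetime] = None
        last_activity_at: Optional[datetime] = None
        erasure_warned_at: Optional[datetime] = None

        def is_usable(self) -> bool:
            """Consent supports processing only while it has not been withdrawn."""
            return self.withdrawn_at is None

        def warning_due(self, now: datetime) -> bool:
            """True when the inactivity window has lapsed and the 48-hour notice should go out."""
            last_seen = self.last_activity_at or self.granted_at
            return (self.is_usable() and self.erasure_warned_at is None
                    and now - last_seen >= INACTIVITY_PERIOD)

        def erasure_due(self, now: datetime) -> bool:
            """True once the 48-hour notice period after the warning has elapsed."""
            return (self.erasure_warned_at is not None
                    and now - self.erasure_warned_at >= ERASURE_NOTICE)

    if __name__ == "__main__":
        now = datetime.now(timezone.utc)
        record = ConsentRecord(principal_id="dp-001", purpose="order fulfilment",
                               granted_at=now - timedelta(days=400))
        print(record.warning_due(now))      # True: inactive longer than the assumed window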

New Zealand – Privacy Act 2020 Reform

  • Businesses are seeing record numbers of privacy complaints, and breach
    notifications by agencies have increased
  • Surveys indicate that the Privacy Commissioner should have the power to
    audit the privacy practices of agencies, issue small infringement fines for a
    privacy breach and ask the courts to issue large fines for serious privacy
    breaches.
  • Adding the ‘right to erasure’ to privacy rules here would provide New
    Zealanders with the right to ask organisations to delete their personal
    information in certain circumstances. This right would reduce the harm
    arising from privacy breaches by reducing the amount of personal
    information an agency holds.
  • Residents are calling for stronger data protection
  • The Commissioner is also suggesting that agencies need to be able to demonstrate how they meet their privacy requirements, such as through the privacy management programs recommended by the OECD.

Canada – Change of Name Amendment Regulations 2025

  • Saskatchewan – blocking, tightening and limiting legal name changes for serious offenders.
  • Blocking name changes for murderers and high-risk offenders
  •  Applications for a name change will now require a certified criminal record check, including fingerprinting, to ensure applicants do not have disqualifying convictions.
  • The expanded list of disqualifying offenses includes:
      • Dangerous offenders under section 753 of the Criminal Code
      • Long-term offenders under section 753.1 of the Criminal Code
      • High-risk offenders subject to public notification
      • Fraud under Part X of the Criminal Code
      • Designated Schedule I substance offences under the Controlled Drugs
        and Substances Act
      • Murder under section 231 of the Criminal Code.
  • Approximately 1,000 people apply for a legal name change in Saskatchewan each
    year.
  • These amendments will strengthen protections for victims and the public, ensuring
    that anyone convicted of these crimes cannot escape the consequences of their
    past actions and making the province safer and more secure.

Alberta – Adding Health Care Numbers to Licenses Increases Fraud Risk

  • Alberta’s privacy watchdog is raising concerns about a new government bill
    that would add healthcare numbers to driver’s licenses and other forms of ID
  • If healthcare numbers are obtained by bad actors and used to access care,
    individuals could be harmed through inaccurate health records
  • Additionally, the Alberta government body responsible for driver’s licenses
    (the Registrar of Motor Vehicles) isn’t subject to privacy laws.
  • The government is developing further regulations regarding the MVR and plans
    to discuss changes
  • Alberta will also be replacing paper health cards with the new “Alberta
    Wallet” app, in an effort to eliminate fake health care cards
  • The Primary Health Services Minister has outlined key measures to ensure
    compliance and accountability of the MVR:

      • Mandatory Training: annual privacy training courses
      • Health Data Compliance: agents accessing the Alberta Health Care
        Insurance Plan database must successfully complete rigorous Health
        Information Act training and pass a certification exam.
      • Government Oversight: Agent activities are continuously monitored
        and audited by Service Alberta and Red Tape Reduction to ensure
        adherence to all relevant legislation, regulation, policy, and privacy
        laws
  • Alberta to become 1st province with mandatory ‘citizenship markers’ on
    driver’s license.

      • The mandatory change is the first of its kind in Canada and will roll out
        in late 2026
      • Adding citizenship markers allows Albertans to more effectively
        apply for funding and services like student aid, health benefits and
        disability supports
      • Some concerns about discrimination have been raised, for example when
        presenting the driver’s license at traffic stops or in other necessary
        instances. Some are calling for ‘guardrails’ to be put into place to
        ensure people do not face discrimination if a mark of Canadian
        citizenship is not on the driver’s license.
  • Federal – Canadian Guidelines for Biometric Privacy Considerations
      • Biometric technologies and their uses have advanced exponentially,
        enabling not only identity verification and recognition but also health and
        behavioral analysis through app interactions.
      • The concern is that efforts to update Canadian privacy legislation have been
        repeatedly sidelined, leaving businesses and governments to grapple with laws
        and principles that were not drafted with biometric and other new
        technologies in mind.
      • Guidelines for Using Biometric Technology include:
          • Identifying an appropriate purpose
          • Legitimate Need – Purpose must be clearly defined and based
            on a current, not speculative, business need
          • Effectiveness – the proposed biometric program should be reliable
            and effective, and have a clear method of measuring
            effectiveness
          • Minimal Intrusiveness – If less intrusive information can
            achieve a similar result without a material increase in costs,
            organizations must use the less intrusive information.
            Convenience alone should not be the determining factor
          • Proportionality – Organizations should assess whether the
            benefits are proportional to the loss of privacy. The biometric
            program should be narrow in scope, as biometric programs
            that are designed to rely on the analysis of large volumes of
            biometric information are more likely to have a
            disproportionate impact on privacy.
    • Consent – Organizations must obtain valid, informed consent in an appropriate form when collecting biometric information
    • Limiting Collection – Information collected must be limited to that which is necessary for achieving the stated purpose.
    • Limiting Use, Disclosure and Retention
    • Limiting disclosure to third parties.
    • Avoiding cross-system data linking.
    • Distinguishing biometric data retention from other personal
      information
    • Deleting biometric information upon request.
    • Safeguards
      • Cancellable Biometrics: revocable, transformed biometric templates that
        prevent reconstruction of the original data (see the illustrative sketch
        after this list).
      • Privacy-Enhancing Technologies: e.g., homomorphic
        encryption can be used to conduct biometric matching without
        needing to decrypt the biometric template.
      • End-to-End Encryption: to protect biometric data in transit and
        storage.
      • Regular Testing and Vulnerability Assessments: to identify
        vulnerabilities and ensure safeguards continue to be effective
        over time
      • Access Controls: restrict system access to only those
        employees who require biometric information for their duties.
    • Accuracy – Biometric information must be accurate, complete and
      current for its stated purpose(s). Businesses must choose
      technologies with suitable accuracy rates and minimize performance
      discrepancies across socio-demographic groups.
    • Accountability – Organizations are responsible for the personal
      information under their control. They must also designate an
      individual responsible for the organization’s PIPEDA compliance who
      will act as primary contact to whom the public can ask questions and
      raise concerns. All employees responsible for managing biometric
      information must be provided with proper training, guidance and
      supervision to perform their duties.
    • Openness – Policies governing biometric data must be accessible and
      easy to understand. They should describe the types of biometric data
      held, their uses and disclosures, and include contact details for the
      individual responsible.
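As a concrete illustration of the cancellable biometrics safeguard above, the Python
sketch below uses a BioHashing-style keyed random projection: the stored template is
derived from a revocable secret key, similar feature vectors map to nearby bit
strings, and the original features cannot be reconstructed from the stored bits
alone. This is not a method prescribed by the guidelines; the feature values, key
handling and match threshold are illustrative assumptions only.

    import hashlib
    import random
    from typing import List

    def _keyed_projection(key: bytes, n_features: int, n_bits: int) -> List[List[float]]:
        """Derive a pseudo-random projection matrix from a revocable secret key."""
        seed = int.from_bytes(hashlib.sha256(key).digest(), "big")
        rng = random.Random(seed)
        return [[rng.gauss(0.0, 1.0) for _ in range(n_features)] for _ in range(n_bits)]

    def cancellable_template(features: List[float], key: bytes, n_bits: int = 64) -> List[int]:
        """BioHashing-style transform: project the feature vector with a key-derived
        random matrix and binarise the result. Revoking the key revokes the template."""
        matrix = _keyed_projection(key, len(features), n_bits)
        return [1 if sum(w * f for w, f in zip(row, features)) >= 0.0 else 0 for row in matrix]

    def hamming_distance(a: List[int], b: List[int]) -> int:
        """Number of differing bits between two templates; lower means more similar."""
        return sum(x != y for x, y in zip(a, b))

    if __name__ == "__main__":
        # Hypothetical pre-extracted feature vectors for enrolment and a later probe.
        enrolled = cancellable_template([0.12, -0.40, 0.88, 0.05], key=b"user-specific-secret")
        probe = cancellable_template([0.10, -0.38, 0.90, 0.07], key=b"user-specific-secret")
        # A threshold on this distance (an assumption here) decides whether the probe matches.
        print(hamming_distance(enrolled, probe))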

European Union – AI as Product or Service

  • The now-withdrawn AI Liability Directive, originally proposed in 2022,
    sought to harmonize fault-based liability rules across member states,
    addressing the fragmented landscape of national tort laws. Its aim was to
    adapt non-contractual civil liability frameworks to the unique characteristics
    of AI systems — especially those deemed high-risk under the EU AI Act. By
    introducing mechanisms such as a rebuttable presumption of causality and
    enhanced access to evidence, the directive attempted to ease the burden of
    proof for claimants harmed by AI-driven decisions.
  • However, with its formal withdrawal in early 2025, the EU has left a regulatory
    vacuum in fault-based liability for AI. The revised Product Liability Directive
    now extends strict liability for defective products to include software and AI
    systems, but it does not address negligence or unlawful conduct. This raises
    critical questions about how courts and regulators will navigate the
    dichotomy between product-based and service-based AI and how liability
    will be assigned in the absence of harmonized fault-based rules.
  • The AI Act adopts a product-centric approach to AI systems, aligning the AI
    regulatory treatment with existing frameworks for product safety. By doing
    so, it reinforces the notion that AI, particularly high-risk systems, should be
    subject to the same obligations and oversight as physical goods. This
    approach facilitates the application of strict liability rules, holding
    manufacturers accountable for damage caused by defective AI systems,
    regardless of fault.
  • This strict liability regime now benefits from enhanced harmonization under
    the revised Product Liability Directive, which explicitly extends its scope to
    include software and AI systems. Under this framework, the manufacturer —
    broadly defined to include developers, importers and authorized
    representatives — is liable for harm caused to consumers by defective AI
    products, without requiring the injured party to prove negligence or breach of
    duty.
  • The fragmented nature of liability regimes across the EU presents significant
    challenges for individuals seeking redress for harm caused by AI systems,
    particularly when those systems operate across borders. Moreover, the
    absence of a harmonized fault-based liability framework complicates
    regulatory enforcement. National authorities may interpret compliance
    obligations differently, leading to uneven application of safety standards and
    risk management requirements. This may undermine the EU’s broader goal
    of creating a unified digital market governed by coherent legal principles.
  • The EU’s evolving AI legal framework exposes a key tension between product-based
    and service-based liability. While the revised Product Liability
    Directive offers a harmonized approach to strict liability, it fails to capture
    harms arising from negligent conduct within the AI supply chain. Adaptive,
    context-sensitive systems often behave more like services than products,
    making traditional liability models inadequate. Without a harmonized fault-based
    regime, victims face fragmented national laws and limited access to
    justice. The withdrawal of the AI Liability Directive has deepened this gap.
    Reinstating a revised directive with clear obligations, access to evidence, and
    rebuttable presumptions would restore balance and ensure meaningful,
    effective redress for those harmed by AI systems.

European Commission proposes significant reforms to GDPR, AI Act

  • Less than a decade after the enactment of the European Union’s gold
    standard General Data Protection Regulation, the European Commission
    released its Digital Omnibus Regulation Proposal and Digital Omnibus on
    AI Regulation Proposal on 19 Nov. The draft proposals outline a course
    correction to digital regulation as set out in its new European Data Union
    Strategy.
  • Proposing to clarify in the GDPR that organizations may rely on legitimate
    interests to process personal data for AI-related purposes, provided they fully
    comply with all existing GDPR safeguards
  •  For cybersecurity protection, the package proposes a single portal for
    organizations to provide notification of a data breach. This “single-entry
    point” will be “developed with robust security safeguards and undergo
    comprehensive testing to ensure its reliability and effectiveness.” Currently,
    organizations have incident-reporting obligations under several regulations,
    including the GDPR, Network and Information Security Directive and the
    Digital Operational Resilience Act.
  • The EU’s world-first AI regulation is also facing several changes, notably the
    timelines for entry into application of the high-risk AI system rules, which
    were slated to go into effect in August 2026.

Nigeria – Privacy by Design in Early-Stage Innovation

  • Nigeria is a multi-ethnic and culturally diverse federation made up of 36
    states and the Federal Capital Territory. With a population exceeding 220
    million, it is both the largest economy and the most populous nation in
    Africa, and these attributes continue to shape its growing influence as a
    regional leader in technology and innovation.
  • Nigeria’s digital transformation has also gathered pace in terms of
    infrastructure. The number of mobile connections reached approximately
    150 million by January 2025, equivalent to roughly 64% of the population.
  • Nigeria is actively pursuing the implementation of digital public infrastructure
    (DPI) to accelerate digital inclusion and efficiency in government service
    delivery. Nigeria has established some of the core foundations for effective
    DPI deployment:

    • Digital ID: The National Identity Management Commission (NIMC)
      continues to anchor Nigeria’s identity ecosystem. As of June 2025,
      over 121 million National Identification Numbers (NIN) have been
      issued, establishing a crucial foundation for digital identity. The
      recent launch and rollout of the NIN Authentication Service
      (NINAuth), mandated for verification across all Ministries,
      Departments, and Agencies, has strengthened the security and
      reliability of digital identity verification. NIMC has also migrated all
      telecommunications operators to the NINAuth platform and upgraded
      its diaspora enrolment system, enhancing both interoperability and
      inclusion within Nigeria’s identity framework.
    • Digital Payments: The payments layer of Nigeria’s DPI ecosystem is
      among the most advanced on the continent. The Nigeria Inter-Bank
      Settlement System (NIBSS), jointly owned by the Central Bank of
      Nigeria (CBN) and licensed banks, operates the NIBSS Instant
      Payment (NIP) system that has transformed digital transactions since
      its launch in 2012, facilitating secure digital financial transactions.
    • Data Exchange: Data exchange across the government remains in its
      early stages but is rapidly evolving. Data exchange between Ministries,
      Departments, and Agencies is currently nascent. While sectoral data
      exchanges exist—for example, between the Ministry of Health, the
      Ministry of Lands, and the Revenue Authority—cross-sectoral
      interoperability remains limited. However, to resolve this limitation,
      the Federal Ministry of Communications, Innovation and Digital
      Economy (FMCIDE) is currently working on the deployment of the
      Nigeria Data Exchange Platform (NGDX) to enable secure and
      seamless data exchange. It is scheduled to go live by the end of 2025.

AI Adoption in Nigeria

  • Nigeria ranks second among African nations with over 400 AI firms
    and start-ups, following South Africa, which leads with about 600.
  • To guide this emerging landscape, the FMCIDE released a National AI
    Strategy in 2024. Though not yet finalized, the document provides a
    guiding framework for innovators, academia, and government entities
    on the responsible use of AI.