
India
India – DPDPA Implementation Timelines Being Shortened? - Centre may shorten data protection law compliance timeline for Big Tech | Business News – The Indian Express
- The Ministry of Electronics and IT (MeitY) may shorten the timeline for Big Tech companies such as Meta, Google, and Amazon to comply with India’s Digital Personal Data Protection Act, 2023 and other related rules to 12 months from the current 18 months, as the government looks at creating separate compliance regimes for large companies and startups, The Indian Express has learnt. The move, though, could spark a wave of pushbacks from tech companies. https://indianexpress.com/article/legal-news/digital-personal-data-protection-rules-notified-key-opportunities-and-challenges-ahead-10370876/
- In particular, provisions that place additional obligations on ‘significant data fiduciaries’ could see fast-tracking in terms of compliance timelines.
- Tech majors including Meta, Google, Apple, Microsoft, and Amazon are expected to be classified as significant data fiduciaries.
- These specific provisions require tech companies to carry out a yearly data protection impact assessment and to verify that technical measures, including the algorithmic software used to handle personal data, do not violate users’ rights.
India – Top 10 operational impacts of India’s DPDPA | IAPP https://iapp.org/resources/article/operational-impacts-of-indias-dpdpa-part1
- The DPDPA covers any entity that processes digital personal data within India and its union territories; data in non-digitized form is excluded. The act also applies extraterritorially, covering data processed outside India where the processing is done with the intent to offer goods and services to individuals within India. It differs from the GDPR, however, in excluding from its purview profiling carried out from outside India that is not connected to providing any good or service to the data principal. For instance, profiling individuals located in India from outside the country for statistical purposes may not trigger any obligations for data processing entities under the DPDPA.
- The DPDPA hinges on consent as the ground for processing personal data, although additional, narrowly defined or situation-based lawful grounds are also available. These are the “certain legitimate uses” listed under Section 7. The most relevant to the private sector include: specified purposes for which the data principal has voluntarily provided their personal data and has not indicated any objection to its use for that purpose; fulfilment of legal or judicial obligations of a specified nature; medical emergencies and health services; situations involving a breakdown of public order; and employment-related purposes.
- Like the GDPR, the DPDPA requires that consent for the processing of personal data be “free, specific, informed, unambiguous and unconditional with a clear affirmative action.” Further, consent should be limited to such personal data as is necessary for the specified purpose in the request for consent; in practice, this may mean that data fiduciaries cannot rely on “bundled consent.” The notice for consent must inform the data principal about the personal data and the purpose of its processing, how they can exercise their rights under the act, and the process for filing a complaint with the Data Protection Board of India (DPBI). Importantly from an operational perspective, where a data principal gave consent to processing before the act came into force, the data fiduciary must provide a notice with these details “as soon as it is reasonably practicable.” Rule 3 sets out clear standards: notices must be presented in clear and plain language and must include a detailed description of the personal data, its specific purpose, and a list of the goods and services that will use it.
- In what is perhaps one of the most important rights from the data principal’s perspective, and similar to the GDPR, data principals have a right to withdraw their consent at any time, and data fiduciaries are required to ensure that withdrawing consent is as easy as giving it. Once consent is withdrawn, the personal data must be deleted unless a legal obligation to retain it applies. Additionally, data fiduciaries must instruct their processors to stop processing the data for which consent has been withdrawn (see the sketch below).
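As an illustration only, the sketch below shows, in Python, how a data fiduciary’s systems might act on a withdrawal: stop processing, instruct processors, and erase the data unless a legal retention obligation applies. All function names here are hypothetical placeholders for an organisation’s own systems; the DPDPA does not prescribe any particular interface.

```python
# Hypothetical stubs standing in for an organisation's own systems;
# the DPDPA does not prescribe any particular interface.
def stop_processing(principal_id: str, purpose: str) -> None:
    print(f"stopped processing '{purpose}' data for {principal_id}")

def notify_processors(principal_id: str, purpose: str, action: str) -> None:
    print(f"instructed processors to {action} for {principal_id}/{purpose}")

def has_retention_obligation(principal_id: str, purpose: str) -> bool:
    # A real system would check e.g. tax, audit, or other statutory retention duties.
    return False

def erase_personal_data(principal_id: str, purpose: str) -> None:
    print(f"erased '{purpose}' data for {principal_id}")

def handle_consent_withdrawal(principal_id: str, purpose: str) -> None:
    """Illustrative withdrawal flow: stop processing, tell processors,
    then erase unless a legal obligation requires retention."""
    stop_processing(principal_id, purpose)
    notify_processors(principal_id, purpose, action="stop processing")
    if not has_retention_obligation(principal_id, purpose):
        erase_personal_data(principal_id, purpose)
        notify_processors(principal_id, purpose, action="erase")

handle_consent_withdrawal("principal-123", "marketing")
```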
Singapore
Singapore – AI Governance Framework for Agentic AI - Singapore Launches New Model AI Governance Framework for Agentic AI | IMDA https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai?mkt_tok=MTM4LUVaTS0wNDIAAAGfivzYAQvbEnVGo-P8UY8XA8jK3Y6I_1yck8mQp1ODblBF3cmuTeQzo-YEsZFD-xEwlCR94Xf7n_V1h2iSdhrsZpidu7xl4mdYysFqlK9G0kg_
- Minister for Digital Development and Information, Mrs Josephine Teo, announced the launch of the new Model AI Governance Framework for Agentic AI at the World Economic Forum (WEF) today. It provides guidance to organisations on how to deploy agents responsibly, recommending technical and non-technical measures to mitigate risks, while emphasising that humans are ultimately accountable.
- Unlike traditional and generative AI, AI agents can reason and take actions to complete tasks on behalf of users. This allows organisations to automate repetitive tasks, such as those related to customer service and enterprise productivity, and to drive sectoral transformation by freeing up employees’ time for higher-value activities.
- The Model AI Governance Framework (MGF) for Agentic AI offers a structured overview of the risks of agentic AI and emerging best practices for managing them. It is targeted at organisations looking to deploy agentic AI, whether by developing AI agents in-house or by using third-party agentic solutions. The framework provides guidance on the technical and non-technical measures organisations need to put in place to deploy agents responsibly, across four dimensions (see the sketch after this list):
- Assessing and bounding the risks upfront by selecting appropriate agentic use cases and placing limits on agents’ powers such as agents’ autonomy and access to tools and data;
- Making humans meaningfully accountable for agents by defining significant checkpoints at which human approval is required;
- Implementing technical controls and processes throughout the agent lifecycle, such as baseline testing and controlling access to whitelisted services; and,
- Enabling end-user responsibility through transparency and education/training.
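The framework is technology-neutral and does not prescribe code, but as a rough illustration of the first three dimensions, the Python sketch below shows how a deployer might bound an agent’s powers with a tool whitelist and require human approval at significant checkpoints. The class, tool names, and checkpoints are hypothetical, not taken from the MGF.

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class AgentGuardrails:
    """Hypothetical guardrails reflecting two MGF measures: bounded tool/data access
    and human approval at significant checkpoints. Names are illustrative only."""
    allowed_tools: Set[str] = field(default_factory=lambda: {"search_kb", "draft_reply"})
    approval_required: Set[str] = field(default_factory=lambda: {"issue_refund", "delete_record"})

    def authorize(self, tool: str, human_approved: bool = False) -> bool:
        # Anything outside the whitelist is beyond the agent's bounded powers.
        if tool not in self.allowed_tools and tool not in self.approval_required:
            return False
        # Significant actions require a human to remain meaningfully accountable.
        if tool in self.approval_required and not human_approved:
            return False
        return True

guardrails = AgentGuardrails()
print(guardrails.authorize("search_kb"))                          # True: routine, whitelisted
print(guardrails.authorize("send_wire_transfer"))                 # False: not among the agent's powers
print(guardrails.authorize("issue_refund"))                       # False: awaits human sign-off
print(guardrails.authorize("issue_refund", human_approved=True))  # True: human approved at the checkpoint
```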
South Korea
South Korea – AI Basic Act - South Korea launches landmark laws to regulate AI, startups warn of compliance burdens | Reuters https://www.reuters.com/world/asia-pacific/south-korea-launches-landmark-laws-regulate-ai-startups-warn-compliance-burdens-2026-01-22/?mkt_tok=MTM4LUVaTS0wNDIAAAGfhl9jgLHQuVAVBDEmkJjgjjPVLFFLWbW8LPQj4AMktL30EPBZYIXehIqwHECUVHOSPYaMxj4K-2ESr6_RCZoSuwNCZW_gs-4_ybAfg-36BiiS
- South Korea introduced on Thursday what it says is the world’s first comprehensive set of laws regulating artificial intelligence, aiming to strengthen trust and safety in the sector, though startups fretted that compliance could hold them back. Aiming to become one of the world’s top three AI powerhouses, South Korea hopes the new AI Basic Act will help position the country as a leader in the field. The laws have taken effect in their entirety, sooner than the EU’s AI Act, which is being applied in phases through 2027.
- Under South Korea’s laws, companies must ensure there is human oversight of so-called “high-impact” AI, which covers fields such as nuclear safety, the production of drinking water, transport, healthcare, and financial uses such as credit evaluation and loan screening. Other rules stipulate that companies must give users advance notice about products or services using high-impact or generative AI, and provide clear labelling when AI-generated output is difficult to distinguish from reality.
- The Ministry of Science and ICT has said the legal framework was designed to promote AI adoption while building a foundation of safety and trust. The penalties can be hefty. A failure to label generative AI in South Korea, for example, could leave a company facing a fine of up to 30 million won ($20,400).
Australia
Australia – Guidance on Automated Decision Making - Australian Information Commissioner highlights improved transparency and integrity for government agencies in automated decision-making | OAIC https://www.oaic.gov.au/news/media-centre/australian-information-commissioner-highlights-improved-transparency-and-integrity-for-government-agencies-in-automated-decision-making?mkt_tok=MTM4LUVaTS0wNDIAAAGfhl9jgfLl5F3LknaU-H4DoXHsoU-iSNH3EqckXmTgb2h4uTbEcjGpUudQZhA6DyfG9L_XHQ2CqTHZYl7UTSk2lVR0vnotQXXGbdKZ4vAW1uC6
- The Office of the Australian Information Commissioner (OAIC) has identified opportunities for Australian Government agencies to improve transparency in their use of automated decision-making (ADM). ADM refers to the use of technology to automate decision-making processes; it is used across government in areas such as social services, taxation, aged care and veterans’ entitlements. The Report is intended to provide clarity and certainty for agencies and the community regarding the operation of the Australian access to information scheme in the context of digital government.
- The review assessed how agencies disclose their use of ADM as ‘operational information’ required to be published under the Freedom of Information Act 1982 (FOI Act). The Report acknowledges that technology has altered the operating environment of agencies and greater guidance is required to ensure that agencies are well placed to meet their existing obligations.
Canada
Canada – High Risk Child Sex Offender Database - Royal Canadian Mounted Police releases High Risk Child Sex Offender Database to public | Canadian Lawyer https://www.canadianlawyermag.com/news/general/royal-canadian-mounted-police-releases-high-risk-child-sex-offender-database-to-public/393593
- The Royal Canadian Mounted Police has released to the national public the High Risk Child Sex Offender Database, which lists centralized information about individuals who have been found guilty of sexual offences against children. The database also includes those whose risk of committing crimes of a sexual nature is high. The tool has been described as the first of its kind in the country.
- The RCMP updates the database with recommendations from provincial, territorial, and municipal authorities, which identify high-risk offenders in their jurisdictions through established practices. These authorities are tasked with verifying the accuracy of information provided for the database.
- The database is intended to help law enforcement investigate and prevent sexual crimes against children; it is also intended to strengthen the public’s knowledge and aid in decision-making to protect children and vulnerable individuals.
- The RCMP differentiated the High Risk Child Sex Offender Database from the National Sex Offender Registry (NSOR), the national registration system for offenders convicted of designated sex offences. Only law enforcement may access the NSOR, which is governed by the Sex Offender Information Registry Act and requires the individuals listed to report to police every year under court orders.
Ontario – Joint Principles on AI Adoption - Ontario privacy commissioner, human rights commission publish joint principles on AI adoption | Law Times https://www.lawtimesnews.com/practice-areas/privacy-and-data/ontario-privacy-commissioner-human-rights-commission-publish-joint-principles-on-ai-adoption/393159
- The principles focus on helping organizations maintain public trust by respecting privacy and human rights in developing, implementing, and using AI. Organizations using, or considering the adoption of, AI systems are urged to adhere to six principles: valid and reliable, safe, privacy-protective, human rights-affirming, transparent, and accountable. AI systems must generate valid, reliable, and accurate results for the purposes for which they were developed, used, or deployed. Systems must meet independent testing standards and objectively fulfill the intended requirements of their specific application, and they must perform consistently over time in the environment in which they are implemented.
- System use must comply with Ontario’s human rights and privacy laws, including the right to non-discrimination. AI systems must safeguard human life, physical and mental health, economic security, and the environment, and they must be reviewed and monitored to ensure they can withstand adverse events or deliberate efforts to induce harm.
- AI system developers, providers, and users must proactively safeguard personal information and respect the right of access to information. Systems must be developed under a privacy-by-design approach that anticipates and minimizes privacy risks to groups and individuals, incorporating privacy protections into the systems themselves. AI developers, providers, and institutions must ensure that systems do not infringe on significant equality rights by identifying inherent systemic discrimination and addressing it under the Ontario Human Rights Code. Governments and governmental actors must also respect the Canadian Charter of Rights and Freedoms, including the rights to freedom of expression, peaceful assembly, and association.
- Organizations must establish an internal governance structure that includes a human-in-the-loop approach to facilitate system accountability. They must conduct privacy and human rights impact assessments as well as algorithmic impact assessments, and a person or persons must be appointed to supervise system development, deployment, and use.

