Reassessing India’s Draft SGI Rules: Key Concerns and Expert Recommendations

Background: Deepfakes and Digital Abuse in India
In November 2023, a deepfake video featuring actor Rashmika Mandanna went viral across Indian social media platforms, showing her face superimposed onto another woman’s body[1]. The incident sparked national outrage and prompted Delhi Police to arrest the creator within weeks. But Mandanna’s case represents only the visible tip of a much larger problem: research indicates that approximately 92 percent of deepfake victims in India are women[2], with synthetic content increasingly weaponized for harassment, reputational damage, and extortion.
The threat extends beyond celebrities. Matrimonial platforms have emerged as new vectors for AI-enabled abuse, where malicious actors use morphing technology to create fraudulent profiles or manipulate images of women without consent. These tools, which once required technical expertise, are now accessible through consumer applications, democratizing the capacity for digital harm while leaving victims without meaningful recourse[3].
Against this backdrop of documented harms, ranging from electoral manipulation through fabricated political statements to financial fraud enabled by synthetic video and audio, the draft amendments on Synthetically Generated Information (SGI) from the Ministry of Electronics and Information Technology (MeitY) attempt to establish mandatory disclosure requirements. Regulatory intervention responds to a legitimate crisis: when synthetic content becomes indistinguishable from authentic media, the foundations of digital trust collapse, with disproportionate consequences for vulnerable populations.
Introduction: The Hunt for Real in an AI World
On 22 October 2025, MeitY published draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, establishing mandatory disclosure requirements for synthetically generated information. The amendments mandate that all AI-generated content carry visible watermarks occupying at least 10 percent of the content area (or, for audio, an audible disclosure during the initial 10 percent of the duration), along with permanent unique identifiers embedded within the content itself.
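To make the 10 percent coverage floor concrete, the minimal sketch below (assuming the Pillow imaging library) overlays a full-width banner whose height is one tenth of the image’s height, which covers exactly 10 percent of the surface area. The label text, colour, and placement are illustrative choices of ours; the draft rule prescribes only the coverage threshold and permanence.

```python
# A minimal sketch of the draft Rule 3(3) coverage floor for images.
# Assumes the Pillow library; label text and placement are illustrative.
import math

from PIL import Image, ImageDraw

def apply_disclosure_label(path_in: str, path_out: str,
                           text: str = "SYNTHETICALLY GENERATED") -> None:
    img = Image.open(path_in).convert("RGBA")
    w, h = img.size

    # A full-width banner of height ceil(0.10 * h) covers at least
    # 10 percent of the surface area, the minimum the draft rule sets.
    banner_h = max(1, math.ceil(0.10 * h))
    banner = Image.new("RGBA", (w, banner_h), (0, 0, 0, 200))
    ImageDraw.Draw(banner).text((10, banner_h // 4), text,
                                fill=(255, 255, 255, 255))

    img.alpha_composite(banner, dest=(0, h - banner_h))
    img.convert("RGB").save(path_out)
```

Even this simple exercise hints at the objections discussed below: such an overlay is trivially cropped or inpainted away by a bad-faith actor, while every legitimate output permanently loses a tenth of its canvas.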
The regulatory intervention responds to documented instances of manipulated media causing demonstrable harm: fabricated videos influencing electoral outcomes, fraudulent audio recordings facilitating financial crimes, and synthetic images damaging individual reputations. Recognizing these risks as matters of public interest, MeitY positioned the draft rules as essential infrastructure for maintaining information integrity in India’s digital ecosystem.
However, the government’s proposed solution has drawn substantial criticism from technical experts, constitutional scholars, and industry organizations. Three prominent submissions, from the Internet Freedom Foundation (IFF), the AI Knowledge Consortium (AIKC), and DeepStrat, articulate fundamental objections spanning constitutional validity, technical feasibility, and regulatory effectiveness. The IFF argues the rules constitute an unconstitutional prior restraint on speech. The AIKC contends the prescriptive mandates are technically impractical and ignore established international standards; while acknowledging the legitimate need for transparency in AI-generated content, it advocates a principles-based framework analogous to SEBI’s Business Responsibility and Sustainability Reporting model, permitting operational flexibility while achieving policy objectives, and recommends adopting interoperable international standards such as C2PA rather than imposing uniform procedural requirements. DeepStrat, while supporting the regulatory objective, identifies critical gaps in risk classification and enforcement mechanisms.
These submissions collectively challenge whether rigid procedural requirements can effectively address sophisticated threats from malicious actors, particularly when bad-faith operators will simply circumvent labelling obligations while legitimate users bear compliance burdens. The draft rules thus stand at a critical juncture, requiring fundamental reconsideration before becoming operational policy that binds India’s rapidly evolving AI sector.
DeepStrat’s Regulatory Refinements
DeepStrat accepts the draft amendments’ underlying objective but recommends critical operational refinements to enhance enforceability and technical feasibility. Central to their submission is a proposed three-tier risk classification system, sketched below, that distinguishes creative AI content requiring minimal disclosure from malicious deepfakes warranting immediate takedown and criminal referral. They advocate replacing the rigid 10 percent watermark mandate with flexible, format-appropriate disclosures aligned with international standards such as C2PA and ISO/IEC 23053:2022, allowing platforms to select implementation methods suited to different content types, a position that converges with AIKC’s recommendation for interoperable, open standards rather than prescriptive specifications.

Recognizing India’s linguistic diversity, DeepStrat emphasizes the need for a phased rollout of Indic-language detection capabilities, noting that current detection models predominantly target English-language content. Their submission also recommends invoking Section 66E of the IT Act to address deepfake-enabled identity fraud in e-KYC systems, obliging Significant Social Media Intermediaries (SSMIs) to treat unlabelled synthetic content as community-guideline violations, and implementing a four-phase compliance roadmap that aligns regulatory expectations with existing technical capabilities rather than imposing immediately infeasible requirements. Both DeepStrat and AIKC prioritize embedded metadata and provenance logging over surface-level visual markers, reflecting a technical consensus that authentication mechanisms within the content structure prove more resilient than user-facing labels alone.
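A hypothetical sketch of that tiered structure, expressed as a simple Python mapping, shows how graduated obligations might attach to each tier. DeepStrat’s submission describes only the endpoints (creative content, malicious deepfakes), so the middle tier’s name and the exact obligation lists here are assumptions for illustration.

```python
# A hypothetical three-tier risk taxonomy of the kind DeepStrat proposes.
# Tier names and obligations are illustrative assumptions; the submission
# does not publish a machine-readable schema.
from enum import Enum

class RiskTier(Enum):
    CREATIVE = "creative"      # satire, art, entertainment
    DECEPTIVE = "deceptive"    # misleading but not independently criminal
    MALICIOUS = "malicious"    # fraud, impersonation, intimate-image abuse

OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.CREATIVE:  ["format-appropriate disclosure"],
    RiskTier.DECEPTIVE: ["prominent disclosure", "provenance logging"],
    RiskTier.MALICIOUS: ["immediate takedown", "criminal referral"],
}
```

The value of a taxonomy of this kind is that enforcement intensity scales with demonstrated harm rather than applying one uniform watermark rule to satire and fraud alike.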
Divergent Perspectives on Mandatory Disclosure Requirements
Three prominent organizations submitted substantive feedback during the consultation period for MeitY’s draft amendments, each advancing distinct analytical frameworks and regulatory recommendations. The submissions reflect fundamentally different interpretations of both the constitutional validity and technical feasibility of the government’s proposed approach.
The organizations occupy different positions along the regulatory spectrum. DeepStrat adopts a security-focused framework, accepting the regulatory objective while identifying critical gaps in risk assessment and enforcement mechanisms. The AIKC approaches the rules from an industry implementation perspective, emphasizing operational flexibility and standards interoperability. The IFF grounds its analysis in constitutional doctrine, arguing the amendments exceed MeitY’s delegated authority and violate fundamental rights protections. While DeepStrat and AIKC diverge on risk classification and compliance timelines, both converge on rejecting the fixed 10 percent watermark specification in favor of flexible, standards-based disclosure mechanisms aligned with international frameworks like C2PA.
The table below synthesizes their core positions across key regulatory dimensions:
| Regulatory Dimension | MeitY’s Draft Proposal | DeepStrat | AI Knowledge Consortium (AIKC) | Internet Freedom Foundation (IFF) |
| --- | --- | --- | --- | --- |
| Overall Position | Mandatory labelling framework under Rule 3(3) and Rule 4(1A), requiring technical controls at the point of generation and verification at the point of publication. | Supports underlying regulatory objectives but recommends enhanced specificity, standards alignment, risk-based classification mechanisms, and nuanced enforcement provisions. | Acknowledges the necessity of regulatory intervention but advocates for operational flexibility, technical interoperability, and intermediary-driven implementation strategies. | Recommends complete withdrawal of the draft amendments, citing constitutional infirmities, technical impossibility of compliance, and substantial risks to constitutionally protected speech. |
| Proposed Legal Framework | Amends the existing IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, through ministerial notification under Section 87 of the IT Act, 2000. | Operates within the proposed rule structure but recommends incorporating a risk-based classification taxonomy and applying complementary provisions from existing statutes, particularly Section 66E of the IT Act. | Advocates a principles-based reporting architecture analogous to Business Responsibility and Sustainability Reporting frameworks, permitting implementation flexibility rather than prescriptive procedural mandates. | Contends that amending the IT Rules is an improper legislative approach; recommends Parliament enact comprehensive, standalone legislation establishing an AI governance framework through primary law. |
| Labelling and Watermarking Requirements | Rule 3(3): permanent unique metadata/identifier visible on at least 10% of surface area (visual) or audible during the initial 10% of duration (audio); must immediately identify content as synthetically generated; no user modification permitted. | Recommends replacing the fixed percentage mandate with flexible, format-appropriate disclosure mechanisms aligned with international technical standards, including the C2PA and ISO/IEC 23053:2022 provenance frameworks. | Rejects the prescriptive 10% watermark mandate as technically arbitrary; advocates interoperable, open technical standards such as the Coalition for Content Provenance and Authenticity (C2PA), allowing intermediaries discretion in selecting implementation methods. | Characterizes the mandatory labelling requirement as constitutionally impermissible “compelled speech” and “prior restraint” on expression; argues the 10% watermark specification is technically unenforceable and readily circumvented by malicious actors. |
| Allocation of Responsibility | Dual structure: Rule 3(3) obligates SGI creation platforms to embed immutable labels; Rule 4(1A) requires SSMIs to mandate user declarations, verify them through technical measures, and label confirmed SGI before publication. | Places the primary obligation on entities generating synthetic content and platforms distributing it, with enhanced responsibilities for SSMIs as defined under the existing IT Rules framework. | Contends intermediaries possess the technical capability and institutional position to design harm-reduction mechanisms; recommends an incentive-based compliance model with oversight by civil society “trusted flaggers” acting as accountability monitors. | Argues the proposed rules unconstitutionally transform intermediaries from passive content hosts into active content moderators, thereby eliminating the safe-harbour protections established under Section 79 of the IT Act. |
These divergent perspectives reveal fundamental tensions in India’s approach to synthetic content regulation: between prescriptive mandates and flexible frameworks, between executive rulemaking and parliamentary legislation, and between intermediary liability and platform responsibility. The submissions collectively demonstrate that achieving the stated policy objective of transparency requires resolving these foundational questions before operationalizing specific disclosure requirements.
Consensus Framework Emerging from Stakeholder Submissions
Despite occupying different positions along the regulatory spectrum, the three organizational submissions converge on a fundamental critique of the draft amendments’ structural approach. All three organizations reject the fixed mandate requiring a 10 percent visible watermark or permanent unique identifier as the exclusive compliance mechanism.
This rejection stems from three reinforcing concerns. First, rigid procedural specifications create technical implementation barriers that vary substantially across content formats. Audio presents different disclosure challenges than static images, video, or text-based synthetic outputs. Second, prescriptive approaches prove vulnerable to circumvention by malicious actors, who can readily remove or modify watermarks while legitimate creators bear full compliance burdens. Third, mandated specifications risk suppressing innovation by imposing costs that disproportionately affect smaller platforms and market entrants.
The submissions advance a coherent alternative: replace prescriptive mandates with principles-based disclosure frameworks emphasizing transparency and verifiable provenance. Under this approach, platforms would maintain discretion to implement context-appropriate disclosure methods, such as visible watermarks, embedded metadata, content credentials, textual captions, or iconographic indicators, provided the chosen method enables users to identify synthetic content.
This flexible framework would operate through open, interoperable technical standards rather than government-specified procedures. Multiple submissions reference the Coalition for Content Provenance and Authenticity (C2PA) as an example of industry-developed standards that establish verifiable content provenance without prescriptive implementation requirements.
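The distinction between surface labels and embedded provenance is easiest to see in code. The sketch below writes a machine-readable provenance record into a PNG text chunk using Pillow; real content credentials such as C2PA manifests are cryptographically signed and produced by dedicated SDKs, so this unsigned stand-in illustrates only the general idea, and the field and key names are assumptions of ours.

```python
# A simplified, unsigned stand-in for embedded provenance metadata.
# C2PA content credentials are signed manifests handled by dedicated SDKs;
# this sketch only shows provenance travelling inside the file itself.
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_provenance(path_in: str, path_out: str, generator: str) -> None:
    record = {
        "synthetic": True,
        "generator": generator,  # e.g. the model or tool that produced it
        "disclosure": "synthetically generated information",
    }
    meta = PngInfo()
    meta.add_text("sgi-provenance", json.dumps(record))  # key name assumed
    Image.open(path_in).save(path_out, pnginfo=meta)

def read_provenance(path: str) -> dict | None:
    text_chunks = Image.open(path).text  # tEXt/iTXt chunks as a dict
    raw = text_chunks.get("sgi-provenance")
    return json.loads(raw) if raw else None
```

Because the record rides in the file structure rather than over the pixels, it carries structured data without defacing the content, but a simple re-encode strips it; the tamper evidence that signed C2PA manifests provide is exactly what this sketch lacks, and is why the submissions favour open provenance standards over any single marker.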
The consensus framework shifts the regulatory focus from procedural compliance (did the platform implement the specified watermark?) to outcome verification (can users reliably identify synthetic content?). This outcomes-based approach aligns enforcement mechanisms with policy objectives rather than with technical specifications that may become obsolete as technology evolves.
Conclusion: Reconciling Transparency with Constitutional Limits
The draft amendments address a legitimate regulatory concern: synthetic content poses documented risks to electoral integrity, financial security, and individual reputation. However, the prescriptive mandates MeitY has proposed create more problems than they solve. The fixed watermark requirement is technically unenforceable, readily circumvented by malicious actors, and potentially unconstitutional under Articles 14 and 19 of the Constitution. Expert submissions converge on an alternative: principles-based frameworks that establish transparency objectives while permitting intermediaries to select implementation methods. This outcomes-focused architecture would align India’s regulation with international standards such as C2PA, maintain operational flexibility as technology evolves, and avoid the constitutional vulnerabilities inherent in compelled-speech requirements.
The fundamental question facing MeitY is whether India will adopt rigid procedural mandates that risk both ineffectiveness and unconstitutionality, or establish flexible transparency principles that achieve policy objectives without prescriptive overreach. The consultation process has provided a clear roadmap toward the latter approach: one that balances citizen protection against deceptive content with constitutional protections for legitimate expression and technological innovation.
[1] https://www.newindianexpress.com/entertainment/2024/Jan/20/delhi-police-arrests-guntur-man-for-rashmika-mandanna-deepfake-video
[2] https://www.news18.com/india/with-92-of-deepfake-victims-being-women-how-is-ai-becoming-a-tool-for-digital-abuse-tyd-ws-el-9684473.html
[3] https://feminisminindia.com/2025/12/01/from-rishta-to-risk-scams-and-ai-morphing-fueling-gendered-violence-on-indian-matrimonial-sites/
