REGULATORY · Reg Update · March 2025

India AI Governance Guidelines, 2025

A comprehensive framework to enable safe, inclusive, and innovation-driven AI adoption.

LITT Research · 6 min read

India has released the India AI Governance Guidelines, laying out a comprehensive framework to enable safe, inclusive, and innovation-driven AI adoption. The guidelines recognise that AI represents a significant opportunity for national development across sectors such as healthcare, agriculture, education, and public services, but also carries risks ranging from misinformation and deepfakes to algorithmic bias, privacy concerns, and threats to national security.

The core aim of the framework is to balance innovation with accountability, ensuring India benefits from AI while safeguarding individuals and society.

At the foundation of the framework are seven guiding “sutras”: Trust is the Foundation; People First; Innovation over Restraint; Fairness and Equity; Accountability; Understandable by Design; and Safety, Resilience & Sustainability. These principles emphasise human-centric design, responsible deployment, clear responsibility across the AI value chain, and the need for systems that are explainable, safe, and inclusive.

I. The Seven Guiding Sutras

01
Trust is the Foundation
Without trust, innovation and adoption will stagnate.
02
People First
Human-centric design, human oversight, and human empowerment.
03
Innovation over Restraint
All other things being equal, responsible innovation should be prioritised over cautionary restraint.
04
Fairness & Equity
Promote inclusive development and avoid discrimination.
05
Accountability
Clear allocation of responsibility and enforcement of regulations.
06
Understandable by Design
Provide disclosures and explanations that can be understood by the intended user and regulators.
07
Safety, Resilience & Sustainability
Safe, secure, and robust systems that are able to withstand systemic shocks and are environmentally sustainable.
II. Institutional Architecture

A notable institutional shift is the establishment of an AI Governance Group (AIGG) for cross-government coordination, supported by a Technology & Policy Expert Committee, and operational guidance from the AI Safety Institute (AISI). The AISI will lead research on AI safety, develop technical standards, support testing and evaluation, and anchor India’s participation in global AI safety networks.

The Guidelines also stress the need for voluntary compliance and industry-led responsibility, recommending transparency reports, grievance mechanisms, audits, and the adoption of techno-legal solutions such as privacy-preserving data architectures and content provenance measures to help counter deepfakes and harmful misinformation.

III. Six Pillars of Implementation

The guidelines propose six key pillars of implementation:

01
Infrastructure
Enable AI innovation and adoption by expanding access to foundational resources such as data and compute, attracting investment, and leveraging the power of digital public infrastructure for scale, impact, and inclusion.
02
Capacity Building
Initiate education, skilling, and training programs to empower people, build trust, and increase awareness about the risks and opportunities of AI.
03
Policy & Regulation
Adopt balanced, agile, and flexible frameworks that support innovation and mitigate the risks of AI. Review current laws, identify regulatory gaps in relation to AI systems, and address them with targeted amendments.
04
Risk Mitigation
Develop an India-specific risk assessment framework that reflects real-world evidence of harm. Encourage compliance through voluntary measures supported by techno-legal solutions as appropriate. Additional risk-mitigation obligations may apply in specific contexts, e.g. for sensitive applications or to protect vulnerable groups.
05
Accountability
Adopt a graded liability system based on the function performed, level of risk, and whether due diligence was observed. Applicable laws should be enforced, while guidelines can assist organisations in meeting their obligations. Greater transparency is required about how different actors in the AI value chain operate and their compliance with legal obligations.
06
Institutions
Adopt a whole-of-government approach in which ministries, sectoral regulators, and other public bodies work together to develop and implement AI governance frameworks. An AI Governance Group (AIGG) should be set up, supported by a Technology & Policy Expert Committee (TPEC). The AI Safety Institute (AISI) should be resourced to provide technical expertise on trust and safety issues, while sector regulators continue to exercise enforcement powers.
IV. Implications

Overall, the India AI Governance Guidelines signal a pragmatic, future-focused, and innovation-positive approach. The framework aims to ensure that as AI scales, it does so safely, ethically, and in alignment with India’s developmental vision, making AI a driver of broad societal benefit rather than restricted technological advantage.

Tags: AI Governance · India AI · Regulation · DPDP · AI Safety · AISI
LITT Research

The research arm of LITT, publishing regulatory updates and deep dives on Indian regulatory intelligence.

© 2026 LITT Technologies Pvt. Ltd.