Synopsis
Legal and policy experts are urging Indian regulators to move beyond broad AI principles and begin addressing the unique risks posed by autonomous AI agents. India currently has no dedicated law to govern AI agents that can act on their own and interact with other systems, leaving a yawning gap as companies rapidly deploy such tools across payments, banking and supply chains, they said.
Much of the risk is currently being managed through contracts, consent structures and system design, all tools built for a world where humans were still in the loop. Regulators are still relying on voluntary guidelines and broad principles instead of a dedicated framework, the experts said.
Enterprises are embedding AI into payments, transactions and workflows, raising the urgency of a rigorous governing framework.
The biggest concern is over agent-to-agent interactions, where one autonomous system triggers another without any human oversight, making liability difficult to trace if something goes wrong. Existing laws such as tort, contract, consumer protection and data rules are being stretched to cover these risks, even though they were never designed for autonomous systems acting independently.
At the policy level, there is an ongoing debate on the subject but no direct action yet.
“Indian regulators are actively thinking about AI governance, though agentic AI remains a frontier challenge,” said Subimal Bhattacharjee, a technology policy analyst.
The difficulty lies in how these systems behave in practice.
“Unlike a single AI system, autonomous agents trigger other agents in chains, across banking, healthcare and supply chains, with no human checkpoint in between,” Bhattacharjee said. “Attributing liability across such pipelines is something no major regulatory framework has yet resolved cleanly.”
China, for instance, requires that every autonomous action be traceable back to a specific human command or system parameter. NITI Aayog has signalled a similar direction, suggesting a risk-tiered supervision model that would require high-impact agents to undergo regulatory sandbox testing. India has not moved there yet.
“Causation and allocation of liability become particularly tricky in complex multi-agent interactions,” said Arun Prabhu, partner at Cyril Amarchand Mangaldas. Applying traditional legal tests, such as “foreseeability,” is difficult when systems are constantly evolving and acting across environments, he added.
One action can trigger a chain of decisions across banking, healthcare or supply chains, making both the impact and accountability harder to contain.
There is no specific legal framework in India that directly addresses these risks.
“Without a dedicated AI statute, companies primarily rely on tort law and contractual obligations to manage deployment risks,” said Probir Roy Chowdhury, partner at JSA Advocates and Solicitors.
Since Indian law does not recognise AI agents as legal persons, liability for their actions generally falls on the developer or operator, he said, unless contracts explicitly shift that burden. Courts are also likely to apply product liability standards under the Consumer Protection Act to penalise developers if an error stems from a lack of mandatory safety guardrails, Roy Chowdhury added.
That becomes harder in agent-to-agent interactions.
“Liability would generally cast responsibility on the developer or operator, even if they did not intend for the AI agent to cause harm,” he said.
Some lawyers argue existing laws offer a starting point.
“Existing legal frameworks such as the IT Act, data protection law and sectoral regulations should be sufficient guidance,” said Harsh Walia, partner at Khaitan and Co.
Indian technology law has historically been reactive, with significant incidents driving regulatory change.
“A dedicated framework may eventually become an urgent social need, but at present it seems that this dedicated law is still a few years away,” Walia said.
Bhattacharjee said, “India’s instinct to move carefully rather than rush legislation is sound but careful needs urgency, because agentic deployments are not waiting for policy to catch up.”