On paper, these are two separate topics. In practice, they're inseparable. You can't talk about the transformative potential of AI in tax administration without immediately confronting the question: at what cost, and to whom?
That tension — between innovation and protection, between speed and fairness — became the throughline of my entire day in Nagpur.
The Room That Changed My Perspective
Here's what I wasn't fully prepared for: the quality of the questions.
I walked in expecting to explain foundational concepts. What is a large language model? What does RAG architecture mean? How does prompt engineering work? And yes, some of that was necessary. But the participants — officers from developing economies across Asia, Africa, and beyond — weren't just absorbing information passively. They were interrogating it.
One officer asked how you prevent an AI-assisted audit system from encoding historical biases in taxpayer selection. Another wanted to know how her country, with limited digital infrastructure, could adopt AI without deepening the digital divide. A third pushed back on the idea that AI could meaningfully assist in transfer pricing without understanding the specific regulatory nuances of his jurisdiction.
These weren't theoretical objections. They were the questions of people who would actually have to implement these systems back home, with real constraints and real consequences.
That shift — from "should we adopt AI?" to "how do we adopt it responsibly?" — told me something important. The global conversation about AI in government has moved past the hype phase. What people want now is practical, honest guidance.
What I Showed Them (And What I Learned Showing It)
In the morning session, I walked through real examples rather than hypotheticals. India's NUDGE campaign, where AI-powered behavioural nudges led to 24,678 taxpayers voluntarily revising their returns and disclosing ₹29,208 crore in foreign assets — with zero litigation. Singapore's IRAS chatbot, which saved over 11,000 officer-hours. The CBDT's ₹3,000 crore data analytics project with LTIMindtree that's building predictive capabilities at a national scale.
I also got personal. I showed them the time-savings analysis from my own transfer pricing work — how document review that used to take eight weeks can now be compressed to two, how comparability analysis that consumed a month now takes three days. The room went quiet when I put up the numbers. Not because the technology was surprising, but because the implications were immediate. Every officer in that room was mentally mapping those efficiency gains onto their own caseload.
But the moment that stayed with me came in the afternoon.
The Cautionary Tales That Matter
When I brought up Australia's Robodebt scandal — where an automated debt recovery system wrongly targeted hundreds of thousands of welfare recipients — the energy in the room shifted. This wasn't abstract anymore. These were real governments, using real algorithms, causing real harm to real people.
The Netherlands childcare benefits crisis, where an AI system flagged families — disproportionately those with dual nationalities — for fraud based on flawed data and biased algorithms, hit even harder. Some participants came from countries with similar demographic complexities. The lesson wasn't subtle: if wealthy, technologically advanced nations can get this catastrophically wrong, what does that mean for countries with fewer resources to build safeguards?
I made a point that I believe deeply: these failures weren't caused by bad people. They were caused by good people who asked "Can we automate this?" before asking "Should we automate this?" Who asked "How much will this save?" before asking "Who might this harm?"
The room didn't just nod along. They debated. They shared examples from their own contexts. They pushed me to be more specific about what "human oversight" actually looks like in practice when you're understaffed and under-resourced.
What I'm Taking Away
Two things crystallised for me during that day in Nagpur.
First, the demand for practical AI literacy in government is enormous and largely unmet. Officers don't need more keynote speeches about the "fourth industrial revolution." They need to sit down with a tool, try a prompt, see what works, understand what fails, and build intuition through practice. The most engaged moments in my sessions weren't during the slides — they were during the live demonstrations, when participants could see AI drafting a tax notice or analysing a scenario in real time. In one such demonstration, I showed how supplying case-specific context in a prompt shapes the customised notice the model generates.
Second, developing economies have a genuine opportunity to leapfrog. They're not burdened by legacy systems the way some advanced administrations are. If they build thoughtfully — with ethical frameworks baked in from day one rather than bolted on after a scandal — they can set a global standard. That's not wishful thinking. It's a strategic possibility, and it requires exactly the kind of cross-country learning that programmes like ITEC enable.
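For readers curious what "supplying context in a prompt" looks like mechanically, here is a minimal, hypothetical sketch. The template, field names, and example case are mine for illustration, not the actual system demonstrated in Nagpur:

```python
# Hypothetical illustration: the officer-supplied context (taxpayer facts,
# the relevant provision, the observed discrepancy) is embedded in the
# prompt so the model's draft is grounded in the case rather than generic.

NOTICE_TEMPLATE = """You are assisting a tax officer. Draft a polite,
formal notice using ONLY the facts below, citing the provision mentioned.

Taxpayer: {name} (PAN ending {pan_suffix})
Assessment year: {assessment_year}
Relevant provision: {section}
Observed discrepancy: {discrepancy}

Ask the taxpayer to respond within {days} days."""

def build_notice_prompt(case: dict) -> str:
    """Fill the template with case-specific context before sending to an LLM."""
    return NOTICE_TEMPLATE.format(**case)

example_case = {
    "name": "A. Sharma",
    "pan_suffix": "1234",
    "assessment_year": "2023-24",
    "section": "Section 139(5)",
    "discrepancy": "foreign assets reflected in information reports "
                   "but absent from the filed return",
    "days": 15,
}

prompt = build_notice_prompt(example_case)
# The filled prompt is then sent to whichever model the administration
# uses; the response is only a draft, reviewed by a human officer.
```

The point of the exercise was not the template itself but the principle: the more precise the context an officer supplies, the less the model falls back on generic boilerplate, and the easier the human review step becomes.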
The Question I Keep Coming Back To
Tax administration isn't glamorous. It doesn't make headlines unless something goes wrong. But it's the infrastructure that funds schools, hospitals, roads, and defence. When we get it right, societies function. When we get it wrong — through bias, opacity, or carelessness — the most vulnerable bear the cost.
So the question isn't whether AI will transform tax administration. It already is. The question is whether we — the people in the room that day, and the thousands like us around the world — will shape that transformation with the care it demands.
I left Nagpur cautiously optimistic that we will.