Thursday, January 15, 2026

When AI Builds AI

There's something uniquely humbling about watching an AI agent build another AI agent in less than two weeks - and then realizing that the thing it built might fundamentally change how millions of people work every day.

Anthropic just launched Cowork, a desktop AI agent that manages files on your Mac without you having to babysit it through every step. But here's what stopped me cold: they built this entire feature using Claude Code. In approximately ten days. The AI helped build the AI that now helps us build... well, whatever we need to build.

If you're in tax administration, public finance, or really any corner of government that drowns in paperwork, you should be paying very close attention to what just happened.

The Expense Report That Changed My Mind

Let me paint you a picture that probably sounds familiar. A financial operations team receives hundreds of receipt images every month - crumpled photos from field visits, scanned hotel bills, blurry restaurant checks. Someone (usually several someones) manually extracts vendor names, amounts, dates. They sort by category. They build spreadsheets. They reconcile. Four hours of mind-numbing work that, let's be honest, nobody entered public service to do.

Now imagine this: you point Cowork at a folder of those receipts. You go get coffee. You come back to a reconciled monthly expense spreadsheet, categorized and formatted, with near-zero data entry errors. The four-hour process took fifteen minutes, and the humans involved spent that time thinking about patterns in spending rather than typing numbers into cells.
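To make the workflow concrete, here's a minimal sketch of the reconciliation step such an agent performs once the receipts are parsed. Everything below is hypothetical - the receipt records, category keywords, and field names are invented for illustration, and a real agent would first extract these fields from the images themselves:

```python
import csv
from collections import defaultdict

# Hypothetical records an agent might extract from receipt images.
# In practice these would come from OCR or vision-model extraction.
RECEIPTS = [
    {"vendor": "City Hotel", "amount": 4200.00, "date": "2026-01-03"},
    {"vendor": "Metro Cafe", "amount": 350.50, "date": "2026-01-05"},
    {"vendor": "Airline Co", "amount": 8900.00, "date": "2026-01-07"},
    {"vendor": "Metro Cafe", "amount": 410.00, "date": "2026-01-12"},
]

# Illustrative keyword-based categorization rules.
CATEGORIES = {"hotel": "Lodging", "cafe": "Meals", "airline": "Travel"}

def categorize(vendor: str) -> str:
    """Map a vendor name to an expense category by keyword match."""
    name = vendor.lower()
    for keyword, category in CATEGORIES.items():
        if keyword in name:
            return category
    return "Uncategorized"

def reconcile(receipts, out_path="expenses.csv"):
    """Group receipts by category and write a summary spreadsheet."""
    totals = defaultdict(float)
    for r in receipts:
        totals[categorize(r["vendor"])] += r["amount"]
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Category", "Total"])
        for category, total in sorted(totals.items()):
            writer.writerow([category, f"{total:.2f}"])
    return dict(totals)

print(reconcile(RECEIPTS))
```

The interesting part of an agent like Cowork isn't this bookkeeping logic - it's that the extraction, categorization, and error-checking happen without a human typing anything.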

That's not a future scenario. That's available today for Claude Max subscribers on macOS.

And working at DOMS, thinking constantly about how we transform tax administration while respecting taxpayer dignity, I find myself asking: if Anthropic can build something this sophisticated in ten days using their own AI tools, what's our excuse for still requiring taxpayers to manually re-enter information the government already has? (Pre-filled forms have, to be fair, taken us a good distance on that front.)

The Recursive Loop We're Living In

Here's what fascinates me about this moment. Anthropic used Claude Code - an AI coding agent - to build Cowork - an AI file management agent. We're watching AI accelerate AI development, which will accelerate how we deploy AI, which will accelerate... you see where this goes.

In transfer pricing work in Mumbai, I've spent years analyzing patterns across thousands of transactions, looking for anomalies that might indicate profit shifting. It's intellectually demanding work that requires both pattern recognition and contextual judgment. The pattern recognition part? That's increasingly AI territory. The contextual judgment - understanding what unusual circumstances might legitimately explain an outlier, recognizing when similar-looking cases require different treatment - that's profoundly human.

But here's the thing: I couldn't do the contextual judgment part nearly as well if I were drowning in the pattern recognition grunt work. The AI doesn't replace my expertise; it creates the conditions where my expertise can actually matter.

Cowork represents a particular philosophy about this division of labor. It's not a chatbot that requires constant prompting. You specify a folder, define a task, and it works autonomously within those boundaries. It reads, creates, edits - without asking permission at every step. That's a fundamentally different relationship between human and AI than most of us are used to.

What Government Gets Wrong About AI (And Why Cowork Matters)

In my role working on India's new Income Tax Act 2025 and developing Guidance Notes at DOMS, I see two competing visions of AI in government service constantly colliding.

Vision One: AI as Institutional Efficiency Engine. Automate processing. Speed up compliance checks. Reduce headcount needs. Make government run faster, cheaper.

Vision Two: AI as Citizen Empowerment Tool. Reduce coordination costs for taxpayers. Make complexity navigable. Shift the relationship from adversarial compliance to collaborative partnership.

Most government AI initiatives, if we're being honest, default to Vision One. It's easier to measure. It fits existing budget frameworks.

But Vision Two is where the transformation actually happens.

What Anthropic did with Cowork - built primarily by AI, for everyday human tasks, designed to work autonomously once properly directed - points toward Vision Two. It doesn't make Anthropic's team smaller; it makes them more capable of building ambitious things quickly. The constraint shifted from "how many hours can our engineers spend on this?" to "what's actually worth building?"

Now translate that to tax administration. The constraint shouldn't be "how many officers can we hire to process returns?" It should be "how do we help taxpayers understand and meet their obligations with minimum friction?"

If an AI agent can take a folder of messy receipts and produce a reconciled expense report in minutes, could a similar agent help a small business owner take a folder of invoices and produce an accurate GST return? Not just fill in the forms - actually understand what qualifies, what doesn't, flag potential issues, suggest legitimate deductions they might have missed?

The technology is clearly there. The question is whether we have the imagination and will to deploy it this way.

The Ten-Day Test

Here's a thought experiment I keep coming back to: If Anthropic can build a sophisticated desktop agent in ten days using AI-assisted development, what could a well-resourced government innovation team build in ten weeks?

A taxpayer assistance chatbot that actually understands the Income Tax Act 2025's streamlined 536 sections? An agent that automatically identifies eligible deductions from uploaded financial documents? A tool that helps taxpayers model different scenarios - should I claim this under section X or Y? - with clear explanations in plain language?

Working closely with CBDT Board Members and the Chairman on policy implementation, I've seen how much brilliant thinking goes into tax reform. The new Act itself is remarkable - reducing 800+ sections to around 536, written in genuinely plain language, designed for accessibility.

Cowork suggests a different approach: build the tools as fast as you rebuild the rules. Use AI to create AI-powered assistance that evolves alongside the policy. Don't wait for the perfect centralized solution; empower teams to build, test, learn, iterate.

The Privacy Architecture We're Not Talking About

There's a critical detail buried in Anthropic's announcement that everyone in government should internalize: Cowork operates within user-specified folders. It doesn't roam freely across your system. You define the boundaries; it works within them.
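The folder-boundary idea is a well-known sandboxing pattern, and it's worth seeing how small the core of it is. This is not Cowork's actual implementation - just an illustrative sketch of the pattern, where every file operation is resolved and checked against a user-chosen root before it's allowed:

```python
from pathlib import Path

class BoundedWorkspace:
    """Illustrative folder-bounded file access: every operation is
    resolved and checked against a user-specified root directory."""

    def __init__(self, root: str):
        self.root = Path(root).resolve()

    def _inside(self, path: str) -> Path:
        # Resolve symlinks and ".." segments, then verify the result
        # still lives under the workspace root.
        resolved = (self.root / path).resolve()
        if not resolved.is_relative_to(self.root):
            raise PermissionError(f"{path} is outside the workspace")
        return resolved

    def read(self, path: str) -> str:
        return self._inside(path).read_text()

    def write(self, path: str, text: str) -> None:
        target = self._inside(path)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(text)
```

The point of the design is that autonomy and boundaries aren't in tension: the agent can read, create, and edit freely, but a path like `../outside.txt` is rejected before it's ever touched.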

This matters enormously for government AI deployment. The resistance to AI in tax administration often centers on privacy concerns, and rightfully so. Citizens worry about algorithmic surveillance, about AIs that know too much, about data being used in ways they didn't consent to.

But the Cowork model offers a different architecture: bounded AI assistance. The taxpayer uploads their documents to a specific, secure environment. The AI works within that environment to help them complete their obligations accurately. The AI doesn't have access to their entire financial life - only what they explicitly provide for the specific purpose of tax compliance.

This isn't just technically feasible; it's a fundamentally different social contract. Instead of "trust the government with AI access to all your data," it's "use this AI tool, bounded by your choices, to interact with government more effectively."

That shift from institutional efficiency to citizen empowerment I mentioned earlier? It requires this kind of privacy-preserving architecture. You can't empower citizens if they fundamentally don't trust the tools you're asking them to use.

What Gives Me Hope

I'll be honest about something that troubles me. The gap between what's technically possible and what government actually deploys is widening dangerously fast. Private sector companies are building sophisticated AI agents in weeks. Government procurement cycles measure timelines in years.

If a taxpayer can use a commercial AI tool to manage their finances more easily than they can use government-provided tax compliance tools, we've failed. And right now, in 2026, that's increasingly the reality.

But here's what gives me hope: The India we're building through initiatives like the new Income Tax Act isn't trying to compete with the private sector on technological sophistication. We're trying to create the legal and policy frameworks that make sophisticated technology serve public purpose.

The Act's move to plain language, the Taxpayers' Charter revision we're working on, the focus on collaboration over confrontation - these create the conditions where tools like Cowork-for-tax-compliance could actually flourish.

I've been struck by how hungry students and young professionals are for this vision. They don't want to choose between public service and technological sophistication. They want to bring AI's potential into government, to build tools that genuinely help people navigate complexity.

That energy matters. If we can channel it - if we can create environments where talented people can build meaningful solutions quickly, learn from real users, iterate rapidly - the ten-day timeline that seems remarkable today might just become normal.

The Question That Matters

So here's what I keep coming back to: Are we building AI for government, or are we building AI for citizens that happens to interact with government?

Cowork is definitely the latter. It doesn't exist to make Anthropic's operations more efficient (though it probably does that too). It exists to make Anthropic's users' lives easier. The benefit to Anthropic is indirect - happier, more productive users who see more value in their subscription.

Most government AI is the former. Built to make processing faster, compliance checking more automated, administration more efficient. The benefit to citizens is supposed to be indirect - cheaper government, faster processing, fewer errors.

I think we have this backwards.

What if we started with: How do we help this specific taxpayer understand what they owe and why? How do we make it genuinely simple for this small business to comply accurately? How do we reduce the cognitive load on this individual trying to claim legitimate deductions?

And then worked backwards to: What AI capabilities would we need to build to achieve that? What data infrastructure? What privacy protections? What training for our officers who work alongside these tools?

That's a fundamentally different procurement process. A different innovation culture. A different success metric. Not "how many returns processed by CPC" but "how many taxpayers report feeling confident they complied correctly?"

An Honest Admission

I don't have all the answers here. Working at DOMS, engaging with policy implementation at the highest levels, I'm acutely aware of constraints I couldn't have imagined before serving in this role. Government AI deployment isn't slow because bureaucrats are lazy or unimaginative. It's slow because the consequences of getting it wrong affect millions of lives, because privacy architecture for government AI is genuinely harder than for consumer applications, because equity considerations require us to ensure AI benefits don't accrue only to the tech-savvy.

These aren't excuses; they're real challenges that deserve serious thought.

But watching Anthropic build an AI agent using an AI agent in ten days, seeing what's now possible for ordinary users, I also know this: The complexity argument only holds if we're trying to build centralized, one-size-fits-all solutions. If we're trying to create bounded, privacy-preserving tools that help individuals navigate their specific situations, the path forward is clearer than we often admit.

The Income Tax Act 2025 gives us a once-in-a-generation opportunity to rethink not just the legal framework but the entire compliance experience. We're writing Guidance Notes to help people understand the new Act. What if those Guidance Notes were interactive? What if they adapted to your specific situation?

Building Differently

If I had to distill what Cowork represents into one sentence, it would be this: AI building AI to help humans focus on what actually matters to them.

For Anthropic's users, that's managing files and tasks efficiently so they can do their real work.

For taxpayers, it could be navigating tax obligations confidently so they can focus on their businesses, their families, their lives.

For us in tax administration, it should be about creating the conditions where that kind of empowerment becomes normal, expected, achievable - not exceptional.

The technology is here. The legal framework is evolving.

And I think the next ten days, ten weeks, ten months of government AI deployment will tell us whether we meant it.

 

Monday, January 12, 2026

When Coffee Machines Know Calendars

There's something quietly revolutionary happening in the space between our kitchen counters and our calendars. Last week, Amazon launched Alexa.com - bringing their AI assistant to web browsers - and while the tech press buzzed about "cross-device integration" and "unified interfaces," I found myself thinking about something else entirely: What happens when AI stops being a novelty and starts being a utility?

I've been spending considerable time at DOMS thinking about AI in taxation - pattern recognition in transfer pricing, compliance automation, the architecture of systems that could shift us from adversarial enforcement to collaborative partnership. But this Amazon announcement landed differently for me. Not because it's revolutionary technology (it isn't), but because it reveals something about where we are in the maturation of AI as infrastructure.

And that matters for anyone working in public service, policy, or governance.

The Invisible Infrastructure Question

Here's what caught my attention: Amazon's new platform doesn't just connect your voice commands across devices. It coordinates your Bosch coffee machine with your calendar, manages grocery shopping across Amazon Fresh and Whole Foods, and adjusts your smart home settings - all without you actively managing the connections. The company claims busy professionals are reclaiming 3-5 hours per week previously lost to managing household operations.

Three to five hours. Per week.

Now, set aside whether you trust Amazon with that level of household visibility (a legitimate concern). The underlying pattern is what interests me: AI becoming genuinely useful when it reduces coordination costs rather than just automating individual tasks.

This isn't about making one thing faster. It's about eliminating the cognitive overhead of connecting multiple things.

And that's precisely the conversation we're not having enough in public finance and governance.

What Taxpayers Actually Need (Hint: It's Not Just Faster Processing)

Working on the Taxpayers' Charter revision and the implementation guidance for India's new Income Tax Act 2025, I keep returning to a fundamental question: What if we're optimizing for the wrong thing?

Most AI applications in taxation focus on institutional efficiency - faster processing, better fraud detection, automated compliance checks. All important. All necessary. But they're solving the government's problem, not necessarily the taxpayer's problem.

The taxpayer's problem isn't usually that their return takes three weeks instead of two to process. Their problem is understanding which of seven different deduction categories applies to their situation, remembering whether they need Form 10E or Form 10BA, coordinating information across multiple financial institutions, and doing all of this while holding down a job and managing a household.

The taxpayer's problem, in other words, is coordination cost.

What would it look like if we designed AI systems in taxation the way Amazon designed this household coordination platform? Not to make our processes faster, but to eliminate the cognitive overhead citizens face in navigating our systems?

The Power Dynamic Embedded in Design

Consider the difference between these two approaches:

Approach A: AI-powered system that automatically flags discrepancies in your return and sends you a notice demanding clarification within 30 days.

Approach B: AI-powered system that, while you're preparing your return, proactively identifies potential issues, explains why they might be flagged, suggests documentation you should gather, and walks you through the reasoning - before you even file.

Both use the same underlying technology. Both might even result in the same compliance outcome. But only one treats the taxpayer as a partner in the process rather than a subject of it.

This is what I mean when I talk about shifting from adversarial models to collaborative ones. It's not about being "nice" to taxpayers. It's about recognizing that better compliance outcomes emerge when citizens understand and trust the system - and when the system genuinely serves their need to comply, not just the government's need to enforce.

When Privacy Architecture Becomes Democratic Architecture

The Amazon launch also surfaced something uncomfortable: the company stores conversation history and personalization settings across all your devices. Convenient? Absolutely. Concerning? Also absolutely.

But here's what struck me: In consumer AI, we've largely accepted this trade-off. We surrender privacy for convenience, and we do it consciously (if not always thoughtfully). We know Google reads our email to make search better. We know Amazon tracks our purchases to refine recommendations. We've normalized surveillance capitalism as the price of utility.

In public service AI, we cannot make that trade.

The privacy architecture of government AI systems isn't just a technical consideration - it's a democratic one. When the Income Tax Department deploys AI for transfer pricing analysis or compliance monitoring, the question isn't just "Does this protect data?" but "Does this preserve the proper relationship between citizen and state?"

This is why I'm increasingly convinced that equity considerations and privacy protections in AI aren't add-ons to be addressed after we build the systems. They're foundational design constraints that should shape what we build in the first place.

And honestly? I think we're still figuring this out. The new Income Tax Act 2025 reduces complexity from 800+ sections to around 536 and emphasizes plain language compliance. That's movement in the right direction. But compliance simplification and AI-enabled assistance need to evolve together, not sequentially.

The Coordination Challenge That Actually Matters

Here's where this connects to the broader work we're doing at DOMS with the CBDT leadership: The real coordination challenge in taxation isn't technical. It's institutional.

We have multiple departments, multiple systems, multiple data sources, multiple compliance touchpoints. From a taxpayer's perspective, this creates exactly the kind of coordination overhead that Amazon's new platform claims to solve for household management - except with much higher stakes and far less user-friendly interfaces.

The question isn't whether AI can help with this coordination. It obviously can. The question is whether we're willing to redesign our institutional architecture to let it.

Because here's the uncomfortable truth: Truly effective AI in public service requires dismantling some of the silos and turf protections that currently define how government works. It requires data sharing across departments. It requires common standards and interoperable systems. It requires trusting that better citizen outcomes serve everyone's institutional interests.

That's not a technology problem. That's a governance problem.

And it's one that decades of e-governance initiatives have repeatedly confronted - often unsuccessfully - because we've treated it as a technology problem.

What I'm Carrying Forward

As I work on implementation guidance for the new Act and continue the Charter revision, I find myself returning to a simple test: Does this make it easier for a taxpayer to understand what they need to do and why?

Not "Does this make our process more efficient?"

Not "Does this reduce our processing time?"

But: Does this reduce the coordination cost for the citizen trying to comply?

If we're honest, most of our current systems - even the digitized ones - fail that test. We've automated complexity, not eliminated it. We've made our processes faster without making them more navigable.

The Amazon announcement is a reminder that the technology exists to do better. What we need now is the institutional courage to design differently.

A Final Thought

I don't know if Amazon's vision of AI-coordinated household management will actually deliver on its promise of reclaiming hours per week. The history of productivity technology is littered with overpromises.

But I do know this: The future of AI in public service won't be determined by what the technology can do. It'll be determined by who we design it to serve - and whether we have the imagination to prioritize citizen empowerment over institutional efficiency.

That's the conversation I want to be having. Not just within DOMS or CBDT, but across government, across policy communities, across anyone grappling with how we make public institutions work for the people they're meant to serve.

What AI applications in governance have actually made your life easier as a citizen - not just as a policy professional or administrator, but as someone navigating systems from the outside? I'm genuinely curious what's working out there.

Because the best ideas for collaborative systems rarely come from inside the institutions alone.

 

When AI Needs Nuclear Power

There's something deeply paradoxical about our digital age. We've built technologies that can recognize faces across billions of images, translate languages in real-time, and generate human-like text - all running on invisible infrastructure we rarely think about. Until, that is, the power goes out.

Last week, Meta announced deals to secure up to 6.6 gigawatts of nuclear power by 2035. To put that in perspective, that's roughly equivalent to the entire electricity generation capacity of a country like Austria. This isn't just about keeping the lights on at Facebook. It's about ensuring that the next generation of AI models - the ones that might transform everything from drug discovery to climate modeling - can train without interruption.

And it got me thinking about a question we're wrestling with at DOMS as we explore AI applications in tax administration: What happens when the infrastructure requirements for transformative technology become so massive that they fundamentally reshape industries we thought were settled?

The Hidden Cost of Intelligence

Working on AI implementation in taxation, I've become acutely aware of something most people outside the tech world don't fully appreciate: artificial intelligence is hungry. Not metaphorically hungry for data - though that's true too - but literally hungry for electricity.

Training a single large language model can consume as much energy as hundreds of homes use in a year. When you're Meta, running the Prometheus AI supercluster, you're not talking about hundreds of homes. You're talking about powering a small city, continuously, for months at a time.

Here's what makes this particularly challenging: these training runs can't be interrupted. Imagine you're ninety days into a hundred-day training cycle for a frontier AI model - an investment potentially worth hundreds of millions of dollars in compute time and researcher expertise. A power fluctuation doesn't just pause your work. It can corrupt the entire run, forcing you to start over.

This is why Meta's nuclear bet matters. It's not about being green (though that's a welcome benefit). It's about reliability. Nuclear power plants run at 90%+ capacity factors, compared to roughly 25% for solar and 35% for wind. When you're making a hundred-million-dollar bet on uninterrupted computation, that difference between 90% and 35% isn't academic - it's existential.
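The gap can be put in numbers. A rough back-of-envelope, treating capacity factor as average output over time (the 1 GW plant size and the factors here are illustrative, not Meta's actual figures):

```python
HOURS_PER_YEAR = 8760

def average_output_gw(nameplate_gw: float, capacity_factor: float) -> float:
    """Average power delivered over time, given nameplate capacity."""
    return nameplate_gw * capacity_factor

def gwh_per_year(nameplate_gw: float, capacity_factor: float) -> float:
    """Energy delivered over one year."""
    return nameplate_gw * capacity_factor * HOURS_PER_YEAR

# Illustrative: 1 GW of nuclear vs 1 GW of an intermittent source.
nuclear = gwh_per_year(1.0, 0.90)       # 7884 GWh/year
intermittent = gwh_per_year(1.0, 0.35)  # 3066 GWh/year
print(nuclear, intermittent, nuclear / intermittent)
```

Per gigawatt of nameplate capacity, the high-capacity-factor plant delivers more than two and a half times the energy - and, more importantly for training runs, delivers it continuously rather than in weather-dependent bursts.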

What This Means Beyond Silicon Valley

Now, you might be wondering: what does Meta's energy strategy have to do with public service or tax administration?

More than you'd think.

At DOMS, as we explore AI applications - from pattern recognition in transfer pricing to compliance automation - we're confronting a scaled-down version of the same question: What infrastructure do transformative applications actually require?

It's not just about computing power (though that matters). It's about the entire ecosystem that makes sustained innovation possible. Reliable data pipelines. Uninterrupted processing capacity. The ability to run complex analyses without worrying about system failures mid-stream.

But here's where it gets interesting for public service: while Meta can sign multi-gigawatt nuclear deals, government agencies need to think more creatively. We can't just throw money at the problem. We need to be smarter about architecture, partnerships, and what we're actually trying to achieve.

This brings me back to something I emphasize when speaking to students about AI in finance: The constraint isn't the limitation - it's the clarifying force. Meta's constraint is power availability. Ours in government might be budget or legacy systems. Both constraints force us to think more carefully about what problems we're actually solving and whether AI is genuinely the right tool.

The Deeper Question About Sustainability

There's an elephant in the room that Meta's announcement highlights: if artificial intelligence is going to transform healthcare, accelerate scientific discovery, and help solve climate change, it's going to need a lot of power. The estimates vary, but data centers could consume 3-4% of global electricity by 2030, up from about 1% today.

This raises an uncomfortable question: Are we willing to make the infrastructure investments required for the future we say we want?

I find myself thinking about this in the context of India's development trajectory. We're simultaneously trying to:

  • Expand electricity access to everyone
  • Reduce carbon emissions
  • Build digital infrastructure
  • Deploy AI for public benefit

These aren't contradictory goals, but they're certainly in tension. Meta's solution - nuclear power - might work for a company with virtually unlimited capital. But what's the pathway for developing economies? For government agencies? For small startups with transformative ideas but limited resources?

What I'm Taking Away

Meta's nuclear deals won't be the last of their kind. We're going to see more announcements like this - tech giants securing dedicated energy sources, building their own infrastructure, effectively becoming their own utilities.

But what strikes me isn't just the scale of these investments. It's the reminder that transformative technology requires transformed infrastructure. You can't bolt revolutionary capabilities onto legacy systems and expect them to just work.

This applies whether you're training AI models or modernizing tax administration. The question isn't "Can we use AI?" but "Have we built the foundation that makes AI sustainable, reliable, and equitable?"

As we work on the new Income Tax Act 2025 and revise the Taxpayers' Charter, I keep coming back to this: the most important innovations aren't always the flashiest ones. Sometimes they're the unglamorous infrastructure decisions - the data architecture, the processing pipelines, the reliability standards - that make everything else possible.

Meta is betting billions on nuclear power because they understand something fundamental: the future they're trying to build requires infrastructure decisions made today.

In public service, we're making similar bets, even if they're denominated in different currencies. The question is whether we're being as intentional about our infrastructure choices as Meta is being about theirs.

What infrastructure investments - technical, institutional, or human - do you think are missing from conversations about AI in government? I'm genuinely curious, especially as we navigate these questions in real-time at DOMS.

 


Sunday, January 11, 2026

ChatGPT Health - When Health Meets Intelligence

There's something uniquely humbling about realizing that the frontier of transformation isn't always where you're looking.

For the past few months at DOMS, I've been immersed in how artificial intelligence might reshape taxation - from pattern recognition in transfer pricing to automating compliance checks. I've spoken to students at DTU and LBSIM about AI in finance, always circling back to efficiency, accuracy, and scale. But when OpenAI announced ChatGPT Health last week, I found myself thinking less about algorithms and more about my father.

He passed away in 2011. In those final months, I watched him navigate a maze of medical appointments, test results scattered across multiple hospitals, medication schedules that changed with bewildering frequency. My mother would carry a worn folder stuffed with reports, trying to piece together narratives for each new specialist. "What did the cardiologist say about the kidney function tests?" Simple questions that demanded complex archaeology through fragmented records.

What struck me about the ChatGPT Health launch wasn't the technology itself - we've known AI could process medical data for years. It was the fundamental reorientation of the question it answers.

The Question We're Actually Asking

Most health technology asks: "How do we make healthcare systems more efficient?"

ChatGPT Health asks something more intimate: "How do we help people understand their own bodies?"

The distinction matters immensely. With 230 million health questions being asked on ChatGPT weekly, OpenAI identified something profound: people aren't just looking for medical expertise - they're looking for translation, synthesis, and partnership in making sense of their health journey.

This is where my work in tax policy and health technology unexpectedly converge. Both deal with systems that have grown so complex that the gap between expert and citizen has become a chasm. Both struggle with fragmentation - multiple touchpoints, different data formats, institutional silos. And both are being transformed not primarily by making the system work better, but by empowering individuals to navigate the system more effectively.

Consider the practical reality. A person managing diabetes doesn't just need their glucose levels measured - they need those levels contextualized against their medication timing, exercise patterns, sleep quality, and stress levels captured across different apps and devices. The old approach: spend 15-30 minutes daily manually logging information, then several hours before each appointment trying to compile three months of scattered notes. The new possibility: upload your data, receive synthesized insights, generate a comprehensive health summary in two minutes. Arrive at your appointment with specific, data-informed questions.

That's not just efficiency - that's a fundamental shift in agency.

What Makes This Different

Working this closely with policy implementation at CBDT, I've learned to distinguish between technological novelty and genuine transformation. ChatGPT Health demonstrates several design choices that signal the latter.

First, the separation of health data from the general ChatGPT environment. All health conversations exist in a protected space, encrypted by default, with 30-day deletion options. This data won't train their foundation models. In an era where data privacy concerns often derail promising innovations, OpenAI chose to build walls between their business model and your medical information.

That matters because trust is the currency of health technology.

Second, the explicit framing: "designed to support, not replace, medical care." The technology positions itself as infrastructure, not authority. This reminds me of how we've been thinking about AI in tax administration - the goal isn't to replace tax professionals with algorithms, but to free them for complex judgment calls while AI handles what's systematic.

The Questions That Surface

But every powerful tool creates new responsibilities. Several questions keep surfacing for me:

The equity dimension: Connecting medical records and wellness apps assumes you have both. In India, where healthcare records are increasingly digital but far from universal, where Apple Health penetration remains limited to a small urban demographic, who benefits from this technology? How do we prevent health AI from becoming another layer of advantage for the already advantaged?

The interpretation gap: AI can identify patterns in your glucose levels, but can it distinguish between correlation and causation in complex biological systems? The AI might flag the pattern, but who owns the interpretation?

The data dependency: What happens when people begin outsourcing their health literacy to AI? There's profound value in learning to read your own body's signals. Does AI synthesis enhance that literacy, erode it, or simply feed new anxieties?

These aren't questions with clear answers. They're tensions to be navigated.

A Personal Thought

I keep thinking about those folders my mother carried, stuffed with medical reports. In some future I can now imagine, those reports would flow seamlessly into a secure space where patterns emerge, where questions form themselves, where the doctor's appointment becomes a genuine conversation.

My father won't benefit from that future. But millions of others might. And if we get this right - if we build these tools with intention, with equity, with clear boundaries - they'll benefit not just from more efficient healthcare, but from deeper understanding of their own wellbeing.

That's the possibility that keeps me engaged with AI. Not because technology solves everything, but because thoughtfully deployed, it can shift the balance of agency back toward individuals navigating complex systems.

 

Wednesday, January 7, 2026

2026: Standing at the Threshold of Transformation

New Year, New Possibilities, New Purpose

There's something uniquely humbling about standing at the edge of a new year. It's that rare moment when you're allowed—almost expected—to pause, look back at the road traveled, and then turn your gaze forward to the horizon ahead. As 2025 draws to a close and 2026 beckons, I find myself doing exactly that.

And what a year 2025 has been.

Working at the Heart of Policy

If someone had told me a few years ago that I'd be working at DOMS—the policy think tank of the Central Board of Direct Taxes—collaborating closely with Board Members and the Chairman himself, I would have been both thrilled and terrified. The reality? It's been even more enriching than I imagined.

At Directorate of Income Tax (Organization and Management Services) (DOMS), CBDT, we don't just talk about policy; we live it, breathe it, and shape it. This year, I had the privilege of being part of several transformative initiatives that will impact millions of taxpayers and reshape how our tax administration functions.

We worked extensively on revising the Taxpayers' Charter—not as a cosmetic exercise, but as a genuine commitment to making tax administration more transparent, accountable, and citizen-centric. Every word mattered. Every commitment needed to be backed by implementable processes. It was policy work at its core, and it reminded me why I chose public service in the first place.

Then came Special Campaign 5.0, spearheaded by the Department of Administrative Reforms and Public Grievances (DARPG). As the nodal authority for CBDT, we were right in the thick of it—streamlining processes, addressing pending matters, improving responsiveness. It's the kind of work that doesn't always make headlines but fundamentally changes how government functions.

And now? We're working on Guidance Notes for the new Income Tax Act, 2025. This is history in the making. A completely reimagined tax legislation going live on April 1, 2026. The responsibility is immense, but so is the opportunity to get it right.

Working this closely with leadership, seeing policy from conception to execution, has been one of the most defining experiences of my professional life. It's taught me that real change doesn't happen in grand pronouncements—it happens in the details, in the late-night drafts, in the stakeholder consultations, in the willingness to listen and iterate.

Asking the Big Question: Am I Still Relevant?

In October and November 2025, I stepped out of the policy corridors and into lecture halls—first at Lal Bahadur Shastri Institute of Management (LBSIM) in Dwarka and then at Delhi Technological University (DTU) (formerly Delhi College of Engineering).

My opening slide at both sessions posed a question that I believe every professional must grapple with today: "Am I still relevant in a world where machines are becoming smarter every day?"

The students leaned forward. Because this isn't an abstract question anymore—it's personal, it's urgent, and it's real.

We dove deep into how Artificial Intelligence is reshaping finance—from risk management and fraud detection to software productivity and decision-making. Generative AI alone is estimated to add $2.6–$4.4 trillion in annual economic value globally, with financial services capturing a significant share. But beyond the numbers, we discussed the human dimension: How do we stay relevant? How do we adapt? How do we ensure AI augments rather than replaces us?

What struck me most wasn't just the curiosity in their questions, but the anxiety underlying them. These bright young minds are entering a job market where the rules are being rewritten in real-time. My message to them was simple: Don't fear AI. Understand it. Master it. Use it as a tool, not a threat. Use it as an amplifier.

Those sessions reminded me that sharing knowledge isn't just about transferring information—it's about empowering the next generation to navigate uncertainty with confidence.

2026: The Year of Transformation

As I look ahead to 2026, I'm filled with a sense of purpose and possibility that I haven't felt in years. Here's what's calling to me:

1. The Income Tax Act 2025 Implementation

April 1, 2026, isn't just another financial year beginning. It's the dawn of a new tax regime—simpler, clearer, more modern. Being part of the team creating Guidance Notes means I'm not just witnessing this transformation; I'm helping shape it.

The challenge? Making 536 sections and 16 schedules understandable and implementable for millions of taxpayers and thousands of tax officers. The opportunity? Getting it right could set the tone for India's tax administration for the next generation.

This is legacy work. And I want to give it everything I've got.

2. AI and Tax Administration

If there's one area where AI can make a transformative impact, it's tax administration. Imagine a system where:

  • Taxpayers get instant, accurate answers to their queries
  • Compliance becomes seamless, not burdensome
  • Risk assessment is predictive, not reactive
  • Litigation declines because clarity increases

This isn't science fiction. The technology exists. What we need is vision, courage, and careful implementation. In 2026, I want to be part of initiatives that bring AI meaningfully into tax administration—not as a buzzword, but as a practical tool for better governance.

3. Expanding Thought Leadership

The DTU and LBSIM sessions opened my eyes to something important: there's a hunger for nuanced conversations about the future of work, finance, and technology. And I have something to contribute.

In 2026, I want to do more—more speaking engagements, more writing, more collaborations with academic institutions. Not to build a personal brand, but to contribute to the larger conversation. To mentor. To provoke thought. To challenge assumptions (including my own).

My blog, my talks, my interactions—they're all ways of thinking out loud. And I want to do more of that.

4. International Horizons

Having worked as a UN Adviser for the Afghanistan Mission and as a G20 Strategic Consultant for Rio de Janeiro (Brazil), I know the value of bringing global perspectives to domestic challenges—and vice versa. 

The world is interconnected. Tax policy doesn't happen in silos. Whether it's base erosion and profit shifting (BEPS), digital taxation, or climate finance—these are global conversations India needs to be part of. And I want to contribute to that dialogue.

5. Mentoring the Next Generation

Every young professional I've spoken with this year has reminded me: we have a responsibility to those coming behind us. To share not just our successes, but our failures. To demystify careers in public service. To show that impact and integrity can coexist.

In 2026, I want to be more intentional about mentoring—through formal programs, informal conversations, and by being accessible. The students who asked me, "Am I still relevant?" deserve mentors who help them find their own answers.

A Personal Resolution

If I had to distill my aspirations for 2026 into one sentence, it would be this: I want to build bridges—between policy and practice, between technology and humanity, between where we are and where we could be.

The new year isn't just a calendar turning. It's an invitation to recommit, to reimagine, to renew. And I'm ready.

Let's make it count.

What are your resolutions for 2026? What transformation are you hoping to be part of? I'd love to hear from you in the comments.

Tuesday, November 4, 2025

My Guest Talk at Delhi Technological University (DTU)

 “Am I still relevant in the market?”

That was the first question on the opening slide of my talk at Delhi Technological University (formerly Delhi College of Engineering). And honestly, it’s the question that sits quietly in every finance professional’s mind today.

On October 31, 2025, I had the privilege of speaking to a packed hall of bright young management students at DTU about a topic that’s reshaping not just finance, but the future of work itself — Artificial Intelligence in Finance. My thanks to Dr. Arushi Jain for making the session possible and for the energy she brought to the conversation.

What’s happening?

AI is no longer a buzzword. It’s a boardroom strategy.
McKinsey estimates that Generative AI could add up to $4.4 trillion in annual economic value, with financial services at the heart of that growth. From banking to tax to consulting, algorithms are becoming colleagues — handling data analysis, automating compliance, and even drafting financial insights.

But here’s the twist: while AI is creating efficiency, it’s also creating a global divide. As countries race to power AI, new dependencies are emerging — reshaping geopolitics and economics alike.

Why should you care?

Because finance careers are being rewritten.
In tax administration alone, 60% of traditional skills could become obsolete within the decade, according to Thomson Reuters. Yet, those who upskill in AI, data analytics, and automation will be three times more valuable by 2030.

AI isn’t replacing finance professionals — it’s upgrading them. Imagine tax systems that predict non-compliance before it happens, or risk teams that proactively flag issues using AI-driven forecasts. This isn’t science fiction. It’s where the industry is heading — from reactive enforcement to predictive guidance.

What does this mean for students (and professionals)?

Your “arsenal” matters.
Tools like Claude, agentic AI frameworks, and the Model Context Protocol (MCP) are the new Excel sheets. The real differentiator won’t be who knows AI, but who knows how to use it smartly.

During the session, we explored how concepts like Vector Memory and Chain-of-Thought reasoning let AI mimic human judgment — not just fetch answers, but explain its reasoning like a seasoned analyst.
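For the curious, the "vector memory" idea we discussed can be shown in a few lines of toy Python. Real systems use learned dense embeddings and approximate nearest-neighbour search; this sketch substitutes simple word counts and cosine similarity (and an invented list of tax topics) purely to make the retrieval mechanism visible.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words counts. Real systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny "memory" of things the assistant has seen before (illustrative)
memory = [
    "TDS rates for contractor payments",
    "capital gains on equity shares",
    "advance tax due dates for individuals",
]

def recall(query, k=1):
    # Return the k stored items most similar to the query
    q = embed(query)
    return sorted(memory, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]

print(recall("when are advance tax due dates"))
```

The point isn't the arithmetic - it's that "remembering" becomes a similarity search over vectors, which is why these systems can surface the right context without being told where to look.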

And when I asked, “How productive are you — really?”, I saw heads nodding. Because in an age of information overload, productivity isn’t about working harder. It’s about working intelligently.

As I wrapped up the talk, I shared a few career nuggets that hold true across industries:

  • Learn AI — don’t fear it.

  • Sharpen your math and logic.

  • Communicate clearly — it’s your superpower in an AI-heavy world.

  • Be the “Go-To Manager” — the one who gets things done.

The world of finance is transforming fast. But as I told the students that morning — your relevance isn’t under threat if your curiosity stays alive.

So, keep learning. Keep experimenting. And remember: the future belongs not to those who predict change, but to those who adapt to it.

A few moments shared by DTU:

[Photos from the session at DTU]
Thursday, September 18, 2025

CAG-Connect

 


Audits in India are about to look very different. Until now, they meant piles of files, long travel for auditors, and months of delays. But starting November 2025, the Comptroller and Auditor General (CAG) will move everything to a new online system called CAG-Connect. This is like moving from typewriters to cloud apps overnight. Instead of chasing paperwork, auditors will log in to a single dashboard that pulls data from India’s digital systems—things like state financial accounts, e-procurement records, and even geospatial maps. The real game-changer is AI. 

CAG is building its own large language model, CAG-LLM, that works like a specialized chatbot for government accounts. With Retrieval-Augmented Generation (RAG), it can pull information from different sources, link it together, and explain patterns in plain language. So instead of auditors digging through spreadsheets line by line, the system highlights unusual trends - say, a sudden spike in spending on a project or duplicate contracts across states. It’s like giving auditors x-ray vision over millions of records.

For businesses, this means audits will be faster, sharper, and harder to dodge. The old trick of hiding in paperwork won’t work when AI can cross-check everything instantly. But if your books are clean, this is good news - queries will be quicker, compliance rules clearer, and competition fairer.

For citizens, it means more accountability. Your tax money will be tracked in real time, and problems that might have slipped under the radar will surface sooner. Think of it as financial sunlight, and sunlight makes it harder for corruption to grow.

In short, India’s audits are moving from clipboard to cloud, powered by AI. For businesses, that’s a signal to clean up records and go digital. For citizens, it’s a promise of more transparency.
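How might a system flag "a sudden spike in spending"? CAG's actual models are not public, so here is only a deliberately simple statistical rule - flag any month that deviates sharply from the average - with invented figures, to show the shape of the idea:

```python
import statistics

def flag_spending_spikes(monthly_spend, threshold=1.5):
    """Flag months whose spend deviates from the mean by more than
    `threshold` standard deviations. A deliberately naive rule; real
    audit systems would use far richer models and context."""
    values = list(monthly_spend.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:  # all months identical: nothing to flag
        return []
    return [month for month, v in monthly_spend.items()
            if abs(v - mean) / sd > threshold]

# Illustrative project spend by month (figures in crore, entirely made up)
spend = {"Apr": 10.2, "May": 9.8, "Jun": 10.5, "Jul": 31.0, "Aug": 10.1}
print(flag_spending_spikes(spend))
```

Multiply that one-liner logic across thousands of schemes and states, add cross-checks against e-procurement and geospatial data, and you begin to see why paperwork is no longer a hiding place.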

And for the government, it’s a leap into a future where accountability isn’t a slow chase, but a constant, live process.
