AI Is a Strategic Imperative. Mindlessly Restructuring Is the Hype Tax. Wait for the Data.


I want to be upfront about something before I get into this.

Writing a balanced take on AI when you are in the AI business is genuinely hard. I am the founder of AIdeate Solutions. I advise organisations on AI adoption. My livelihood is connected to AI’s promise.

So when I tell you that the productivity story is more complicated than the headlines suggest, I want you to know that it is not easy for me to say. It would be far more convenient to surf the wave.

But I have been in this industry long enough to know that hype cycles have a body count. And right now, the hype is moving faster than the evidence.

The Productivity Numbers Are Real. And Incomplete.

Let me give you both sides of the data, because you deserve that.

The case for AI productivity is not invented. Harvard Business School and BCG found that consultants using GPT-4 completed tasks about 25% faster and produced work that was more than 40% higher quality on tasks that fell within AI’s capability frontier. A BIS and EIB working paper using data from more than 12,000 firms found that AI adoption increased labour productivity by around 4% on average. Microsoft Research also found that, in a controlled experiment, developers using GitHub Copilot completed a coding task 55.8% faster than the control group.

For specific, well-defined tasks, the gains are real and meaningful.

But here is what the broader picture also tells us. A recent NBER working paper surveying nearly 6,000 CFOs, CEOs, and senior executives across the US, UK, Germany, and Australia found that 89% reported no impact from AI on labour productivity over the previous three years. A Financial Times analysis also found that 374 S&P 500 companies mentioned AI on earnings calls over the prior 12 months, with an overwhelmingly positive tone, yet executive enthusiasm still outpaces measurable economy-wide productivity outcomes.

The San Francisco Fed put the macro picture plainly in February 2026, noting that most macro studies still show limited evidence of a significant AI effect on productivity so far, and that even firms finding AI useful have shown little evidence of transformative gains at scale.

The truth sits somewhere in the pragmatic middle. AI is producing real gains in specific, well-scoped applications. It is not yet producing the economy-wide transformation that the biggest voices in the room are promising. Both things are true simultaneously. And any strategy built on only one of those truths is a strategy built on incomplete information.

The Hype Tax Is Real, and You Are Paying It

When companies cut talent in anticipation of AI delivering on its full promise, they are paying what I call a hype tax. They are making irreversible decisions based on capabilities that are, at best, partially proven in their specific context.

Here is what I have seen happen repeatedly. A company cuts a team of content creators because a vendor demo showed impressive output. Six months later, the AI is producing technically correct content that sounds like it was written by no human being anyone would want to do business with. The brand voice is gone. The institutional knowledge is gone. The people are gone. And rebuilding costs three times what was saved.

The tech does not always fail. Sometimes it delivers exactly as promised. But even then, the assumption that removing the human removes the friction is almost always wrong. The friction was not the human. The friction was the process. The AI inherited the process.

Most organisations are still stuck between experimentation and scale. McKinsey’s 2025 global survey found that nearly two-thirds of respondents said their organisations had not yet begun scaling AI across the enterprise, and only a small minority qualified as high performers. That is not an argument against AI. It is an argument for humility about where you sit on that curve before you start restructuring around a capability you have not yet demonstrated you can operationalise.

Start With the Audit, Not the Axe

The first question is not, “What can AI replace?” It is, “Where does friction actually live in this organisation, and does AI genuinely solve it?”

In my experience, the honest answer surprises most leadership teams. The bottlenecks are rarely in the roles that look most automatable on paper. They are in handoffs. In approvals. In misaligned briefs. In systems that do not talk to each other. Dropping an AI tool into a broken workflow does not fix the workflow. It accelerates the dysfunction.

This is where execution fluency matters. Before you restructure anything around AI, you need a clear picture of:

Where your team actually loses time, not where you assume they do

What your MarTech stack can actually support today, not what your vendor promised it would support in 18 months

What institutional knowledge lives in people’s heads that has never been documented, and therefore cannot be handed to a model

The audit is not glamorous. It is also not optional.

The HAR Question: What Is Your Human-to-Agent Ratio?

One of the frameworks I keep coming back to in advisory conversations is the Human-to-Agent Ratio (HAR), not as a fixed number, but as a deliberate calibration.

For every task or workflow you are considering automating, the real question is not, “Can AI do this?” It is, “What is the right balance of human judgment and AI execution for this specific outcome?”

High-volume, rules-based, pattern-dependent work sits closer to the autonomous end of that spectrum. That is the right place to start. Creative direction, client relationships, crisis judgment, and brand positioning decisions sit much more firmly at the human end for now.

The mistake most organisations make is treating this as a binary choice. You do not choose between a fully human team and a fully automated function. You design the right ratio for each workflow, and then you govern it. That calibration is the strategic work. And it requires the kind of human judgment that cannot itself be automated.
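To make the calibration concrete, here is a minimal sketch of what scoring workflows along that spectrum might look like. Every name, attribute, and weight here is an assumption for illustration; the point is that autonomy is a per-workflow number you derive and debate, not a blanket policy.

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    """Attributes scored 0-1. Names and scores are illustrative, not a standard."""
    name: str
    volume: float           # how high-volume / repetitive the work is
    rules_based: float      # how well the task follows explicit rules and patterns
    judgment_needed: float  # creative direction, client trust, crisis calls

def recommended_autonomy(w: Workflow) -> float:
    """Return a rough position from 0 (fully human) to 1 (mostly agent).

    The equal weighting is a deliberate starting point for a governance
    conversation, not a formula to deploy unmodified.
    """
    automatable = 0.5 * w.volume + 0.5 * w.rules_based
    # Judgment-heavy work pulls the ratio back toward the human end.
    return max(0.0, automatable - w.judgment_needed)

# Hypothetical workflows on opposite ends of the spectrum
invoice_triage = Workflow("invoice triage", volume=0.9, rules_based=0.9, judgment_needed=0.1)
brand_position = Workflow("brand positioning", volume=0.2, rules_based=0.1, judgment_needed=0.9)

print(recommended_autonomy(invoice_triage))  # closer to the autonomous end
print(recommended_autonomy(brand_position))  # firmly at the human end
```

The output of a sketch like this is not a decision; it is an agenda. The workflows where the number is contested are exactly the ones where governance and human ownership need to be made explicit.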

Governance Before Autonomy

This is where the conversation gets uncomfortable, and where I think some of the most serious risks in APAC are still being underestimated.

Deploying AI without a governance architecture is not efficient. It is a liability. The gap between “we are using AI tools” and “we have a governed AI stack” is enormous.

Singapore has continued to expand its governance approach, including its long-running Model AI Governance Framework and, more recently, its Model AI Governance Framework for Agentic AI, launched by IMDA in January 2026. The EU AI Act also entered into force in August 2024 and is being applied progressively, with the European Commission stating that it will be fully applicable on 2 August 2026, subject to some exceptions and phased deadlines thereafter. For multinationals operating across APAC and Europe, these are not abstract policy debates. They are operating constraints.

The Guardians of Trust model is not a nice-to-have. It is the difference between a competitive moat and a compliance crisis. Before you automate anything customer-facing, you need clarity on who owns the output, who audits it, and what the escalation path is when it goes wrong, because it will go wrong.

The Upskilling Mandate Is Not a Side Project

This is one of the most under-discussed signals in the market.

EY’s December 2025 AI Pulse Survey found that among organisations investing in AI and already experiencing AI-driven productivity gains, only 17% said those gains had led to a reduction in headcount. Far more said they were reinvesting those gains into existing AI capabilities, new AI capabilities, cybersecurity, R&D, and upskilling or reskilling employees.

The smartest operators in the room are not replacing people. They are redeploying them.

The roles are shifting; that much is true. A content creator in 2026 who is not also becoming adept at prompting, editing, and directing AI output is operating at a disadvantage. But the answer to that is not to replace them. It is to develop them. The investment required to shift a skilled person from pure creation to what I would call an Agent Boss, someone who directs AI output with genuine expertise and brand judgment, is usually a fraction of the cost of replacing institutional knowledge from scratch.

I have watched teams make this transition well. The people who felt most threatened by AI often became its most effective internal advocates once they understood it as a capability multiplier rather than a replacement threat. That shift does not happen by accident. It happens because a leader made it a priority.

For Builders and Decision Makers: Before the Next Restructuring Conversation

Ask these questions honestly:

Have you audited where friction actually lives, or are you assuming it lives where the headcount is?

Is your MarTech stack genuinely integrated enough to support the automation you are planning, or are you building on a foundation that is not yet ready?

Do you have governance architecture for your AI deployment, or are you running on vendor trust and optimism?

Have you mapped your HAR for each workflow, or are you applying a blanket policy because it is simpler?

Are you investing in upskilling the people who hold your institutional knowledge, or are you about to walk that knowledge out the door?

The organisations that answer these questions before cutting are building genuine competitive moats. Those that do not are paying a hype tax, and they may spend years recovering from it.

Here is my honest caveat. I am writing this from inside the AI industry, with all the bias that implies. I have skin in this game. And I have tried hard in this piece to give you both the data that supports AI’s promise and the data that complicates it, because I think you are smart enough to want both.

The productivity gains are real in specific contexts. The macro-level transformation is still emerging. The J-curve is a reasonable mental model. We may be in the dip before the surge. But restructuring your talent strategy around a surge that has not yet arrived, in your specific context, with your specific stack and governance maturity, is not strategic courage.

It is just expensive impatience.

Are you deploying AI where it genuinely changes the outcome? Or are you restructuring because you feel you should be taking decisive action? Those are very different questions. Only one of them leads somewhere worth going.

Jamshed Wadia

Business and Marketing Advisor @AIdeate | Advisory Board @CMO Council | AI Ethics & Governance @Mavic.AI | Startup Mentor @Eduspaze & @Tasmu | MarTech & AI Practitioner

https://aideatesolutions.com/