The Enterprise AI Tool You Just Subscribed to has a 60-70% Chance of Sitting Unused in Six Months.

Enterprise AI Tools That Stay Unused

I've been sitting on this data for a bit. Not because the finding is surprising to me, but because I wanted to make sure I was saying something useful rather than just adding another "AI adoption is hard" post to the pile.

So here it is. The number that should be in every AI business case presented to a board right now: for every $1 you spend on AI tools, you need $2 to $3 in training and change management to make that dollar work.

That ratio comes out of SXSW 2026 CMO sessions and Digital Applied analysis: organisations that budgeted only for tools reported abandonment rates of 60-70% within six months. Not underutilisation. Abandonment.

I will let you digest that information for a second.

We Keep Misdiagnosing the Problem

The dominant narrative in most boardrooms is still "which AI tool should we choose?" The energy goes into vendor evaluation, procurement cycles, and demos. Impressive demos. The kind where everyone leaves the room saying, "We need this."

And then six months later, the same team is back on the old workflow. The tool sits in the stack, the subscription renews automatically, and nobody says anything because admitting failure feels worse than paying the invoice.

I've seen this pattern more times than I can count. In my marketing leadership career, I learned early, sometimes the hard way, that Martech without adoption infrastructure is just expensive shelf decoration. The first time my team rolled out a new digital marketing platform across multiple markets, the tool was genuinely good. The training was an afterthought. Three months in, half the markets had reverted to what they knew. Not because people were resistant. Because they were busy, under-resourced, and nobody had made the new way feel safer than the old way.

That was not a Martech problem then, and it is not an AI tool problem now. It is an execution fluency problem.

Why the $1:$2-3 Ratio Matters More Than the Model Choice

The wrong variable has dominated the AI ROI conversation. We obsess over which model, which platform, which integration, all of which matters. But we spend almost no time on what happens after the contract is signed.

Here is what the ratio is really telling us:

The tool is the smallest part of the investment. The change is the investment. Embedding a new capability into a team's daily workflow, getting the doubters to a place of genuine competence, building the psychological safety for people to say "I don't know how to use this yet," and recalibrating what good output looks like when AI is in the loop. All of that costs more than the software. It should cost more. That is where the value actually gets created.

Skipping training is not saving money. It is burning money slowly. A 60 to 70% abandonment rate means you have wasted somewhere between 60 and 70 cents of every dollar you spent on tools. Plus the organisational toll of a failed rollout, which makes the next change initiative harder to land.
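To make that arithmetic concrete, here is a minimal cost sketch. The numbers in the example are illustrative, and the function name is my own; only the $2-3 training multiplier and the 60-70% abandonment range come from the figures above.

```python
def rollout_budget(tool_spend: float,
                   training_multiplier: float = 2.0,
                   abandonment_rate: float = 0.65) -> dict:
    """Rough cost model for an AI tool rollout (illustrative sketch).

    tool_spend: annual tool/licence cost in dollars.
    training_multiplier: change-management dollars per tool dollar
        (the reported ratio is $2 to $3 per $1 of tools).
    abandonment_rate: share of tool spend effectively wasted when
        training is skipped (reported range: 0.60 to 0.70).
    """
    training_budget = tool_spend * training_multiplier
    wasted_if_skipped = tool_spend * abandonment_rate
    return {
        "tool_spend": tool_spend,
        "training_budget": training_budget,
        "total_business_case": tool_spend + training_budget,
        "wasted_if_training_skipped": wasted_if_skipped,
    }

# Example: a $100k annual licence. Without training, roughly $65k of
# that spend is at risk; the fully budgeted business case is $300k.
print(rollout_budget(100_000))
```

The point of the sketch is the shape of the business case, not the exact figures: the training line dwarfs the tool line, and skipping it does not make the cost disappear, it just moves it into the "wasted" column.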

The HAR (Human-to-AI Ratio) question is being asked too late. Most organisations consider the ratio (how much of a given workflow should be handled by humans versus AI) at the deployment stage. It should be answered before the procurement conversation even starts. If you do not know what you are asking your people to hand over and what they are keeping, the training plan is just guesswork.
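One hedged way to picture that pre-procurement exercise: map each workflow step to an intended human/AI split before any vendor demo. The structure below is hypothetical; the source names the HAR concept, not this representation.

```python
from dataclasses import dataclass

@dataclass
class WorkflowStep:
    """One step of a workflow with its intended human/AI split.

    A hypothetical planning structure: ai_share of 0.0 means the
    step stays fully human, 1.0 means it is fully handed to AI.
    """
    name: str
    ai_share: float

def training_focus(steps: list[WorkflowStep],
                   threshold: float = 0.8) -> list[str]:
    """Steps where humans keep meaningful work (ai_share below the
    threshold) are where training and change management money goes."""
    return [s.name for s in steps if s.ai_share < threshold]

# Illustrative marketing workflow, invented for this sketch
campaign = [
    WorkflowStep("first-draft copy", 0.9),
    WorkflowStep("brand review", 0.2),
    WorkflowStep("audience segmentation", 0.6),
]
print(training_focus(campaign))  # ['brand review', 'audience segmentation']
```

Even a crude table like this, filled in before procurement, turns the training plan from guesswork into a list of named steps and named people.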

What Execution Fluency Actually Looks Like in Practice

There is no single playbook that works everywhere. APAC in particular is a graveyard of "one size fits all" AI rollouts. A change management approach that works in Singapore does not automatically translate to Jakarta or Ho Chi Minh City, due to differences in language, hierarchy, digital literacy baseline, and organisational culture. The fragmentation is real, and it matters.

That said, some principles hold across markets:

  1. Budget the $2-3 before you sign the contract. If the training and change management line is not in the original business case, do not approve the tool spend. Full stop.

  2. Identify your Agent Bosses early. Every team has two or three people who, if they become genuinely competent and visibly enthusiastic about the new capability, will pull the rest of the team forward. Find them. Train them first. Give them ownership.

  3. Make failure a data point, not a career risk. The 60 to 70% abandonment problem is partly a psychological safety problem. People will not persist through the learning curve if getting it wrong feels too costly, or if the tool feels like a threat to their job.

  4. Measure adoption, not just capability. Your success metrics should include weekly active usage rates, not just whether the licence is activated. If people have the tool but are not using it, you have an adoption problem that is about to become a retention problem.

  5. Build for the sceptic, not the enthusiast. Your AI champions will figure it out regardless. Design your training and support for the person who thinks this is more trouble than it is worth. Win them, and you have won the room.
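Point 4 above is measurable. Here is a minimal sketch of a weekly-active-usage metric, assuming you can export a per-user usage log from the tool's admin console; the function and field names are my own, and real exports will differ per vendor.

```python
from datetime import date, timedelta

def weekly_active_rate(usage_log: list[tuple[str, date]],
                       licensed_users: set[str],
                       week_ending: date) -> float:
    """Share of licensed seats that actually used the tool in the
    seven days ending on `week_ending`.

    usage_log: (user, day) events exported from the tool's admin
    console. Illustrative only; real exports vary by vendor.
    """
    window_start = week_ending - timedelta(days=6)
    active = {user for user, day in usage_log
              if window_start <= day <= week_ending
              and user in licensed_users}
    return len(active) / len(licensed_users) if licensed_users else 0.0

# Four licensed seats, two active in the week ending 6 March
log = [("ana", date(2026, 3, 2)), ("ben", date(2026, 3, 4)),
       ("ana", date(2026, 3, 5)), ("ben", date(2026, 1, 10))]
seats = {"ana", "ben", "cho", "dev"}
print(weekly_active_rate(log, seats, date(2026, 3, 6)))  # 0.5
```

A 50% weekly active rate against 100% licence activation is exactly the gap between "the licence is activated" and "the tool is adopted", and it is visible months before the renewal conversation.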

For Builders and Decision Makers

The practical takeaway is blunt. If you are presenting an AI business case to your board or leadership team in 2026 and the investment in training and change management is not at least double the cost of the tool, your numbers are wrong.

Not roughly right. Wrong. The SXSW data makes the adjustment for you.

The enterprises building durable competitive moats right now are not necessarily those with the most sophisticated AI stack. They are the ones where the AI is actually used every day by the people it was bought for.

Governance-ready AI deployment is not just about compliance frameworks and ethics policies. It starts with whether your team can actually operate the thing you deployed. That is the first governance question. Everything else follows.

A caveat before you come after me. The $1:$2-3 ratio and the 60 to 70% abandonment figure come from SXSW CMO sessions and Digital Applied analysis, not from a peer-reviewed longitudinal study. I trust the pattern, because I have seen it independently, but the exact numbers will vary by organisation size, industry, and market. My gut feel on the ratio is that it is probably conservative for large, multi-market APAC enterprises where the change management complexity is compounded by language and regulatory differences. But I could be wrong on the direction. What I am confident about is the principle: the tool is not the investment. The change is.

Jamshed Wadia

Business and Marketing Advisor @AIdeate | Advisory Board @CMO Council | AI Ethics & Governance @Mavic.AI | Startup Mentor @Eduspaze & @Tasmu | MarTech & AI Practitioner

https://aideatesolutions.com/