Here is a number, from the State of FinOps 2026 report, that should stop every delivery manager, programme director and technology leader in their tracks. In twenty-four months, AI spend governance went from a niche concern of early adopters to a near-universal responsibility sitting on the desk of almost every FinOps practitioner on the planet. The growth curve is almost vertical.

And yet the same report identifies AI cost management as the single biggest skill gap in the field — the number one capability that FinOps teams need to add in the next twelve months, ahead of tooling, automation and data engineering.

Ninety-eight percent are responsible for it. Almost none of them feel equipped for it.

"Is your AI providing value? No one can answer that question yet."

— Practitioner, State of FinOps 2026 Report

I am not writing this from the sidelines. Right now, I am in active conversation with my CTO about changing how we govern AI spend inside our live delivery programmes. We are redesigning the process from the ground up — embedding cost accountability into sprint ceremonies, redefining who owns AI cost at each stage of the delivery lifecycle, and building the kind of culture where engineers think about technology value the way they think about code quality.

I have been delivering complex technology programmes for twenty years — from process automation at Lloyd's of London that generated £3 million in annual savings across four regions, to HMRC's Alcohol Duty Reform and Pensions Administrator projects at Accenture, to building an organisation-wide digital framework at OFGEM. Across all of it, I have used IBM Apptio Cloudability to govern cloud spend and watched the same pattern repeat itself: a new technology category explodes in spend before the governance infrastructure catches up.

It happened with cloud. It is happening with AI right now — but AI spend is harder to see, harder to allocate and harder to justify than cloud spend ever was. Here is the framework I am implementing.

Why AI spend is harder to govern than cloud spend

When cloud computing scaled in the early 2010s, the governance challenge was primarily one of visibility. FinOps was born to solve that problem. AI spend is different for three reasons.

Pricing models are opaque and variable. Cloud compute bills per instance hour, per GB, per request. AI services bill by token, by model, by inference, by seat, by API call, by GPU hour — units that mean completely different things across providers. A team using two different large language models is working with two cost structures that map onto no existing cloud billing framework.
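
To make that concrete, here is a rough Python sketch of the same workload costed against two token-priced models. Every price and volume in it is invented for illustration, not a real vendor rate; the point is that the same nominal unit ("per 1,000 tokens") can hide completely different economics.

```python
# Illustrative sketch: the same workload costed against two hypothetical
# token-priced models. All prices and volumes are invented, not real
# vendor rates.

def monthly_token_cost(requests_per_day: int, input_tokens: int,
                       output_tokens: int, price_in: float,
                       price_out: float) -> float:
    """Rough monthly cost, with prices quoted per 1,000 tokens."""
    daily = requests_per_day * (
        input_tokens / 1000 * price_in + output_tokens / 1000 * price_out
    )
    return daily * 30

# Same workload, two hypothetical providers, very different bills.
same_workload = dict(requests_per_day=5_000, input_tokens=800, output_tokens=300)

model_a = monthly_token_cost(**same_workload, price_in=0.0005, price_out=0.0015)
model_b = monthly_token_cost(**same_workload, price_in=0.0030, price_out=0.0060)

print(f"Model A: ~{model_a:,.0f} per month")  # ~128
print(f"Model B: ~{model_b:,.0f} per month")  # ~630
```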

The spend-to-value chain is genuinely unclear. When I was managing the Alcohol Duty Reform programme at HMRC alongside the Pensions Administrator Reform, I could trace every infrastructure cost to a specific service or feature in our JIRA backlog. That same traceability does not yet exist for most AI tooling — and that gap is what I am working to close.

AI spend is accumulating inside programmes not designed to govern it. This is the pattern I keep seeing — and the one that prompted my CTO conversation. Individual engineers make small AI tool decisions independently. A developer adds an AI coding assistant. A product manager subscribes to an AI document tool. An analyst enables AI features on a platform already in the budget. Each decision is small. Reasonable, even. But across a programme of any size, the aggregate is significant — and nobody has a complete picture.

At OFGEM, building the organisation-wide digital framework meant confronting exactly this kind of distributed decision-making. The lesson I took from that programme: governance that lives only at the centre fails. It has to be embedded at the point where decisions are made. That is the principle I am now applying to AI spend.

The framework I am implementing: four steps

Step 1: Get visibility before you try to govern anything

The first principle of FinOps applies to AI just as it applies to cloud: you cannot govern what you cannot see.

The first thing I am doing is running a structured audit across every team in the programme — asking each team to surface every AI tool they are currently using. Licensed, subscribed, free-tier, trial, built into existing platforms — all of it. In my experience running Communities of Practice at Lean Icon Technology and facilitating sprint ceremonies across multiple teams, the average delivery team is using between four and eight AI services at any given moment — and fewer than half appear on any formal procurement record.

At HMRC, I used a structured JIRA backlog and a shared Confluence space to track tooling decisions across cross-functional teams. That same approach is what I am using now. Map each tool into one of three governance states:

Sanctioned & governed: approved, paid for, and actively monitored. Leave it alone; maintain it.
Sanctioned but ungoverned: approved, but nobody is watching the spend. Immediate governance action needed.
Unsanctioned: no approval, no visibility, no owner. This is where your biggest risk lives.

Getting to this map is the entire goal of Step 1. Everything else depends on it.
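
For illustration, here is a minimal sketch of what that Step 1 map can look like as a data structure. The tool names and fields are invented, not a real inventory; the three states are the ones described above.

```python
# A sketch of the Step 1 audit output. Tool names and fields are
# illustrative, not a real inventory.

from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    team: str
    approved: bool         # on the sanctioned / procurement list?
    spend_monitored: bool  # is anyone actively watching the spend?

def governance_state(tool: AITool) -> str:
    if tool.approved and tool.spend_monitored:
        return "sanctioned & governed"      # leave it alone, maintain it
    if tool.approved:
        return "sanctioned but ungoverned"  # immediate governance action
    return "unsanctioned"                   # biggest risk lives here

inventory = [
    AITool("coding assistant", "platform", approved=True, spend_monitored=True),
    AITool("AI document tool", "product", approved=True, spend_monitored=False),
    AITool("free-tier chatbot", "analysis", approved=False, spend_monitored=False),
]

for tool in inventory:
    print(f"{tool.name:20} -> {governance_state(tool)}")
```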

Step 2: Add AI cost to the Definition of Done

This is the centrepiece of what I am proposing to my CTO — and the part of the conversation I find most energising, because it connects directly to how Agile teams actually work.

The State of FinOps 2026 report identifies pre-deployment architecture costing as the second most-requested missing tool capability in the entire FinOps ecosystem. The message from the global community is unambiguous: cost governance needs to move earlier in the delivery lifecycle. In Agile terms, that means the Definition of Done.

I have been writing and refining Definitions of Done since my early days as a Scrum Master — at Nyros Technologies, where I was recognised as best project manager for delivering within a 5% deviation, and later coaching fifteen project managers a week at The Knowledge Academy, where candidate pass rates rose from 80% to 85%. The most powerful thing a Definition of Done does is define what the team is collectively accountable for. That accountability mechanism is exactly what AI spend governance is missing.

I am proposing three new acceptance criteria for every user story or feature that involves AI:

AI service identified and sanctioned: before a line of code is written, the team confirms which AI service is being used, that it is on the approved list, and that its pricing model is understood. Not after the sprint. Before it starts.
Cost estimate documented: a rough order of magnitude estimate of the monthly cost at expected usage volumes. On the Pensions Administrator Reform at HMRC, we costed infrastructure commitments before approving them. The same discipline, down to feature level. It does not need to be precise. It needs to exist.
Cost owner assigned by lifecycle stage: vague ownership is the single most common reason governance frameworks fail. See the lifecycle model below.
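
As one possible shape, here is a sketch of the three criteria as a machine-checkable gate rather than a wish list. The field names are my own illustration, not an existing tool's schema; the two owner fields anticipate the lifecycle split described next.

```python
# One possible shape for the three Definition of Done criteria as a
# machine-checkable gate. Field names are my own illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AIStoryGovernance:
    service_name: Optional[str]             # which AI service, confirmed pre-sprint
    service_sanctioned: bool                # is it on the approved list?
    monthly_cost_estimate: Optional[float]  # rough order of magnitude, in GBP
    dev_cost_owner: Optional[str]           # lead engineer, during development
    service_cost_owner: Optional[str]       # service manager, once live

    def unmet_criteria(self) -> list[str]:
        """Return the criteria still missing; empty means the story can pass."""
        gaps = []
        if not (self.service_name and self.service_sanctioned):
            gaps.append("AI service identified and sanctioned")
        if self.monthly_cost_estimate is None:
            gaps.append("Cost estimate documented")
        if not (self.dev_cost_owner and self.service_cost_owner):
            gaps.append("Cost owner assigned by lifecycle stage")
        return gaps

story = AIStoryGovernance(
    service_name="summarisation API",
    service_sanctioned=True,
    monthly_cost_estimate=450.0,
    dev_cost_owner="tech lead",
    service_cost_owner=None,
)
print(story.unmet_criteria())  # ['Cost owner assigned by lifecycle stage']
```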

On cost ownership by lifecycle stage, I am being deliberately specific. It is also the point my CTO and I are having the most productive argument about:

During development: Lead Engineer / Tech Lead. Owns the AI cost estimate and monitors actual spend against it sprint by sprint. They are closest to the architectural decisions driving cost — model choice, API call frequency, caching strategy — and the only role that can catch a cost problem before it ships.
Once live in production: Service Manager. Formal handover at go-live. AI spend becomes a standing agenda item in the service review alongside uptime and user metrics. It belongs to the service for the life of the service. No handover, no go-live.

"Once you fix it, it's gone — how do we give developers credit for shift-left activities?"

— State of FinOps 2026 Report, on the discipline's most important unsolved challenge

My answer to that challenge comes from twenty years of coaching Agile teams, not from a FinOps playbook. You change behaviour by making the new behaviour visible, by celebrating it publicly, and by embedding it in the team's identity. At The Knowledge Academy I did not improve pass rates by adding more content. I changed how candidates thought about delivery accountability. Put AI cost estimates on the sprint board next to the burndown chart. Celebrate the team whose estimate came in accurate. Make cost awareness part of what it means to be a good engineer on this programme.

Step 3: Build an AI cost allocation model

Working with IBM Apptio Cloudability on cloud programmes, I have seen organisations at every stage of FinOps maturity struggle with allocation. The lesson is always the same: start with a model simple enough to actually use, be transparent about its limitations, and iterate. I am proposing a three-tier approach:

Three-tier AI cost allocation model

Direct allocation — if a team uses an AI service exclusively, 100% of that cost is allocated to their product budget. Simple, clean, unambiguous. The default wherever technically feasible.

Proportional allocation — multiple teams sharing a platform split costs by usage: API calls, tokens consumed or user count. At HMRC, managing budgets across two concurrent programmes meant every cost allocation had to be explainable simultaneously to the delivery team and the programme board. If you cannot explain it in plain English in a sprint review, the model is too complex.

Pooled allocation — AI costs embedded in enterprise SaaS tools that cannot yet be individually attributed. Pool as overhead, flag explicitly in every budget report, and set a quarterly goal to migrate items out of the pool as visibility improves. The pool is an honest acknowledgement — and a commitment to close the gap.
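
Here is a minimal sketch of the three tiers as code. The team names, usage figures and costs are invented for illustration.

```python
# A sketch of the three allocation tiers. Team names, usage figures and
# costs are invented for illustration.

def direct(cost: float, team: str) -> dict[str, float]:
    """Tier 1: exclusive use, so 100% to the owning team's product budget."""
    return {team: cost}

def proportional(cost: float, usage: dict[str, float]) -> dict[str, float]:
    """Tier 2: shared platform, split by a usage metric such as API calls,
    tokens consumed or user count."""
    total = sum(usage.values())
    return {team: cost * share / total for team, share in usage.items()}

def pooled(cost: float) -> dict[str, float]:
    """Tier 3: not yet attributable, held as explicitly flagged overhead."""
    return {"POOLED OVERHEAD (flagged; migrate out quarterly)": cost}

# Example: a shared LLM platform costing 12,000 a month, split by tokens.
print(proportional(12_000, {"team-a": 60_000_000, "team-b": 40_000_000}))
# {'team-a': 7200.0, 'team-b': 4800.0}
```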

Step 4: Define what value looks like before the money is spent

This is the conversation I most want to have with every team I work with — and the one that gets skipped most often. The underlying problem is not measurement. It is that the value definition was never agreed in the first place.

When I was an Equity Research Analyst at Reuters and later a Corporate Actions Analyst at Lehman Brothers, every investment decision was scrutinised against a clear thesis: what return, at what risk, over what time horizon? At Lloyd's of London, the process automation project I led generated £3 million in annual savings across four regions — not because we were lucky, but because we defined exactly what that looked like before we started.

AI investment deserves the same rigour. In my current process proposal, I am requiring every significant AI investment to answer three questions before approval:

1. What decision or outcome will this AI capability enable that we cannot achieve without it?
2. How will we measure whether it is working — and at what time horizon?
3. What would we stop doing — or do less of — if this capability delivers no measurable value in six months?

If a team cannot answer those three questions in writing, the investment is not ready to be approved. That is the standard I am holding — and the standard I am asking my CTO to institutionalise across every programme.
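
One way to make "answered in writing" non-negotiable is to give the three answers a fixed shape. A sketch, with field names of my own invention:

```python
# The three pre-approval questions as a written record. The structure is
# my own illustration of "answered in writing", not a prescribed template.

from dataclasses import dataclass

@dataclass
class AIInvestmentCase:
    enabled_outcome: str  # Q1: what this enables that we cannot do today
    value_measure: str    # Q2: how we will measure it, and over what horizon
    six_month_exit: str   # Q3: what we stop doing if no measurable value

    def ready_for_approval(self) -> bool:
        """All three answers must exist, in writing, before approval."""
        return all(answer.strip() for answer in
                   (self.enabled_outcome, self.value_measure, self.six_month_exit))
```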

Why this matters beyond the programme

The reason I am having this conversation with my CTO is not just because it is the right governance call for this programme. It is because I believe the organisations that build this capability in 2026 — embedding AI cost governance into their delivery culture, their Definitions of Done and their service management frameworks — will be significantly better positioned than those that do not, for years to come.

The FinOps Foundation updated its mission this year from "the value of cloud" to "the value of technology." That is not a cosmetic change. It is a recognition that the discipline has to grow to meet the moment. AI spend governance is the most urgent expression of that growth.

Every delivery manager, every programme director, every CTO who is currently deploying AI without a governance framework is accumulating a debt — not just a financial one, but an organisational one. The longer the habits of ungoverned AI spend persist, the harder they will be to change.

The work I am doing right now — on this programme, in this conversation with my CTO — is the work I think every delivery organisation needs to start doing.

The question is not whether your organisation needs AI spend governance. The question is whether you start building it today, or wait until the bills arrive and the value cannot be traced.

In my experience, the second option is always more expensive.


Are you having this conversation in your organisation? What is the biggest obstacle you are hitting? Drop it in the comments — I am building a practitioner resource on this topic and real delivery experience is far more useful than any survey data.

Read the State of FinOps 2026 report in full at data.finops.org