Faster Cars Need Better Brakes
Implementing AI requires system-wide consideration, not just pilots and POCs.

There’s a meeting that’s been happening all over the country. It starts with “Ahem... can you see my slides?” and then introduces you to the latest AI offering. Accompanied by the inevitable explanation that the presenter “can’t see any of you now”, there is energy, enthusiasm, and excitement.
At the end of the presentation two yellow hands appear on screen. The owner of one hand proclaims breathlessly, “WOW! I can’t believe how far this technology has come”. The owner of the other, with far greater control over their respiration, raises a question about “GDPR and DPIAs” that they’ve been finessing since the meeting went in their diary.
The meeting of course ends with agreement to run a pilot, which is the organisational equivalent of saying “let’s keep in touch”. Technically it’s agreement, but functionally it’s decision deferral wearing intent like a fake beard and glasses.
Several months later, the pilot is done. Results are, everyone agrees, "really interesting."
The person who championed the initiative has been moved on or is at least no longer returning calls on this specific subject.
So, who and what now?
Why do we stall? How do we get beyond the pilot?
What stops us is everything ‘around’ the idea. The absence of a clear leadership position on where AI should and shouldn’t go. No real process for deciding which problems are actually worth solving. A workforce that has been informed rather than given a chance to shape and own the change. No adequate mechanism for capturing what was learned, let alone doing anything useful with the insight.
The tool often delivers exactly what it promised. What was promised and what was needed, however, often turn out to be different things.
RPNA's perspective, built from working through this exact pattern with many organisations, is that technology accounts for less than 20% of what determines whether an AI implementation succeeds. The other 80% is leadership, governance, culture, and change. Too many organisations are spending the bulk of their attention and budget on the 20% and then writing retrospectives about the 80% being “unexpectedly tricky”.
The strategic work that has to happen before the pilot is even a twinkle in the Transformation Director’s eye is non-negotiable. Leadership vision must come first. This is not being “open to exploring AI opportunities,” which is the business equivalent of an equally insincere “I’d love to catch up soon.”
You need an actual position on what AI is for in this organisation, what it isn't for, who is accountable for it, and where the limits are. Without that, everything that follows is expensive guesswork.
Ethics and governance sit in the same conversation. Organisations make decisions that affect real people’s lives. You can’t build in isolation. Building a car with greater horsepower, whilst not telling the team designing the brakes, has inevitable consequences.
In the public sector these consequences change lives: benefit allocations. Care decisions. SEND adjudication. The questions of how AI will operate in those contexts, and how decisions made with AI assistance will be explained, challenged, and reviewed, are not things that can be left until the pilot is running.
Value prioritisation matters more than most organisations recognise. The pull toward pilots is almost irresistible, because pilots feel like action. But an organisation that runs twenty pilots and draws no coherent conclusions hasn’t built capability. It has merely gathered evidence that it didn’t really know what it was doing, and that’s not quite the same thing.
Deciding in advance which problems are worth solving, and in what order, is how you build AI credit that compounds rather than pilot debt that accrues.
None of this is to suggest that AI adoption is impossibly complicated. The technology genuinely is ready, in many contexts, to do incredible things. The honest question is whether the organisation is ready to use it well.
The test is simple enough.
Do you have a clear paragraph explaining your organisation's position on AI? Not a policy document, not a reference to a national framework. Just a plain-English account of what AI is for in your specific context, what it isn't for, when and how you use it, and who is responsible for making sure it happens that way.
If that paragraph doesn't exist, you already know where to start.
It isn't with the technology.