Who do we trust?
We explore trust in AI and public institutions

These figures come from the 2026 UK Public Sector AI Adoption Outlook, conducted by Censuswide on behalf of Appian, and are supported by findings from the joint survey by the Ada Lovelace Institute and the Alan Turing Institute. They've been sitting in my peripheral vision since they were published.
Sixty-three per cent of UK citizens trust the NHS to use AI responsibly.
Forty-four per cent trust their local council.
Thirty-nine per cent trust central government.
On one reading, these figures look like a communications problem. The NHS is well-regarded; councils and Whitehall often less so.
That reading is wrong. Leaders who reach for it will find it does not hold. Trust in AI does not exist in isolation. It sits inside the pre-existing relationship between an institution and the people it serves.
The NHS has a relationship with the British public that is nearly eighty years old, personally intimate, and largely positive. People have a stake in it. They want it to succeed. When they hear that the NHS is using AI, their instinct is to give it the benefit of the doubt, because decades of experience have taught them that the NHS is, at some level, on their side.
Local councils sit in a more complicated position. They are geographically close to citizens in ways that Whitehall is not. They collect the bins, provide care, and make planning decisions. But they also set council tax, are seen to fail on road maintenance, and are poorly understood. Closeness does not automatically generate trust: for many residents, closeness has meant watching services decline. For many, local government is where austerity has had a face. That story remains the same when AI appears on the agenda.
Central government is more distant still. It makes policy but rarely delivers services directly. When it does touch citizens' lives, the interactions are often high-stakes and the experiences frequently poor.
In those contexts, AI arrives carrying the weight of everything that came (or failed to arrive) before it.
What this tells leaders is that the case for AI adoption cannot be separated from the case for the institution itself. If citizens do not trust the organisation, they will not trust the organisation's use of AI. No volume of transparency reporting, no number of ethics publications, will close that gap on its own. The work of building public confidence in AI use and the work of rebuilding institutional credibility are the same work.
The Heriot-Watt University study of UK local authorities, published last month, adds something important. Across councils of every size and financial position, the organisations making real progress on AI share four characteristics:
clear leadership ambition
disciplined governance
strategic clarity about what AI is actually for, and
stronger underlying data foundations.
What does not predict readiness is size. A small district council with serious leadership and clean data infrastructure is better placed than a large unitary authority with weak governance and a chaotic estate. That finding connects to the trust question in a way that tends to get overlooked.
Citizens trust institutions where they feel they are in capable hands. Capable leadership is a prerequisite for responsible AI adoption. The councils building genuine AI capability are also the ones most likely to use it in ways that will hold public confidence, because the governance discipline and strategic clarity that make AI adoption work also make responsible use more probable. This is not a comfortable message for leaders in organisations with contested leadership or weak governance. It suggests that the organisations most at risk of AI adoption going badly are also the ones least likely to recognise that risk.
The readiness gap and the trust gap are not separate problems. So what does this mean in practice?
First, leaders need an honest account of where their organisation actually sits on those four dimensions - leadership ambition, governance discipline, strategic clarity, data capability. Not where the strategy document says it will be in three years, but now. Most organisations will be stronger on some than others. That gap is worth understanding before any AI programme is announced publicly.
Second, the 63/44/39 split should make local government and central government leaders think carefully before assuming they have political, public, and personnel confidence. The trust the NHS carries was not earned by its AI programme. It was earned by the institution over eight decades. Local and central government leaders are operating from a different position in public consciousness, and what lands well in a hospital trust will not necessarily land well in a council with a local newspaper and Facebook groups ready to damage its reputation further with stories about "another failure".
Third, this does not mean AI adoption should wait until trust is established. In some cases the reverse is true: well-chosen, well-communicated AI adoption can be part of demonstrating genuine organisational competence. But the sequencing matters.
Start with projects that demonstrate efficiency and improve access. Govern them carefully, be open about what you learn, and build from there. The worst outcome is an AI programme that promises more than it delivers and reinforces the suspicion that the organisation does not really know what it is doing.
Trust in AI is on offer. The survey shows that. Citizens are not opposed to AI in public services.
They are cautious about AI in organisations they are already cautious about. The path to closing that gap runs through the organisation, not through the communications team.
The RPNA Responsible Artificial Intelligence Framework maps twelve components of AI-ready organisations across three zones: Shape, Enable, and Execute. Get in touch to use it as a diagnostic for your own programme.