Can talking increase trust?
A study highlights how engagement impacts trust in AI

Twenty-six per cent of UK citizens think AI adoption could make public services better.
That was the starting position of research undertaken by Nesta through its AI Social Readiness programme. Nesta then ran 18 deliberative workshops with 144 members of the public. The participants examined a real AI tool, Magic Notes, used for transcription and summarisation in live public services. They heard the case for and against, talked to each other, and reached a judgement.
Seventy-four per cent concluded the benefits outweighed the risks.
An increase from 26 per cent to 74 per cent.
A fortnight ago we published figures showing that 63 per cent of UK citizens trust the NHS to use AI responsibly, 44 per cent trust their local council, and 39 per cent trust central government. Several people asked: "What would it take to move those numbers?"
The Nesta finding suggests the answer is not better technology or better communications. It is engagement. The trust problem appears to be less scepticism about AI than the feeling of being excluded from the judgement. Most people have little context beyond their own exposure to ChatGPT and what they have seen, heard, and read in videos, shows, podcasts, and articles.
The Ada Lovelace Institute's study of AI transcription tools in 17 English and Scottish councils tells a version of the same story from inside the organisation. Researchers spoke to 39 social workers and managers. In several councils, they found social workers narrating their decisions to an AI transcription tool. The tool had become a substitute for the professional reflection that used to happen in supervision, in team discussions, in the corridor conversations that caseload pressure squeezed out years ago.
Nobody designed that use case. It grew in the gap between what the organisation provided and what the practitioner needed.
Those social workers were not consulted about the reshaping of their working patterns. Austerity squeezed and squeezed until a tool was deployed to try to recover time. Then the users found an alternative, and possibly better, use. What we need to guard against now is that the gap being filled becomes the future deployment rationale. Professional practice was under pressure, so skilled practitioners found a way to harness AI for support. That doesn't mean we should design for this use case; it demonstrates the resourcefulness of the people on the frontline.
Last week, Ada Lovelace and the Nuffield Foundation published a review of AI in career guidance across UK secondary schools, further education and higher education. Thirty-six people from 22 organisations. AI tools were deployed in a service that shapes young people's futures, adopted on the basis of efficiency, with no evidence that they work and no structured input from the people affected.
In none of these cases were people being deliberately shut out. Leaders were simply moving to adopt new technology with an obvious efficiency benefit. But the distance between who is affected by an AI decision and who is involved in making it is often too great.
The Nesta workshops show what closing the gap can look like. A structured process where citizens examine a specific AI tool, hear the case for and against, deliberate with each other, and reach a judgement. The output is a Social Readiness Advisory Label, a practical signal that tells public sector staff whether a particular tool has been through genuine public scrutiny.
The IFOW and CIPD's BridgeAI programme, which published its first findings this month, found the same thing at organisational level. The AI deployments that stuck were the ones where workers were involved in the design.
Organisations that run an engagement process before a public deployment will be acting on evidence that positive sentiment can shift by nearly fifty percentage points.
Involving employees before you procure produces stronger, more durable adoption.
And before announcing any AI programme to the public, test whether the organisation can say to a resident, a patient, or a tenant: "We talked with, demonstrated to, and designed this with people like you."
A fortnight ago I argued that trust in AI sits inside the pre-existing relationship between an institution and the people it serves. That remains true. But the Nesta findings show that genuine consultation can build trust quickly. Twenty-six per cent to seventy-four per cent is not a nudge.
Organisations that do this will find a public and a workforce far more willing than they might have expected. The ones that skip it may just find that our trust figures from a fortnight ago are a ceiling, not a floor.