Proportional AI Governance
Sharna Quirke, Chief Strategy Officer, explores the proportionality of AI governance

The way you use AI should determine how you govern it.
Hang on… what? That's not what everyone else is saying.
This post is written in response to a comment a few weeks ago challenging the onerous nature of AI governance for smaller organisations. It stuck with me, because I see both sides of it.
Right now I’m working with a large public sector organisation making decisions that directly affect people’s lives, using sensitive personal data. They are not new to AI. They have piloted Copilot, put a policy in place, and built awareness.
But now they want to build their own AI solutions. This is a significant shift, which is why they brought me in to set up an approach to governance that would enable it to happen responsibly. This is the point where governance stops being a 'policy document' and becomes something real.
What has been interesting is that we have not started with structure or documentation. I have not just 'rolled out' our framework and IP to say this is how all governance should be.
We have started with decisions:
- Who needs to decide what.
- What carries real risk.
- Where accountability sits when something goes wrong.
Only after that do roles and processes start to make sense.
And this is where the original comment comes in. For an organisation of this size and risk profile, you do need defined oversight and clear accountability. There is no way around that. But that does not mean everyone needs the same model, or that there is a single solution.
If you are a smaller organisation, copying this approach will likely create more friction than value. More roles, more meetings, more process. Not necessarily better outcomes.
You do not need to create five new job functions to use AI responsibly.
- You need clarity on ownership.
- A simple way to make decisions.
- And access to the right expertise when it matters.
Sometimes that is not a full team. It might be an advisor you can call on when needed (😜).
So the point is not that governance should be light or heavy or one single design. It should be proportionate. Designed around what you are actually doing with AI, the level of risk you carry, and the decisions you are making. And, if that changes, so too should your governance. Get that right and governance enables progress.
Get it wrong and you either slow yourself down or create the illusion of control without the reality of it.