Article by Nadine Soyez
Most organisations agree they need AI governance, especially as employees adopt AI tools faster than policies can keep up. But when leaders start implementing governance, things quickly drift into two unproductive extremes:
- Too much governance: Endless documents, heavy approval flows, and teams who feel blocked.
- Too little governance: Shadow AI everywhere, inconsistent quality, unclear responsibilities, and rising compliance risks.
The real challenge is balancing governance and speed: designing governance that supports daily work, provides clarity, and accelerates safe adoption. This is exactly what I break down in this newsletter.
Disclaimer: I’m not an AI lawyer or legal expert. These recommendations focus on what works in practice, based on my clients’ experience. In my projects, I always collaborate with legal, data protection, and compliance specialists where needed.
1. Start With a Quick Governance Audit
Most organisations jump straight into building new policies or complex approval workflows. This is where governance becomes bureaucratic. The truth is: you cannot design effective governance unless you understand your starting point. A governance audit gives you a clear view of what is working, where teams feel uncertain, and which gaps create real risk.
It also prevents over-engineering. Many companies believe they need more rules, but often what they really need is simpler communication and better alignment. A short audit helps leaders focus on the areas with the biggest impact instead of adding layers of policy that no one will use.
A good governance audit looks at five areas:
- Strategy: Is the AI ambition linked to clear business goals? Do teams understand what “good AI” looks like?
- Data: Are employees aware of which data is approved, sensitive, or restricted?
- Tools: Do people know which AI tools they can use, and is there a process for requesting new ones?
- Roles & Responsibilities: Is it clear who approves, reviews, or owns AI initiatives?
- Risks & Compliance: Are there simple, understandable guidelines for oversight and risk management?
Without this clarity, governance will fail — regardless of how many policies you write. Organisations overcomplicate governance when they don’t understand their starting point. Most issues come from unclear roles, tools, and data rules. A quick audit ensures governance stays lightweight and relevant.
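To make the audit concrete, the five areas above can be treated as a simple scorecard. The following sketch is illustrative only: the area names come from this section, but the 0-3 scoring scale and the threshold are my assumptions, not a prescribed method.

```python
# Hypothetical sketch: score the five audit areas to surface the biggest gaps.
# Area names follow the article; the 0-3 scale and threshold are assumptions.

AUDIT_AREAS = [
    "Strategy",
    "Data",
    "Tools",
    "Roles & Responsibilities",
    "Risks & Compliance",
]

def biggest_gaps(scores: dict[str, int], threshold: int = 2) -> list[str]:
    """Return audit areas scoring below the threshold, worst first."""
    gaps = [(area, scores.get(area, 0)) for area in AUDIT_AREAS]
    return [area for area, score in sorted(gaps, key=lambda g: g[1]) if score < threshold]

scores = {
    "Strategy": 3,
    "Data": 1,
    "Tools": 2,
    "Roles & Responsibilities": 0,
    "Risks & Compliance": 2,
}
print(biggest_gaps(scores))  # ['Roles & Responsibilities', 'Data']
```

The point of the exercise is prioritisation: leaders act on the two or three weakest areas first instead of writing policy for all five at once.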
2. Replace Committees With Clear Roles
Once the audit reveals what is missing, the next challenge is to avoid governance becoming a committee-driven bottleneck. Many organisations involve far too many people in AI decision-making, which slows everything down and discourages teams from experimenting.
Instead of creating another governance board, design a role-based model. People need to know exactly what they are responsible for — and what they are not responsible for. This clarity speeds up work, avoids duplication, and ensures high-risk topics get the attention they need.
A simple AI Governance Role Matrix works like this:
- AI Sponsor (Executive): Sets the AI ambition, defines business priorities, and provides resources.
- AI Governance Lead: Creates the governance structure, defines rules and processes, and ensures alignment.
- Use Case Owners (Business): Identify opportunities, manage workflows, define KPIs, and drive adoption.
- Data Owner / IT: Ensures data quality, availability, access, and compliance.
- AI Ethics / Risk Reviewer: Evaluates high-risk use cases for fairness, legality, and potential harm. This role can also be combined with the AI Governance Lead role.
Clear roles reduce friction, improve decision-making, and make governance faster. Governance fails when many people own everything and no one owns anything. A simple role matrix brings clarity and structure. Role-based governance replaces slow committees with fast decisions.
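One way to see why a role matrix beats a committee: each responsibility maps to exactly one role, so "who decides?" becomes a lookup rather than a meeting. The sketch below is illustrative; the role names come from this section, while the responsibility keywords are my shorthand for the duties listed above.

```python
# Hypothetical encoding of the AI Governance Role Matrix from this section.
# Role names follow the article; responsibility keywords are illustrative.

ROLE_MATRIX = {
    "AI Sponsor": {"ambition", "priorities", "resources"},
    "AI Governance Lead": {"structure", "rules", "processes", "alignment"},
    "Use Case Owner": {"opportunities", "workflows", "KPIs", "adoption"},
    "Data Owner / IT": {"data quality", "availability", "access", "compliance"},
    "AI Ethics / Risk Reviewer": {"fairness", "legality", "potential harm"},
}

def who_owns(responsibility: str) -> list[str]:
    """Return every role that owns a responsibility (ideally exactly one)."""
    return [role for role, duties in ROLE_MATRIX.items() if responsibility in duties]

print(who_owns("KPIs"))        # ['Use Case Owner']
print(who_owns("compliance"))  # ['Data Owner / IT']
```

If `who_owns` ever returns zero roles or more than one, that is exactly the ambiguity the matrix exists to eliminate.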
3. Build Governance That Teams Can Use Daily
Governance is only effective if people can apply it in daily work. Most organisations create policies that sit in documents no one reads — and then wonder why usage is inconsistent. Employees don’t need more text; they need practical tools.
A Governance Checklist is the fastest and most effective way to operationalise governance. It ensures teams know exactly what to check before starting any AI use case. This avoids risk, increases consistency, and provides a shared framework across the entire organisation.
The checklist should answer:
- What problem are we solving?
- What data do we use, and is any of it sensitive?
- What are the key risks, and how do we mitigate them?
- Who reviews the output before it goes live?
- What does human oversight look like?
- How do we measure success and performance?
- Who owns the use case long-term?
The checklist should be integrated into tools that employees already use — such as Teams, SharePoint, Confluence, or Notion — so it becomes a natural part of their workflows. Governance must fit the flow of daily work to be effective. Teams need practical tools, not long documents. A simple checklist ensures consistent, safe, and fast AI usage.
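Embedded in a tool, the checklist becomes a pre-flight check: a use case is ready to start only when every question has an answer. A minimal sketch, assuming nothing about your tooling; the field names paraphrase the questions above and are not a prescribed schema.

```python
# Illustrative sketch: the Governance Checklist as a pre-flight check.
# Field names paraphrase the checklist questions; they are not a fixed schema.

REQUIRED_FIELDS = [
    "problem",          # What problem are we solving?
    "data_sources",     # What data do we use, and is any of it sensitive?
    "risks",            # Key risks and how we mitigate them
    "output_reviewer",  # Who reviews the output before it goes live?
    "human_oversight",  # What does human oversight look like?
    "success_metrics",  # How do we measure success and performance?
    "long_term_owner",  # Who owns the use case long-term?
]

def missing_answers(use_case: dict) -> list[str]:
    """Return checklist fields that are unanswered or empty."""
    return [field for field in REQUIRED_FIELDS if not use_case.get(field)]

draft = {"problem": "Summarise weekly reports", "data_sources": "internal, non-sensitive"}
print(missing_answers(draft))
# ['risks', 'output_reviewer', 'human_oversight', 'success_metrics', 'long_term_owner']
```

The output is the team's to-do list before launch, which is far more actionable than a policy PDF.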
The real shift is not governance vs. speed, but moving to tiered, practical governance:
- Clear guardrails, not heavy processes. A few non-negotiables act as boundaries; everything else moves quickly.
- A split between “safe to try” and “needs review.” Low-risk use cases move fast; high-risk ones get light review.
- Central clarity, decentralised execution. A core group defines rules; teams experiment within them.
- Short cycles: assess → test → measure → scale. Governance becomes part of agile iterations, not a separate gate.
This thinking is exactly what leads to modern, effective governance structures.
4. Create a 3-Layer Governance Structure
Most organisations suffer because they apply the same governance to every use case — whether it’s summarising meeting notes or deploying an AI agent to customers. This one-size-fits-all thinking creates bottlenecks, slows teams down, and encourages shadow AI. A better approach is risk-based governance: the level of control matches the level of risk. A three-layer model provides this balance.
Layer 1 — Basic Rules for Everyone
This layer creates clarity and safety across the organisation. It includes simple rules for:
- responsible use
- human oversight
- data protection
- approved tools
- transparency and documentation
These rules apply to all employees. Most daily-use AI tasks fall into this category — light governance, high autonomy.
Layer 2 — Use Case Checklist (operational governance)
Before starting any AI initiative, teams quickly validate their idea using a checklist. This ensures consistency, reduces risk, and improves quality — without requiring central approval.
The checklist covers:
- the business problem
- data sources and sensitivity
- oversight and review
- risks and mitigation
- KPIs and success metrics
- long-term ownership
- failure modes
If the checklist raises no concerns, the team can start immediately. This is where the majority of use cases live.
Layer 3 — Expert Review for High-Risk Cases
A small percentage of use cases require deeper oversight because they include:
- customer-facing interactions
- sensitive or personal data
- automated decisions
- multi-step system integrations
- legal, ethical, or reputational risks
The review is fast but thorough, focusing on:
- data compliance
- fairness and bias
- safety and reliability
- human oversight
- potential harm
- security
- alignment with strategy
This protects the organisation exactly where it matters.
Real-world examples
- Meeting notes summary → Layer 1
- Automating internal reports → Layer 1 + 2
- AI workflow with system triggers → Layer 2 (+ optional 3)
- Customer-facing agent → Layer 3
Governance must adapt to risk, not be applied everywhere equally. Layer 1 provides safety; Layer 2 ensures structure; Layer 3 protects against high-risk scenarios. This model keeps organisations fast, compliant, and aligned.
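The routing logic behind the examples above can be sketched in a few lines. This is an assumption-laden illustration, not a complete risk model: the trigger attributes mirror the Layer 3 list in this section, and the layers are cumulative (a Layer 3 case still follows the basic rules and the checklist).

```python
# Hypothetical sketch of risk-based routing: a use case's risk attributes
# decide which governance layer applies. Trigger names mirror the Layer 3
# list above; treating "any attribute" as Layer 2 is an assumption.

LAYER_3_TRIGGERS = {
    "customer_facing",
    "sensitive_data",
    "automated_decisions",
    "system_integrations",
    "legal_or_reputational_risk",
}

def governance_layer(attributes: set[str]) -> int:
    """Return the highest governance layer (1-3) a use case must clear."""
    if attributes & LAYER_3_TRIGGERS:
        return 3  # expert review for high-risk cases
    if attributes:
        return 2  # any non-trivial workflow runs the checklist
    return 1      # everyday tasks: basic rules only

print(governance_layer(set()))                                  # 1: meeting notes summary
print(governance_layer({"internal_workflow"}))                  # 2: automated reports
print(governance_layer({"customer_facing", "sensitive_data"}))  # 3: customer-facing agent
```

In practice the trigger list would be agreed with legal and compliance, but the principle stands: oversight scales with risk, not with paperwork.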
5. What Companies Should Do Now
After designing governance that teams can actually use, the next step is to implement it with focus and clarity. Governance is only effective when leaders actively integrate it into operations — not when it remains a standalone policy.
Here’s what organisations should do:
- Step 1 Run a governance audit: Identify gaps in clarity, data, tools, roles, and risk.
- Step 2 Define your AI Governance Role Matrix: Make responsibilities explicit and easy to understand.
- Step 3 Deploy the Governance Checklist: Give teams a simple tool to ensure consistency and compliance.
- Step 4 Train managers and teams: Focus on practical application, not abstract theory.
- Step 5 Review governance quarterly: AI evolves fast; your governance should too.
Governance must be introduced through clarity, roles, and tools. Training and reinforcement are essential for adoption. Quarterly reviews keep governance aligned with AI's rapid pace. AI governance does not slow organisations down; complexity does. When governance is practical, lightweight, and embedded in the way people actually work, it becomes a catalyst for faster, safer, and more strategic AI adoption.
What’s your view? Is your organisation currently closer to over-governance or under-governance? Leave a comment and let’s learn together.