Article by Nadine Soyez
Most organisations have tried AI. Very few have made it stick.
Ask any team how they use AI, and you will hear a mix of answers. Some people swear by it. Others tried it once and moved on. A few have never touched it. This inconsistency is the real story of AI adoption in most companies today. The technology works. The organisation does not work with it.

The missing piece is not enthusiasm or investment. Plenty of both exist. What is missing is the operational layer: the habits, workflows, ownership structures, and validation routines that turn occasional AI use into reliable AI use. Without this layer, AI remains something people do when they remember, not something built into how work happens.
In 2026, the gap between AI experimentation and AI operation will define which organisations move forward and which fall behind. This newsletter explains how to close that gap through practical changes to how work actually happens.
Why habits matter more than training
AI adoption fails when it depends on individual motivation. Training sessions teach people what AI can do. But knowing what AI can do is not the same as using AI consistently. The difference lies in habits. A habit is a behaviour that no longer requires conscious effort. People do not decide each morning whether to check email. They just do it. AI becomes reliable when it reaches the same status: when using AI for specific tasks becomes automatic, not optional.
The mistake many organisations make is treating AI as a general capability. They tell employees to use AI more without specifying when, where, or how. This leads to inconsistent adoption. Some people experiment. Most forget. Instead, identify three to five specific moments in daily work where AI should become the default. For example: before drafting any document, before preparing any analysis, before responding to complex enquiries. These are trigger points. When people reach them, they should automatically turn to AI. Make these moments explicit. Name them. Train for them. Reinforce them. The goal is not general awareness. The goal is embedded behaviour.
Designing an AI tool ecosystem that teams actually use
Tool chaos is one of the fastest ways to kill AI adoption. When every team uses different tools, learns different interfaces, and follows different practices, the organisation cannot build shared capability. Knowledge stays siloed. Support becomes fragmented. Risk increases. A practical AI tool ecosystem should be intentionally small. Most organisations need far fewer tools than they think. The goal is not to cover every possible use case. The goal is to cover the most common use cases reliably, with tools that people actually know how to use.
Start by mapping how work actually flows through the organisation. Where do people create content? Where do they analyse data? Where do they communicate? Where do they make decisions? These are your integration points. AI tools should meet people where they already work, not pull them into separate applications.
Build your ecosystem around three layers:
- Core tools: One or two primary AI assistants that everyone in the organisation learns to use. These handle general tasks: drafting, summarising, brainstorming, answering questions.
- Embedded tools: AI capabilities built into existing software. This includes AI features in Microsoft 365, Google Workspace, CRM systems, or project management platforms. People use these without switching applications.
- Specialist tools: Purpose-built solutions for specific functions such as legal review, financial analysis, or customer service automation. These require deeper expertise and tighter governance.
Standardise training around core tools first. Build shared prompt libraries and templates. Create internal channels where teams can share what works. The power of a small, well-understood toolkit far exceeds that of a large, fragmented one.
How to turn one manual task into an AI-supported workflow
The fastest way to demonstrate AI value is to transform a single, visible task. Not a theoretical use case. A real task that someone does repeatedly, that takes significant time, and where the improvement will be obvious.
Pick carefully. The ideal task has clear inputs, a repeatable structure, and a human who currently spends hours on it. Weekly reports. Meeting summaries. Proposal first drafts. Customer response templates. Research synthesis.
Then follow a simple process:
- Document the current workflow step by step. Write down exactly what happens today. What triggers the task? What inputs are gathered? What decisions are made? What output is produced? Who reviews it? This documentation reveals where AI can add value and where human judgment remains essential.
- Identify the AI-assisted steps. Not every step needs AI. Focus on the time-intensive, repetitive parts: initial drafting, data gathering, formatting, summarisation. These are where AI creates the most immediate relief.
- Build and test the prompt or template. Create a reusable prompt that produces consistent, high-quality output. Test it multiple times. Refine the instructions until the output requires minimal editing. A minimal sketch of this step and the next follows this list.
- Define the human review point. Every AI output needs a checkpoint. Specify who reviews, what they check for, and what counts as ready for use.
- Measure before and after. Track time spent. Track quality. Track satisfaction. These numbers become your evidence for scaling AI further.
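To make steps three and four concrete, here is a minimal sketch of what a reusable prompt template with an explicit review point can look like, written in Python. It assumes a weekly status report as the task; the wording, field names, and helper functions are illustrative, not a prescribed standard.

```python
from datetime import date
from string import Template

# Illustrative reusable prompt for a weekly status report.
# The task, wording, and fields are assumptions to adapt.
WEEKLY_REPORT_PROMPT = Template("""\
You are drafting the weekly status report for $team.
Reporting period: $period.

Inputs gathered and verified by the owner:
$inputs

Write a first draft with three sections: Progress, Risks, Next steps.
Keep it under 400 words. Flag anything you cannot support from the
inputs with [NEEDS SOURCE] rather than guessing.""")

def build_prompt(team: str, period: str, inputs: list[str]) -> str:
    """Step three: fill the stored template instead of retyping a prompt."""
    bullets = "\n".join(f"- {item}" for item in inputs)
    return WEEKLY_REPORT_PROMPT.substitute(team=team, period=period, inputs=bullets)

def record_review(draft: str, reviewer: str, approved: bool) -> dict:
    """Step four: an explicit, recorded checkpoint with a named reviewer."""
    return {
        "reviewer": reviewer,
        "reviewed_on": date.today().isoformat(),
        "approved": approved,
        "word_count": len(draft.split()),
    }
```

In practice the template would live in a shared prompt library and the sign-off in whatever system the team already uses; the structure matters more than the storage.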
One well-designed AI workflow creates momentum. It shows colleagues that AI delivers real results. It builds confidence. And it provides a template that other teams can adapt.
How to redesign work so AI actually saves time
Here is an uncomfortable truth: AI often adds work before it saves work. People spend time learning tools, writing prompts, reviewing outputs, and correcting mistakes. If this phase never ends, AI becomes a burden rather than a benefit. The key insight is that AI only saves time when work is redesigned around it. Simply adding AI to existing processes creates extra steps. Redesigning processes to assume AI creates efficiency.
Consider how this works in practice. A traditional report workflow might involve gathering data, analysing trends, writing insights, formatting the document, and circulating for feedback. Adding AI to this means someone still does all the coordination, just with AI assistance at some steps. A redesigned workflow looks different. The data feeds directly into an AI-ready template. The AI generates the first complete draft. The human reviews and adjusts rather than writes from scratch. The format is standardised to minimise editing. The feedback process focuses on substance, not structure.
Three principles guide effective redesign:
- Move humans from creation to curation. Instead of building from blank pages, people review, refine, and approve AI-generated starting points. This is faster and often produces better results because it focuses human attention on judgment rather than mechanics.
- Standardise inputs ruthlessly. AI works best with consistent, structured inputs. If every request is formatted differently, every output requires heavy editing. Templates, forms, and standard briefing documents dramatically improve AI performance; a sketch follows this list.
- Eliminate steps that AI makes redundant. Many traditional process steps exist because humans needed them: information handoffs, status updates, formatting checks. When AI handles these, do not keep doing them manually. Remove them entirely.
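To sketch the second principle, a standard briefing form can be as small as a structured record that every request must complete before it reaches the AI. The fields below are hypothetical examples, not a recommended template; the value lies in every request arriving in the same shape, so every output needs the same light review.

```python
from dataclasses import dataclass, field

@dataclass
class Briefing:
    """A hypothetical standard briefing form for AI-assisted drafting.

    Field names are illustrative. The discipline is the point: a request
    arriving without these fields is sent back, not improvised around.
    """
    audience: str            # who the output is for
    purpose: str             # what the output must achieve
    key_points: list[str]    # the substance, supplied by a human
    length_limit_words: int  # keeps outputs consistent and easy to review
    exclusions: list[str] = field(default_factory=list)  # what must not appear

    def to_prompt(self) -> str:
        """Render the brief into the same prompt shape every time."""
        points = "\n".join(f"- {p}" for p in self.key_points)
        avoid = "; ".join(self.exclusions) if self.exclusions else "none"
        return (
            f"Audience: {self.audience}\n"
            f"Purpose: {self.purpose}\n"
            f"Key points to cover:\n{points}\n"
            f"Length limit: {self.length_limit_words} words\n"
            f"Do not include: {avoid}"
        )
```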
Redesign takes more effort upfront. But without it, AI remains an add-on rather than an accelerator.
Designing AI workflows with built-in human validation
AI cannot be trusted blindly. This is not a criticism of the technology. This is a design requirement. Every AI-supported workflow needs validation built in from the start, not added as an afterthought. The challenge is making validation efficient. If reviewing AI output takes as long as producing it manually, the workflow fails. The goal is smart validation: checking what matters, trusting what has proven reliable, and investing review effort where risk is highest.
Start by classifying AI outputs by risk level. Low-risk outputs like internal summaries or brainstorming lists need only a quick scan. Medium-risk outputs like client communications or published content require thorough review. High-risk outputs like financial calculations, legal language, or strategic recommendations need expert validation.

Build validation into the workflow visibly. Every AI output should have a clear moment where a human confirms: "I have reviewed this, I take responsibility for it, it is ready to proceed." This can be as simple as a checkbox in a document or as formal as a sign-off in a system. What matters is that validation is explicit, recorded, and understood.
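Expressed as a sketch, the routing rule is simple. The output categories below mirror the examples just given and are assumptions to adapt; the one design choice worth copying is the default, which sends anything unclassified to the highest tier rather than the lowest.

```python
from enum import Enum

class Risk(Enum):
    LOW = "quick scan by the owner"
    MEDIUM = "thorough review against the checklist"
    HIGH = "expert validation and formal sign-off"

# Illustrative mapping from output type to review intensity.
# The categories are assumptions; each organisation defines its own.
RISK_BY_OUTPUT = {
    "internal_summary": Risk.LOW,
    "brainstorm_list": Risk.LOW,
    "client_communication": Risk.MEDIUM,
    "published_content": Risk.MEDIUM,
    "financial_calculation": Risk.HIGH,
    "legal_language": Risk.HIGH,
    "strategic_recommendation": Risk.HIGH,
}

def required_review(output_type: str) -> Risk:
    """Anything unclassified gets the most review, never the least."""
    return RISK_BY_OUTPUT.get(output_type, Risk.HIGH)
```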
Train people specifically in AI review. This is different from general quality control. Reviewers need to know common AI failure modes: confident-sounding but incorrect facts, logical gaps, hallucinated references, cultural insensitivity, subtle bias. They need checklists that prompt them to verify specific elements rather than trusting general impressions.

Finally, create feedback loops. When AI outputs require significant correction, capture why. These patterns inform better prompts, improved training, and smarter validation rules. Validation is not just a safety measure. It is a learning mechanism.
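Both the checklist and the feedback loop can stay lightweight, as in this sketch. The checklist items and log fields are illustrative assumptions; what matters is that reviewers verify specific elements and that corrections are captured in a form someone can later mine for patterns.

```python
from dataclasses import dataclass

# Illustrative checklist items keyed to common AI failure modes.
AI_REVIEW_CHECKLIST = [
    "Verify every fact, figure, name, and date against a source",
    "Confirm that cited references and links actually exist",
    "Check the argument for logical gaps and unsupported leaps",
    "Read for tone, cultural fit, and subtle bias",
]

@dataclass
class Correction:
    """One feedback-loop entry: why an AI output needed fixing.

    Field names are assumptions; any form that can be mined later
    for patterns will do.
    """
    workflow: str       # which AI workflow produced the output
    failure_mode: str   # e.g. "hallucinated reference", "logic gap"
    fix_applied: str    # what the reviewer changed
    prompt_change: str  # how the prompt or template should improve
```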
Making ownership clear
AI creates accountability confusion. When a report is partly written by a person and partly by AI, who owns the quality? When a recommendation emerges from AI analysis, who is responsible if it proves wrong? The answer must be unambiguous: humans always own the outcome. AI is a tool. The person who uses the tool remains accountable for the result. This principle needs to be stated clearly, reinforced regularly, and reflected in how work is structured.
In practice, this means every AI-supported deliverable has a named owner. Not the team. Not the department. A specific person who has reviewed the work and stands behind it. When ownership is diffuse, quality suffers and accountability disappears. It also means managers cannot delegate accountability to AI. If a manager approves work that AI helped produce, the manager remains responsible for the quality. This might seem obvious, but in practice many managers quietly assume AI reduces their oversight burden. It does not. If anything, AI increases the need for managers to understand what their teams produce and how.
What companies should do now
Moving from experimentation to operation requires deliberate action. Here are the practical steps:
- Identify three to five trigger moments where AI should become the default behaviour. Name these explicitly and train teams to recognise them.
- Consolidate your tool ecosystem around core, embedded, and specialist layers. Reduce fragmentation by standardising on fewer, well-understood tools.
- Transform one manual task into a documented AI workflow. Choose something visible, measurable, and repeatable. Use this as a template for expansion.
- Redesign workflows rather than just adding AI. Move people from creation to curation. Standardise inputs. Eliminate redundant steps.
- Build validation into every workflow with explicit review points, risk-based intensity, and captured feedback.
- Make ownership unambiguous. Every AI-supported output needs a named human owner who has reviewed and accepts responsibility.
- Create shared resources such as prompt libraries, templates, and internal channels where teams exchange working approaches.
- Train for judgment, not just usage. Teach people to recognise AI failure modes and validate outputs effectively.
The bottom line
Experimentation has taught organisations what AI can do. Now the challenge is making AI a reliable part of how work gets done. This requires attention to habits, workflows, tools, validation, and ownership. None of these are technical problems. They are operational and behavioural ones. Organisations that treat AI adoption as a change in how people work, not just a change in what tools they use, will move from potential to performance. Those that keep experimenting without operating will wonder why their AI investments never quite deliver.


