I wrote the other day about the shift from AI as personal shortcuts to AI as organisational capability. This week, while preparing some training materials, I realised the conversation becomes much more useful when we stop debating the model and start looking at the components.
Because in public affairs and policy comms, the model is rarely the bottleneck. The stack is.
Most teams already have access to powerful tools. The question is why the output still feels inconsistent, hard to trust, and difficult to operationalise. The answer is usually architectural, not inspirational.
It starts with data sources. Policy updates, internal studies, position papers, CRM notes, SharePoint archives. Most organisations have plenty of material. The problem is that it is fragmented, duplicated, and hard to retrieve under time pressure. If humans cannot find the right source fast, AI will not magically do it either.
Then comes the knowledge layer, which is where many pilots quietly stall. A knowledge layer is not a folder. It is a structured library with tags, version control, consistent terminology, and traceability. It is what turns AI from “sounds plausible” into “shows me the source”. That is why grounding and retrieval matter in regulated or policy-sensitive environments.
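To make the difference between a folder and a knowledge layer concrete, here is a minimal sketch of what a single item in such a library might look like. The field names and sample entries are illustrative assumptions, not a reference to any specific product:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeItem:
    title: str
    body: str
    source_url: str  # traceability: every item points back to its source
    version: int = 1  # version control: supersede, never silently overwrite
    tags: list[str] = field(default_factory=list)  # consistent terminology

def retrieve(library: list[KnowledgeItem], tag: str) -> list[KnowledgeItem]:
    """Return only items carrying the tag, so answers can cite their sources."""
    return [item for item in library if tag in item.tags]

# Hypothetical library contents, for illustration only.
library = [
    KnowledgeItem("AI Act position paper", "…", "https://example.org/ai-act", 3, ["ai-act", "position"]),
    KnowledgeItem("Q3 internal study", "…", "https://example.org/q3", 1, ["research"]),
]
hits = retrieve(library, "ai-act")
```

The point is not the code itself but the contract it encodes: anything the AI retrieves carries its tag, its version, and a link back to the source it came from.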
Only after that does the choice of AI model really matter. ChatGPT, Copilot, Gemini. They are interfaces. The quality difference often comes from what they can safely access, not from which logo sits on the screen.
The next component is workflow design. This is the moment AI becomes repeatable. Turning a raw update into a structured briefing. Turning a report into an executive summary. Turning a policy paper into content assets. One input, multiple outputs. That is where speed improves without losing precision.
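The one-input, multiple-outputs idea can be sketched as a single pipeline call. In a real deployment each formatter would call a model against the knowledge layer; here they are plain string functions, and all names are illustrative:

```python
def to_briefing(update: str) -> str:
    return f"BRIEFING\nWhat happened: {update}\nWhy it matters: [analyst adds context]"

def to_summary(update: str) -> str:
    return f"EXECUTIVE SUMMARY: {update[:120]}"

def to_social_post(update: str) -> str:
    return f"New development: {update[:80]} (full briefing available on request)"

def repurpose(update: str) -> dict[str, str]:
    """One raw input, multiple structured outputs, one repeatable call."""
    return {
        "briefing": to_briefing(update),
        "summary": to_summary(update),
        "social": to_social_post(update),
    }

outputs = repurpose("Commission publishes draft implementing act on transparency reporting.")
```

Because the transformation lives in one place, the formats stay consistent no matter who runs it, which is what makes the speed gain safe.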
Workflow orchestration connects those steps across tools. Without it, teams end up with tool chaos and dashboard fatigue. With it, the logic becomes simple: signal, qualification, AI processing, structured output, human decision.
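That five-step logic can be written down almost literally. The sketch below assumes a relevance score on each incoming signal and a stand-in for the model call; thresholds and field names are assumptions for illustration:

```python
def qualify(signal: dict) -> bool:
    # Qualification gate: only sufficiently relevant signals reach AI processing.
    return signal.get("relevance", 0.0) >= 0.7

def ai_process(signal: dict) -> dict:
    # Stand-in for a model call: turns a raw signal into a structured draft.
    return {
        "topic": signal["topic"],
        "draft": f"Draft briefing on {signal['topic']}",
        "status": "needs_review",  # human decision is the final step, not the model
    }

def orchestrate(signals: list[dict]) -> list[dict]:
    """signal -> qualification -> AI processing -> structured output -> human decision."""
    review_queue = []
    for signal in signals:
        if qualify(signal):
            review_queue.append(ai_process(signal))
    return review_queue

review_queue = orchestrate([
    {"topic": "CSRD delegated act", "relevance": 0.9},
    {"topic": "unrelated press note", "relevance": 0.2},
])
```

Everything that survives the pipeline lands in a review queue marked `needs_review`; nothing ships without the human decision at the end.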
Then there is the agent layer. Not a chatbot, but background workflows that monitor continuously, draft first versions, and escalate when needed. A monitoring agent. A briefing agent. A content repurposing agent. Their value is not autonomy for its own sake. Their value is reducing friction between signal and decision.
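The monitor-draft-escalate pattern is simple enough to sketch. This is a toy model of the behaviour, not any vendor's agent framework; the threshold and relevance scores are assumed values:

```python
class MonitoringAgent:
    """Background agent sketch: observes a feed, drafts first versions
    continuously, and escalates to a human only above a threshold."""

    def __init__(self, escalation_threshold: float = 0.8):
        self.escalation_threshold = escalation_threshold
        self.drafts: list[str] = []
        self.escalations: list[str] = []

    def observe(self, item: str, relevance: float) -> None:
        draft = f"First draft: {item}"
        self.drafts.append(draft)  # always drafts quietly in the background
        if relevance >= self.escalation_threshold:
            self.escalations.append(draft)  # surfaced for a human decision

agent = MonitoringAgent()
agent.observe("Minor consultation reopened", 0.3)
agent.observe("Plenary vote scheduled on key file", 0.95)
```

The design choice is the asymmetry: the agent does all the low-value drafting, but only the escalations interrupt a person, which is the friction reduction the paragraph above describes.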
And finally, the most important component does not disappear. Human decision. Validation discipline. Strategic choice. Approval. In policy comms, you cannot outsource judgement. AI supports. Humans decide.
A useful question for any organisation is this: which layer is actually slowing you down today? Data discoverability. Knowledge structure. Workflow repeatability. Or governance and decision cycles.
If you fix the right layer, AI becomes boring in the best way. Reliable, repeatable, and operational.
Where do you see the bottleneck in your organisation right now?
This article was originally posted by Jesús Azogue on LinkedIn