Lately I’ve been noticing a quiet shift in how AI shows up in daily work. Not in the quality of the answers, but in the kind of capabilities. We are moving from AI that responds to AI that can take action.
Some of the recent updates around Claude, and earlier experiments like OpenAI’s Operator, point in the same direction: the assistant is no longer only a chat interface. It can interact with tools, click, type, search, organise, and execute steps while you stay focused on higher value work.
That does not mean handing over responsibility.
In fact, the more “agentic” AI becomes, the more important judgement becomes, as Philip Weiss put it well in this article. The goal is not to outsource decisions. The goal is to outsource repetition. Think of it as a new productivity layer. You keep control over the moments that matter, but you delegate the mechanical steps that drain attention.
A small example made this click for me.
I set up a recurring task in ChatGPT that monitors topics I care about in the Brussels bubble and sends me a digest every Monday morning. It’s the kind of monitoring that used to cost hours of research, scanning, and synthesis. Now it arrives automatically. And the most interesting part is that I can tune it simply by talking to the AI. I can adjust the scope, the sources, the focus, the format, and the outputs, without rebuilding the system.
It reminds me of Google Alerts and RSS aggregators from years ago, but the difference is depth and adaptability.
Alerts were rigid. You set keywords and hoped the feed was relevant. RSS was powerful but required a lot of manual curation. This new version behaves more like a living assistant. You don’t configure it once. You steer it continuously.
That is a meaningful shift for public affairs and policy communication teams.
Brussels is not short on information. It is drowning in it. The bottleneck has never been access. It is filtering, prioritising, and turning raw signals into something usable for decisions.
If AI can automate the repetitive layers of that pipeline, the impact is structural. Not because it saves a few minutes, but because it changes how teams operate.
Instead of monitoring being a weekly burden, it becomes background infrastructure.
Instead of “someone should look into this”, you start the day with a curated view of what matters, tailored to your agenda, and you can refine it over time.
This is also why the “computer use” features matter. The assistant doesn’t just summarise what you paste. It can potentially perform the repetitive steps that sit around knowledge work: collecting sources, sorting them, extracting the relevant parts, formatting a briefing, preparing a first draft, populating a tracker.
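To make those repetitive steps concrete, here is a minimal sketch of what a monitoring digest pipeline looks like in code. Everything in it is an illustrative assumption: the `Item` structure, the keyword-count scoring, and the sample sources are invented for the example, and this is not how ChatGPT’s recurring tasks or Claude’s computer-use features work internally.

```python
# Hypothetical sketch of a monitoring digest: filter, rank, and format items.
# Scoring rule and data shapes are illustrative assumptions, not a real product API.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    source: str
    text: str

def score(item: Item, keywords: list[str]) -> int:
    """Crude relevance score: count keyword occurrences in title and body."""
    blob = (item.title + " " + item.text).lower()
    return sum(blob.count(k.lower()) for k in keywords)

def build_digest(items: list[Item], keywords: list[str], top_n: int = 3) -> str:
    """Keep only relevant items, rank them, and format a short briefing."""
    ranked = sorted(items, key=lambda i: score(i, keywords), reverse=True)
    kept = [i for i in ranked if score(i, keywords) > 0][:top_n]
    lines = ["Monday digest:"]
    for i in kept:
        lines.append(f"- {i.title} ({i.source})")
    return "\n".join(lines)

items = [
    Item("Council debates AI Act implementation", "EU press", "ai act timeline"),
    Item("Unrelated sports news", "wire", "football results"),
]
print(build_digest(items, ["AI Act"]))
```

The point of the “steering by conversation” shift is that the parts hard-coded here (keywords, ranking, format) become things you adjust by asking, not by rewriting the script.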
And when those steps become cheaper, the value of human work shifts upward. Less time spent on building the raw material, more time spent on judgement, strategy, and positioning.
The real question for 2026 is not whether teams will use AI. It’s how they will design the boundary between automation and decision making.
What do you want the assistant to do autonomously? What requires human approval? What is safe to automate? What must remain a judgement call?
For me, monitoring is an obvious starting point. It is repetitive, time-consuming, and it benefits from continuous tuning. It is also a perfect example of how AI can work in the background while humans keep control of meaning.
We are not replacing the strategist. We are removing the friction around the strategist.
Curious how you see it. What is the first workstream in your organisation that should become “background automation”, and what are the moments you would never delegate?
This article was originally posted by Jesús Azogue on LinkedIn