The machines have arrived. Are we ready for a world where judgement is the only thing that counts?
In the Terminator films, Judgment Day is the moment the machines become intelligent enough to destroy us. Great cinema. Even better anxiety. For forty years it shaped how we talked about AI: the machines are coming, and when they arrive, we are finished.
Here is what actually happened. The machines got intelligent. Remarkably, stunningly intelligent. And they did not destroy us. They did something far more uncomfortable. They revealed the limits of intelligence and the importance of judgement.
They revealed how much of what passes for professional expertise was never judgement in the first place, but the machinery around judgement: research, synthesis, drafting, monitoring, formatting, reporting. Useful work. Necessary work. Often expensive work. But not the same thing as deciding what matters. Think of it this way: having a brilliant memory used to be a professional advantage. Then Google arrived. The advantage shifted from knowing facts to recognising patterns, seeing what connects them, and thinking strategically about what they mean. AI is doing the same thing, but to the entire knowledge economy at once.
The real thing, the scarce thing, is judgement. The moment when all the research is done and someone has to decide what it means. The moment when the data says one thing and the room says another. The moment when a technically perfect recommendation is about to produce a strategically disastrous outcome, and someone senses it before it happens, not after.
For most of human history, research and judgement were bundled together because gathering intelligence was so difficult. The person who could find the information was often the same person who had to interpret it. AI has removed much of that bottleneck. Not completely, and not in every context. But enough to change the economics of knowledge work. The old bundled package of intelligence plus judgement is coming apart. And when it does, something uncomfortable is exposed: how many professionals were selling intelligence while calling it judgement.
Judgement Day is here. It is just not the one we were warned about. The machines have not come to take our jobs. They have come to show us which part of our job was actually ours.
This is not just a Brussels story, although I will use the Brussels EU ecosystem, the world I have worked in for over twenty-five years, as my primary example. It applies across almost every knowledge profession. Lawyers. Consultants. Analysts. Communicators. If most of your working week is spent turning information into a more useful form of information, AI is already doing it faster. The professionals who will thrive are the ones whose real value was never the information processing. It was the judgement that sat on top.
The Ladder
There is a hierarchy in knowledge work: data becomes information, information becomes intelligence, and intelligence, applied with skill and experience, generates judgement.
Julien Bek, a partner at Sequoia Capital, sharpened this into a distinction so clean it stopped me in my tracks. In his article “Services: The New Software,” he wrote that in software engineering, writing code is mostly intelligence while knowing what to build next is judgement. His larger point: AI is crossing the threshold where it can handle a growing share of intelligence work autonomously. It has not crossed the threshold on judgement.
Map that onto any profession. In law, researching precedent is intelligence; knowing which argument will move a specific judge is judgement. In medicine, synthesising test results is intelligence; deciding what to tell a frightened patient, and when, is judgement. In European public affairs, drafting a position paper is intelligence; knowing which argument to lead with, for which audience, at which political moment, sensing that a Commissioner is about to shift before anyone says it publicly, that is judgement.
The ladder has not changed. AI now climbs far more of it than before. The higher you go, the more stubbornly human the work becomes.
The Anatomy of Judgement
Judgement is a composite of three distinct capacities.
Critical thinking: the ability to interrogate intelligence. To question data, spot buried assumptions, and ask the question no one else has asked. Not everything AI produces is correct. Not everything correct is relevant. Critical thinking separates signal from noise.
Instinct: pattern recognition operating below conscious analysis. I felt this when I learned that Manfred Weber and the EPP Group were pushing for a free Interrail ticket for every young European on their eighteenth birthday. I loved the idea. But where most practitioners would have started with a position paper and a policy document, something told me the approach needed to be different. This was not primarily a legislative exercise. It was a campaign waiting to happen. It needed young people’s energy behind it, not just a budget amendment in committee. That instinct, shared with Weber’s team, who had the political courage to run with it, helped shape what became DiscoverEU: a campaign that mobilised young Europeans across the continent and helped build the momentum to deliver the legislation. No data model told me to reframe a policy proposal as a youth movement. That came from years of watching what actually moves outcomes in Brussels.
Emotional intelligence: the most underrated. Not sentimentality. The ability to read a room, sense what an audience needs to hear, recognise when a technically correct message will land catastrophically badly, and know when to push, hold, or leave something unsaid.
These three combine into orchestration: directing abundant intelligence toward the right goal, with the right people, at the right moment, in the right tone. That is judgement. And it remains stubbornly human.
Philippe Borremans, a crisis communication specialist who has advised the WHO and the European Council, put this well in his Wag The Dog newsletter: judgement is built on something data cannot replicate, the experience of having been wrong before and understanding why.
What This Looks Like in Practice
Brussels produces more policy intelligence per square kilometre than almost anywhere on earth. It does not suffer from a shortage of intelligence. It suffers from an excess of it, and from the illusion that more information automatically produces better decisions.
AI will not fix that. Used without judgement, it will make it worse: more reports, more drafts, more apparent sophistication, while the core problem goes untouched.
This is the red pill moment. AI is forcing every professional to confront which side of the intelligence-judgement line they actually occupy. In Brussels, I see three groups emerging. I suspect you will recognise them in your own profession.
The Strategists Without Speed. Decades of judgement. Political instinct. The ability to read institutions. But operating far below their potential because they are still doing by hand what AI can compress in minutes. Ferraris stuck in first gear.
The Operators Without Direction. Technically fluent. Fast. Impressive output. But dependent on someone else to tell them what matters. Without judgement, speed is not strategy.
The Integrated Practitioners. Rare, and becoming disproportionately valuable. They combine genuine experience with the ability to deploy AI intelligently, not just to generate text but to design better research, accelerate insight, and create compounding strategic advantage. Force multipliers.
Every profession will need more of them. And here is the uncomfortable corollary: many AI tools built for specific professions are wrappers around the same underlying models. Before investing heavily, ask: is this a durable advantage, or just a temporary interface? Reading the direction of the technology is itself a form of judgement.
The Hyperthinking Imperative
This is why Hyperthinking matters more now than when I first wrote about it.
In Hyperthinking, I argued that the ability to shift paradigms, think across disciplines, and resist the comfort of a single framework is what separates people who thrive in volatile environments from those who merely endure them. Today that argument applies to the mechanics of professional life itself.
When the tools change every six months, the decisive advantage is not mastery of one toolset. It is the ability to rebuild your mental models faster than the environment is changing. That is what I mean by Hyperlearning: the deliberate, continuous process of updating how you think rather than defending what you already know.
The people who will thrive are not the ones who become experts in one interface. They are the ones who develop a way of learning that lets them adapt to whatever comes next.
The Generation Question
Philippe Borremans started his career at Porter Novelli in Brussels in 1994, printing press releases on paper and stuffing them into envelopes. But the envelopes did not build his career. The room did. As the only junior in a small agency, he sat in on crisis meetings almost by accident, watching experienced practitioners make calls under pressure. That proximity to judgement made the difference.
His story raises a question every profession now faces: how does the next generation build judgement when the entry-level intelligence work is being automated?
The mechanical apprenticeship is compressing. The judgement apprenticeship, getting into rooms where decisions are made, studying why experienced people call it the way they do, has to be pursued deliberately.
For those early in their careers: do not compete with AI on intelligence. Get into the rooms where judgement happens. Volunteer for the difficult situation, not the monitoring report. Use AI as infrastructure, not as identity.
For those with experience: if you are not developing real fluency with these tools, you are operating at a fraction of your potential. Your judgement is being under-leveraged at the moment when it has never been more valuable.
A Beginner’s Mind…
In 1981, Paul Adamson opened one of the first public affairs consultancies in Brussels. There was no playbook. The institutions were still taking shape. He and others of that generation built as they went, defining the rules while the game was still emerging. What made them effective was curiosity in the face of uncertainty.
Every profession is now entering its own version of that moment.
Judgement Day, the real one, is not a catastrophe. It is a clarification. The machines did not come to end us. They came to show us what we are made of. And for the people whose value was always judgement, that is the best news in a generation.
When was the last time you changed your mind about how you do your job? Not adjusted. Changed.
If you cannot answer that, you are not ready for what is coming.
This piece was inspired by Philippe Borremans’ Wag The Dog newsletter, Julien Bek’s “Services: The New Software” for Sequoia Capital, and a conversation with Paul Adamson. The argument, and any errors in it, are obviously mine.
This article was originally posted by Philip Weiss on LinkedIn