The brutally honest framework for determining when human expertise truly matters versus when AI is genuinely good enough
Not all content work deserves saving from AI automation – here’s how to identify what’s actually worth fighting for.
Image: an artisanal sourdough loaf with a scored, flour-dusted crust beside a plastic-wrapped, factory-sliced white loaf.

Last week, a potential client asked me a question that perfectly captures our current professional situation: “Why should I pay you £500 for content strategy when I can get ChatGPT to write it for £20?”
It wasn’t the first time I’d faced this question.
After 30+ years in digital transformation, I’ve seen every technological wave bring its own existential crisis for content professionals.
The web was going to make editors obsolete. Then content management systems would replace writers. Then automated translation would eliminate localisation experts.
But this time feels different.
Not because the technology is more advanced (though it is), but because the gap between adequate and exceptional has never been harder to articulate.
When I started my career in the Netherlands, crafting content for early government websites, the distinction between amateur and professional work was immediately visible. Now that line has blurred beyond recognition.
Let’s be honest about where we stand.
The ground we should concede
Some of what we’ve been charging premium rates for absolutely can be done adequately by AI.
Not exceptionally, not brilliantly, but adequately enough that many clients genuinely can’t tell the difference.
I’ve watched content professionals insist that AI can’t possibly understand strategic contexts or craft nuanced messaging.
Then I’ve seen those same professionals grudgingly admit that yes, actually, that AI-generated strategy document does hit most of the key points they would have made.
We must acknowledge several uncomfortable truths:
First, standard blog posts following conventional formats can be drafted quickly and adequately by AI.
If the goal is to maintain SEO visibility with regular content about straightforward topics, AI can handle it with minimal human oversight.
Second, routine content transformations – turning technical documentation into more accessible language, for instance – can be done remarkably well by AI.
It won’t be perfect, but it will be good enough for many business contexts.
Third, first drafts of almost anything can now be generated in seconds rather than hours.
The differential in thinking time for initial concept exploration has collapsed dramatically.
AI has narrowed that once-vast gulf between amateur and professional content. For many routine content needs, the gap has effectively disappeared.
If we refuse to admit this, we become the equivalent of medieval scribes insisting that printing presses could never capture the soul of true calligraphy.
Technically correct, perhaps, but ultimately irrelevant to most people’s needs.
If all you need is content that looks professional and covers the bases, you shouldn’t pay me.
The territory worth defending
Not everything should be conceded, however.
There are domains where human expertise remains not just valuable but essential. The trick is being honest about where those boundaries actually lie.
Content that shapes crucial life decisions or involves significant emotional investment still demands human oversight.
When I worked on government services like Universal Credit, the impact of content choices was measured not in engagement metrics but in whether vulnerable people could access essential financial support.
Content involving specialist domains where factual accuracy is crucial requires expert human judgement.
I’ve watched AI confidently generate completely incorrect information about legal processes, medical procedures, and financial regulations – all delivered in the same fluent, authoritative tone as its accurate output.
Content that defines brand voice still needs human hands on the tiller.
AI can mimic existing voices, but establishing a new one remains a deeply human creative challenge.
Having worked with organisations from the Metropolitan Police to small charities, I’ve seen how critical this authentic voice becomes in building trust.
Perhaps most importantly, determining what not to say – the strategic choices about focus and emphasis – remains a human judgement call that AI simply cannot make.
I recently reviewed an AI-generated content strategy that included every possible approach without the courage to commit to any particular direction.
The client reality we must accept
Here’s where content professionals often go wrong: we assume clients care about the same quality markers we do.
Many simply don’t.
For many clients, the difference between adequate and exceptional content isn’t worth the price differential.
They understand that AI-generated content may be 80% as good as what a skilled human would produce, but if it costs 20% as much and can be delivered in hours rather than weeks, that’s a trade-off they’re willing to make.
This isn’t a failure of client education. It’s a rational economic choice.
If you’re selling bread to people who just need a sandwich, they won’t pay extra for artisanal sourdough unless you can make them care about the difference.
I’ve worked with government departments that would spend months crafting the perfect guidance only to discover that most users just wanted the basic information as quickly and clearly as possible.
Some content absolutely deserves deep investment in quality; much doesn’t.
Clients aren’t stupid for questioning whether they need to pay premium rates for content expertise.
They’re responding to a technological shift that has genuinely changed the equation.
The framework for moving forward
So where does this leave content professionals? Not in crisis, but in need of a clear-eyed recalibration.
Here’s a framework I’ve developed for determining when to fight for human expertise and when to concede ground to AI:
When content directly impacts critical human outcomes – health, financial wellbeing, safety, legal rights – the cost of errors justifies human expertise.
I’ve seen firsthand how content mistakes in government services can literally leave vulnerable people without food or housing.
When brand perception and trust are primary concerns, the subtle judgement calls that shape voice and tone still benefit from human expertise.
During my work with the Metropolitan Police on sensitive crime reporting, these nuances made the difference between victims coming forward or staying silent.
When content requires genuine strategic choices and prioritisation rather than covering all possible bases, human judgement remains essential.
At the Cabinet Office, I saw how strategic focus in government communications directly impacted policy effectiveness.
When factual verification is crucial and domain knowledge is needed to spot subtle errors, human expertise remains invaluable.
Working on financial services content, I’ve spotted AI-generated inaccuracies that could have created genuine legal liability if published.
For almost everything else, we should be honest: AI plus light human oversight is probably good enough.
Not perfect, but good enough for the intended purpose.
This isn’t about protecting our jobs or professional identity. It’s about ensuring content serves its purpose properly in contexts where it genuinely matters.
Sometimes that means insisting on human expertise; other times it means accepting that AI can handle it.
The future of content expertise
Those of us who’ve built careers around content expertise are facing a fundamental shift.
Our value is increasingly concentrated in judgement rather than production, in strategic decision-making rather than execution.
The content professionals who will thrive in this new landscape won’t be those who insist nothing has changed.
They’ll be those who redefine their expertise around what truly requires human judgement, who can articulate when and why human oversight matters, and who can guide clients through this shifting terrain.
I’ve spent decades adapting to technological change in content creation, from hand-coded HTML to sophisticated content management systems.
Each wave eliminated some aspects of our work while opening new possibilities. This one is no different in kind, only in speed and scope.
When that potential client asked why they should pay me £500 rather than give £20 to ChatGPT, my answer wasn’t that AI couldn’t possibly do what I do. It was to walk him through where adequate output would fall short in his particular situation.
He hired me the next day.
Not because AI couldn’t generate something adequate, but because he realised the difference between adequate and right was worth paying for in his particular context.
That’s the honest conversation we need to be having – not defending all content expertise as equally essential, but helping clients understand when the difference between adequate and exceptional actually matters.