Near the beginning of my journalism career, I worked as a sub-editor, production assistant, and production editor. The sub-editor’s eye never really leaves you, and that’s become a problem in the age of AI because I’m increasingly noticing how much written content looks like it’s been subbed by the same person.
This ‘person’ seems obsessed with words like ‘fluff’, phrases like ‘rising above the noise’, and a very particular form of sentence structure. The issue, as I see it, is conformity. If everything reads the same, how do brands, er, rise above the noise, as it were?
This sameness isn’t accidental. AI has become embedded in content workflows very quickly, and for good reason. Most marketing and editorial teams are now using some form of generative AI to draft copy, explore ideas, rewrite tone, or summarise long documents. The efficiency gains are obvious and tempting: content gets started faster, campaigns move quicker, and blank-page paralysis is far less common than it used to be.
But speed has a side effect. There is simply more content everywhere. Much of it is polished, competent, and oddly interchangeable. Which brings us to the more important question: not how we’re using AI for written content creation, but why.
Start with intent when using AI for content writing
AI makes content easier to produce, which is a problem if the underlying purpose is vague. If AI is used simply because it’s available, or because everyone else is using it, the result is more stuff, not better stuff.
The teams that tend to use AI well are clear on some fundamentals before they open a tool:
- who the content is for
- what problem it’s meant to solve
- when and where it will be used
- why it matters
Only once those questions are answered does AI become genuinely useful as a way to support execution. An editorial team might use AI primarily for research, outlining, or summarisation. A communications team may use it to adapt core messages across formats. A commercial content team may use it to scale certain types of writing.
All of those are valid use cases, but they require different levels of oversight and judgement. Problems usually arise when the tool dictates the approach, rather than the other way around.
Be honest about what ‘AI-written’ content means
One of the most unhelpful phrases in modern content discussions is ‘AI wrote this.’ In reality, there’s a meaningful difference between:
- content published straight from a model with no review
- content lightly edited after generation
- content where AI helped draft or explore ideas, but humans shaped the final output
The safest and most effective uses of AI sit firmly in the middle: AI as an assistant or drafting aid, paired with strong human oversight. That means proper editing, fact-checking, and decisions about structure, emphasis, and tone.
Where organisations get into trouble is when AI is allowed to run on autopilot. We’ve already seen high-profile examples of AI-generated content being published with basic errors, invented facts, or misleading claims, not because the technology is malicious, but because no one took responsibility for the final output. If there’s no clear human accountability at the end of the process, the risk climbs very quickly.
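To make that concrete, here’s a minimal sketch of what a hard accountability gate might look like in a publishing workflow. It’s illustrative only: the Draft type and publish function are hypothetical, and the point is simply that nothing ships without a named human attached.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """A hypothetical content item moving through a publishing workflow."""
    body: str
    ai_assisted: bool = False
    fact_checked: bool = False
    reviewed_by: Optional[str] = None  # the named editor who signed it off

def publish(draft: Draft) -> None:
    """Refuse to publish anything without a named, accountable human."""
    if draft.reviewed_by is None:
        raise PermissionError("No editor has signed off on this draft.")
    if draft.ai_assisted and not draft.fact_checked:
        raise PermissionError("AI-assisted drafts must be fact-checked first.")
    print(f"Published. Accountable editor: {draft.reviewed_by}")

draft = Draft(body="...", ai_assisted=True)
# publish(draft)  # would raise: no one has taken responsibility yet
draft.fact_checked = True
draft.reviewed_by = "A. Editor"
publish(draft)  # succeeds, with a human name attached to the output
```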

Speed, cost, and quality in AI-assisted content writing
There’s a useful mental model that applies neatly to AI-assisted writing: the old trade-off between speed, cost, and quality. AI can help content move quickly. It can also reduce production costs. But doing both at once while maintaining high standards is difficult. If material is generated cheaply and published fast with minimal oversight, the likelihood of errors, bias, or reputational damage increases sharply.
On the other hand, teams that combine AI with proper editorial checks, subject-matter expertise, and review processes can produce strong work, but they still need to invest time and care.
In practice, AI is most effective when it accelerates routine tasks and frees humans to focus on judgement, clarity, and decision-making rather than volume.
Where AI genuinely helps writing teams
Used carefully, AI can be genuinely helpful across the writing and publishing process. It’s particularly adept at early-stage thinking: generating angles, outlining structures, and helping teams move past the blank page. It can also help expand rough drafts into fuller pieces, provided everything is reviewed, rewritten where necessary, and fact-checked.
AI is also useful for structural and operational tasks. Improving headings, clarifying structure, suggesting summaries, adapting tone for different audiences, or repurposing long-form work into briefings or platform-specific formats are all sensible, lower-risk applications when the source material is human-created.
Another valuable use is summarisation. Turning long reports, research papers, interviews, or meeting notes into usable briefs can save significant time, as long as someone validates the output and ensures nuance hasn’t been lost.
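As a rough sketch of what that looks like in practice, the snippet below drafts a brief and deliberately stops short of treating it as finished. It assumes the OpenAI Python client and an example model name; any comparable tool works, and the non-negotiable part is the human check at the end.

```python
from openai import OpenAI  # pip install openai; any comparable LLM client would do

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

def draft_brief(source_text: str, max_words: int = 200) -> str:
    """Ask the model for a first-pass brief. The result is a draft, not a deliverable."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use whichever model your team has approved
        messages=[
            {"role": "system",
             "content": ("Summarise the document faithfully. Do not add facts. "
                         "Mark any claim you are unsure of with [CHECK].")},
            {"role": "user",
             "content": f"Summarise in at most {max_words} words:\n\n{source_text}"},
        ],
    )
    return response.choices[0].message.content

# The human step is the point: read the brief against the source, resolve
# every [CHECK] flag, and restore any nuance the model has flattened.
with open("long_report.txt") as f:
    print(draft_brief(f.read()))
```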
Across all these cases, the key point is the same: the thinking remains human-led. AI accelerates the work, but it doesn’t decide what’s worth saying.
Why culture matters as much as policy when using AI
Many organisations jump straight to governance when rolling out AI: policies, rules, and restrictions. Those are necessary, but they’re not sufficient.
Teams also need space to experiment, and permission for some of that experimentation not to lead to published output. Some of the most valuable learning comes from discovering what doesn’t work. Clear guardrails help here. People should know which tools are approved, how data can be used safely, and what level of sign-off is required before anything goes live. But those guardrails should channel curiosity, not stifle it.
Interestingly, some of the most productive AI experimentation happens at junior levels, where people are often more willing to try new approaches. With the right oversight, that experimentation can benefit the whole organisation.
Ethics aren’t optional in AI-assisted content writing
All of this rests on a few non-negotiables. Written content needs to be accurate, fair, and respectful. AI systems can reproduce bias, invent facts, or present opinion as certainty, so human review is essential. Transparency also matters. In some contexts, audiences deserve to know when AI has played a role in creating what they’re reading.
Most importantly, AI should support authenticity. Trust is built through reliable, thoughtful writing that displays clear ownership: someone has to be accountable for what goes out under a name or brand.
Writer’s instinct is a superpower
AI isn’t a threat to writing; on its own, it’s a shortcut to quantity, not quality. It’s a powerful set of tools that can help teams work faster and more effectively, if they’re used with care. The organisations that benefit most are being selective: using AI to speed up routine work and explore ideas, while keeping editorial standards firmly human.
The web is filling up with AI-generated material, much of it competent but forgettable. That creates an opportunity. The tools have changed, but the writer’s – and sub-editor’s – instinct hasn’t. Clarity, judgement, and a flair for crafting words still matter. And content that sounds like everyone else’s will always struggle to inspire its reader, however quickly it was produced.