I was one coffee, one protein shake and one coconut water deep, fingers twitching over the keyboard, when the dreaded thought struck me - was I even needed anymore?
(Disclosure: I write for work and I love doing it. I love reading, analyzing, getting into the rabbit holes of minor details. The whole process of comprehending and creating something meaningful. And so, yes, I use AI (extensively). I use it for research, for generating ideas and brainstorming. It’s phenomenal at those.)
Content Writing has been hijacked. Not by the usual suspects (no Mad Men execs in suits whispering about ad-spend over whiskeys), but by something colder and stealthier: an AI that never sleeps, never questions (unless asked to) and never drinks. Initially, everyone hoped it would be an assistant, but we all know it’s become a relentless content-churning beast.
In 2022, CIFS expert Timothy Shoup estimated that 99% to 99.9% of the internet’s content would be AI-generated by 2025 to 2030. That raises a question: even if we could verify what’s human-made (say, with blockchain certificates), will future audiences even want human-created content?
Imagine you want to watch The Silence of the Lambs, but this time it’s starring Jack Nicholson. Which, by the way, would be awesome. But think about it: if tools like OpenAI’s Sora or Alibaba’s newest video models can already generate video from scratch, it’s only a matter of time before you can “remaster” existing media in ways unimaginable.
So the question was this: Should AI be an assistant or a creator? Or is it already too late to decide?
The Promise of the Machine
At first, it seemed harmless enough. Tech leaders promised that AI meant efficiency - the marketing world’s most sacred buzzword. It would take the grunt work off our hands, they said. Just feed it a few keywords, and it would spit out a blog post, a Twitter thread, a perfectly optimized SEO article.
I tested it myself, as we all do, one way or another.
“Write a product launch blog post for an AI-powered note-taking app.”
In a flat four seconds, the machine spat out 1,281 words of corporate enthusiasm so lifeless it made a rom-com feel like a Kubrick.
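(If you’re curious, the test itself is trivial to reproduce. Here’s a minimal sketch using the OpenAI Python SDK - the model name and settings are illustrative assumptions, not a record of exactly what I ran.)

```python
# Minimal sketch of the experiment: one prompt, one generated post.
# Assumes OPENAI_API_KEY is set in the environment; the model name
# below is an illustrative choice, not necessarily what I used.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model
    messages=[
        {
            "role": "user",
            "content": "Write a product launch blog post for an AI-powered note-taking app.",
        }
    ],
)

print(response.choices[0].message.content)
```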
But it worked. It was technically coherent and definitely fast. It read as if it had lifted phrases from every generic press release ever written. And oh, the buzzwords and clichés! The draft was full of overused, hacky lines (“game-changer”, “revolutionize”, “disaster waiting to happen”, etc.) and bland metaphors.
It had no personality.
The content was on-topic, grammatically correct, and required no coffee, no salary. In late 2024, eMarketer reported that 81% of B2B marketing teams now use generative AI tools, up from 72% just a year before. It’s easy to see why companies find this appealing. Why pay a human writer for days of work when an AI can produce passable text in seconds? This is the great lure of AI in content creation: insane speed and volume. Entire marketing departments are being made to do more with less - more posts, more ads, more blogs, but far fewer people.
But does sheer volume mean anything if the content lacks depth and originality? AI-written prose often uses certain phrases over and over to the point of monotony. It tends to stick to a formula. Templated AI content is redundant and repetitive; you can spot it on sight, because it reads like something you’ve seen a hundred times before.
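(You can even quantify that monotony, crudely. Here’s a toy sketch of my own - not a real AI detector - that surfaces the most repeated three-word phrases in a draft; templated copy tends to light up.)

```python
# Toy "monotony" check: count repeated three-word phrases in a draft.
# A crude heuristic of my own, not a reliable AI-content detector.
from collections import Counter

def top_trigrams(text: str, n: int = 5) -> list[tuple[str, int]]:
    words = text.lower().split()
    trigrams = [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]
    return Counter(trigrams).most_common(n)

draft = open("draft.txt").read()  # the AI-generated draft to inspect
for phrase, count in top_trigrams(draft):
    print(f"{count}x  {phrase}")
```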
The Great Marketing Hoax
Chasing that promise of more content for less cost, businesses fell for what I’d call a marketing hoax. In their quest to squeeze more productivity out of fewer people, companies started treating AI as a panacea. The goal is to feed the bottom line. We’ve seen content teams downsized, budgets slashed, and writing staff gutted.
AI doesn’t research. It scrapes. It doesn’t analyze. It assembles. It doesn’t form opinions. It regurgitates them. Yes, it can scan millions of articles and stitch together a summary, but can it have an original viewpoint? Can it connect the dots between a niche market trend and a cultural shift? Can it reveal an unspoken pain point before the audience even realizes it themselves?
No. Because it doesn’t think, it just processes. All that requires human judgment, the kind that comes from real-life experiences and intuition that no algorithm has (yet). AI, in its vacuum of mechanical processing, cannot (so far) replicate the originality born of a human mind.
Also, AI doesn’t know truth from falsehood. It can sound authoritative, but it has no conscience or real-world awareness behind its words. This has led to some embarrassing and harmful mistakes. Just a month ago, a BBC experiment showed the scope of this issue: journalists posed 100 factual questions to top AI chatbots, and 51% of the answers had significant issues, while 19% contained outright factual errors. The bots even made up information, fabricating quotes or details that sounded plausible but were completely false. In 13% of responses, quotes attributed to BBC sources were altered or nonexistent.
Tech outlet CNET tried using AI to write financial explainers, only to find errors in more than half of them. They ended up issuing corrections on 41 of 77 AI-generated articles. Worse, some of those passages weren’t entirely original: the AI had plagiarized, piecing together phrasing from its training data. The incident was a black eye for CNET, underscoring how unchecked AI content can wreck a publication’s credibility.
So, volume isn’t value. Companies are mistaking prolific production for actual business value. When every company blog and social feed produces the same banal AI-crafted lines, the audience stops listening. The end user is a human, and on a human level, the only content that matters is the kind that has something worth saying.
And no machine so far has the intelligence to know what that is.
The Thin Line Between Assistant and Overlord
AI isn’t the enemy. Misusing AI is.
For all its flaws, AI is an extraordinary tool. In the right hands, it’s an enabler. I think it’s fantastic for research, capable of scanning huge amounts of data and discovering connections and nuances that would take a human hours, if not days. It can summarize dense reports, pull together industry trends, and help you see the bigger picture faster than ever before. For writing, it’s great at sparking ideas and generating fresh angles and narratives. It can brainstorm without bias, proofread like a seasoned editor, and help tighten up language.
But, it’s still a tool, not a thinker.
The real danger isn’t the tech itself. It’s the human willingness to give up authenticity in favor of convenience. The moment we let the machine dictate what we write, it changes how we think. As of last year, 57% of all web-based text had been AI-generated or run through an AI translation algorithm. If this interests you, check out the Dead Internet Theory if you haven’t already.
We’ve become editors of machine output. We’re detached, passive, watching as our work becomes indistinguishable from the algorithmic gunk coming out of a thousand other AI-powered content mills. I guess the key here is balance. The Associated Press, for example, has “strict” guidelines: AI is allowed to help with drafting or research, but publishing AI-written news without human vetting is forbidden. AI can accelerate the process, but a human must remain responsible for the final product.
It’s also becoming clear that audiences want authenticity. A 2025 survey found that 62% of people trust news from human journalists, while only 48% trust AI-generated news. That’s a big trust gap. People might enjoy playing with ChatGPT, but when it comes to information that matters, they still prefer a human behind it. Customers (like you and me) don’t want to feel like they’re “talking to a machine”, and they’ll quickly wonder whether a company that serves up cookie-cutter AI content is equally indifferent about its products and customers.
Then there’s transparency and ethics. Should readers know when content is AI-generated? The answer is still evolving. Some disclose it, others don’t. CNET’s 2023 fiasco proved that the real damage came from secrecy. They published AI content without disclosure, only admitting it when exposed. Trust was broken, and they scrambled to add disclaimers to regain credibility. The takeaway? Be upfront. Maybe you don’t need a giant AI label on every blog post, but your team and clients should know where AI was used. And in journalism, finance, or any other high-stakes industry, disclosure isn’t optional.
So, how do we ensure AI remains an assistant rather than becoming an overlord of content?
Always Fact-Check – No matter how slick the output reads, verify every factual claim. AI models have a habit of “hallucinating”.
Infuse Human Insight and Voice – Don’t let the AI have the last word. Treat the AI’s draft as a rough cut, then add your own examples, insights, and personality in the edit. Give it your distinctive voice and perspective.
Optimize for Speed – Leverage it for what it does best: generating quick drafts, outlines, or basic content structures.
Use AI as a facilitator, not the one in charge.
But, Can AI Truly Create?
For AI to be a true creator, it would need something beyond computation, beyond pattern recognition, beyond predictive analytics. It would need a reckless, unfiltered, beautifully flawed human instinct - the kind that makes Paul Graham write a 1200-word rant at probably 2 a.m. about how hoarding stuff ties to societal attitudes toward material goods and how this shift affects personal well-being.
It would need emotion. It would need doubt. It would need curiosity that turns a simple question into an all-nighter of research. It would need to go through the process of wrestling with an idea, reshaping it, abandoning it, resurrecting it, and then obsessing over whether it was ever good in the first place.
It would need unhinged originality.
AI doesn’t have human drives. It doesn’t lie awake thinking about why a story matters. It doesn’t feel the thrill of inspiration or the regret when something doesn’t feel true. It doesn’t get writer’s block, but it also doesn’t get writer’s bliss. It never experiences the world, never feels joy or sorrow. It simply executes.
But it does not create.
Are We Feeding the Beast?
And yet, even as I type this, I know the ugly truth. AI isn’t stopping. In fact, it’s accelerating.
Companies will keep pushing for more automation, more efficiency, more content, all at scale. Every brand voice flattened into an artificial monotone. Every marketing campaign a slightly tweaked clone of the last. Every bold idea scrubbed clean of risk, spontaneity, and anything remotely human.
Maybe the real question isn’t whether AI should be an assistant or a creator. Maybe the real question is: How long until it doesn’t need us at all?
And when that day comes - Will anyone notice?