Lately I’ve been coming across a lot of content online that appears to be AI generated. I’ve seen it in articles, blogs and whitepapers, on news sites and social media, and in email, product marketing, and media releases.

It makes me wonder: has the use of generative AI to write marketing content become an accepted business practice?

I’m not referring to using AI to enhance our writing. I’m talking about publishing content that is written predominantly by an AI chatbot like ChatGPT or Google’s Gemini.

I would share examples, but I don’t want to shame anyone. We’re all experimenting with AI. There’s no rulebook, so we’re making up rules as we go along.

Generative AI has the potential to improve the overall caliber of marketing content. However, if not used judiciously, it risks becoming the spam of our time. Here’s why.

How to Spot AI-generated Text

For those of us who use AI tools a lot, it has gotten pretty easy to spot AI-generated marketing content, especially from ChatGPT. Here are a few of the tell-tale signs.

1. It’s formulaic.

ChatGPT is like an old-school grammar teacher who insists you must start every essay by introducing the topic, then explain the topic, and then summarize the topic.

That may work in principle, but if everyone followed this formula, all writing would sound the same. And we’re heading in that direction.

Then there’s ChatGPT’s rampant use of bullet points. Bullet points help distill ideas and make text readable, but if they become a hallmark of synthetic content, no one will want to use them.

Good writers learn the rules and then break them, keeping writing original and varied.

2. It’s generic.

Often when I ask ChatGPT a question, I’m blown away by the quality of the response. As a writer, it’s terrifying. Upon closer inspection, however, I realize that most of it is unusable. Why? Because it’s generic and impersonal.

To answer prompts, large language models analyze massive amounts of information and then repurpose it as their own. None of the content is original, at least not yet.

What makes content truly engaging is the author’s point of view—personal experience, opinions, biases, passion, humor, and real-life examples. ChatGPT produces none of these elements convincingly.

If people and companies want to be known for thought leadership, they need to do the thinking for themselves.

3. It’s almost too perfect.

Often, I find text generated by ChatGPT to be virtually perfect—technically speaking, that is. That’s a problem when people are trying to pass the content off as their own. Human beings don’t write this way.

Our natural writing contains lapses in syntax, digressions, jumbled thoughts, and the occasional mixed metaphor. These imperfections tell us that a human is behind the text, adding to its authenticity.

4. It’s often overwritten.

For a platform that has purportedly vacuumed up all the knowledge on the internet, I’m often surprised by how poor ChatGPT’s writing is.

When writing marketing content, ChatGPT has a tendency to repeat showy, overblown words and phrases like “unlock the power,” “underscoring the importance,” “game changer,” “transformative,” and “revolutionary.”

It also lacks humility. Topics are explained with incredible confidence even when the platform clearly doesn’t fully grasp the concept. Imagine listening to a presentation from a speaker who uses this type of vocabulary. We would all run for the door.

A skilled writer shouldn’t have to resort to hype and bravado to get messages across.

5. It doesn’t provide sources.

This is one of the biggest problems for me. ChatGPT and Gemini appropriate content from others without giving credit or linking back to the source. Where does the information come from? Why should we believe it?

A competent writer gives credit where it’s due. That’s why I prefer using tools like Bing Copilot and Perplexity for research, which provide links to sources.

Any of the above issues can be mitigated with effective priming and prompting of AI tools, followed by a thorough edit and fact-check of the output. But if the edits aren't performed by a skilled writer, the content devolves into a messy patchwork of AI text and bad writing.

What’s the End Game Here?

Companies create and publish content to build and engage a target audience, share ideas, increase visibility in search, and generate website traffic and sales leads. I would argue that synthetic content does more harm than good in advancing these objectives.

Search engines, media sites, social networks and people are getting better at spotting AI-generated text. As long as this content is low-quality, generic and unhelpful, companies that publish it risk damaging their search rankings, turning off customers and prospects, and losing trust and credibility.

In the future, AI will get better at emulating humans, but that doesn’t mean we should let it do the writing for us.

Generative AI can be an incredibly helpful writing assistant, but we must recognize its limitations. With the right prompts, it can provide a good outline, spark ideas, and produce a solid first draft. After that, it’s time for a skilled writer to step in.

Otherwise, the output is little better than spam.

Daniel Edward Craig
reknown.com
+1 604 726 2337
Reknown