This July, a client approached me with a question that would have seemed absurd a few years ago. He wanted to know if AI tools like ChatGPT could create hundreds of unique web pages related to metabolic health.
This “programmatic SEO” approach has worked for companies like Amazon and Zapier that use templated sales pages, but my client doesn’t work for a tech giant. He represents a startup that relies on scientific rigor to sell a subscription service for continuous glucose monitoring. Could AI create valuable content in this context?
Let’s leave the complexities of web development aside (my headache just disappeared!) and focus on the written content. Because if AI can nail scientifically nuanced writing, all the other stuff is a formality.
Is Sounding Human Enough?
Large language models like GPT-4 (the model behind the latest version of ChatGPT) generate text that sounds human. The text sounds human because we’ve fed the bots a boatload of human-generated content. (For GPT-4, that’s the internet circa September 2021.) This data dump—plus some supervised training—helped GPT-4 form billions, and perhaps trillions, of parameters it uses to converse on any conceivable topic.
GPT-4 sounds human, but that doesn’t mean it thinks like us. Unlike you and me, GPT-4 isn’t continually updating its brain with new memories, plans, knowledge, and lessons learned. Its rules and source data are fixed in time. It doesn’t learn. It regurgitates.
OpenAI, the company that spawned GPT, doesn’t claim GPT-4 has human-level capabilities. Still, GPT-4 has outperformed predecessors like GPT-3.5 on standardized tests. A lot of Red Bull went into making this thing. I wanted to take it for a spin.
GPT-4 Science Writing Test
A year ago, my test of GPT-3.5 on science writing revealed massive confabulation. GPT’s imagination produced a non-existent study—nice! My test of GPT-4 went better, but the bot consistently:
- Picked inappropriate sources
- Misreported facts
- Made basic errors in judgment
For instance, when I asked GPT-4 to summarize how blood glucose levels relate to chronic disease risk, the bot reported that “a study published in the journal Diabetologia found a direct correlation between elevated blood glucose levels and increased risk of type 2 diabetes.” Since elevated glucose defines type 2 diabetes, this is like citing a correlation between a person’s age and how many birthdays they’ve had. (In reality, the study found that non-diabetic people with higher blood glucose had higher heart disease risk.)
A talented editor could mitigate these nontrivial accuracy issues. But there are other risks an editor couldn’t address.
Other Risks of AI-Generated Content
- Google search and email servers could ban or restrict AI-generated content.
- The writing is conversational but nothing special: lots of cringe-worthy analogies and long, fatiguing sentences. Excessive exclamation points create the impression of chatting with a breathless teenager.
- The same phrases appear over and over. Not cool for a brand publishing original content.
- Updated versions of GPT may not be available to the public.
- People prefer human-generated content.
On the last bullet: Even if GPT-5 or 6 cleans up the accuracy problems, people will still crave human creativity. There’s a person-to-person warmth you can’t achieve with a disembodied chatbot.
A Different Kind of Language Generator
My test confirmed my suspicions. Like GPT-3.5, GPT-4 lacks expert judgment. It makes repetitive errors and gives repetitive answers. It cites inappropriate sources. It occasionally makes stuff up.
Future GPTs may replace human writers, but the current versions are like supercomputing interns. An intern can devise ten article headlines or suggest edits for concision, but you wouldn’t hand an intern complex knowledge work.
My client didn’t need me to tell him that. On a follow-up call, he admitted AI wasn’t ready for the nuanced writing he needed. To solve his problem, he’d need a different kind of language generator—the kind who reads the internet one page at a time, selects her sources carefully, and learns from her mistakes.
Special note: this article was written by GPT-4*
*Just kidding
By the way, check out my free report on building traffic and trust with health and wellness content marketing.
And if you’re interested in working with my team, email me at brian@brianjstanton.com and we’ll set up a free consult.