Recent articles raised the question, “Should novelists trust AI?” Let’s explore some of the risks of taking AI shortcuts.
By: Grant P. Ferguson
Date: September 4, 2024
Many posts hinted at the problems writers encounter when they blindly accept AI’s output. However, some articles masked the magnitude of the problems by citing the potential productivity gains. Thankfully, a few posts compared AI’s fast output to thorough research, highlighting how what looked like excellent material actually contained hard-to-detect falsehoods and copyright infringements.
Trusting AI Is Like Averaging Averages
According to Steve Fenton’s website (https://www.stevefenton.co.uk/blog/2020/02/can-you-average-averages-in-your-analytics/), here’s what happens to the results when you average averages.
“There is a common question that crops up in analytics, which is can you average your averages? The short answer is no, but a longer explanation is probably needed.”
“Whether you have grouped your data by month, or region, or some other facet, each average you see is based on a different number of data points. You might have an average of 10 based on 10,000 individual data items and an average of 2 based on a single data point. If you attempt to create an “average of averages”, the single data point will disproportionately affect the outcome. The average of 10,000 data items basically gets valued at the same rate as the average of the single data point. The “average of averages” would be 6, but the correct average of all values would be 10.”
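Fenton's example can be reproduced in a few lines of Python. The numbers below are the ones from the quote: one group of 10,000 values averaging 10, and a second group containing a single value of 2.

```python
# Recreate Fenton's example: a large group and a single-point group.
group_a = [10] * 10_000   # 10,000 data points, average 10
group_b = [2]             # a single data point, average 2

avg_a = sum(group_a) / len(group_a)
avg_b = sum(group_b) / len(group_b)

# The naive "average of averages" weights both groups equally,
# even though one group has 10,000 times as many data points.
avg_of_avgs = (avg_a + avg_b) / 2          # 6.0 -- misleading

# The correct average pools every individual value,
# so the single outlier barely moves the result.
all_values = group_a + group_b
true_avg = sum(all_values) / len(all_values)   # ~9.9992
```

The single data point drags the naive figure down to 6, while the correctly pooled average stays at roughly 10, exactly as the quote describes.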
Here’s why the issue of averaging averages is important to consider:
- When you prompt AI for output, the results are, in many ways, like averaging averages.
- You can enter the same prompt and get a different answer each time.
- If you’re searching for a specific quote or a legal citation, AI may average averages by aggregating what it learned and giving you what looks like a factual result when it’s actually nothing more than a hallucination.
That means relying on AI’s answers can put your reputation at risk through falsehoods and plagiarized content, which raises another question: “Why do so many bloggers hype AI’s potential but skip its dangers?”
Should Beginner Novelists Trust AI?
Beem Weeks wrote an excellent update on AI for the Story Empire Blog (https://storyempire.com/2024/08/30/more-from-the-world-of-ai/).
He gave several examples of how AI can generate character images, personality traits, backgrounds, and physical features. The comments from writers produced a mix of pros and cons. What caught my attention was the number of tired clichés AI wrote into the descriptions.
While reading the post, I grew concerned that beginner novelists may not recognize how much readers want authenticity, and how much audiences dislike clichéd phrases used to describe characters.
Writing Principle: Many bestselling authors attribute success to knowing their characters, gaining rich creative insights while developing traits, behaviors, and emotions.
Should You Trust AI to Replace Proven Answers?
Read enough interviews of top writers and you’ll discern a pattern.
Most worked at learning and practicing this thing we call ‘craft.’ For example, excellent teachers like Malcolm Gladwell, James Scott Bell, Shawn Coyne, Steven Pressfield, Robert McKee, and Randy Ingermanson emphasize that there are no shortcuts. After all, you can’t substitute a quick fix for turning what you’ve learned into practiced writing skills. These giants in our industry offer us the opportunity to see further down the writing road by standing on their wise shoulders.
And that raises the question: “Why do some writers try to game the system with AI when the answers they seek are already at hand?”
What’s Your Answer, “Should Novelists Trust AI?”
For novelists, AI reminds me of the old joke about a consultant stealing the watch from your wrist and then selling you the time.
From my view, sophisticated computer models trained by scraping copyrighted text and images from websites and databases without the owners’ permission are no better than someone who steals your work and then sells it back to you.
At this stage of AI development, I’ve concluded that the ethical and reputation risks far outweigh the potential gains. My short answer is novelists should not trust AI. You may see this issue differently.
How do you envision AI working out for novelists in the years ahead?
Do you use AI, and if yes, how do you apply the output, rationalize the ethical issues, and minimize the dangers to your reputation?