I’ve started to notice a very predictable trend: AI-generated messages in my inbox.
What finally pushed me to write this was how casually people are dumping GPT-generated text into their communications, apparently believing two things:
- That people won’t notice
- That people will actually read it
When I received what was clearly an AI-generated Christmas card, that was enough.
What I mean by AI slop
This sits inside a broader problem. AI models are producing large amounts of boring, verbose, low-effort, LAZY content, and it is steadily flooding the web. Creators like Kurzgesagt and Last Week Tonight have explored this more publicly, but the pattern is easy to recognise once you’ve seen it a few times.
I am not against AI-generated content. It can be genuinely useful. I use it for drafts, quick exploration, and grammar checks. With some of my juniors, I even encourage generating bad code with AI and reviewing it together. All my blog posts go through an LLM to catch obvious mistakes, even though every single word is written by me. And yes, the cover is from ChatGPT Images; I think it did a great job.
The issue is how people use AI for text, especially in 1:1 communication.
Personal communication is different
We can argue endlessly about when and how AI should be used in daily work. But personal messages are different. At the very least, they deserve a review before being sent.
When you read AI slop, even if it is well masked, you can feel that something is off. The person does not talk like that. Words like delve or pivotal, the overuse of bold text, lists, markdown, the bloody emojis, the pompous and verbose prose. The more you interact with an LLM, the more you recognise the way it talks. And you immediately notice, with a fair amount of false positives, when a message is AI-generated.
There is an extensive list of common signs of AI-generated writing on Wikipedia, which is worth reading.
The rule that matters
There is a simple rule I once read in a CTO’s post and never forgot:
Do not make people spend more time reading your content than you spent creating it.
To me, breaking this rule is not only annoying. It is a waste of my time and, in a subtle way, disrespectful. If a message is clearly unreviewed, it signals that the sender did not take the time to consider what they were sending. Communication is a delicate human skill, shaped over millennia, and neglecting it erodes attention, trust, and the quality of our interactions.
It also trains readers to disengage. For example, I already skip obvious AI-generated content on LinkedIn, and I am increasingly not replying to sloppy messages at all.
I even read a likely AI-generated book by Addy Osmani, and that same “off” feeling persisted throughout. It was later confirmed by similar comments on Goodreads. The book was well masked, but it felt empty, without real substance. And I know there are thousands more like it.
A matter of trust
So remember this: your writing time should be greater than the reader’s reading time.
This matters even more in personal communication, where there is an implicit trust that you are speaking directly to another person.
AI tools are getting better: the latest ChatGPT version even removed the infamous em dashes. That does not change your responsibility to review what they produce.
Before sending anything, ask whether the message could genuinely come from you. Not from a generic, verbose machine voice that millions of people are already using, and that is slowly making everyone sound the same.