20. July 2025

I wrote a really bad book with AI…

What comes to mind when you hear the word “nerd”? For most, it’s a label for how someone looks or behaves. But I’ve always seen it as something more: a frame of mind, a lifestyle choice (though not much of a choice, really; more a personality quirk).

Nerds are the kind of people who find something that seems simple on the surface and dive into it deeper than anyone else thinks is normal. We’re driven by a relentless curiosity and a love for connecting the dots.

This sense of wonder has shaped my entire career and governed my life’s choices. It’s what led me to infrared photography, what got me into programming, and what made me study psychology after business, and AI after Information Systems Management. It’s also what sparks debates with my sons over whether the destruction of the second Death Star was a war crime. (For the record: the argument was that it was still under civil construction. We eventually agreed that both Death Stars were legitimate military targets.)

This mindset also leads to a collection of what some might call… weird technology choices.

For instance, my thinking tool and self-therapist of choice remains a handwritten journal. Despite my love for modern tech, I just can’t bring myself to pour my personal thoughts onto a keyboard. I need a fountain pen. And yes, because I’m a nerd, there’s a rabbit hole there, too. My fountain pen was picked for reasons, not for show, and my notebook isn’t just some off-the-shelf Moleskine (ever tried using a fountain pen or watercolor on one? It’s a nightmare). It’s a specific brand, carefully chosen for its features, its connection to my needs, and frankly, its lovability.

I’m also on a quest to ditch my smartphone without leaving my modern lifestyle behind. The solution? An LTE watch for payments and calls, paired with a phone-sized eInk reader. This little hack nudges my consumption habits away from brainless dopamine hunts and toward literature. It’s about creating room to think, instead of filling every slow moment with the instant gratification my phone so easily offers.

These are just a few examples of how I sometimes take things a little further than most. Which brings me to my latest experiment: story writing.

I’ve never considered myself a talented writer, but I’ve always wished I were. It’s not for a lack of imagination – it’s a lack of discipline and command over language. My prose feels clumsy and sloppy compared to the texts I’ve admired over the years. Thirty years of on-and-off blogging and journaling haven’t changed that. It’s a lot like my experience with visual arts; I can be creative with a camera, but I lack the patience for drawing or painting.

Then came AI. I came close to exploring automated writing for my master’s thesis, and the idea stuck with me. While the thought of AI competing with human artists is unsettling, I also believe it could unlock creativity for so many people with fantastic ideas. I find myself conflicted, intrigued, fascinated, and appalled – all at once. And, of course, curious. The nerdy kind of curious.

So, I set up three experiments:

  1. The Autonomous Author: I wrote a simple program to simulate an author (see the sketch after this list). It picked names, locations, and plot structures, and made decisions on style, pacing, and plot twists.
  2. The Human Director: I used my AI interface of choice (the Cursor development environment) to do the same, but this time, I kept the director’s role for myself.
  3. The Collaborative Story: I started writing a short story by myself, but with an AI as a close assistant.
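
To give a feel for experiment 1, here is a minimal sketch of what such an autonomous-author loop could look like. It is illustrative rather than my actual program: the generate parameter is a stand-in for whatever LLM call you prefer, and the names, locations, and structure lists are placeholders.

```python
# Illustrative sketch only, not the actual program behind experiment 1.
# `generate` is any callable that takes a prompt string and returns model text;
# plug in whichever LLM client you use.
import random
from typing import Callable

NAMES = ["Mira", "Tobin", "Anslow"]                    # placeholder choices
LOCATIONS = ["a cliffside keep", "a salt marsh town"]  # placeholder choices
STRUCTURES = ["three-act", "hero's journey"]           # placeholder choices

def write_novel(premise: str, generate: Callable[[str], str], chapters: int = 30) -> str:
    # Up-front "authorial" decisions: cast, setting, structure, style.
    bible = {
        "premise": premise,
        "protagonist": random.choice(NAMES),
        "setting": random.choice(LOCATIONS),
        "structure": random.choice(STRUCTURES),
        "style": "close third person, plain prose, no clichés",
    }
    outline = generate(f"Outline a {bible['structure']} novel in {chapters} chapters.\nStory bible: {bible}")

    manuscript = []
    for i in range(1, chapters + 1):
        draft = generate(f"Story bible: {bible}\nOutline: {outline}\nWrite chapter {i} in full.")
        # A light self-review pass for coherence with the bible and outline.
        draft = generate(f"Story bible: {bible}\nOutline: {outline}\nRevise chapter {i} for consistency:\n{draft}")
        manuscript.append(draft)
    return "\n\n".join(manuscript)
```

The real thing had more moving parts (plot-twist and pacing decisions, filters for repetition and exposition), but the shape is the essence of experiment 1: a pile of up-front choices, followed by a chapter loop.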

The first two experiments aimed to produce a full novel (100,000-120,000 words) from a single prompt. The third was about creating something more personal. I spent two weeks on this, and it was glorious – way more fun than I anticipated.

The first week, spent coding the autonomous author, was a crash course in story structures, character archetypes, and plot devices. The AI was a fantastic teacher, deepening my appreciation for the art of storytelling. The second week, I kicked off experiments 1 and 2 and let them run, prompting, reviewing, and course-correcting. I gave myself one rule: don’t interfere as long as the stories remained coherent and stuck to the plot.

I have to admit, the AI made me feel pretty good at first. I spot-read paragraphs, gave feedback, and checked for obvious problems, but I didn’t read the whole thing cover-to-cover. And I was constantly impressed. The prose seemed creative, and I found real joy in reading some of the descriptions. It was ticking all the boxes. Or so I thought.

The moment of truth arrived when I converted the results to an ePub for my wife and me to read.

In the end, there was effectively only one novel to read. Why only one? That was the first surprise: both stories were remarkably similar, even recycling character names (looking at you, Kael). My instructions in the second experiment made little difference compared to the fully autonomous program. The plot points and descriptions were so alike that you could swap chapters between them without creating immediate consistency issues. Given my prompt – a story about a princess who kidnaps a dragon – both AIs defaulted to a standard hero’s journey with a three-act structure. Without guidance to the contrary, they simply followed the most well-trodden paths.

So, we settled on the novel with the start I liked better. My wife and I began to read, but our goal of a satisfying literary experience quickly turned into a game of discovering just how bad the book really was.

Don’t get me wrong, the AI didn’t fail completely. On the surface, everything looked perfect. Some scenes were genuinely gripping page-turners. But others described something utterly unimportant in excruciating detail, drawing from every cliché in the book. Despite a process designed to filter out repetition and exposition, the book was full of both. Our AIs loved doors slamming shut “like a tomb,” the word “precise,” the phrase “calloused hands,” and ending chapters on cheesy one-sentence cliffhangers: “The real test was about to begin.”

On paper, the novel did everything right, yet it missed the mark completely. The pacing was off, the descriptive writing was inconsistent, and the emphasis was all wrong. The result was a book that was confusing, frustrating, and often boring, even when you could see what it was trying to do.

The fascinating part? If you opened the book to a random page, you might not notice how bad it was. It looked like the real thing, just like an AI image that looks right until you spot the extra finger. It was only by reading it from start to finish that the illusion shattered.

This journey led me to a few key observations for my next experiment in collaborative writing:

  • AI has surprisingly stubborn preferences. Its choices of names, sentence structures, and metaphors are far less varied than you’d think. For my master’s thesis on explainable AI, I had it write one-shot stories from an identical prompt. The character often ended up with the same name, the same dilemma, and the same outcome.
  • Pacing is an art, not an algorithm. A human author knows when three words will suffice and when a full page of dialogue is needed. They understand cultural context and know what to repeat and what to say only once. AI has no clue. It gives equal weight to everything, leaving the reader frustrated when a heavily described detail turns out to be meaningless, or when a critical plot point gets buried in a wall of text.
  • Whatever you leave to the AI’s imagination will be filled with formulaic fluff. We underestimate the vast well of quality our brains draw from – our lived experiences, our cultural knowledge, and centuries of narrative tradition. When you let an AI fill in the blanks, you get generic, unsatisfying results. The uncanny thing is, it will still look right half the time. This is what worries me most; I think we’re about to be flooded with low-quality content from people who think “good enough” is good enough.

But here’s the thing: the experiment was an incredible amount of fun. Nothing has sparked my appetite for writing as much as this. Every time I reflected on what the system produced, I went back to my old notes and ideas, thinking about how I could do it better.

And that brings me to what I’m doing now: I’m writing short stories with AI, and it is utterly empowering. I still work mostly in my coding environment (Cursor, Claude Code, Gemini-CLI – they all write prose if you ask nicely). There I can maintain a “world facts” document, log my decisions, and keep everything under version control on GitHub, all in an environment built to avoid uncontrolled changes.

But now, there isn’t a single word the AI has written that I didn’t ponder, direct, or at least approve consciously. Instead of asking it to draft a chapter, I ask for a paragraph, often providing notes that are longer than the final output. The writing reflects my intent, but with polished language. I brainstorm with it and use it to break through creative ruts.

Most importantly, I’m sticking to short stories for now. Only when I know how to create something worth reading in a shorter format will I try my hand at a novel again.

In the meantime, I’ve learned some serious humility. Writers, authors, the people who truly know how to tell a story – they are rare, and they are much, much harder to replace than I ever thought.
