I have read multiple rants on LinkedIn complaining about how easy it is to spot LLM-generated writing. The emojis, the dramatic pauses, the em-dashes, the contrast-pivot constructions, etc. And I get the frustration; the feed is full of slop.

But it hit differently for me because AI is the reason I even have a blog. If I had to manually write every post, I probably wouldn’t have started at all. I can think clearly, but I’ve never been great at turning that thinking into polished prose; writing was always the bottleneck. So I use an LLM to help me serialize my ideas, the same way I use a compiler to turn code into something a machine understands.

So reading all the shaming around AI writing made me pause. Not because I feel guilty, but because we’re aiming the criticism at the wrong place.

The question isn’t “did an LLM write this?” The question is “are you taking ownership of what’s being said?”

There are folks whose craft is writing. They put effort into phrasing, rhythm, the voice that carries the signal. If that’s the craft, then outsourcing the prose to an LLM is outsourcing the actual work. I get why that feels like a loss of authenticity.

But I don’t live in that world, and a lot of engineers don’t either. For us, ownership matters more than authorship. You own the code you ship and are responsible for what it does when it runs. You own the text you publish and are responsible for what it says. The tool you used to create it doesn’t change that responsibility.

Our heavy lift sits upstream. The work happens in the models we build in our heads: the tradeoffs, the debugging scars, the failures that taught us something real. By the time we sit down to write, the idea already exists. Writing is just the transport layer, the protocol for moving the theory from my head to yours.

And this isn’t a new thought. Peter Naur said the same thing decades ago in “Programming as Theory Building”: the real product of programming is the theory inside the programmer’s head, not the code. The code is just the artifact produced from that theory.

Writing works the same way for a lot of us. The value is in the theory; the text is the artifact.

When you look at it that way, the conversation changes: if an LLM helps you serialize an idea into something readable, you haven’t outsourced the thinking, just the typing.

And this lines up with another argument floating around: we’ll code in plain English soon and the LLM will act like a compiler. In that world, code is just ephemeral state and the real job is telling the machine what you want.

If that’s where we’re heading, the coding purists complaining about code that doesn’t “feel like them” sound a lot like the writing purists complaining about prose that doesn’t “sound authentic.” Both groups are anchoring identity in the wrong place.

Identity lives in the thinking, in the theory you built in your head; the syntax is just how you happened to express it.

Here’s the critical part: don’t outsource the thinking. The LLM handles the serialization, but you still need to do the research, build the mental model, make the tradeoffs, understand the failure modes. That’s non-negotiable. If you skip that step and ask the model to generate both the idea and the prose, you get hollow writing.

This is where effort matters. Effort is a virtue, but only when it’s applied where it actually changes the outcome. Manually typing every sentence doesn’t make the idea better; doing the upstream thinking does. Putting the effort in the right place matters a lot more than forcing yourself through a friction step that doesn’t add value.

There’s a gatekeeping angle here that’s hard to ignore. Before LLMs, the people who got to share ideas publicly were mostly the ones who knew how to write well, not the ones with the most interesting ideas. Writing used to be the filter. If prose wasn’t your craft, your thinking stayed stuck in your head, even if it came from real experience.

LLMs blow that gate open. They don’t invent ideas; they remove the bottleneck between the idea and the expression. Writing goes back to being the transport layer instead of the barrier to entry.

There’s a worry that keeps coming up: if everyone can publish, the noise is going to explode. And sure, noise is going up. Anyone can ask a model to “write me an interesting post” and hit publish. That dumps a lot of hollow content into the feed.

But signal is going up too. People who couldn’t write before can finally express ideas that have been stuck in their heads for years. Lowering the barrier doesn’t just create more noise; it also creates more signal.

Filtering gets better at the same time. Humans get better at scrolling past slop, models get better at ranking actual substance, and the algorithms get better at detecting hollowness. The ratio doesn’t collapse, because both sides move.

Noise is cheap, signal is expensive. If a tool makes signal cheaper for more people, that’s a win.

What actually matters is whether you did the thinking and whether you own what you’re publishing. If the model generates both the content and the voice, that’s empty: there’s no theory behind it, and you can feel the hollowness right away. But if the idea is yours, if you built the mental model and understand the tradeoffs, if you’re taking full responsibility for what’s being said, then the authorship question becomes irrelevant. You did the part that counts, you own the output, and the model just handled the serialization step.

We’re still treating typing as the proof of authorship, but the work lives somewhere else. Ownership lives in the thinking; authenticity lives in the responsibility you take for what you publish. And if more people can share ideas because the prose barrier is gone, I’ll take that. More signal, less gatekeeping.

Pretty cool, actually.