AI Will Never Have Shower Thoughts, Even If It Takes a Shower

A few weeks ago I was watching Claude write thousands of lines of code. Nothing fancy, just adding a feature. My eyes were tracking the diff stream but my brain was somewhere else. I started thinking about a different project, then connecting it to a conversation I’d had on Sunday, and by the time Claude was done I had three ideas I didn’t have when I sat down.
That’s when it hit me: AI can’t have moments like that.
The agent is doing the focused work, and I’m slipping into the exact mental state where good ideas show up. The tool that can’t have these moments might be giving me more of mine.
Where good ideas actually show up
Not at the keyboard, almost never at the keyboard. The best ideas show up in the shower, on a walk, while loading the dishwasher, halfway through a workout. There’s research behind this: your brain has two modes, and they can’t run at the same time. Focused mode is what you use when you’re solving a problem on purpose. Diffuse mode (the default mode network, or the daydream brain) is what runs when you’re not focused on anything in particular.
Insight comes from alternating between them. You do the focused work first, then step away and let the daydream brain stitch things together. That stitching is the incubation effect, and it’s the thing AI doesn’t get to do.
AI doesn’t have a daydream brain
LLMs run when you prompt them and sit idle when you don’t. No background processing, no integration that happens overnight. Each conversation starts cold. There’s no theory of the work building up between sessions the way Peter Naur described for programmers in “Programming as Theory Building,” and there’s no idle stretch where ideas can collide on their own.
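To make “starts cold” concrete, here’s a toy sketch in Python. It’s not any vendor’s real API; the call_model stub and the message format are made up for illustration. The point is where the memory lives: entirely on the caller’s side.

```python
# Toy sketch of a stateless chat loop. call_model is a stub, not a real API.

def call_model(messages: list[dict]) -> str:
    """Stand-in for an LLM call. A pure function of its input:
    nothing persists inside the model between calls."""
    return f"(reply based on {len(messages)} messages)"

history: list[dict] = []  # all the "memory" lives here, client-side

for turn in ["first question", "follow-up question"]:
    history.append({"role": "user", "content": turn})
    reply = call_model(history)  # the full transcript is resent every time
    history.append({"role": "assistant", "content": reply})

# Delete `history` and the next call starts cold. Nothing ran in the
# background while you were away; nothing integrated overnight.
```

Whatever continuity you feel across a long session is the transcript, not the model.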
I like Alberto Romero’s spiky star model for this. AI capability isn’t a smooth circle; it’s a star with extreme spikes (math, code, recall) and deep valleys (idle creativity, embodied reasoning). More compute makes the spikes longer; it doesn’t fill the valleys.
The body is part of the thinking
The way we use words like “weight,” “grasp,” and “warmth” isn’t decoration. We literally borrow them from having a body. “I grasp the concept” isn’t a figure of speech bolted on after the fact; it’s how we built the concept in the first place. The technical name for this is embodied cognition.
AI has descriptions of bodies: billions of sentences about what it feels like to be cold, hungry, tired. None of it is lived. Alexander Lerchner has a paper from Google DeepMind that calls this the difference between statistical clustering and a phenomenal concept. AI can cluster language about hunger; it doesn’t know hunger from the inside. It’s brilliant at finding patterns in language about experience, but the patterns aren’t anchored to anything physical.
Needs tell you what’s worth solving
Creativity needs direction, and direction comes from needs. Cold, hungry, scared, bored, lonely: these are the engines. Maslow’s hierarchy of needs describes humans climbing up from survival. Most of us creating today have moved past the bottom, but we got here from there.
AI was born at the top with no foundation. It doesn’t have problems of its own; all of its problems come from us. AI never had a bad day at the office.
Why scaling won’t fix this
The obvious counter: give AI a body, give it idle time, scale it up enough, problem solved. Lerchner’s paper closes this door directly, and his follow-up FAQ has the cleanest version of the argument I’ve read.
My plain-language version of Lerchner’s argument: surprising outputs don’t equal becoming the thing they describe.
A weather model connected to live atmospheric sensors doesn’t become the atmosphere. A GPU simulating photosynthesis releases zero oxygen. No amount of scale changes the shape of that gap: better prediction still isn’t lived experience. The full philosophical version is in Lerchner’s paper if you want it.
Even if AI gets dramatically better at predicting what a shower thought would sound like, it still won’t be having one.
What AI is actually good at
Execution, recall, drafts, refactors, the glue work. For anything in focused mode where the answer is patterns over a corpus, AI is faster than I am, and it’s getting faster every release.
That’s the trade I make. AI takes execution; I keep incubation. I let it churn on the part it’s better at, and I use the time to be in the daydream brain.
Protect the shower
Reflective space isn’t a break from the work; it’s where the work integrates. Skipping it has a cost: ideas don’t compound, and cognitive debt builds up at the individual level the same way it builds up across teams.
If AI takes execution, the win is more shower time, not more meeting time. Saving the time is nice, but the real win is in the daydream brain.
So the next time you’re watching your agent do its thing and your mind wanders, that’s not a distraction. That’s the part you’re keeping.