there's a feeling that happens right after you hit Enter on a prompt. the tokens start streaming, the cursor blinks, and for a few seconds you're suspended in pure anticipation. maybe this time it nails it. maybe this is the one-shot. maybe the AI just gets it and you don't have to iterate at all.
that anticipation is the drug. the output is secondary.
i've been circling this topic for a while. in devpad #7 i talked about how AI "lowered the motivation barrier" -- how i could alt-tab between 3 projects, each with an AI agent cooking in the background, and feel a flow state similar to actual coding. in my gen-ai post i described feeling "robbed of my own progress" and wanting a read-only mode for AI. two contradictory positions held simultaneously. i knew something was off but couldn't articulate what.
i think i can now. vibe coding is psychologically addictive through the same mechanisms as slot machines, doom scrolling, and competitive gaming. the structural parallels are exact, the neuroscience maps cleanly, and researchers are already starting to notice.
the slot machine in your terminal
b.f. skinner showed in 1938 that the most persistent behaviors come from variable rewards. ferster and skinner (1957) formalized this as the variable ratio (VR) reinforcement schedule: you get rewarded after an unpredictable number of attempts. slot machines run on VR schedules. so does vibe coding.
sometimes the AI one-shots your feature. sometimes it takes 10 iterations. sometimes it never gets there. you can't predict which. you press Enter, watch the tokens, and wait to see if this pull of the lever pays out.
the partial reinforcement extinction effect (PREE) explains why this is so sticky. behaviors reinforced on variable schedules are dramatically harder to extinguish than consistently rewarded ones. a single perfect one-shot -- that one time it generated an entire working module from a two-line prompt -- keeps you coming back through dozens of mediocre outputs. you remember the wins. you discount the losses.
consistent reliability makes something a tool. intermittent reliability makes it a slot machine.
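the difference is easy to see in simulation. a minimal sketch (pure python, the function names are mine): a fixed-ratio schedule pays on every Nth attempt; a variable-ratio schedule pays with probability 1/N per attempt. same long-run payout rate, completely different predictability.

```python
import random

def fixed_ratio(n_attempts, ratio=10):
    """Reward on every `ratio`-th attempt: fully predictable."""
    return [(i + 1) % ratio == 0 for i in range(n_attempts)]

def variable_ratio(n_attempts, ratio=10, seed=42):
    """Reward with probability 1/ratio per attempt: same long-run
    rate, but the gap between payouts is unpredictable."""
    rng = random.Random(seed)
    return [rng.random() < 1 / ratio for _ in range(n_attempts)]

def gaps(rewards):
    """Attempts between consecutive payouts."""
    out, since = [], 0
    for rewarded in rewards:
        since += 1
        if rewarded:
            out.append(since)
            since = 0
    return out

fr = gaps(fixed_ratio(10_000))
vr = gaps(variable_ratio(10_000))
print(set(fr))           # {10} -- every gap identical, nothing to chase
print(min(vr), max(vr))  # wide spread -- the one-shot and the long slog
```

the tool-vs-slot-machine line falls out of the last two prints: the fixed schedule has exactly one gap length, the variable schedule has the instant win and the grinding drought living in the same distribution.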
your brain on "almost"
the output that almost works does more damage than the one that completely fails.
clark et al. (2009) published a study in Neuron showing that gambling near-misses -- two cherries and a lemon -- activate the same ventral striatum reward circuitry as actual wins. your brain registers "almost" the same way it registers "yes." reid (1986) called this the psychology of the near miss and argued it was the primary mechanism keeping people at slot machines. winstanley et al. (2011) showed that even rats on a slot machine task exhibit the near-miss effect -- dopamine modulates reward expectancy regardless of whether the reward actually arrives.
now think about AI-generated code that almost compiles. has one subtle bug. gets 90% of the architecture right but misses an edge case. "almost" keeps you in the chair. outright failure gives you permission to walk away, but a near-miss whispers that the next attempt will land. so you tweak the prompt and try again.
this connects to something deeper about dopamine. schultz, dayan and montague (1997) showed that dopamine neurons fire on reward prediction, well before reward delivery. the spike happens when you anticipate a reward, and it's already declining by the time you actually receive it. berridge and robinson (1998) took this further and distinguished "wanting" from "liking" -- dopamine mediates the craving, the pull toward the thing, independent of how good the thing actually feels when you get it. the hit comes when you press Enter and watch generation begin. by the time you see the output, dopamine has already peaked and fallen.
the whole cycle runs on anticipation. whatever the output turns out to be is just the comedown.
"if i just played 1% better"
anyone who's played League of Legends ranked knows this feeling. you lose a game by a hair. your team had the lead, threw at Baron, lost. you know if you positioned 1% better in that last teamfight you would've won. so you queue again. and again. and again.
losing close is what keeps you in the chair. winning just resets the counter.
kahneman and tversky (1979) showed that losses feel roughly twice as painful as equivalent gains feel good. combine loss aversion with near-miss and you get a brutal engine: the AI almost got it right, you can see exactly what it missed, so you refine the prompt and try again. the gap between what happened and what could have happened does all the work. the near-successes pull harder than the successes.
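the "roughly twice as painful" has a standard functional form. a sketch using the parameter estimates from tversky and kahneman's 1992 follow-up (α ≈ 0.88 for diminishing sensitivity, λ ≈ 2.25 for loss aversion); the function name is mine:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function. Gains are concave, losses are
    convex, and losses are scaled up by the loss-aversion
    coefficient `lam` (~2.25 in Tversky & Kahneman's 1992 estimates)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# two hours sunk into a dead-end session hurts ~2.25x more than
# two hours of fresh progress feels good
print(prospect_value(2))    # subjective value of gaining 2 hours
print(prospect_value(-2))   # subjective pain of losing 2 hours
```

the asymmetry is the engine: walking away from an almost-working session registers as a scaled-up loss, so the "rational" comparison between continuing and restarting never happens on a level field.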
there's a second layer here that makes it worse. ellen langer showed in 1975 that personal involvement in a task creates an illusion of control, even when the outcome is pure chance. langer and roth (1975) ran a coin-flip prediction study: early wins in a random sequence made people believe the task was skill-based, and 40% of subjects thought their prediction accuracy would improve with practice.
this maps perfectly to vibe coding. you choose the prompt. you know the domain. you're actively involved. every skill cue langer identified -- choice, familiarity, involvement, competition -- is present. so when the AI fails, you don't think "this is stochastic." you think "i just need to give it more context." "just better architecture." "just one more skill file." structurally identical to the sports bettor who believes that with enough research they can make "informed" bets and beat the house.
one more scroll, one more prompt
after 2 hours iterating on AI-generated code, you've invested time, mental energy, and context. to abandon means admitting those hours were wasted. arkes and blumer (1985) called this the sunk cost fallacy -- the tendency to continue an endeavor once an investment has been made. kahneman and tversky's loss aversion compounds it: the pain of "losing" 2 hours of iteration feels twice as bad as the equivalent gain of starting fresh with a clean approach.
i've been here. in the gen-ai post i described being "left with the consequences of my own haste" -- projects laid out wrong, code verbose and over-fitted, the AI filling its own context window as if it gets a prize for writing large files. that was the sunk cost in action. by the time i realized the output wasn't what i wanted, i'd already invested enough that rewriting felt like losing.
there's a parallel to doom scrolling that i can't unsee. sharma et al. (2022) studied the mechanics of compulsive news feed consumption and identified the core loop: no natural stopping point, variable quality, persistent hope the next item will be the good one. aza raskin, the designer who invented infinite scroll, called it "one of the first products designed to not simply help a user, but to deliberately keep them online as long as possible."
TikTok has no "page 2." your terminal has no "maximum attempts." the friction to stop is higher than the friction to continue. closing the terminal means deciding you're done. hitting Enter means just seeing what happens. one of these requires willpower. the other requires a keystroke.
flow or compulsion?
vibe coding feels like flow. clear goals, immediate feedback, time disappears. csikszentmihalyi (1990) described flow as the state where challenge is matched to skill and you lose yourself in the activity. but flow requires that the challenge actually scales with your skill. in vibe coding, the "skill" is prompt engineering, and the challenge is stochastic. the relationship between your input quality and the output quality is noisy at best. what feels like productive flow is closer to compulsive hyperfocus.
i described exactly this in devpad #7 -- alt-tabbing between 3 AI-driven projects, watching them cook, feeling like i was in the zone. but i was managing three terminals, three levers, waiting to see which one paid out next. that's closer to a trading floor than a programming session.
emerging research is starting to catch up with what the vibe-coding community already knows intuitively. yankouskaya et al. (2025) published "Can ChatGPT be addictive?" in Human-Centric Intelligent Systems, examining the behavioral patterns of generative AI use through the lens of addiction psychology. kooli et al. (2025) went further in the Asian Journal of Psychiatry, proposing "Generative AI Addiction Syndrome" as a new behavioral disorder category with diagnostic criteria.
this isn't metaphorical anymore. none of this research targets coding tools specifically yet, but the vibe-coding community -- developers who spend hours in prompt-iterate loops with terminal agents -- is the canary in the coal mine.
the antidote is structure
i'm not going to pretend i've solved this. but i've noticed something: the sessions where i feel worst afterward are the unconstrained ones. the ones where i just kept prompting, kept iterating, kept chasing the one-shot. the sessions that feel productive are the ones with defined stopping points.
that's what my AI workflow is really about. the constraints system -- phase-based verification, atomic commits, mandatory type-checking after each phase, multi-agent orchestration that forces planning before coding -- isn't just a set of engineering best practices. it's a set of stopping points. you decide when to stop before you start, and the structure holds you to it.
mandatory verification after each phase means the AI can't just keep generating. it has to prove its output compiles, passes tests, and integrates with the existing code. if it fails, you don't "try one more prompt" -- you evaluate whether the approach is working at all. structure forces evaluation rather than more generation.
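none of this machinery needs to be fancy. a hypothetical sketch of a phase gate with a hard attempt budget -- every name here is mine, not a real tool, and the `checks` list stands in for whatever verification your project actually runs (type-checker, test suite, linter):

```python
import subprocess
import sys

def phase_gate(checks):
    """Run each verification command; any nonzero exit fails the phase."""
    return all(subprocess.run(cmd).returncode == 0 for cmd in checks)

def run_phase(generate, checks, max_attempts=3):
    """Regenerate at most `max_attempts` times. Exhausting the budget
    is a signal to re-evaluate the approach, not to prompt harder."""
    for attempt in range(1, max_attempts + 1):
        generate()  # one agent iteration
        if phase_gate(checks):
            return attempt
    raise RuntimeError("attempt budget spent: "
                       "the approach, not the prompt, is the problem")

# demo with a stand-in check (a no-op interpreter call that exits 0)
ok = [[sys.executable, "-c", "raise SystemExit(0)"]]
print(run_phase(lambda: None, ok))  # passes on attempt 1
```

the point of the exception is the friction: "one more try" past the budget now costs a deliberate decision to raise the cap, not a keystroke.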
this won't fix the underlying psychology. the VR schedule is still there, the near-misses still light up your reward circuitry, the sunk cost still whispers "just one more try." but friction in the right places -- defined phases, mandatory verification, commits that force you to look at what actually changed -- at least gives you a chance to notice when the tool has become the task.
let's see where this goes.