We Used to Wait. Then Autocomplete Got Good.
I used to read books. Like, actual books. Not summaries, not YouTube explainers playing at 1.75x, not ChatGPT recaps in bullet points. Books that demanded presence — that made you sit with ideas longer than a dopamine ping. And I didn’t even mind. It felt good. Like taking your brain for a long walk with someone smarter than you.
But now?
Now I open a PDF and it feels like a crime against my time. I skim, CTRL+F the keywords, maybe prompt ChatGPT to explain chapter 7 in the voice of a mildly condescending but fast-talking podcast host. I don’t read, I extract. I don’t explore, I optimize.
Somewhere along the way, I got tired of waiting. And I think autocomplete is to blame.
The Slow Internet Wasn’t Just Slower
It’s easy to say the internet used to be slower because bandwidth was worse. Dial-up, spinning GIFs, text-heavy forums — all true. But the slowness wasn’t just technical. It was temporal. We tolerated waiting. We built mental patience into the experience.
You’d write a blog post and hope someone, maybe, would comment in a week. You’d learn Python by failing for two days before figuring out why IndentationError was screaming at you. You’d debug by actually reading the docs.
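For anyone who never earned that particular scar: it’s the error Python throws when a block isn’t indented the way the grammar demands. A minimal reproduction — the snippet and names here are just illustrative, not anything from a real project:

```python
# The classic beginner stumble: a function body that isn't indented.
# Compiling the string (instead of running a file) lets us catch the
# error in-process and look at it.
source = """
def greet():
print("hello")
"""

try:
    compile(source, "<lesson>", "exec")
except IndentationError as err:
    # IndentationError is a subclass of SyntaxError, so .msg is available.
    print(f"IndentationError: {err.msg}")
```

Two days of staring at something like that is slow, yes — but it’s also how the rule about blocks and whitespace actually sticks.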
Today? We prompt. We skim. We paste the error into GPT and pray.
We’re time-maximizers now — not because we’re more productive, but because we’ve been trained to believe everything worth knowing is already somewhere, perfectly compressed, ready to be regurgitated by a model. And once you taste that speed — when autocomplete finishes your thoughts before you even fully form them — it’s hard to go back.
From Tool to Crutch
AI didn’t just speed things up. It shrank our tolerance for not knowing. It’s like mental muscle atrophy. You used to flex your brain to wrestle with a bug. Now you flinch at the idea of not having a neat little answer in under 30 seconds. When did impatience become intelligence?
</replace body>
Here’s a confession: I once nuked my Manjaro system because of this exact kind of urgency.
I couldn’t get a library to install. I asked ChatGPT. It gave me what sounded like a reasonable set of steps. I didn’t double-check. I didn’t question. I just followed — line by line. One of those lines removed GCC. Another messed with my kernel. Hello, kernel panic. Goodbye, computer. For days, I couldn’t even boot. Eventually, I had to Frankenstein my way back using a bootable USB and a lot of late-night regret.
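The bitter footnote is that the pause I skipped costs about ten seconds. Package managers will tell you the blast radius of a removal before you commit — a sketch assuming pacman (Manjaro’s package manager), with gcc standing in for whatever the pasted instructions targeted:

```shell
# Preview what a removal would take with it, without touching the system.
# -R removes a package; -p (--print) lists the targets instead of acting.
pacman -Rp gcc

# Or ask first what already depends on it.
# -Qi prints package info, including the "Required By" field.
pacman -Qi gcc | grep -i 'required by'
```

Either line would have shown me that half my toolchain was about to disappear. Thinking felt slow, so I didn’t run them.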
The scary part isn’t that the AI gave bad instructions. It’s that I trusted it more than I trusted my own instinct. Because thinking felt slow. And slow felt wrong. Like I had failed not because I lacked understanding, but because I hadn’t moved fast enough. It’s like we’ve internalized a mental SLA — a service level agreement for our own cognition.
Thinking ≠ Typing Speed
Here’s the twist: I still consider myself a thinker. I like hard problems. I spend hours on ideas. I write code, build things, wrestle with abstraction.
But thinking — real thinking — isn’t always fast. And AI has subtly rewritten that rule in my head. It’s made it harder for me to sit with ambiguity, to wander through an idea without trying to collapse it into a neat solution. Because I’ve started conflating “not knowing instantly” with “being wrong.”
I’ve noticed it in others, too. Friends who used to write long-form blog posts now churn out AI-assisted LinkedIn thought-bursts. Coders who once took pride in solving deep stack traces now copy-paste entire projects from ChatGPT and feel stuck when things don't compile. The urge to “move fast” has metastasized into a fear of slowness.
We’re optimizing ourselves out of the ability to tinker.
The Tradeoff No One Talks About
Don’t get me wrong — I love AI. I use it constantly. I build with it. I even talk to it when I’m stuck (or bored, or curious, or procrastinating).
But I worry about what I’ve stopped doing because it’s too slow now. Reading actual documentation. Troubleshooting by tracing the system. Reading the source, not the summary.
I used to debug to learn. Now I debug to unblock. And those feel worlds apart.
We used to accept that some things took time — mastery, clarity, trust. Now we microwave our cognition and get cranky if it’s not done in under 30 seconds.
Restlessness as Default
There’s a kind of twitch I’ve developed — a chronic impatience. Like my brain has a page-refresh loop constantly running in the background, looking for quicker ways to “just get to the point.” It shows up in weird places: I can’t read long emails anymore. I skim.
I hate onboarding tutorials. Just give me the CLI.
I abandon academic papers halfway through because I assume I’ve “got the gist.”
That twitch isn’t biological. It’s behavioral. Learned. Reinforced. And AI — for all its utility — is feeding it like a dopamine IV drip.
Reclaiming the Long Way
So what do we do?
I don’t want to be a Luddite yelling at the autocomplete cloud. I like my fancy toys. I want them. I just don’t want them to replace the parts of me that knew how to be patient. That didn’t fear complexity. That could wait.
Maybe it’s about making time for the long way. Reading the whole RFC even if it’s dense. Solving the bug yourself before asking the bot. Thinking with AI, not through it. Letting friction teach you something again.
I’m not against speed. I’m against speed becoming the only metric for value. Sometimes the best thoughts aren’t the fastest ones.
Epilogue for the Impatient
So yes — autocomplete got good. So good it’s warping our sense of self. Of time. Of difficulty. But just because the map got better doesn’t mean the terrain stopped mattering. Sometimes it’s still worth getting lost. Because maybe, just maybe, that’s where the good stuff lives.