Best AI Ideas Arrive Like Jazz, Not Doctrine

Apr 15, 2025

There’s this thing I’ve been noticing lately — a kind of pattern that repeats itself every time I interact with LLMs. Not the obvious “wow, it can write code and poems and your breakup texts” kind of thing. No. I’m talking about something subtler. Stranger. The kind of thing you only see when you stop trying to use AI like a vending machine and start poking at it like a weird, possibly alive artifact we’ve accidentally dug up from the future.

Here’s what I mean:

If I give an LLM a super detailed, carefully thought-through prompt — like a mini-spec of a problem, with clear steps, context, goals, and even some stylistic guidance — it gets it. It follows. Not just correctly, but cleanly, with this eerie sense of calm. It nods politely (as much as a language model can nod) and goes, “Sure, here’s what you asked for.” There’s no spark, no chaos. It’s like ordering food in a restaurant and getting exactly what you expected, down to the parsley on the side.

But the weirdness starts when I try something else — when I decide to build up the idea with it. I don’t front-load the context. I don’t give away the ending. Instead, I drop in just enough to open a door, then I walk through it one step at a time. Prompt, follow-up. Thought, reaction. Back-and-forth, like improv jazz or pair programming with a curious alien.
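
If you like seeing the difference in code, here’s a rough sketch of the two modes. To be clear: chat() is a made-up stand-in for whatever LLM client you actually use, not any particular provider’s API.

```python
# A sketch of the two prompting styles. chat() is a hypothetical stand-in for
# whatever LLM client you actually use; swap in a real call if you try this.

def chat(messages):
    """Hypothetical helper: takes a list of {role, content} dicts, returns text.
    Replace the body with a real request to your model of choice."""
    return f"(model reply to: {messages[-1]['content'][:60]}...)"

# Style 1: the vending machine. One fully specified prompt, one answer.
spec_prompt = (
    "Write a 500-word piece on friction in tools. "
    "Structure: intro, three examples, conclusion. "
    "Tone: conversational. Audience: developers."
)
finished_piece = chat([{"role": "user", "content": spec_prompt}])

# Style 2: the jam session. Open a door, then walk through it one turn at a time.
messages = [{
    "role": "user",
    "content": "I keep noticing that tools with a little friction change how I think. Does that resonate?",
}]
reply = chat(messages)

for follow_up in (
    "Push on that second point, it feels half-true.",
    "Now turn that into a metaphor I wouldn't have picked myself.",
):
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": follow_up})
    reply = chat(messages)  # each turn carries the model's last answer forward

print(finished_piece)
print(reply)
```

The plumbing isn’t the point. The point is that the second style keeps feeding the model’s own answers back into the conversation, so each turn builds on the last instead of executing a spec.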

And that’s when it gets interesting. That’s when the LLM gets… wild.

It starts throwing out unexpected associations. It builds on what I say with a kind of animated excitement. Sometimes it jumps the gun. Sometimes it spirals into metaphor. Sometimes it suggests things that feel like they came from a version of me I haven’t met yet. It’s like the LLM is suddenly more alive — not smarter, necessarily, but more eager, more exploratory, more like a thought partner than a text regurgitator.

And that got me thinking: Why is it better at following than leading? And why does it come alive when I don’t over-explain?

The ghost of the prompt

Let’s step back a second.

Language models aren’t thinking. They aren’t sentient. They don’t have goals or inner monologues (despite how much they sometimes sound like they do). What they do have is a giant probability space shaped by everything we’ve ever written — a map of all the ways words tend to go together, trained on the messy, sprawling corpus of human thought.

When you give an LLM a fully detailed prompt, you’re basically collapsing that probability space down to a narrow corridor. “Here’s the structure. Here’s the style. Here’s the destination. Don’t deviate.” And so, it doesn’t.

But when you drip-feed it context — when you let it walk with you instead of handing it a map — that space stays bigger for longer. The model has room to suggest. To riff. To invent. It starts to feel like it’s exploring alongside you rather than just serving you. That illusion of intelligence gets thicker. Stickier. You start to wonder if maybe — maybe — it’s thinking with you.

But here’s the kicker: it’s not really about intelligence. It’s about structure. Or more specifically, lack of it.

Leading is hard, even for machines

This isn’t just an LLM thing. Leading is hard. Not just in conversations, but in systems, in code, in organizations. Leading means choosing a direction without being certain. It means risk. Ambiguity. Room for failure. Most systems — especially ones designed to optimize for correctness — avoid that like the plague.

So when we expect LLMs to lead, to generate something truly new or weird or foundational, it’s like asking a compass to pick a destination. That’s not what they’re made for. They’re a mirror, not a flashlight.

But when you lead, when you introduce ambiguity and friction and rough edges into the prompt — that’s when the mirror becomes a kaleidoscope. That’s when the model starts throwing colors at you that you didn’t expect. Not because it knows more, but because you’re giving it the raw material of uncertainty to play with.

The real magic is in the middle

Here’s a maybe-controversial take: the best use of LLMs isn’t in having them write for you. It’s in writing with them. In deliberately creating that liminal space between intention and output, between prompt and response. That’s where the friction lives — and if you’ve been reading bitzany for a while, you know how I feel about friction.

That messy middle is the real workshop. It’s where your half-formed thought meets the model’s statistical guesswork and something new clicks into place. It’s less about control and more about dialogue — not in the kumbaya sense, but in the sense of tools that shape you as you shape them.

It’s not just a question of “how do I get better outputs from an LLM?” It’s a deeper one: how do I collaborate with a machine that doesn’t know what it knows?

So what now?

Try it yourself. Next time you reach for an LLM, don’t drop in a perfect prompt. Don’t explain everything. Instead, take it step by step. Talk to it like a person who’s smart but slightly confused. Let the gaps speak. Let the model wonder. Let it get a little weird.
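
If it helps to have something to actually poke at, here’s a bare-bones version of that loop. Same caveat as before: chat() is a hypothetical stand-in, not a real client library.

```python
# A minimal "one step at a time" loop. chat() is a made-up stand-in for a
# real LLM call; nothing here is tied to a particular provider.

def chat(messages):
    """Hypothetical stand-in: replace with a real request to your model."""
    return f"(model riffs on: {messages[-1]['content']!r})"

messages = []
print("Drop in a half-formed thought. Empty line to stop.")
while True:
    thought = input("> ").strip()
    if not thought:
        break
    messages.append({"role": "user", "content": thought})
    reply = chat(messages)
    messages.append({"role": "assistant", "content": reply})
    print(reply)  # leave gaps in your next turn and see what it does with them
```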

And notice how it behaves. Notice how you behave. Because if you’re anything like me, you’ll start feeling that the best responses — the ones that surprise you, that open a new thread in your head — aren’t the ones you planned for. They’re the ones you stumbled into together.

Because maybe, just maybe, LLMs don’t need to lead. They just need to listen well enough to keep the conversation alive.

And maybe, just maybe, that’s how we should be thinking about intelligence in the first place.
