The Power of “I Don’t Know”
My best prompts for Claude Code aren’t the precise ones — they’re the honest ones. “I don’t know the best way to do this” beats any Stack Overflow snippet I could paste in. It tells the agent the problem is open. That exploration is welcome. That I’m looking for a partner, not a typist.
You don’t have to know the answer
You bring all your old habits with you. Google the answer, find something plausible, tell the AI exactly what to do. “Use Prisma with a PostgreSQL connection string. Set up migrations in a prisma/ directory. Add a seed script.”
This works. Code gets written. Migrations run. But you’ve collapsed the solution space to a single point before the agent even looked at your project. Maybe your data model is simple enough that SQLite saves you a database server. Maybe Drizzle fits your existing patterns better. Maybe the real answer is a JSON file and you don’t need a database at all.
You’ll never find out. You never asked — you prescribed.
I did this for weeks. Every prompt was a spec. I’d burn twenty minutes researching the right library, the right pattern, the right config — then hand all of it to an agent whose entire strength is doing exactly that kind of research. Faster. With more context. Without getting sidetracked by Reddit arguments. What I eventually realized: uncertainty isn’t a weakness in your prompts. It’s an invitation for the agent to do its best work. And each time, you learn something that makes the next session sharper.
What happens when you admit it
The shift happened during a real project. I needed persistent storage for a local dashboard — a few thousand records, time-series data, queried by date range, single machine. My instinct was PostgreSQL because that’s what I know. But I paused and typed something different:
I need persistent storage for this dashboard. A few thousand records,
mostly time-series data, queried by date range. It runs on a single
machine. I'm not sure if I should use SQLite, Postgres, or something
else — what would you recommend given the constraints?

What came back wasn’t just an answer. It was an analysis. Claude looked at the project structure, noted it was a single-user local app, pointed out that PostgreSQL would require a running server process, and recommended SQLite with WAL mode. Trade-offs included: SQLite is simpler to deploy but doesn’t handle concurrent writes well. For a single-machine dashboard, that limitation doesn’t matter.
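That recommendation amounts to a line or two of code. Here’s a minimal sketch of enabling WAL mode, using Python’s built-in sqlite3 for brevity; the filename is my invention, not from the real project:

```python
import sqlite3

# Hypothetical dashboard store; "dashboard.db" is an assumed filename.
conn = sqlite3.connect("dashboard.db")

# WAL mode keeps readers unblocked while a single writer appends,
# which suits a single-machine dashboard. The pragma returns the
# journal mode actually in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal" for a file-backed database
conn.close()
```

One nice property: for file-backed databases the setting is persistent, so you run the pragma once and every later connection opens in WAL mode.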
I wouldn’t have gotten any of that if I’d said “set up Postgres.” The agent would have dutifully configured a connection pool and moved on. By saying “I’m not sure,” I activated a completely different mode — research, comparison, recommendation. The uncertainty in the prompt became a feature of the output.
When you specify a solution, the agent optimizes for implementation. When you describe a problem, the agent optimizes for fit. These produce fundamentally different outputs — and the second one is almost always more useful when you’re genuinely uncertain.
Describe the destination, not the route
The pattern is simple: describe the problem space, not the solution space. What you need, not how to build it. Sounds like generic advice, but with an AI agent it has specific mechanical effects.
A solution-space prompt:
Add a cron job that runs every 6 hours to fetch new data from
the API and store it in the database.

A problem-space prompt:
The dashboard needs fresh data, but the API has rate limits.
I need some kind of periodic sync that respects those limits
and doesn't hammer the endpoint. The data doesn't need to be
real-time — a few hours stale is fine.
The second prompt gives Claude room to think. Maybe a cron job is right. Maybe it’s a systemd timer. Maybe it’s a setInterval in a long-running process. Maybe the API supports webhooks and you don’t need polling at all. The agent can check the API docs, look at existing infrastructure, and propose something that actually fits — instead of implementing the first thing you thought of.
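If an in-process timer turns out to be the right answer, the shape is small. A sketch under assumptions of my own (the hourly limit and fetch_page are placeholders, not the real API):

```python
import time

# One option the agent might propose: a long-running process with an
# in-process timer instead of cron. RATE_LIMIT_PER_HOUR and fetch_page
# are illustrative assumptions, not details of any real API.
RATE_LIMIT_PER_HOUR = 100

def min_interval_seconds(requests_per_sync: int,
                         limit_per_hour: int = RATE_LIMIT_PER_HOUR) -> float:
    """Smallest gap between syncs that stays under the hourly limit."""
    return 3600 * requests_per_sync / limit_per_hour

def run_sync_loop(fetch_page, requests_per_sync: int) -> None:
    while True:
        for _ in range(requests_per_sync):
            fetch_page()
        # "A few hours stale is fine," so wait at least two hours,
        # or longer if the rate limit demands it.
        time.sleep(max(min_interval_seconds(requests_per_sync), 2 * 3600))
```

The point isn’t the loop; it’s that the constraint (“respect the rate limit, staleness is fine”) becomes arithmetic the moment the problem, not the mechanism, is stated.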
I think of it as the difference between giving directions and giving a destination. “Turn left, go three blocks, turn right” gets you there if the route is correct. “I need to get to the library” lets the driver pick the best route — traffic, construction, roads you didn’t know existed.
The agent is a very good driver. Let it drive.
Why I stopped Googling first
There’s a specific flavor of over-specifying I see constantly, including in my own early prompts: the Stack Overflow paste. Hit a problem. Search for it. Find a solution on SO or a blog post. Paste it in as context.
I'm getting this error with React hydration. I found this solution
on Stack Overflow:
"You need to wrap the component in a Suspense boundary and use
dynamic imports with ssr: false"
Can you implement this?

The problem isn’t that the SO answer is wrong — it might be right. The problem is you’ve anchored the agent to someone else’s diagnosis of someone else’s problem. Your hydration error might have a completely different root cause. Maybe the component reads from window during server rendering. Maybe there’s a date formatting inconsistency between server and client. A different person’s similar-looking error sends the agent down the wrong path.
Better prompt:
I'm getting a hydration mismatch error on this component. Here's
the error message and the component code. Can you figure out what's
causing it?

Now the agent reads the actual error, looks at the actual code, traces the actual cause. In my experience this almost always gets to the fix faster — because it starts from the right diagnosis.
I’m not saying external research is useless. Sometimes you find the exact answer and there’s nothing to explore. But when you’re uncertain enough to be searching Stack Overflow, that uncertainty is information. Pass the uncertainty to the agent, not someone else’s certainty.
Pasting a Stack Overflow answer into a prompt is like going to the doctor and saying “I have strep throat, prescribe amoxicillin.” You might be right. But you’re paying for a diagnosis and skipping it.
When to specify, when to explore
This isn’t a blanket “never specify” argument. Sometimes you know the answer. If I’ve used a library a hundred times and know exactly what I need, telling Claude to use it isn’t over-specifying — it’s being efficient. The skill is knowing when you’re genuinely certain versus when you’re defaulting to the familiar.
Two modes:
Specify when you have high confidence and the decision is load-bearing. You know the stack. You know the conventions. You’ve evaluated alternatives. A precise prompt saves time and keeps the agent from exploring territory you’ve already mapped.
Explore when you have genuine uncertainty or when the decision has long-term consequences you don’t fully understand. Database choice. Auth architecture. State management. Deployment strategy. These are the places where “I don’t know” is actively better than guessing.
The trap is the middle ground — things you sorta know. You’ve used a library once, two years ago. Remember the general shape but not the details. The instinct is to specify because specifying feels productive. You’re giving clear instructions, being a responsible delegator. But you’re actually constraining the agent to your half-remembered understanding of an API that may have changed three major versions ago.
When I catch myself in that middle ground, I just say so:
I think we might want to use Zod for runtime validation here,
but I haven't used it in a while and I'm not sure if it's still
the best option or if there's something lighter-weight. What
would you recommend?

This gives the agent a starting point and permission to deviate. Different from “use Zod.” Different from “I need validation.” It says: here’s my intuition, but check my work.
Every session makes you better at not knowing
Nobody tells you this part: it compounds. Every time you let Claude explore a solution space, you learn from the exploration. Not just the answer — the reasoning. You see trade-offs laid out. You pick up why one approach fits better than another. You absorb terminology and mental models that you carry into future sessions.
Six months ago I didn’t know the difference between WAL mode and rollback journaling in SQLite. Didn’t know that better-sqlite3 is synchronous and that’s actually an advantage for single-process apps. Didn’t know FTS5 exists and is built into SQLite. I learned all of this because I said “I need search” instead of “add Elasticsearch.”
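For a sense of what FTS5 buys you: it lives inside the SQLite library itself, so full-text search is one virtual table away, no separate service. A sketch in Python’s stdlib sqlite3 with made-up rows (assumes your SQLite build includes FTS5, which standard distributions do):

```python
import sqlite3

# In-memory database with an FTS5 virtual table; the table name and
# rows are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE notes USING fts5(body)")
db.executemany("INSERT INTO notes(body) VALUES (?)",
               [("sqlite ships built-in full text search",),
                ("postgres needs a running server",)])

# MATCH queries the full-text index directly.
rows = db.execute(
    "SELECT body FROM notes WHERE notes MATCH 'search'").fetchall()
print(rows)  # only the sqlite row matches
```

That’s the whole setup: no index server, no cluster, no deployment step. For a few thousand records on one machine, it’s hard to justify anything heavier.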
Now when I start a new project and need local storage, I have a much richer vocabulary for what I need. I can say “SQLite with FTS5 and WAL mode” when that’s clearly right, or “I’m not sure SQLite is enough here, the write concurrency might be a problem” when it’s not. Both are better prompts than I could have written six months ago. The uncertainty bootstrapped the knowledge.
It creates a flywheel. Uncertainty produces exploration. Exploration produces knowledge. Knowledge produces better uncertainty — you learn where the edges of your understanding are, so your “I don’t know” prompts get sharper. You stop saying “I need a database” and start saying “I need something that handles concurrent writes from multiple processes but I don’t want to run a separate server.” Constraints are precise even when the solution is open.
The quality of your uncertainty improves over time. Early “I don’t know” prompts are vague and broad. Later ones are precisely scoped — you know exactly what you don’t know. That’s the real skill: not eliminating uncertainty, but refining it.
Words you didn’t have yet
There’s a deeper version of this I’ve only recently started noticing. Sometimes you can’t specify a solution not because you’re unfamiliar with the options, but because you don’t have the vocabulary for what you want.
I had a project where the UI felt wrong but I couldn’t say why. Layout was fine. Colors were fine. Something about the interaction was off. Couldn’t Google it because I didn’t have words for the problem. So I just described the feeling:
The dashboard works but it feels heavy. Every interaction feels like
a full page commitment. I want it to feel more like browsing — light
touches, quick peeks, easy to back out. I don't know what pattern
that maps to.

Claude came back with progressive disclosure. Hover previews. Expandable rows instead of page navigation. Slide-over panels instead of modal dialogs. It had the vocabulary I was missing and could translate my vague feeling into concrete interaction patterns.
This is genuinely hard to get from a search engine. Google requires you to know what to search for. Claude requires you to know what you’re feeling. The second is almost always easier.
Same thing with architecture. “The code is getting tangled” becomes separation of concerns. “Deployment is scary” becomes rollback strategies and health checks. “The tests are slow and I’m not sure they’re testing the right things” becomes testing pyramids and what belongs at each level.
Every time, the starting point was an honest description of an experience. Not a technical spec. And every time, the output was better than what I would have gotten from prescribing a solution — because the agent could see the real problem instead of the fix I assumed would work.
The hardest part isn’t saying “I don’t know.” It’s resisting the urge to pretend you do. Every developer has the reflex — reach for a familiar tool, a known pattern, a Stack Overflow answer that looks close enough. That reflex served us well when the alternative was staring at a blank editor. But when the alternative is an agent that can research, compare, and recommend, the reflex becomes a constraint.
Once I started letting go of that — just describing the problem — the responses got dramatically better.