
When Dialogue Becomes Questioning

VauDium

Conversations with AI start as commands and end as questions. Not me asking the AI, but the AI's proposals asking me: "Why do you want this?" A reflection on those moments.


When a command-driven conversation flips

When you build a product with AI, the conversation always starts with a command. “Add a sort menu to this screen.” “Fix this error.” “How’s this UX?”

But at some point the direction of the conversation flips. AI hands back something it made, and I find myself asking — not the AI, but myself — "Is this really what I wanted?"

The one giving the commands becomes the one being questioned. I'm no longer the one asking AI anything. AI's output is asking me.

What actually happened today

I was polishing the animation that collapses and expands sort groups. AI suggested, “How about a fade-in animation for the items?” I looked, and it felt off. Each item appearing one by one broke the sense that the list is a single mass.

No, not that way. The whole list has to feel like it’s rising as one.

The moment I said that, I realized: I had never articulated the principle “the list moves as one mass” before. I only knew I disliked the alternative. Seeing the opposite direction AI offered finally put my principle into words.

The same thing happened on another occasion. When AI proposed, "Should we highlight overdue events in red?" and I answered no, I found myself clearly verbalizing that Fecit's calendar is not a space of evaluation but a space of record. Before seeing the suggestion, I had only known this vaguely.

Being questioned is what shapes design

This, I think, is the greatest value of working with AI. Not that AI is fast, not that AI knows a lot, but that AI is a tool that draws out principles I already held yet had never put into language.

This doesn't happen as easily in conversations with people. The other person's thinking gets entangled with mine, and the work of persuading them creeps in. There's no need to persuade AI: its agreement changes nothing. Instead, AI's output reflects my reactions back like a mirror, and as I read those reactions, I come to know what I want.

What's interesting is that these questions deepen over time. Early on, they sit at the level of "what color should this button be?" Later, they shift toward "what is this app trying to give users?" Even if I'm speaking to AI in the same tone, as the conversation accumulates, the dimension of what I ask myself changes.

Commands decrease, questions increase

Early on, when I was giving AI tasks, it was 90% commands, 10% questions. Lately the ratio is slowly flipping. “Is this right?” trails every command, and AI’s reply leads to “why did I think I wanted this?”

There's a saying: good tools aren't the ones that give answers, but the ones that make you ask questions. Building a product, I keep confirming every day that AI is that kind of tool.

At the end of every conversation, I understand myself slightly better. Not as fast as the product improves, but still, a little at a time.