Clarifying questions checklist (AI development): what to ask before you let an LLM build
Tags: AI, requirements, prompting

4 min read

A practical clarifying-questions checklist for AI-assisted development. Turn vague requests into implementable specs by forcing decision points: scope, constraints, failure behavior, acceptance tests, and rollout/ops.

Table of Contents

What clarifying questions should you ask before you let an LLM implement a feature?

Conclusion

What clarifying questions should you ask before you let an LLM implement a feature?

Ask questions that force binding decisions into the spec, not “nice-to-have context”:

  • Who is the user and what are the permissions?
  • What is explicitly out of scope?
  • What is the acceptance test (how do we know it’s done)?
  • What is the failure behavior (timeouts, retries, partial success)?
  • What are the non-negotiable constraints (security, latency, cost, ops)?

If you only ask “what do you want?”, the model will guess the rest.
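
The five binding decisions above can be captured as a structured spec that refuses to be “done” while any answer is missing. This is a minimal sketch; the `FeatureSpec` class and its field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field


@dataclass
class FeatureSpec:
    """Binding decisions to pin down before delegating implementation.
    Field names here are illustrative, not a standard schema."""
    user_and_permissions: str = ""
    out_of_scope: list = field(default_factory=list)
    acceptance_test: str = ""
    failure_behavior: str = ""
    constraints: list = field(default_factory=list)

    def unanswered(self) -> list:
        """Return the clarifying questions still missing a binding answer."""
        gaps = []
        if not self.user_and_permissions:
            gaps.append("Who is the user and what are the permissions?")
        if not self.out_of_scope:
            gaps.append("What is explicitly out of scope?")
        if not self.acceptance_test:
            gaps.append("What is the acceptance test?")
        if not self.failure_behavior:
            gaps.append("What is the failure behavior?")
        if not self.constraints:
            gaps.append("What are the non-negotiable constraints?")
        return gaps
```

Anything returned by `unanswered()` is exactly what the model would otherwise guess.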

Implementation examples may be available on DevSnips.

Explanation

AI-assisted implementation is fast, but it amplifies ambiguity. The model will “helpfully” fill gaps with defaults that are often wrong:

  • permissive auth (because the UI hides buttons)
  • missing rate limits/timeouts
  • logging sensitive data
  • unclear sources of truth

Clarifying questions are not bureaucracy. They are a risk-control layer that prevents expensive rework and security regressions.
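
The “logging sensitive data” default is a good example of what an explicit constraint buys you. A minimal sketch of a log-sanitizing helper the spec could demand (the key list and `redact` helper are hypothetical, not from any particular library):

```python
# Keys whose values must never appear in logs; extend per your spec.
SENSITIVE_KEYS = {"password", "token", "api_key", "secret", "ssn"}


def redact(payload: dict) -> dict:
    """Return a copy of a payload that is safe to log:
    values under sensitive keys are replaced, everything else passes through."""
    return {
        key: ("[REDACTED]" if key.lower() in SENSITIVE_KEYS else value)
        for key, value in payload.items()
    }
```

Without a constraint like “never log credentials”, generated code tends to log the whole payload verbatim.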

Practical Guide

Use this in two passes:

  • Pass 1 (basic): lock intent, scope, and success criteria
  • Pass 2 (practitioner): lock constraints, ops, and failure behavior

Pass 1: intent + scope (the minimum to start)

  1. Who is the user?
  • Persona(s): internal admin, paid customer, anonymous user?
  • Permission model: roles, orgs, projects?
  2. What is the goal in one sentence?
  • “User can do X in order to achieve Y.”
  3. What is out of scope?
  • List 3–5 explicit “we are not doing this” items.
  4. What are the success criteria?
  • Metrics, acceptance criteria, or a concrete test.
  5. What examples do we have?
  • Example inputs/outputs, screenshots, sample payloads.
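
One lightweight way to enforce Pass 1 is to bake the questions into the prompt itself, so the model must surface gaps instead of guessing. A sketch, assuming a plain-text prompt workflow (the question wording and `pass_1_prompt` helper are this article's checklist, not a standard API):

```python
PASS_1_QUESTIONS = [
    "Who is the user (persona and permission model)?",
    "What is the goal in one sentence ('User can do X in order to achieve Y')?",
    "What is out of scope (3-5 explicit exclusions)?",
    "What are the success criteria (metric, acceptance criteria, or concrete test)?",
    "What examples do we have (inputs/outputs, screenshots, sample payloads)?",
]


def pass_1_prompt(feature_request: str) -> str:
    """Wrap a vague feature request with the Pass 1 questions so the model
    must answer or ask about each one before writing any code."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(PASS_1_QUESTIONS, 1))
    return (
        "Before implementing, answer each question below. "
        "If you cannot, ask me instead of assuming:\n"
        f"{numbered}\n\n"
        f"Request: {feature_request}"
    )
```

The same pattern extends to Pass 2 by swapping in the constraints/ops questions.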

Pass 2: constraints + ops (what breaks in production)

  1. What are the security boundaries?
  • Authn/authz rules
  • PII/secrets handling (what must never be logged)
  • Allowed integrations/domains
  2. What is the latency/cost budget?
  • p95 latency target
  • token budget per request (if using an LLM)
  3. What is the failure behavior?
  • timeouts
  • retries/backoff
  • idempotency
  • partial success vs hard fail
  4. What is the rollout plan?
  • feature flag?
  • staged rollout?
  • rollback trigger?
  5. What must be observable?
  • required logs
  • metrics
  • alerts
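
“Failure behavior” only becomes real when the spec pins numbers to it. A minimal retry-with-backoff sketch; the attempt count and delays are placeholders the spec should fix explicitly, and `call_with_retries` is a hypothetical helper, not a library function:

```python
import time


def call_with_retries(fn, *, attempts: int = 3, base_delay_s: float = 0.1):
    """Retry a flaky call with exponential backoff, then hard-fail.

    The spec must decide: how many attempts, how long to back off,
    and whether the operation is idempotent (safe to retry at all).
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # retry budget spent: hard fail, no silent partial success
            time.sleep(base_delay_s * (2 ** attempt))
```

If the answers to “retries? backoff? idempotency?” are missing, the model picks these numbers for you.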

Decision rule: when is the spec “good enough” to delegate to AI?

Delegate when you can answer these three:

  • What is the acceptance test?
  • What is the failure behavior?
  • What is the security boundary?

If the answer to any of these is “we’ll figure it out later”, do not delegate full implementation.
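
The decision rule fits in a few lines. A sketch, assuming the spec is kept as a plain dict with illustrative key names:

```python
def ready_to_delegate(spec: dict) -> bool:
    """Delegate full implementation only when all three gating answers
    exist and are non-empty. Key names are illustrative."""
    required = ("acceptance_test", "failure_behavior", "security_boundary")
    return all(spec.get(key) for key in required)
```

A falsy or missing value on any of the three keys means the model would be guessing on exactly the decisions that are most expensive to get wrong.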

Pitfalls

  • No explicit out-of-scope → scope creep becomes “model creep”
  • Deadlines without success criteria → endless iteration
  • Missing non-functional requirements → slow, expensive, or insecure output
  • No failure behavior → production incidents on the first timeout
  • No rollout/rollback → you cannot ship safely

Checklist

  • [ ] User personas are identified (who uses it)
  • [ ] Permission model is stated (roles/org/project scope)
  • [ ] Goal is one sentence tied to user value
  • [ ] Out-of-scope list exists (3–5 items)
  • [ ] Inputs are listed (API/UI/events/batch)
  • [ ] Outputs are listed (DB/response/UI/logs)
  • [ ] Example input/output is provided
  • [ ] Acceptance criteria are concrete and testable
  • [ ] Security constraints are explicit (PII/secrets/logging)
  • [ ] Latency budget is stated (p95/p99)
  • [ ] Cost budget is stated (tokens/external APIs)
  • [ ] Failure behavior is defined (timeouts/retries/idempotency)
  • [ ] Rollout plan exists (flag/staged/rollback)
  • [ ] Observability plan exists (logs/metrics/alerts)

FAQ

Q1. Isn’t this just requirements engineering?

Yes, but compressed. The checklist is a minimal set of decisions that prevents AI-assisted work from drifting into unsafe defaults.

Q2. What if I can’t answer the questions yet?

Then delegate a smaller task: ask the model to propose 2–3 design options with trade-offs, then pick one and only then implement.

Q3. Should I ask all questions every time?

No. Always ask Pass 1. Ask Pass 2 whenever the feature touches auth, payments, PII, external APIs, or anything user-facing in production.

