How to write survey questions

Question types, wording, ordering, and the biases that quietly ruin most surveys.

9 min read Updated April 29, 2026

Most surveys fail before the first response comes in. The questions are leading, the scales are inconsistent, and the order primes the answer. Writing survey questions well is a craft with a small number of rules — once you know them, the data starts to mean something.

Pick the right question type

Every question should match what you actually want to learn. Mixing types arbitrarily makes the data harder to analyze and the survey harder to finish. The four types that cover almost everything:

  • Closed multiple choice — fastest to answer, easiest to analyze. Use when you can list every reasonable answer.
  • Rating scales (Likert, 0–10, star) — good for attitude, agreement, satisfaction. Keep the scale consistent across the survey.
  • Open text — use sparingly. One or two well-placed open questions surface things you'd never have thought to ask. Ten of them tank your completion rate.
  • Ranking and matrix — useful for prioritization, but mentally expensive. Cap matrix rows at five and never stack two matrices in a row.

If a question could be either a 5-point scale or open text, default to the scale. You can always follow up with a conditional open question for low scores.
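As a sketch, the "scale first, conditional open follow-up for low scores" pattern is just a branch on the score. The question IDs and the threshold below are illustrative, not from any particular survey tool:

```python
def next_question(score: int, low_threshold: int = 3) -> str:
    """Route respondents who rate at or below the threshold to an open
    'what went wrong?' follow-up; everyone else skips straight ahead.
    (IDs and threshold are hypothetical examples.)"""
    if score <= low_threshold:
        return "open_followup_low_score"
    return "next_section"
```

Most survey platforms express the same idea as "display logic" or "skip logic" on the follow-up question rather than in code.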

Write wording that does not bias the answer

Wording is where most surveys quietly break. The patterns to watch:

  • Leading questions — "How much do you love our new dashboard?" assumes the answer. Ask "How would you rate the new dashboard?" instead.
  • Double-barreled questions — "How fast and accurate was the support team?" forces one answer to two things. Split it.
  • Loaded language — "Do you support fair pricing?" is loaded. Strip the adjectives.
  • Jargon and acronyms — write at a reading level your least-technical respondent shares. If half your audience would have to Google a word, replace it.
  • Negatives and double negatives — "Do you disagree that the product is not useful?" is a coin flip. Rewrite in the positive.

Read every question out loud. If you stumble, the respondent will too. For a fuller checklist of patterns to avoid, see survey design mistakes to avoid.

Design the scales

Scales are the engine of quantitative survey analysis. The decisions that matter:

  • Number of points — five for general use, seven when you need finer attitude resolution. Even-numbered scales force a side; odd-numbered scales allow neutral.
  • Anchor labels — label every point if you can. Endpoint-only labels invite respondents to invent their own meaning for the middle.
  • Direction — keep the positive end on the same side every time. Flipping direction to "catch lazy respondents" mostly catches confused ones.
  • Standardize — pick one satisfaction scale, one agreement scale, one frequency scale, and reuse them across the whole instrument.

The Likert scale design guide goes deeper on five-versus-seven, neutral midpoints, and anchor wording.

Order the questions

Order changes answers. Specific questions before broad ones bias the broad answer; sensitive questions early cause drop-off. The reliable shape:

  1. Open with a low-cost, easy question that confirms the respondent is in the right place.
  2. Move to the substantive questions while engagement is highest.
  3. Group related questions together — jumping between topics is exhausting.
  4. Place sensitive or demographic questions last, when sunk cost keeps respondents committed.
  5. End with one open "anything else?" — the highest-signal question in many surveys.

If you're sending the same survey repeatedly, randomize the order of items inside a matrix to neutralize fatigue effects. Don't randomize sections; you'll break the narrative.
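One way to implement that per-respondent randomization, assuming you control the rendering: seed a random generator with a stable respondent key, so each person sees a random but repeatable row order across waves while the canonical order stays intact for analysis. A minimal sketch:

```python
import random

def randomized_matrix_rows(rows: list[str], respondent_id: str) -> list[str]:
    """Shuffle matrix rows deterministically per respondent: the same
    respondent_id always yields the same order, different respondents
    get different orders, and the input list is never mutated."""
    rng = random.Random(respondent_id)  # stable per-respondent seed
    shuffled = rows[:]                  # copy so the canonical order survives
    rng.shuffle(shuffled)
    return shuffled
```

Store answers keyed by row ID, not by position, so the shuffle never touches your analysis.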

Length and screening

The single best predictor of completion is length. Aim for under five minutes for consumer surveys, under ten for B2B. Screen out unqualified respondents in the first two questions so they don't waste your incentive budget or skew your data. Use branching to skip irrelevant sections rather than asking everyone everything.
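Screening logic is usually a couple of boolean checks on the first answers. A hedged sketch, with made-up criteria (recent product use, not a competitor employee) standing in for whatever qualifies your audience:

```python
def qualifies(answers: dict) -> bool:
    """Return True if the respondent passes the first-two-question screen.
    Both criteria here are illustrative placeholders: swap in whatever
    actually defines your target audience."""
    used_recently = answers.get("used_in_last_90_days") == "yes"
    works_for_competitor = answers.get("employer_type") == "competitor"
    return used_recently and not works_for_competitor
```

Disqualified respondents should be thanked and exited immediately, not routed through the full instrument.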

If response rates are a problem, the fix usually isn't a better incentive — it's a shorter survey, a clearer subject line, and a better send time. How to increase survey response rate covers the levers.

Survey question checklist: one idea per question, neutral wording, consistent scales, logical order, screened audience, length you would tolerate yourself. If a question fails any of those, fix it before you send.

Frequently asked

How many questions is too many?
For consumer surveys, anything over fifteen real questions starts losing meaningful response volume. For B2B research surveys with motivated respondents, you can run thirty if the questions are well designed. The better question is how long the survey takes, not how many items it has — aim for under five minutes for general audiences.
Should I include a "not applicable" option?
Yes when the question genuinely might not apply, no when you want to force engagement on something every respondent has experienced. Overusing N/A turns into a pressure-release valve respondents click to skip thinking. If more than ten percent of answers come back N/A, the question is probably aimed at the wrong audience.
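That ten-percent rule of thumb is trivial to check once responses are in. A minimal sketch, assuming answers arrive as a flat list per question:

```python
def na_rate(responses: list[str], na_value: str = "N/A") -> float:
    """Fraction of answers to one question that are the N/A option."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if r == na_value) / len(responses)
```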
Open text or multiple choice for "why"?
Use multiple choice when you already know the likely answers — they cluster cleanly and analyze in seconds. Use open text when you do not, when the upside of a surprise insight outweighs the analysis cost. A common pattern is to ask multiple choice with an "other (please specify)" escape hatch.
Is it okay to require every question?
Required questions raise data completeness and lower completion rate. Require the questions you cannot analyze without; make everything else optional. Demographic and sensitive questions in particular should be optional unless your sample design requires them.
How do I pilot test a survey?
Send the draft to five to ten people who match your target audience and watch them complete it on a video call. You will catch unclear wording, broken logic, and length issues that no amount of internal review will surface. Twenty minutes of pilot testing saves a survey wave.