Customer feedback survey templates
Twenty-five questions across six sets you can copy, customize, and ship today.
A good template is not a script — it is a starting point that has already removed the obvious mistakes. The question sets below cover the moments that matter most in a customer lifecycle: onboarding, post-purchase, support, churn, product research, and the relationship pulse. Copy what fits, replace the brackets with your specifics, and keep what you cut out for next time.
Onboarding feedback
Sent twenty-four to seventy-two hours after first meaningful use. Goal: catch friction while the experience is still fresh and the customer has not yet decided to give up.
- "How easy was it to get started with [product]?" (1–7 effort scale)
- "What were you hoping to do first when you signed up?" (open)
- "Did you run into anything confusing during setup?" (open, optional)
- "On a scale of 0–10, how likely are you to keep using [product]?" (forecast question)
- "What is one thing that would make this easier?" (open)
Branch the open questions on the effort score: ask "what was hard?" only of respondents who scored 4 or below. The pattern is closer to a Customer Effort Score with diagnostic follow-up than a generic satisfaction survey — see CSAT vs NPS vs CES for the metric choice.
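The branch itself is a single conditional on the effort score. A minimal sketch, assuming a 1–7 scale where 7 means effortless; the function name and threshold are illustrative, not any particular survey tool's API:

```python
# Onboarding branch: low effort scores get the diagnostic question,
# everyone else gets the improvement question.
# Assumes 1 = very hard, 7 = very easy; the threshold of 4 matches the text above.

def onboarding_follow_up(effort_score: int) -> str:
    """Pick the open follow-up question for a given effort score."""
    if effort_score <= 4:
        return "What was hard about getting started?"
    return "What is one thing that would make this easier?"


for score in (2, 4, 7):
    print(score, "->", onboarding_follow_up(score))
```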
Post-purchase
Sent three to seven days after the order arrives, depending on category. Long enough to use the product, short enough that the experience is still vivid.
- "How satisfied are you with [product]?" (1–5 satisfaction)
- "How does it compare to your expectations?" (worse / same / better)
- "How was the delivery experience?" (1–5)
- "What would you tell a friend who was thinking of buying this?" (open)
- "Anything you would change about the product or the buying experience?" (open, optional)
The "tell a friend" wording surfaces marketing copy as a side effect of the survey. Tag the answers monthly and feed the strongest verbatims into your product page tests. Post-purchase survey best practices covers send timing in more detail.
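Tagging can start as a simple keyword pass before you invest in anything smarter. A rough sketch with made-up tags and responses; the keyword lists are placeholders for the themes you actually see:

```python
# First-pass tagging of "tell a friend" verbatims.
# TAG_KEYWORDS and the sample responses are illustrative placeholders.
from collections import Counter

TAG_KEYWORDS = {
    "quality": ("well made", "sturdy", "quality"),
    "value": ("worth", "price"),
    "delivery": ("fast", "arrived", "shipping"),
}

def tag(text: str) -> list[str]:
    lowered = text.lower()
    hits = [t for t, words in TAG_KEYWORDS.items() if any(w in lowered for w in words)]
    return hits or ["untagged"]

responses = [
    "Worth every penny, and it arrived fast",
    "Really well made, feels sturdy",
]

print(Counter(t for r in responses for t in tag(r)))
```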
Support CSAT
Fires automatically when a ticket is closed. Two questions max — anything more is a tax on the customer for using your support channel.
- "How would you rate the support you received?" (1–5)
- "What is the main reason for your score?" (open, optional)
For high-volume support, run CES instead: "It was easy for me to get my issue resolved" on a 1–7 agreement scale. Effort correlates with retention more cleanly than satisfaction in support contexts. Either way, the open follow-up is where the signal lives.
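Scoring conventions for CES vary; one common choice is the share of respondents who agree, meaning they answer 5–7 on the 1–7 scale. A quick sketch under that assumption (some teams report the plain mean instead):

```python
# CES as the percentage of respondents answering 5-7 on the 1-7 agreement scale.
# The "share who agree" convention is one of several; the mean is also common.

def ces_agree_share(scores: list[int]) -> float:
    if not scores:
        return 0.0
    return 100 * sum(1 for s in scores if s >= 5) / len(scores)

print(ces_agree_share([7, 6, 3, 5, 2, 7]))  # 4 of 6 agree -> 66.66...
```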
Churn and cancellation
The exit survey runs at the moment of cancellation, before the user closes the tab. Keep it short — they are leaving, you owe them speed.
- "What is the main reason for canceling?" (multiple choice with five to seven categories: too expensive, missing feature, found alternative, no longer need it, support issue, other)
- "If we changed one thing, what would bring you back?" (open, optional)
- "Would it be okay if someone reached out to learn more?" (yes / no, with optional contact)
The categorical question gives you a chart that survives the slide review; the open question gives you the surprises. Tag the verbatims monthly with the same taxonomy used for NPS detractor themes.
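The chart is just a count per reason code. A minimal sketch with illustrative responses; the reason list mirrors the categories above:

```python
# Tally cancellation reasons into chart-ready counts.
from collections import Counter

REASONS = (
    "too expensive", "missing feature", "found alternative",
    "no longer need it", "support issue", "other",
)

cancellations = [  # illustrative data: one reason code per cancellation
    "too expensive", "missing feature", "too expensive",
    "found alternative", "other",
]

counts = Counter(cancellations)
for reason in REASONS:
    print(f"{reason:>18}  {counts.get(reason, 0)}")
```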
Relationship pulse (NPS variant)
Twice a year per customer, capped so heavy users do not get hit more often than light ones.
- "How likely are you to recommend [product] to a friend or colleague?" (0–10)
- Branch: detractors get "What is the main reason for your score?", passives get "What would have made it a 9 or 10?", promoters get "What did you like most?"
- Optional: "Anything else you want us to know?" — the highest-signal question in many surveys.
For the score math, segmentation, and follow-up routing, see the NPS complete guide.
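The score math and the routing are small enough to sketch here. A minimal version, assuming the standard bands (0–6 detractor, 7–8 passive, 9–10 promoter) and the branch questions from the list above:

```python
# NPS = percent promoters (9-10) minus percent detractors (0-6).
# Follow-up question per band matches the branching described above.

FOLLOW_UP = {
    "detractor": "What is the main reason for your score?",
    "passive": "What would have made it a 9 or 10?",
    "promoter": "What did you like most?",
}

def band(score: int) -> str:
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def nps(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if band(s) == "promoter")
    detractors = sum(1 for s in scores if band(s) == "detractor")
    return 100 * (promoters - detractors) / len(scores)

scores = [10, 9, 8, 6, 10, 3, 7]   # illustrative responses
print(round(nps(scores)))          # 3 promoters, 2 detractors of 7 -> 14
print(FOLLOW_UP[band(6)])          # routes to the detractor question
```

The twice-a-year cap is one more check before sending: skip anyone surveyed within roughly the last 180 days.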
Product research
Used to validate or kill a feature idea before committing engineering time. Send to a known segment, never an open-list panel.
- "How often do you do [task this feature would help with]?" (frequency scale)
- "How do you currently solve this?" (open or multiple choice if you already know the alternatives)
- "How disappointed would you be if [proposed feature] was not available?" (very / somewhat / not)
- "What would the ideal version of this look like for you?" (open)
The "how disappointed" framing surfaces real demand more reliably than "would you use it?": people overestimate future use of features that merely sound nice. If "very disappointed" is not the clear leading answer, the feature does not have product-market fit yet; the commonly cited bar, borrowed from the Sean Ellis product-market fit question, is roughly 40 percent of respondents choosing it.
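Scoring it is one division: the share of respondents who picked "very disappointed", checked against that bar. A quick sketch with illustrative answers:

```python
# Share of "very disappointed" answers, checked against the ~40% bar.
# The answer strings and responses are illustrative.

def very_disappointed_share(answers: list[str]) -> float:
    return 100 * answers.count("very disappointed") / len(answers)

answers = ["very disappointed", "somewhat disappointed", "very disappointed",
           "not disappointed", "very disappointed", "somewhat disappointed"]
share = very_disappointed_share(answers)
print(f"{share:.0f}% very disappointed ->",
      "strong demand signal" if share >= 40 else "not there yet")
```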