You've done the research. You sent the survey, got the responses, spotted the patterns. You built the messaging from what came up most often.
And it still doesn't land.
Before I worked in messaging, I ran research in educational psychology: survey-based studies with hundreds of participants. Surveys were good for volume, and I could spot interesting patterns quickly. But I was always working with what people were willing to write down, which often isn't the same as what they actually thought.
Social desirability bias is well documented in academic research. People tell you what they think you want to hear, or what makes them look good, or what feels safe to say in a text box.
You work with it because you don't have a choice. I'd have loved to do deeper interviews, but when you're working with young people, ethics and consent add extra complexity to the process.
Under constraints like those, surveys are a reliable way to get data about a group. You work around the downsides, but you're always aware they're there.
In messaging work, though, you do have a choice. You don't need to navigate complicated consent and ethics issues, and you don't need the hundreds of responses academic research requires. But most teams doing research still pick the survey.
What you get with a survey
Surveys are useful. They're not the problem. The problem is mistaking them for the whole job.
When a buyer fills out a survey, they give you a considered answer. It's the version of their opinion they're comfortable committing to in writing. It's already been filtered through what they think is relevant, what they think you're asking, and what feels like a reasonable thing to say.
People say these things in interviews, too; social desirability bias is still a factor there. But surveys come up short because you can't follow up, probe, or challenge.
If something interesting comes up in a survey, you have to take it at face value. If something sounds too good to be true, you can’t get to the bottom of it. The survey ends where it ends.
What you get with an interview
I was running a customer interview recently. The buyer said AI had transformed their workflow. It's the kind of thing that shows up in surveys all the time. It's also the kind of thing marketers take and build messaging around.
I asked what she meant by that.
She said it was fantastic and the whole team loved it.
I asked her to be more specific. What was the team doing now that they couldn't before?
The probing got us to what the transformative AI actually did: it let her team maintain their output even though they were picking up the slack after a colleague had left.
The outcome wasn't about capability or speed; it was about resources. A completely different angle, and one that probably wouldn't have made it into a survey response, because to her, "transformed our workflow" said enough.
Those follow-up questions are the difference. They didn’t reveal something she was hiding, but they surfaced meaning she assumed was obvious.
She was summarising based on her context, and if I hadn’t had the chance to follow up, the insights would never have gone deeper than that summary.
The version of the problem you get from a survey is incomplete
Buyers in surveys aren't lying to you. They're giving you the considered, top-level version of their problem (which is still useful context). But messaging built from that version stays at the surface because it’s all you have access to.
The words that actually stop someone mid-scroll are usually one or two levels below what they'd type into a text box. It’s the thing behind the thing. The detail that makes a prospect think: this company understands my situation.
That detail comes from conversations. From asking for specifics and examples, from following the threads.
"Transformed our workflow" stays vague on a homepage. "Your team delivers the same output, even when you're short-staffed" stops someone mid-scroll.
One came from a surface-level answer; the other came from follow-up questions.
