4: Research Tools
A survey (har har!) of methods
Sam Savage, in “The Flaw of Averages,” does a nice job of pointing out how most of us crave a single number – an average – that promises insight or security, or makes a difficult decision easy for us. A certain approach to survey design makes it easy to generate such averages, with an accompanying massive loss in fidelity and resolution.
Merely collecting and organizing extant data (meta-measurement)
Analysis of extant quant/qual data
Different from the previous because you don’t collect the data yourself.
Reasons people will participate
Goodwill/Interest (they want to)
Desire to “be heard”
They’re forced to
They’re externally incentivized (gift card, etc.)
Some surveys are institutionalized to the extent that recruitment is pretty easy (ex: “Best of Taos”)
Can be 1:1 or 1:many
Gabby’s approach of leading with a low-effort survey, then following up with direct outreach
Appeal from a platform (an email list, an institutional partner’s platform, etc.)
These are convenience sampling methods, though you could apply some limited statistical correction for bias
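The “limited statistical correction” mentioned above can be as simple as post-stratification weighting: if you know the rough population share of some attribute, you up-weight under-represented respondents and down-weight over-represented ones. A minimal sketch follows; the groups, counts, and population shares are all made up for illustration, not from any real study.

```python
# Post-stratification sketch: reweight a convenience sample so one known
# attribute (hypothetical respondent "roles") matches population shares.
# All numbers are invented for the example.

population_share = {"founder": 0.2, "employee": 0.7, "freelancer": 0.1}
sample_counts = {"founder": 40, "employee": 30, "freelancer": 30}  # opt-in skew

n = sum(sample_counts.values())
# Weight = population proportion / sample proportion, per group.
weights = {
    group: population_share[group] / (count / n)
    for group, count in sample_counts.items()
}

# Weighted mean of some per-group survey response (e.g. a 1-5 rating):
group_means = {"founder": 3.1, "employee": 4.0, "freelancer": 3.5}
weighted_mean = sum(group_means[g] * population_share[g] for g in population_share)

print(weights)
print(round(weighted_mean, 2))
```

This only corrects for the attributes you weight on; a convenience sample can still be biased in ways no weighting scheme can see, which is why the findings still need the contextualizing discussed below.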
The survey itself isn’t something anybody would seek out, but…
You can inline it with existing inbound marketing-ish stuff like popular+relevant articles, email list content, and so on.
These are also convenience sampling methods, of course
Pay-per-completion research companies can recruit for you
These companies may have a relevant panel they can put the survey in front of. If they don’t, they owe it to you to be honest about that – and since the model is usually pay-per-completion, honesty is in their interest. :)
I don’t have any particular negative experience with these companies, but there’s a lot of garbage out there, so be careful. And since we’re generally doing research on groups we should be present with anyway, involving a third-party intermediary is often unnecessary. That’s why you’re probably getting the vibe that I’m a bit down on outsourced recruitment.
You probably don’t need a… probability sample :) … but if you did, these options would become more attractive. Instead, contextualize your findings within the less rigorous convenience sampling methods we tend to use with TEI.
For sake of completeness:
We’d probably never use these because they require insider status within a semi-closed system, but for completeness…
About survey questions
Yes/No, or other forced binary choice
Difficult to do, but worth trying!
Outside eyes on your questions can help
Delivering 100% of questions every time, vs. conditionals/branching
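Conditional/branching delivery just means the next question depends on previous answers. Here is a toy sketch of that logic – entirely my own illustration, not a reference to any particular survey tool, with invented questions:

```python
# Toy branching-survey logic: each question names a function that picks
# the next question id based on the answer given. Questions are invented.

questions = {
    "q1": {"text": "Do you currently hire freelancers? (yes/no)",
           "next": lambda a: "q2" if a == "yes" else "q3"},
    "q2": {"text": "Roughly how many per year?", "next": lambda a: None},
    "q3": {"text": "What stops you?", "next": lambda a: None},
}

def branch(answers):
    """Walk the branch implied by a dict of answers; return question ids asked."""
    asked, current = [], "q1"
    while current is not None:
        asked.append(current)
        current = questions[current]["next"](answers.get(current, ""))
    return asked

print(branch({"q1": "yes", "q2": "10"}))      # -> ['q1', 'q2']
print(branch({"q1": "no", "q3": "budget"}))   # -> ['q1', 'q3']
```

The trade-off: branching keeps surveys short and relevant, but it means different respondents answer different question sets, which complicates the averaging discussed at the top of this section.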
@TODO: I need to speak in some useful way to the “toy” set of methods. I’m not sure what to say right now, but I don’t want to skip over this.
Surveys can do both (big open-ended questions with paragraph-length answers), but with limitations in terms of qual richness/nuance/interpretation
Range from Indi Young-style listening sessions to facilitated surveys
the latter is technically different from an interview, but there are also “interviews” that are almost as prescribed and rote as a survey
Agenda and prompts
I think this might be the biggest determinant of how an interview unfolds: how detailed is your agenda?
Secondarily: how do you prompt interviewees? Just the question? Do you cave under the pressure of silence and start leading them inappropriately?
Setting and context
How much time is available? How squeezed is the interviewee? How squeezed are YOU?
There’s not one “best” approach to interviews.
Even with lots of mistakes, you’ll get nuanced, useful information that helps you build context and add shades of meaning and nuance to quant data.
It’s hard to recommend a starting point RE: length because uber-short interviews require more skill while longer interviews require stamina and experience.
That said, consider asking for 40m interviews, which are less than an hour by a meaningful amount (good for busy people) and long enough for you to screw up and still walk away with something useful.
Record or take notes during or after?
Potential chilling effect of recording vs. potential distraction of notetaking.
I don’t have a recipe for you here. Depends on your preferences and strengths.
Neither drawback (the chilling effect or the distraction) is so bad that you must avoid it no matter what, and choosing the option that sidesteps your personal weakness might more than compensate for the weakness of that choice.
Do enough interviews and you will eventually fail spectacularly. Don’t worry; it’s all in the game, yo. These extreme outliers teach you very little that you can use to improve your technique, so you just dust yourself off and move on.
The “data soak”
Pulling out vivid data points either of a qual or quant nature
Another “data soak” to absorb the vibe of the data/patterns/outliers/”odors” that feel interesting or unusual or unexpected
In other words, TIME can really help you notice these patterns
Most questions are likely to benefit from mixed methods with a convenience opt-in sampling method
Don’t rule out other approaches, just know that within TEI (again, you might think of this research project as a prototype or first sprint) the mixed method + convenience sampling tends to be most useful and applicable.
Semi-structured interviews/listening sessions
Measuring through data-scraping
Pattern libraries (not a method per se, but if the output is going to be a pattern library, the method will follow suit, and because the pattern library is such a great starting point for SSR output, I should mention it here.)
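The “measuring through data-scraping” item above can be sketched minimally: once you have the HTML of relevant pages (job boards, forum threads, etc.), you extract the text and count mentions of terms you care about. This uses only the Python standard library; the page content and terms are invented for the example, and real scraping would add fetching, politeness delays, and terms-of-service checks.

```python
# Minimal data-scraping measurement: extract text from HTML and count
# mentions of chosen terms. The HTML snippet and terms are stand-ins.
from html.parser import HTMLParser
from collections import Counter
import re

class TextExtractor(HTMLParser):
    """Collect the text nodes of an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def term_counts(html, terms):
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks).lower()
    words = re.findall(r"[a-z']+", text)
    counts = Counter(words)
    return {t: counts[t] for t in terms}

page = "<html><body><p>Pricing advice. More pricing questions.</p></body></html>"
print(term_counts(page, ["pricing", "advice"]))  # -> {'pricing': 2, 'advice': 1}
```

Counts like these are the quant side of a mixed-methods design; they tell you what shows up often, and the interviews above tell you what it means.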