5: Formulating A Good Small-Scale Research Question

  • Overlap between client-side uncertainty, client-side importance, and you-side interest

  • Scope of the question will need to be quite narrow

    • Help readers set realistic expectations about effort, time, and cadence

    • Help readers set realistic expectations about recruitment, access, and whether early wins are likely

  • The nature of answerable questions changes based on the maturity/commoditization of the domain

  • Warning about Halo-Effected questions and an explanation of how the Halo Effect produces problematic findings

    • Correlation/causation

    • Uncorrelated variables seeming like they are correlated (a quick simulation of this follows below)
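
    • A minimal sketch of that failure mode, in Python with invented numbers: two ratings measure genuinely independent qualities, but each respondent’s overall impression (the halo) bleeds into both answers, so the observed ratings look correlated anyway. The variable names and effect sizes are purely illustrative.

      ```python
      import random
      import statistics

      def pearson(xs, ys):
          """Plain Pearson correlation, no external libraries needed."""
          mx, my = statistics.mean(xs), statistics.mean(ys)
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = sum((x - mx) ** 2 for x in xs) ** 0.5
          sy = sum((y - my) ** 2 for y in ys) ** 0.5
          return cov / (sx * sy)

      random.seed(0)
      n = 1_000

      # Two genuinely independent qualities, e.g. "ease of setup" and "support quality".
      quality_a = [random.gauss(0, 1) for _ in range(n)]
      quality_b = [random.gauss(0, 1) for _ in range(n)]

      # Each respondent also carries an overall impression that colors every answer.
      halo = [random.gauss(0, 1) for _ in range(n)]

      rating_a = [q + h for q, h in zip(quality_a, halo)]
      rating_b = [q + h for q, h in zip(quality_b, halo)]

      print("true qualities:   r =", round(pearson(quality_a, quality_b), 2))  # ~0.0
      print("observed ratings: r =", round(pearson(rating_a, rating_b), 2))    # ~0.5
      ```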

  • Bias, generally.

    • Research bias, specifically sampling bias, just is – deal with it. :cool:

    • But also… understand it!

    • I’ve already mentioned question bias. There are other forms of bias.

      • Sampling bias

        • WHO responds

        • We think of bias in people, especially racial or gender bias, as a negative thing; something they need to work on to improve. It is. But in small-scale research, sampling bias just IS. It’s nearly unavoidable, and the remedy is: awareness and contextualizing your findings within the sampling method and bias.

        • There are lots of forms of sampling bias. It’s basically anything having to do with who within the larger population actually ends up responding, and how the relationship between the sample (the subset you hear from) and the population gets distorted by who does and doesn’t respond. (One such distortion is sketched in code after this list.)

          • Maybe the sample is really skewed in some way that distorts the findings

          • Maybe only people with a lot of free time actually respond

          • Maybe only venturesome risk-seeking people respond

          • The above are partially mitigated by having skin in the game! If you really care about both your audience and the question, there’s a good chance you will “smell” distortions in the sample, and this “scent” will drive closer scrutiny, revisions to your approach, or a deeper reset altogether.
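
        • To make that concrete, here’s a small Python sketch of the “only people with free time respond” case. All numbers and the response model are invented for illustration: satisfaction is about 50% in the population, but response probability rises with free time, and free time (in this toy world) tracks satisfaction, so the survey overstates it.

          ```python
          import random

          random.seed(0)

          # Hypothetical population of 10,000 customers.
          population = []
          for _ in range(10_000):
              free_time = random.uniform(0, 10)  # hours of free time per week
              # Invented link: people with more free time are a bit more satisfied.
              satisfied = random.random() < 0.3 + 0.04 * free_time
              population.append((free_time, satisfied))

          # Who actually answers the survey? Response chance scales with free time.
          respondents = [p for p in population if random.random() < p[0] / 10]

          pop_rate = sum(s for _, s in population) / len(population)
          sample_rate = sum(s for _, s in respondents) / len(respondents)

          print(f"population satisfaction: {pop_rate:.0%}")     # roughly 50%
          print(f"what the survey says:    {sample_rate:.0%}")  # noticeably higher
          ```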

      • Measurement bias

        • WHAT and HOW they respond and tell you

        • Question phrasing

        • Answer design

          • Order

          • Phrasing

        • Interviewer skill / social-group membership

      • I’m not trying to discourage you with all this info about bias.

      • Remember the “intellectual hammock” I talked about earlier, where we live in the tension between the power of data and the impotence of data. Understanding the insidious ways in which bias can mislead us is important.

      • The academic/scientific-context remedy is pursuing sample size + statistical controls. This is where the idea of statistical significance comes from (see the sketch below).

      • The small-scale research context remedy is skin in the game + mixed methods. Both of these lead us to properly incorporate context, and that helps us compensate for bias the way humans have been doing successfully for millennia.
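
      • To see why academia leans so hard on sample size, here’s a tiny Python sketch (with a made-up 55% “true” preference): the spread of a survey estimate shrinks roughly with 1/sqrt(n), which is what eventually lets a modest real effect clear the bar of statistical significance.

        ```python
        import random
        import statistics

        random.seed(0)

        TRUE_RATE = 0.55  # hypothetical: 55% of the population prefers option A

        def survey(n):
            """Simulate one survey of n people; return the observed preference rate."""
            return sum(random.random() < TRUE_RATE for _ in range(n)) / n

        for n in (20, 200, 2000):
            estimates = [survey(n) for _ in range(1_000)]
            spread = statistics.stdev(estimates)
            print(f"n={n:5d}  typical estimate {statistics.mean(estimates):.2f} "
                  f"+/- {spread:.3f}")

        # The spread falls roughly with 1/sqrt(n): at n=20 a 55% true rate is easy
        # to mistake for a coin flip; at n=2000 it is clearly above 50%.
        ```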

      • Also: bias feels like a bummer to talk about. :) That’s why I put it a bit later in this talk, even though it’s something you should start considering immediately as you begin the research design and keep considering throughout the whole project.

      • Some people will be biased against biased data. “That’s biased data” is a convenient way to say: “I’m not interested in changing (at all) or (in the way you propose).” Or… “I’m not interested in doing the intellectual and emotional labor of integrating nuanced new information into my view of things.”

  • Examples:

    • Climbing South Sister, the false summit

    • Sampling the deer population around my house