Episodios

  • Forced Choice vs Neutral Options: Why Forcing Choices May Damage Your Data
    Feb 20 2026

    In this episode of the Total Survey Design Podcast, I explore whether to include a neutral midpoint option in survey questions or force respondents to pick a side. I define forced-choice formats, from even-numbered scales and binary picks to pairwise comparisons. I discuss key evidence, such as the 2019 Pew Research experiment, which found that forced yes/no questions increased reporting on sensitive topics compared to select-all-that-apply lists. I present strong arguments on both sides before explaining why I usually favor neutral options for attitude and opinion questions. I end by offering an elegant solution that combines the presence of a neutral option with the need for respondents to pick a side.

    Support the show

    Contact us at: totalsurveydesign@gmail.com

    Find us online at: instagram.com/totalsurveydesign/

    https://taplink.cc/totalsurveydesign


    20 m
  • Forced Questions (Will Hurt Your Data)
    Feb 9 2026

    Should you ever force respondents to answer a survey question? In this episode of Total Survey Design, I examine forced questions. While these features are often used under the assumption that more complete data means better data, forcing responses frequently leads to worse outcomes: it increases survey breakoffs, encourages satisficing, and undermines respondents’ trust. I outline more effective and respectful alternatives that encourage responses without coercion.

    Support the show

    Contact us at: totalsurveydesign@gmail.com

    Find us online at: instagram.com/totalsurveydesign/

    https://taplink.cc/totalsurveydesign


    7 m
  • Can Generative AI Help Design a Better Questionnaire? Feedback Help
    Feb 2 2026

    On this episode of the Total Survey Design podcast, I put generative AI to the test by asking it to critique and improve the Microsoft Word Feedback survey: a short, everyday questionnaire that many of us have seen after using the program. I compare the suggestions from Grok, ChatGPT, and Gemini, highlight where they agree, where they differ, and share my own thoughts on what they got right and what they missed.

    I recap the original survey (which takes just 50 seconds to describe), dive into the strengths and weaknesses of the AI outputs, and explain why I still think human expertise in survey methodology matters most. Along the way, I discuss the pitfalls of the Net Promoter Score scale, the importance of balanced question wording, building trust with privacy statements, and more.

    There is still time to make your voice heard! Share your thoughts about this podcast at: https://www.tinyurl.com/totalsurveydesign

    Support the show

    Contact us at: totalsurveydesign@gmail.com

    Find us online at: instagram.com/totalsurveydesign/

    https://taplink.cc/totalsurveydesign


    14 m
  • Trump & Rogan vs. Survey Sampling: Addendum
    Jan 28 2026

    On this bonus episode of the Total Survey Design podcast, I follow up on Trump and Rogan’s polling critiques with a listener email highlighting nonresponse bias, the real issue behind low response rates: responders often differ systematically from non-responders.

    I explain how rigorous weighting corrects for this bias, why it’s not foolproof, and how strong weighting helped make the 2024 election polls some of the most accurate in years, with top aggregators and pollsters like AtlasIntel missing final margins by less than 2–3 points. The takeaway: polling isn’t broken; it is actually getting better with time!

    Complete the Podcast Feedback Survey by visiting: www.tinyurl.com/TotalSurveyDesign

    Support the show

    Contact us at: totalsurveydesign@gmail.com

    Find us online at: instagram.com/totalsurveydesign/

    https://taplink.cc/totalsurveydesign


    5 m
  • Trump & Rogan vs. Survey Sampling: What They Got Right, And Totally Wrong
    Jan 26 2026

    On this episode of the Total Survey Design podcast, I examine episode #2219 of the Joe Rogan Experience podcast, which aired on October 25, 2024, in which Rogan discusses survey sampling with President Donald Trump. Both express skepticism about the reliability of polls for making broad national claims, dispute the ability to make predictions from small samples, and criticize polling methodology. In this episode, I talk about what they got right and what they got wrong, diving into the science of sampling.

    Support the show

    Contact us at: totalsurveydesign@gmail.com

    Find us online at: instagram.com/totalsurveydesign/

    https://taplink.cc/totalsurveydesign


    17 m
  • Designing a Feedback Survey From Scratch
    Jan 22 2026

    In this episode, Azdren creates a feedback survey from scratch, showing the process from idea formation, to a messy first draft of the questionnaire, to the final product.

    To see the final version of the questionnaire and to give your valuable feedback on this podcast, please visit https://tinyurl.com/totalsurveydesign

    Support the show

    Contact us at: totalsurveydesign@gmail.com

    Find us online at: instagram.com/totalsurveydesign/

    https://taplink.cc/totalsurveydesign


    16 m
  • Survey Weighting: When It Helps And When It Hurts
    Dec 8 2025

    This episode of the Total Survey Design podcast demystifies the statistical technique of survey weighting (post-stratification), explaining how it corrects sample imbalances by giving "quiet groups" more influence. The host defines weighting, details when it is an effective correction tool for probability samples, and offers practical advice on avoiding common pitfalls, such as over-adjustment that can hurt data precision.
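    The weighting idea described above can be sketched in a few lines. The group names and cell shares below are invented for illustration; they are not figures from the episode:

    ```python
    # A minimal sketch of post-stratification weighting, using invented
    # cell shares for one grouping variable (e.g., age from a census).

    # Population shares (the benchmark the sample should match)
    population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

    # Shares actually observed among survey respondents
    sample_share = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

    # Weight = population share / sample share: underrepresented ("quiet")
    # groups get weights above 1, overrepresented groups below 1.
    weights = {g: population_share[g] / sample_share[g] for g in population_share}
    print(weights)  # {'18-34': 2.0, '35-54': 1.0, '55+': 0.7}

    # Over-adjustment pitfall: extreme weights inflate variance, so in
    # practice weights are often trimmed to a cap (the cap is arbitrary here).
    cap = 1.5
    trimmed = {g: min(w, cap) for g, w in weights.items()}
    ```

    Real weighting schemes rake over several variables at once, but the ratio logic above is the core of the correction.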

    Support the show

    Contact us at: totalsurveydesign@gmail.com

    Find us online at: instagram.com/totalsurveydesign/

    https://taplink.cc/totalsurveydesign


    10 m
  • Nonresponse Error. Part 1 - Why Response Rate Does Not Necessarily Predict Nonresponse Error
    Dec 1 2025

    This episode explains the important distinction between response rate and nonresponse error in survey research. Drawing on principles from Dillman et al. (2014), it clarifies that a high response rate does not necessarily mean low nonresponse error, and vice versa. Through two practical examples, the episode illustrates how systematic differences between respondents and nonrespondents, not the number of responses, determine the presence of nonresponse error. Evaluating survey quality requires more than looking at response rates; it requires understanding potential bias in who does and does not respond.
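    The episode's point can be made concrete with a little arithmetic. The support rate and response propensities below are invented for illustration; they are not the episode's examples:

    ```python
    # Toy arithmetic: nonresponse error comes from systematic differences
    # between responders and nonresponders, not from the response rate itself.

    def estimated_support(support_rate, rr_supporters, rr_opponents):
        """Estimated support when supporters and opponents respond at
        (possibly) different rates; equal rates give an unbiased estimate."""
        responders_for = support_rate * rr_supporters
        responders_against = (1 - support_rate) * rr_opponents
        return responders_for / (responders_for + responders_against)

    true_support = 0.50

    # High response rate (about 80% overall) but supporters respond more
    # often than opponents -> estimate is biased upward despite the high RR.
    high_rr_biased = estimated_support(true_support, 0.90, 0.70)

    # Low response rate (20% for everyone) with no systematic difference
    # -> estimate is unbiased despite the low RR.
    low_rr_unbiased = estimated_support(true_support, 0.20, 0.20)

    print(f"High RR, differential response: {high_rr_biased:.3f}")
    print(f"Low RR, uniform response:       {low_rr_unbiased:.3f}")
    ```

    The first estimate lands above the true 50% while the second hits it exactly, which is the episode's core claim in miniature.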

    Support the show

    Contact us at: totalsurveydesign@gmail.com

    Find us online at: instagram.com/totalsurveydesign/

    https://taplink.cc/totalsurveydesign


    5 m