I’ve never designed a research survey, questionnaire, or anything similar before. Well…not exactly, anyway. I have generated surveys to ask women at church about themselves as a sort of ‘getting to know you’ activity, but I’m not sure how relevant that work is to generating a questionnaire for research purposes. Perhaps, in a roundabout way, my ‘getting to know you’ form could be considered the kindergarten version of data collection: I was gathering information to get to know the women at church for our women’s organization, so that we (the leaders of the organization) could make informed decisions about activities, ministering partnerships, and so forth. That said, needing to know favorite colors or favorite treats seems slightly superfluous in the grand scheme of questionnaire design.
Anyway, for this blog post, I am planning to share my notes on things I found relevant or insightful from Dr. Pullman’s list of Resources. (My comments and thoughts will be italicized, while direct quotations from his site will be in regular font.) Afterwards, I will share a video I found helpful on survey generation, which actually offered a little more information beyond the resource material. Lastly, I will note how I plan to use (or at least consider using) questionnaires in my upcoming UX Case Study.
Notes from Resources I found helpful:
Survey and Questionnaire Design: Collecting Primary Data to Answer Research Questions by Jane Bourke, Ann Kirby, and Justin Doran
- A ‘good’ hypothesis must be: Adequate; Testable; Better than its rivals
- SMART test: Specific; Measurable; Achievable; Realistic; Timely
Asking Questions by Norman Bradburn, Seymour Sudman, and Brian Wansink
- Questions must be precisely worded if responses are to be accurate
- …it is difficult to write good questions because the words to describe the phenomenon being studied may be politically charged (OR…my thoughts…an individual’s relationship to, or understanding of, a particular word may not mean the same to them as it does to others, which complicates wording a question as precisely as possible)
- Respondents require persuasion to participate
Standard Deviations by Gary Smith
- This is sobering: One out of every twenty tests of worthless theories will be statistically significant. (That follows from the conventional p < 0.05 threshold: even when there is no real effect, about 5% of tests will clear the bar by chance. See the first sketch after this list.)
- More sobering is that a term exists for this practice: selective reporting and data pillaging are known as data grubbing
- I never knew this: Watch out for graphs where zero has been omitted from an axis. This omission lets the graph zoom in on the data and show patterns that might otherwise be too compact to detect. However, this magnification exaggerates variations in the data and can be misleading. (See the second sketch after this list.)
- This might be true for ALL claims: Extraordinary claims require extraordinary evidence. True believers settle for less.
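Smith’s one-in-twenty point is easy to check for yourself. Below is a minimal simulation (my own sketch, not from the book, with arbitrary sample sizes and trial counts): it runs many t-tests on pure noise, where there is no real effect by construction, and counts how often a result clears the conventional p < 0.05 bar.

```python
# Simulate testing many "worthless theories": both samples come from the
# SAME distribution, so any significant result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
trials, n = 2000, 30  # arbitrary: 2000 tests, 30 observations per group
false_positives = 0
for _ in range(trials):
    a = rng.normal(0, 1, n)
    b = rng.normal(0, 1, n)  # no true difference between the groups
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(f"{false_positives / trials:.1%} of tests on pure noise were 'significant'")
```

The printed rate lands near 5%, i.e., about one ‘significant’ finding per twenty worthless theories, just as Smith warns.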
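And a quick illustration of the omitted-zero point, again my own sketch with made-up numbers: the same nearly flat series plotted twice, once with zero on the y-axis and once letting matplotlib auto-zoom to the data range.

```python
# The same (hypothetical) data with and without zero on the y-axis.
import matplotlib.pyplot as plt

values = [98, 99, 97, 100, 98, 99]
fig, (ax_full, ax_zoom) = plt.subplots(1, 2, figsize=(8, 3))
ax_full.plot(values)
ax_full.set_ylim(0, 110)   # zero included: the line looks nearly flat
ax_full.set_title("Zero included")
ax_zoom.plot(values)       # auto-zoomed: the same small wiggles look dramatic
ax_zoom.set_title("Zero omitted")
plt.tight_layout()
plt.show()
```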
Pew Research Center: Writing Survey Questions
- Identify information by “thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media.” These are things that people will care the most about.
- To measure change over time, surveys must be conducted at two different points in time
- For this, the questions should remain mostly the same each time they are asked and should stay in roughly the same place within the survey as in previous surveys
- Interesting approach: Use open-ended questions in the pilot study to reveal whether there are any common answers across the board. Then, use those answers as the choices for closed-ended questions in the complete study. (See the sketch after this list.)
- (I like this idea, because, in part, I feel like this can give insight into what people might actually see as a viable response without being prompted. Then, if there are similar or consistent trends in answers, they might work as a base set of answers that many – if not most – people might choose.)
- Also – limit the number of choices for closed-ended questions: 5 TOPS!
- Randomizing question order helps to limit bias
- ...but how does this work for surveys meant to measure change over time, where the guidance is to keep the questions in a relatively similar order? How can those questions be randomized? Randomizing the order seems like contradictory advice if it doesn’t work for all survey types.
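To make the pilot-to-closed-ended idea concrete, here is a minimal sketch of that workflow, assuming the pilot answers have already been collected as free text. The answer strings are hypothetical ones I invented with my oral history collection in mind.

```python
# Tally free-text pilot answers, then keep the most common ones as the
# choices for the closed-ended version of the question (5 tops, per Pew).
from collections import Counter

pilot_answers = [  # hypothetical responses to an open-ended pilot question
    "date of interview", "interviewee name", "topic", "Topic",
    "location", "date of interview", "topic", "decade", "location",
    "interviewee name", "topic", "date of interview",
]

# Normalize lightly so trivial variants ("Topic" vs "topic") count together.
tally = Counter(answer.strip().lower() for answer in pilot_answers)

# The top answers become the closed-ended options, plus a catch-all.
choices = [answer for answer, _ in tally.most_common(5)] + ["other"]
print(choices)
# ['topic', 'date of interview', 'interviewee name', 'location', 'decade', 'other']
```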
Video: How to Design a Questionnaire or Survey by Dr. Hayley Stainton
Things I found insightful from this video:
- The first thing Dr. Stainton suggests determining, before any other work, is whether the survey will be independent (meaning the individual takes the survey on their own) or interviewer-led (meaning the interviewer guides the survey taker through the questions). I found it interesting that this is the primary decision in questionnaire design, even before determining the focus or the types of questions to ask. I guess if you (the researcher) have decided to conduct a survey or questionnaire in a particular fashion, then the next logical step is to design questions that fit that style. Although, I think the reciprocal might also be true: if you design questions with a particular focus and/or in a particular way, that might dictate what kind of interview style is available to you.
- A second insightful notion Dr. Stainton added was a third type of question to ask. She covered the common open and closed question types I found in the resources, but then she offered the scaled question, which uses degrees to spread potential answers across a range from positive to negative. The purpose of a scaled question is to capture that range of positives and/or negatives, which may or may not contribute helpful statistical data to the research. (See the sketch after this list.)
- One part of the video I found EXTREMELY helpful was the notion of considering – like REALLY considering – the questions to ask in a survey. Dr. Stainton gives the example of asking ‘why would you ask someone their age in a questionnaire if age has no part in the research?’ This includes avoiding the tendency to report the answers to an unnecessary question simply because the data was collected when, again, it is not relevant to the overall purpose of the research. In other words, knowing WHY you are conducting the research should also help dictate WHAT kinds of questions should be asked. I think the WHY can have more than one answer, but the questions generated should try to support all the WHYs, not just ask for information randomly.
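Since the scaled question was new to me, here is a minimal sketch of what one could look like as data, with a hypothetical question and made-up responses. The ordered scale points are what make simple summaries, like a mean rating, meaningful.

```python
# A 5-point scaled (Likert-type) question; the question text and the
# responses are hypothetical examples, not real data.
from collections import Counter

SCALE = {
    1: "Strongly disagree",
    2: "Disagree",
    3: "Neither agree nor disagree",
    4: "Agree",
    5: "Strongly agree",
}

question = "The collection's search filters helped me find what I needed."
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]  # made-up pilot responses

counts = Counter(responses)
print(question)
for point, label in SCALE.items():
    print(f"  {point} = {label}: {counts[point]} response(s)")

# Because the scale points are ordered, a mean rating is meaningful.
print(f"Mean rating: {sum(responses) / len(responses):.2f}")
```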
Looking ahead to my UX case study, I anticipate doing a survey, but probably an open-ended one that asks what kinds of information an individual would want, or find helpful, for searching a collection (hopefully that makes sense). My target ‘audience’ for the questionnaire is people who I think might be interested in using the Mormon Women’s Oral History Collection for research purposes. I liked the Pew Research Center’s suggestion to use open-ended questions in a pilot study (as I hope to use this experience to generate a more useful survey for my dissertation down the road), and I think treating this UX case study exercise as a pilot is a good way to approach my upcoming research.