Thank you for making your notes on Statistical Sampling available. I was after some information regarding the number of samples needed for a survey I am conducting: it has 40 questions, and I am measuring 5 variables. Any suggestions?
I can't really answer your question without more detail, and (to be blunt) I don't really have spare time to solve your problem, but here are some pointers:
First, do you actually want to find out what people believe, or would you rather simply come up with some numbers to support your beliefs? If it's the latter, I don't want anything to do with you. If a survey is planned as a political or ideological tool, it is a lie, and you'll have to do your own lying. On the other hand, if you really want to know what people think, it is very important to design good questions and have a good sampling procedure.
The trick is to test your questions before you give the survey. Bring in some people, give them the questions, and get them to talk about the questions. (This is a "focus group.") Try to figure out how they understand the questions. Often, their view of the questions will be very different from yours.
I've run across many awful surveys that were well-intentioned. Good intentions don't prevent a survey from having ambiguous questions, silly questions, or questions that are irrelevant from the viewpoint of the guy providing the answers. Typically, that's because the person making the survey knows a lot about the topic, but it's not something the person answering really worries about. So the questions mean one thing to the designer and something else (or nothing at all) to the person answering.
For instance, just this week I was the recipient of a truly awful survey from some researchers asking my opinions on "eDigital Infrastructure" for psycholinguistics. (This was from an EU-funded research project, from a bunch of professors.) Now, I do know a fair bit about psycholinguistics and I use computers intensively in my research, but I couldn't easily figure out what they were talking about. It was all jargon. It probably made some sense to the people asking, but it made very little sense to me. So, I didn't answer, because I have better things to do than to feed the egos of people who are so careless that they can't ask me questions I can understand.
Try out your questions on a few people who are similar to the people you will sample. It doesn't help to try it on your friends or co-workers: they think too much like you and they will understand your jargon.
Second, think about the ways that your survey could go wrong. Are the people who refuse to answer different in some important way from the people who answer? If so, that's trouble. Try to think up some strategy to minimize the difference, or use some of the sophisticated techniques that people use to predict election results from surveys.
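One of the simplest of those techniques is post-stratification: if you know the true population mix (say, from a census), you can reweight the answers so that a lopsided sample counts as if it had the right mix. Here's a minimal sketch in Python; all the group labels and numbers are made up for illustration:

    # Post-stratification: reweight respondents so that the sample's
    # demographic mix matches known population proportions.
    # All group labels and numbers here are hypothetical.

    population = {"under_40": 0.55, "over_40": 0.45}  # known, e.g. from a census
    sample     = {"under_40": 0.30, "over_40": 0.70}  # the mix you actually got

    # Weight applied to each respondent in group g:
    weights = {g: population[g] / sample[g] for g in population}

    # Hypothetical fraction answering "yes" within each group:
    yes_rate = {"under_40": 0.62, "over_40": 0.41}

    # Post-stratified estimate: weight each group's answer by its
    # population share, not by its (biased) share of the sample.
    estimate = sum(population[g] * yes_rate[g] for g in population)

    print("weights:", weights)
    print("estimated population 'yes' fraction: %.2f" % estimate)

Bear in mind that this only helps if the people who answered within each group resemble the ones who didn't; it's a patch, not a cure.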
Third, compute the expected statistical errors in advance. The way you choose the size of your survey is to compute the expected errors for different sizes, and find a good trade-off between an error you can tolerate and a size you can afford.
You need to know how accurate the answer should be. If you need the answer accurate to a percent or two, the survey will need to be large, probably involving thousands of people. You'd also have to be careful about your statistical sampling techniques. On the other hand, if all you want to do is show that "not many people want X" or "a lot of people want X", then you don't need so much accuracy and the sample size might be smaller. Tens or hundreds of people might suffice.
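For a single yes/no question and a simple random sample, the arithmetic behind those sample sizes is easy enough to sketch: the half-width of a 95% confidence interval for a proportion is about 1.96 * sqrt(p(1-p)/n), and p = 0.5 is the worst case. A few lines of Python show the trade-off:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Half-width of a 95% confidence interval for a proportion,
        from a simple random sample of size n; p=0.5 is the worst case."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (30, 100, 300, 1000, 3000):
        print("n = %4d   error = +/- %.1f percentage points"
              % (n, 100 * margin_of_error(n)))

Note that the error shrinks only as 1/sqrt(n): to halve the error you must quadruple the sample, which is why getting down to a percent or two costs thousands of interviews.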
Fourth, don't use convenience sampling: the people who are easiest to reach are almost never representative of the population you actually care about.
Good luck.