Greg Kochanski

Bad Surveys, an important ingredient of bad policy.

Surveys can be a force for good in the world. Under the best of conditions, they can help politicians see reason and the will of the people fairly clearly, faster than the electoral cycle. Indeed, even better than the electoral cycle, because an election is just an up-or-down vote on a politician and his whole cluster of policies as a unit. Elections are blunt instruments for communicating with politicians; they toss out the good with the bad, or keep the bad to preserve some good. A good survey can communicate opinion on a particular issue.

But, along with the good surveys, there are some achingly bad surveys. Some seem to be constructed by well-meaning individuals or organizations. Others are done by consulting firms, which seem to be hired by some organization that thinks it needs to improve its process or customer relations. (Or, to be cynical, by someone in an organization that wants to pretend it is improving its process or customer relations.)

Here is my taxonomy of useless surveys:

The Ego Trip

Some organizations think you have nothing better to do than look at their web site. For instance, Megacorp Inc., your source of breakfast cereal, sends you this question:

How often do you visit our web site? [Daily] [Weekly] [Monthly] [Occasionally]

Oddly, they have no check box for [Went there once, have planned no future visits].

A government agency that provides research funding sent me a questionnaire about their web site asking:

Our News and Views page is well organized. [agree] [somewhat agree] [somewhat disagree] [disagree]

Oddly, there is no check box for [It's been months since I looked there, I don't really remember, and I don't think I had much of an opinion anyway.]

It is nice that they give us the opportunity to care about their web sites, but it's hard to believe that many people do. I suspect that most of the people who answer those surveys do so out of boredom, and probably claim to check for new cereal offers [Daily], whether they do or not.

The Forced Answer

I had one particularly interesting survey asking how we manage research. Now, in general, that's a deep question not really amenable to check boxes. Research is not a predictable process, and is hard (or maybe impossible) to usefully manage. Still, I tried to answer it, because people seem intent on managing research, and if I must be managed, I'd rather be managed well than poorly.

All was sort-of OK until questions 10-14, which were about new employees. Here in the UK, many employees are hired on a probationary basis, with fewer rights for the first few months. The intent is that the employee gets evaluated at the end of the probationary period, and you either let them go or plan on having them around for a fairly long time. Question 10 was:

Have you been satisfied with the support your institution gives to you at the end of your employees' probationary period? [satisfied] [somewhat satisfied] [somewhat unsatisfied] [unsatisfied]

Unfortunately, my one and only employee hasn't been around that long, so I have no idea whether my institution will give me good support or not. Worse, the survey is one of those web forms that doesn't let you skip a question. Clearly, someone has decided that they need to know this. What do you do? Obviously, you either ditch the survey or make up an answer.

Now, assuming someone uses that question to make policy, they'll have no idea that 25% of their data is complete garbage: a frustrated click just to get past the bad question.

But wait, it's worse. Suppose that you've hired hard-working capable people that you want to keep at the end of the probationary period. No problem: you don't have to do anything to keep them on. In that case, you'd hardly need a lot of institutional support now, would you? It'd be pretty hard to be dissatisfied with a personnel office that you never even felt the need to call. So, even among the 75% who have had their employees for more than six months, probably half of them don't know what the answer is, either. They'll either make up an answer, quit the survey, or say "Yeah, I had no problems, I was [satisfied]". Garbage again.
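To see how badly a mandatory question can corrupt the data, here is a small simulation (a sketch in Python; the population fractions follow the essay's guesses above, and the informed respondents' opinions are invented for illustration): respondents with no relevant experience click at random, respondents who never needed support default to [satisfied], and only the remainder answer from experience.

```python
import random

random.seed(0)

# A sketch, not a measurement: the fractions below follow the essay's
# guesses. 25% of respondents have no employee past probation and click
# at random; half of the rest (37.5%) never needed support and default
# to [satisfied]; only the remaining 37.5% answer from experience.
N = 10_000
SCALE = [1, 2, 3, 4]  # 1 = unsatisfied ... 4 = satisfied

answers = []
for _ in range(N):
    r = random.random()
    if r < 0.25:
        answers.append(random.choice(SCALE))   # frustrated click: pure noise
    elif r < 0.625:
        answers.append(4)                      # never needed support: "satisfied"
    else:
        answers.append(random.choice([1, 2]))  # informed, and actually dissatisfied

survey_mean = sum(answers) / N
print(f"survey mean: {survey_mean:.2f} (informed respondents' mean: 1.5)")
```

Even though the informed respondents in this toy model lean firmly toward [unsatisfied] (a mean of 1.5 on the 1-4 scale), the published average lands near "somewhat satisfied": the random clicks and the default answers swamp the signal from the people who actually know something.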

I left because it became apparent that the answers to all the surveys would all be very similar, and that I could write one letter that would cover all the resources:

Q: Would you recommend it to a colleague?
A: It depends on whether I thought he/she might need it.

Q: Would you use it?
A: It depends on whether or not I needed it.

Q: Was it well designed?
A: It depends on the use. For instance, to a computational linguist, a resource is almost useless unless he/she can download it and process it. Resources that one can search or browse have relatively low value. Also, for the computational linguist, resources generally need to be text (sometimes audio) and they need to be large: millions of words. For instance, my current Master's student is working with a 2-billion word corpus. On the other hand, the exact opposite conditions may apply in other fields. Someone studying dialects of Old English might be satisfied with a few pages, and would access the data by reading it on the screen, with no thought of downloading.

Q: Is it under-utilized?
A: I have no way of estimating how often it ought to be utilised and no way of finding out how often it is actually utilised.

Moreover, it became apparent during the coffee break that the statistical analysis would have to be rather more complicated than you might anticipate. Among the people I talked to, there were clearly at least two distinct ways to answer these questions. Some would answer only out of their own personal research experience (e.g. saying "no" to "would you use it?" unless they thought they might actually use it themselves). Others would take a broader view, answering "yes" to that question if they could imagine any reasonable researcher wanting the resource. That means one should not really consider all the subjects to be drawn from a uniform population, and that you would have a confound between the utility of resources and the number of subjects who take the broader view.

Given that subjects have no real reason to choose the narrow or broad view of utility, any small influence might change their minds. The worst possibility is that a good lunch might induce mild euphoria in the subjects, and that some might switch to the broader view of utility after lunch. It seems unlikely that you have enough data to separate this effect from the desired measurement, which is presumably the (ill-defined) perceived utility of each digital resource.

Further, there seemed to be an atmosphere of mild misrepresentation. It was rather less like a typical workshop, and rather more like a focus group. And if the survey sheets were just "mnemonic aids," as I was told, then why were they collected and read?

Well, I wish you well, though I don't expect much enlightenment to come from the exercise.

Sincerely,
Greg Kochanski

P.S. Is a screwdriver useful? [yes] [no] Would you recommend it to your colleagues?
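The narrow-versus-broad confound can also be sketched numerically. In this toy model (all probabilities invented for illustration), a fixed resource gets a very different approval rate depending only on what fraction of subjects happen to take the broad view:

```python
import random

random.seed(1)

# Toy model of the confound: the resource never changes, but subjects
# either rate it from personal need (narrow view) or from imagined
# community need (broad view). All probabilities are invented.
P_PERSONAL = 0.1  # chance a given subject would use it themselves
P_ANY = 0.8       # chance they can imagine some researcher using it

def survey(frac_broad, n=5000):
    """Fraction of [yes] answers when `frac_broad` of the subjects
    take the broad view of utility."""
    yes = 0
    for _ in range(n):
        p = P_ANY if random.random() < frac_broad else P_PERSONAL
        yes += random.random() < p
    return yes / n

before_lunch = survey(frac_broad=0.3)  # few broad-view subjects
after_lunch = survey(frac_broad=0.6)   # mild euphoria: more take the broad view

print(f"approval before lunch: {before_lunch:.2f}, after: {after_lunch:.2f}")
```

In this sketch the resource's measured "utility" jumps from roughly 0.31 to roughly 0.52 without the resource changing at all; the survey is measuring the subjects' mood as much as anything about the resource.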
Last Modified Wed Apr 26 17:47:23 2006, Greg Kochanski