Greg Kochanski

There was an interesting editorial in the International Herald Tribune recently ("Try a little tenderness" by Steven Kleinman and Matthew Alexander, 12/03/2009). It makes the case that tactics that scare or humiliate detainees into talking are "actually counterproductive to establishing the kind of relationship -- one based on trust -- that is almost always necessary to win a detainee's cooperation."

Suppose they are right (and both authors are experienced in the field): how could this be so? It seems obvious that one should drive an interrogation: push hard, get them talking fast. But speech isn't information, so even assuming that rough interrogations or torture would get people to talk faster, the quantity of speech produced isn't important. What actually matters is the amount of reliable, confirmed information that you get from the final analysis.

Ultimately, if there is any moral justification for abusive interrogation or torture, it can only be by way of the information produced. If you get information that actually saves lives or prevents injuries or property damage, then you can attempt a moral justification of your actions. On the other hand, an interrogation technique that doesn't actually yield trustworthy information can never be anything more than useless cruelty.

While I am not an expert on interrogation or torture, I am an expert on piecing some kinds of information together to try to get a trustworthy result. I'm a scientist. "Not the same! Not the same thing!" you might cry, but a crucial part of both science and intelligence work is a person who puts the evidence together to tell a coherent story. The scientist and the intelligence analyst have the same human capacity for self-delusion and for twisting the facts to fit the stories we already believe. In both science and intelligence, the whole purpose is to extract some truth from a complicated and messy situation.

In both science and intelligence, one proceeds by a mixture of listening, guessing, and testing. You listen to and look at the available evidence, then make a guess about what is really going on. Next, you test your guess by asking some specific questions that will confirm your guess, refute it, or force a revision. (In science, this is what experiments do.) Then do it all again and again, until you are confident that you know enough and that you trust your answers.
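The guess-and-test loop above can be sketched as hypothesis filtering: keep a set of candidate stories, and let each test's answer refute the stories that predicted otherwise. This is a deliberately idealized model (the function, the stories, and the example facts are all invented for illustration):

```python
def investigate(hypotheses, tests):
    """Iteratively narrow a set of candidate stories.

    hypotheses: list of candidate explanations.
    tests: list of (prediction, observed) pairs, where prediction(h)
           is what hypothesis h says the answer should be.
    Returns the hypotheses that survive every test.
    """
    surviving = list(hypotheses)
    for prediction, observed in tests:
        # A test refutes any hypothesis whose prediction disagrees
        # with what was actually observed.
        surviving = [h for h in surviving if prediction(h) == observed]
        if len(surviving) <= 1:
            break  # confident enough to stop asking questions
    return surviving

# Invented example: which story fits the facts?
stories = ["insider", "outsider", "accident"]
tests = [
    (lambda h: h != "accident", True),  # the damage was deliberate
    (lambda h: h == "insider", True),   # the door was opened with a key
]
print(investigate(stories, tests))  # ['insider']
```

The catch, of course, is that real evidence is rarely a clean True/False answer, and real people rarely apply the refutation step so mechanically; that gap is exactly what the next paragraphs are about.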

The problem comes when you have humans doing the steps instead of some ideally rational being: People are prone to ignore some of the evidence. People may not be able to imagine what is really going on (if you can't imagine the real situation, you will be a long time getting to the right answer). People's imagination also tends to be governed by their pre-existing beliefs. And, finally, people look for confirmation of their beliefs. Only reluctantly do most of us search for evidence that may force us to change our minds.

If the evidence is weak or confusing, what happens? People tell themselves stories that are easy to imagine and easy to believe. The stories they tell usually fit in well with their pre-existing beliefs. Then, they look for evidence that is consistent with the story, to support it. If the evidence is ambiguous, one can always interpret it as support. Nowhere in this triangle of belief, imagination and confirmation is there much of a way for reality and truth to leak in.

Of course I am over-generalizing here. There are some people who go out of their way to consider more of the possibilities. (But never all of the possibilities: the human mind can't out-think the universe.) There are people who go out of their way to test their ideas; they look for flaws. (But even when we do look for flaws, we are happier if we don't find them, and happier still when we find some confirmation of our beliefs.)

It is only when the evidence becomes obvious and blatant that people find it hard to ignore. Science and intelligence analysis work much better when reality intrudes forcefully into the process. And that is why the speed and quantity of talk produced by an interrogation is not important. A small amount of good information is worth a vast amount of ambiguous, untrustworthy information, because only truly solid evidence can force the analyst to stop, rethink his story and maybe expand his imagination. Low-quality information doesn't make the analyst (or scientist) stop and think. As long as the data is messy and ambiguous, the analyst can stay in the comfortable loop of confirming pre-existing beliefs.

So, on the intelligence side, if building a relationship with a detainee produces more reliable statements, then it is the utilitarian thing to do. Rough interrogations may make the analyst's job inhumanly difficult by forcing him to try to strain the truth from a sludge of confusion and lies. In that position, the analyst becomes his or her own worst enemy, telling easy-to-believe stories and becoming convinced that they are true.

And on the science side? On the science side, this is why you should want a big, accurate experiment instead of the small, cheap experiment that your funding agency would like to pay for. On the science side, certainty takes time, imagination and repetition, all of which cost money. This is the reason that people should revisit old experiments, making sure they were done right and that the answer is solid and repeatable. You'd like to give people a solid anchor on which they can hang their theories, and maybe disprove a few that are wrong. Scientists don't deal with ambiguity and confusion any better than intelligence analysts.


[ Papers | kochanski.org | Phonetics Lab | Oxford ] Last Modified Wed May 20 19:15:59 2009 Greg Kochanski