
“Impact” and the limits of storytelling

John Coleman and Greg Kochanski
In two related changes, HEFCE (the Higher Education Funding Council for England) and RCUK (Research Councils UK) now require us to tell stories about the impact of our research. All research council grant proposals must tell a story about the impact the research will have, and now, in the REF (“Research Excellence Framework”) consultation, it is proposed that we will be required to tell stories about the impact – the demonstrable “economic, social, public, policy, cultural or quality of life impacts” – our research has had outside academia.

So, let’s talk about storytelling. Any stories we tell will be limited by the facts, but they will also be limited by the necessity that they be believable. Logically, there are four kinds of stories: false stories and true stories, combined with stories that are believable and those that are not. Examples of all four sorts abound in history and academia.

Quantum mechanics, for instance, is a true story that is nearly unbelievable. We like to think that if an electron starts here and ends up there, it follows a definite path and you can know where it is every step of the way. It’s hard to imagine that electrons don’t work that way, and it takes intensive training to get students to believe what electrons actually do.

And there are stories that are false but that virtually everyone believes at first. In linguistics tutorials, we see students come in with certain pop-linguistic ideas that are hard to eradicate. There’s a widespread belief that when we listen to speech, we break it down into a string of discrete phonemes (consonants and vowels) and then clump those together into words, which enables us to look up their meanings in our mental dictionaries. There’s the belief that there are separate little modules in the brain: one for phonetics, one for grammar, one for meaning, and so on.

So too with impact stories. For research proposals, believability is the dominant factor, because there are no hard facts: a proposal must contain a story about impact in a more-or-less unknown future, and that story will be a mixture of hope, guesswork, and fiction. Looking forward, it may not be possible to predict the direct impact or public benefit both plausibly and honestly. And even for the retrospective case studies that would be required for the REF, believability would still be at least as important as facts. The full story of how any research project affects society is complex and filled with detail; whole history books (with competing viewpoints and conclusions) have been written about some discoveries. Yet, for the purposes of the REF, that story must be condensed into a few paragraphs, and the condensation will be governed by believability.

So, what stories do people tell and believe? Simple, linear stories without too many steps. We like stories where each cause has one effect and each effect has one cause. We like stories with a single inventor, hero, or villain. For instance, when we think about one output of “research and development” – Microsoft Windows – we may think of Bill Gates as the cause, even though a little arithmetic makes it obvious that he couldn’t have written all the code himself. (Nor, in fact, did he even have time to personally hire all the programmers.) In reality, Microsoft Windows has a large number of direct causes – the many people who designed and built it – each of whom was influenced by books they read, courses they took, e-mails they received, and conversations they had in hallways. There is a backwards-spreading tree of causality behind Windows, much like the tree of your ancestors.

But suppose you wrote a book on programming in 1988. Would you dare write in your impact statement that you helped make Microsoft Windows? Would anyone take it seriously if you did? Of course not. Yet, without question, some academic’s research helped Microsoft engineers make the design decisions that were critical to Windows’ success. The trouble with the REF is that any specific story you could tell would (with 99.9% probability) be wrong in detail, and therefore hard for the referees to believe, even though the impact was real. Windows would certainly have been different had no books on computer science ever been written.

Predicting impact – predicting anything – may be impossible if human society is chaotic. Any small change might grow until it leads to a society that is completely different in detail (though perhaps similar in certain overall properties). A good analogy is the weather, where the flap of a butterfly’s wings will, in a matter of weeks, rearrange the pattern of thunderstorms. Chaos typically arises in systems with many components and complicated interactions, and society can reasonably be described that way.
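(To make “chaotic” concrete, here is a minimal sketch – ours, in Python, and purely an illustration of chaos, not a model of society. It iterates the logistic map, a textbook chaotic system, from two starting points that differ by one part in a million; within a couple of dozen steps the trajectories bear no resemblance to each other – the numerical version of the butterfly’s wing-flap.)

    # Sensitive dependence on initial conditions, illustrated with the logistic map
    # x_{n+1} = r * x_n * (1 - x_n), taking r = 4 (a chaotic regime).
    def logistic_trajectory(x0, r=4.0, steps=50):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_trajectory(0.300000)
    b = logistic_trajectory(0.300001)   # differs by one part in a million

    for n in (0, 10, 20, 30, 40, 50):
        print("step %2d:  %.6f  vs  %.6f" % (n, a[n], b[n]))
    # By roughly step 20 the two columns have nothing in common,
    # even though the starting points were practically identical.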

If society is chaotic, the only kind of impact you could honestly measure would be immediate impact. If you were to follow the chain of cause and effect more than a few steps, it would be overwhelmed by the effects of flapping butterflies. But, whether or not society is really chaotic, we tell (and believe) stories as if it is not.

Overall, research certainly has a strong impact on society: many aspects of modern society can be traced back to someone’s research, and many improvements were inspired by research. But telling a specific, convincing story is hard, because only certain types of true stories are convincing. As a result, impact statements are a tool biased towards simple stories.

Even in hard sciences that we think of as being fairly close to market, it was or would have been impossible to believably predict the real impact of such developments as steam engines (which plausibly led to the spread, and then the demise, of canals), techniques for growing single crystals (which led to microelectronics), the computer (Isaac Asimov’s “The Last Question” [1956] shows how inaccurately the future of computers was predicted), white-light holograms (now most frequently found on gift-wrapping paper and credit cards), speech codec chips (mostly found in talking birthday cards and mobile phones), lasers (ubiquitous in CD and DVD players), and so on.

How much more difficult it is, then, to tell a story about the wider public impact of historical research. And yet Gibbon’s Decline and Fall of the Roman Empire (1776) woke echoes in 2008, during the credit crunch, when the “decline and fall of the American empire” was the subject of hundreds of blogs. Should those count as part of Gibbon’s impact? Then there are the butterflies in our culture: nihilism, counterpoint, and Abkhazian syllable structure. They form part of the rich dynamic of our culture, but what is the impact of a butterfly on the weather? We do not know how (or whether) the world would be different had these things been thought of differently; but it is hubris to say that they have no effect just because we cannot trace the path.

Telling plausible stories about impact is somewhere between impossible and damaging. The resultant stories will be written to be believed, and reviewers will continually be forced to choose between unpalatable mixtures of guesswork and optimism. The stories will have as much in common with fairy tales as with scholarship or science. Forcing researchers to tell such tales as part of their work should be vigorously resisted: it will not lead to the wise expenditure of public money, and teaching us to lie is hardly good policy.

We offer as exercises for the reader:

1) Produce the impact statement that would be part of a proposal to write the Communist Manifesto (Marx and Engels, 1848).

2) Produce the 1968 version of the retrospective impact statement.

3) Pretend to be a reviewer back in 1847: comment on both (1) and (2).

4) Draft impact statements (according to your personal taste) for Gilbert White’s Natural History of Selborne, Milton Friedman’s Capitalism and Freedom, or Germaine Greer’s The Female Eunuch.