Predicting the Economic Impact of Research


This spring, the research councils of the U.K. began to ask all researchers to predict the economic impacts of their research.  There is nothing wrong with economic impact, of course, even though there are other reasons for research.  The problem is that a prediction is required.

Prediction is very hard, especially about the future.

(A quote attributed variously to Yogi Berra and Niels Bohr.)

One of the things that scientists do well is find out new and interesting things, but people (scientists included) are very bad at predicting the future.  For instance, the CEO of a major computer company said “There is no reason for any individual to have a computer in the home.”  (Ken Olsen, Digital Equipment Corporation, 1977 meeting of the World Future Society, Boston.  See http://www.snopes.com/quotes/kenolsen.asp)  The fact that he later said it was taken out of context and that he wasn’t referring to PCs just makes it a stronger example: in 1977, he could not imagine what computers would become.

Or, Irving Fisher, Yale University’s professor of economics, said “Stocks have reached what looks like a permanently high plateau,” just weeks before the stock market crash of 1929.  Or, as Lord Kelvin said: “I have given careful consideration to this subject and I do not believe the shareholders of your company need be alarmed at the prospect of wireless telegraphy.” [At a meeting of the Anglo-American Telephone Company on Aug. 1, 1902.  BDE 1902-08-03, pg. 39, “Wireless System Not Feared”]

Other examples can be found, though doubtless all but the most egregious tend to be forgotten.  Of course, this inability to predict outcomes is not limited to scientists.  It is a general human problem: for instance, both the US and USSR governments had major programs working towards nuclear-powered aircraft from 1946 to circa 1960.  And more recently, the economic impact of innovations in the financial industry (e.g. the shadow banking system and derivatives) was not understood until the 2008 credit crunch was in full swing.

People often fail to realize the potential of an innovation, and they often get enthusiastic about impractical things.  However, in the spirit of the Research Councils UK impact statement, I think it is appropriate to predict the future and describe the impact of this new research policy.

Fiction Writing in Science

Let’s start with basic economic incentives: If you ask people to predict the future and if you make their livelihood dependent upon making a prediction, they will indeed predict.  The trouble is, there is no reason to believe their prediction will be any good.

In the context of research proposals, the best that can be hoped for is that each grant proposal will be accompanied by a small piece of fiction and that the reviewers who read these grant proposals will simply ignore the impact prediction.  If that happens, the consequences will not be so bad.  Scientists will be given a taste of fiction writing.  They will, hopefully, become culturally broader, and beyond a certain amount of aggravation and a modest amount of wasted time, no real problems will arise.

But what if people take it seriously?  It is easy to see why the writers will take it seriously: their careers and their jobs depend on it.  But what about the readers: will they take it seriously?  Probably they will, because these proposals are all peer-reviewed, so the reviewers are often authors too.  The trouble is that people are honest in a strange backward sense: once we say something enough times, we tend to believe what we are saying.  Along the same lines, we work backward from the things that we do and come to believe that they are important.  So, if you write something enough times, like a small piece of fiction that pretends to describe in detail how your research will help U.K. competitiveness, you will tend to believe that this kind of fiction is important when you read a grant proposal.

Unfortunately, despite your belief, it will still be fiction.  Perhaps if we are lucky, we will get a generation of researchers who are inspired to write good fiction and judge good fiction.  We will judge their research proposals partially by the quality of that fiction.  That might not be a terrible thing, because there is no reason to believe that good scientists are bad fiction writers, but it will mean that proposals will be chosen less on their scientific merit and more on their literary merit.

If so, and if we are lucky, our research proposals will be judged in a somewhat odd manner: a little bit of fantasy will sneak into the peer-review process.  A few proposals that ought not to be funded will be funded.  Research quality may decline, but by and large, life will go on.

We may be so lucky, but I think we won’t be.  The problem is that the people writing and reading these proposals are basically straightforward, reasonably honest people, and if they see a piece of fiction with a little bit of reality in it, they will prefer it to a piece of pure fiction.  So given two research proposals that talk about wild and wonderful results to be had and how they will help the economy, the one that tells the more plausible story will win.

Which Stories Will Win?

The most plausible stories about the future are stories of the near future.   People like to believe simple stories that say “I do X and Y will happen,” stories without complex interactions, external effects or multiple steps.  People like linear stories with single causes and obvious effects.

One kind of story that people do not like is the story where the hoped-for discovery will have small effects on hundreds of different parts of society and the economy.  We do not like stories where an idea of yours lodges in the mind of a chance reader, and then, two decades later, plays an important role in an invention.  We do not like stories where we cannot describe this future invention, even though describing an invention before it has been invented is an obvious impossibility.

Yet, it seems likely that human society does actually work in these complicated ways, at least much of the time.  But we don’t like those stories because they violate the cultural mythology of the lone inventor or the wild-eyed genius who single-handedly changes the world.  Our culture has myths of individual heroes, in control, working, heading towards an intentional goal.  We are not happy with the idea of things going on that no one really intends to have happen, and we certainly don’t consider such things heroic.

So, the stories that will sound plausible will be the close-to-reality, short-range ones, where one really can see where the research is going.  These are the single-step stories with one cause and one result.  Longer-term research (the kind that bears fruit in 20 or 30 years) will be ignored.  Long-term research cannot easily tell a story that is as attractive, because the long-term story takes more steps, requires belief in more coincidences, and has more alternative possibilities.

If an impact story has four steps, everyone knows that it will go wrong at one step or another.  But what is not so easy to see is that there are hundreds (or maybe millions) of possible four-step stories.  Even if each of them has only a small chance of coming true, it can be quite likely that one of them will work out.  So, even though any particular four-step story you tell is almost certainly wrong in detail, it is quite possible that some four-step story works.  But no one will believe whatever story you write, and we cannot even estimate how many four-step stories there are.  The impact might be real, but the story you tell will not count, not in the realm of an impact statement.
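
A small calculation makes the point.  The numbers below are purely illustrative assumptions of mine (nothing about impact statements fixes them), but they show how a story that is almost certainly wrong in detail can coexist with a near-certainty that some such story comes true:

    # Chance that at least one of N independent four-step stories comes true.
    # The per-step probability is an assumed, purely illustrative number.
    p_step = 0.3                 # assumed chance that any single step works
    p_story = p_step ** 4        # a four-step story needs all four steps: ~0.008

    for n_stories in (1, 100, 1_000_000):
        p_at_least_one = 1 - (1 - p_story) ** n_stories
        print(f"{n_stories:>9,} stories: P(at least one works) = {p_at_least_one:.4f}")

With these made-up numbers, any single story has less than a 1% chance of being right, yet among a hundred candidate stories the odds are better than even that one of them succeeds.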

Who Would Fund the Laser?

Let’s look at a real story: consider the laser.  Lasers have proven to be worth billions of dollars, and they have made people’s lives better in ways beyond their financial impact.  There is a laser inside every DVD player and every CD player in the world, and just about all of our modern communication is carried through optical fibres on light coming from little modulated lasers.  But before lasers were invented, could anyone have told that story?  No.  Even after the laser was invented, would anyone believe someone who said “every telephone will be fed by a laser”, and that “most people will be spending their time watching something that isn’t quite like TV, sent over laser beams”?  (That’s a description of YouTube.  Remember, we are in the early 1960s, long before the Internet and computer graphics.)  Would any sane reviewer believe them if they said that teenagers would prefer to type to each other, rather than talk on the telephone?  (And that lasers would have a crucial role in this?)

Would anyone believe that people would watch movies with lasers?  Well, actually, I might have, but only by imagining all the wrong details.  In my 1960s imagination, a laser movie player would have big reels of film, with thousands and thousands of holograms in a row.   It would be great!   You’d stare through a big lens at the film, and the laser would illuminate the holograms, one at a time, to give a three-dimensional image.  The film would whirr through, clicking along the sprockets, and you’d see a movie.  You could move your head and look behind foreground objects; if one of the characters threw something at the camera, you’d duck.  And, you’d have three lasers, one for each of the three primary colours.  The film would have three coloured holograms on it, one for each laser.

That’s what might have happened, in an alternative history, if technology had gone a different way.  What actually happened was that lasers were used to drill holes in diamonds for making wire.  You take a wire and pull it through a diamond with a hole in it, and the wire gets a little bit narrower, because the hole is tight and just narrower than the wire.  So you start with a rod of steel, and after pulling it through maybe 50 diamonds, each with a hole smaller than the last, you end up with a steel wire.  And you need diamonds because you are pulling miles of this wire through them, and anything else would wear out too fast.  The tricky part of this process is drilling the hole.  Lasers were an obvious solution, but it was not a big deal from an economic point of view: you do not actually need many diamonds with holes in them.

A reviewer might believe the diamond-drilling story, if you (as the person who is proposing to invent the laser) could imagine it.  But the reviewer would probably see it as far-fetched.   After all, you need a very high-powered laser to burn holes in diamond.  [And, a smart reviewer would say “Hey! Diamonds are transparent.  That laser beam isn’t going to be absorbed, so you won’t be able to drill a hole.”  He’d be wrong, but the reason he was wrong wouldn’t be understood until after the laser was invented.] So it is a dubious economic impact story: not too plausible and not too much impact.

But the real consumer-product story has too many steps, all of them too hard to imagine in 1960.  You can’t use a laser to read a DVD unless you have a computer.  (You’d need a computer a million times more capable than the 1960s state of the art.)  You need mathematics and software to encode the images.  You need organic chemistry to make the liquid crystals for the display.  Too many steps were missing between the laser and the DVD player.

And what about running the Internet over lasers?  Besides computers, you would also need to invent optical fibres.  Otherwise, your Internet would have to be carried by racks of lasers and telescopes on the tops of tall towers.  Your Internet connection would come through a telescope mounted on top of your house, and it would probably not work very well when it rained.  And, you’d have to assume that someone would invent digital error-correcting protocols, too.  Again, that’d be a very complex story, and doubtless it would sound improbable and far-fetched to the poor referee reviewing your proposal.  So would we have funded research into the laser?  Not based on the Internet or DVD stories.

Where Did Lasers Come From?

Now, think of the step before lasers.  Lasers were originally called optical masers.  “Maser” stands for Microwave Amplification by Stimulated Emission of Radiation, and the word “laser” is obtained just by changing the first word from “Microwave” to “Light”.

Who would fund the maser?  Masers have turned out to be an unimportant piece of technology.  So far, fifty years or so on, they have had little practical application.  They were used a little for satellite communication, though not any more.  They have been used as amplifiers in radio astronomy, but there are better techniques now.  The only recent use that I can think of is atomic clocks, and while atomic clocks are indeed useful things, the best ones have gone beyond using masers now.  Something related to (but not quite) a maser is orbiting in all the GPS satellites, and you might get lost without them.

The connection between masers and the competitiveness and economic productivity of the U.K. seems rather tenuous, and it was even more tenuous 50 years ago, when no one could even imagine the GPS system.  The concept of satellites was in the air, but they hadn’t happened yet.  The concept of putting an atomic clock in a satellite, well…  The concept that you could have a radio receiver for microwave frequencies, small enough to fit into the palm of your hand, that could listen to several satellites simultaneously: that was not even science fiction yet.

Moreover, you would be lucky to know where the satellite itself was to within a few miles.  We did not have computers that could easily calculate orbits.  We did not have good models of drag from the atmosphere, so we could not accurately predict where the satellite was going.  If your idea of where the satellite is is off by a few miles, then your idea of where your GPS receiver is will likewise be off by a few miles.
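
You can see that error propagation in a toy trilateration experiment.  The sketch below is my own illustration, not how real GPS receivers work (there is no clock bias, the range measurements are perfect, and the satellite positions are invented): perturb the assumed satellite positions by about five kilometres, and the solved receiver position shifts by a comparable amount.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    receiver = np.array([0.0, 0.0, 6371.0])      # true position on Earth's surface, km
    sats = np.array([                            # four invented satellites near GPS altitude, km
        [ 15000.0,   5000.0, 20000.0],
        [-12000.0,   8000.0, 21000.0],
        [  4000.0, -15000.0, 19000.0],
        [ -6000.0,  -9000.0, 22000.0],
    ])

    # Pretend the range measurements themselves are perfect...
    true_ranges = np.linalg.norm(sats - receiver, axis=1)

    # ...but our orbit model is off by ~5 km (a few miles) per satellite.
    assumed_sats = sats + rng.normal(scale=5.0, size=sats.shape)

    def residuals(x):
        return np.linalg.norm(assumed_sats - x, axis=1) - true_ranges

    fit = least_squares(residuals, x0=np.array([0.0, 0.0, 6000.0]))
    print("receiver position error: %.1f km" % np.linalg.norm(fit.x - receiver))

With this geometry, the solved position typically lands several kilometres from the truth: the receiver error is on the same scale as the orbit error (sometimes amplified by the geometry), just as the argument above says.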

So who would fund research to develop a maser, knowing that the maser itself would not be useful over the next 50 years?  But masers led to lasers, and lasers led to CD and DVD players and other bits of technology that make money.  It’s a true story, but not one that would make a good impact statement.  It is too far-fetched.

Short-range, Managed Research

Ultimately, the whole idea of impact statements and managing research assumes that you can manage research.  Or, more precisely, it assumes that when you manage researchers things actually get better, not worse.  To make research management work, one needs to be able to predict — correctly — the outcome and applications of research, and that just cannot be done.  Managing research simply yields short-term research instead of long-term research.

Now you may ask “What is wrong with short-term research?”  You might say “We need short-term research to handle all the urgent problems we have.”  “Short-term research,” you may say, “is focussed, and we will get a solution to the problems we have today.”  Certainly, there is a place for short-term, focussed research when the general outlines of the problem are understood and the right tools and ideas are available.  But…

Our modern view of managed, focussed research was based on the Apollo space program and the Manhattan Project in World War II.  The idea is that you set a bold goal, you spend as much as needs to be spent on it, and you achieve the goal.  It is a heroic story too: not the story of the lone genius, but a military story of organised struggle.  And, sometimes this approach works.  The space program worked.  The Manhattan Project worked.  But we already knew (in general terms) how to do both of those projects before the money started to flow.

At the start of the Manhattan Project, chain reactions were understood.  Einstein had signed a famous letter in 1939 pointing out that you could make a bomb of unparalleled destructive power.  The basic science had already been done by the start of World War II.  The Manhattan Project accomplished an amazing feat, but it was a feat that we knew could be done, and we knew broadly how to go about doing it.  Likewise with the space program.  We knew rockets worked.  We knew the mathematics of orbits.  We knew how to make tons of kerosene.  We knew the major ideas involved.  While a lot of bits and pieces were developed during the space program, the general shape was known at the start.

When Short-term, Heroic Research Fails

Contrast those successes with the war on cancer.  The war on cancer started in 1971, when I was a child, and I may die of cancer before it is finished, even though I do not have it yet.  In the course of this “war”, we have learned that cancer is not a single disease.  It is a collection of many things that can go wrong: almost anything that leads to uncontrolled cell division.  When we started the war, we did not know how the disease worked.  We did not know any genetics.  We did not know the mechanics of how cells reproduced and how their division was controlled.  Consequently, we have spent untold billions of dollars on it, and only modest progress has been made.  These days, if you get some varieties of cancer, and you have a bit of luck, you may survive for quite a while.  However, you are still more likely to die of cancer than of anything else.

And the reason any progress has been made in the war on cancer at all is that part of the money was spent on basic research.  (Some of the basic researchers were good storytellers and were able to tie their research to cancer treatments.)  Could you cure cancer using the basic biology that we had in the 1960s?  No.  Without improved understanding of the mechanisms of the disease, you could have spent trillions of dollars rehashing the same old experiments with the same old techniques.  You could have spent money developing better radiation sources and sharper scalpels, and you would never have solved the problem.

That example shows the real problem with short-range research.  If you don’t have the right knowledge and tools available, you can heroically spend a nearly infinite amount of money on a research question and you will just not get anywhere.

Take linguistics as another example: Chomsky had some brilliant ideas in the 1960s.  They essentially created the field.  But since then, nothing dramatic has happened in general linguistics (the core area that is closest to Chomsky’s ideas), because we are still using the same experimental techniques that Chomsky used in the 1960s: that is, none.  Classical (or “general”) linguistics has been built on introspection.  That is, by asking yourself what the answer is, and then telling yourself what the answer is.  While that can yield a certain amount of truth about human language, the resulting data has not gotten any more accurate or sophisticated in the last 40 years.  So the field has stagnated.

That is the exact opposite of a field like astronomy, where they build bigger and bigger telescopes.  Ten years ago, a 4-metre telescope was big.  Now, 30-metre telescopes are coming.  Bigger telescopes let you see things better; you see new things.  New things drive new science.  They force the theorists to come up with new ideas.  With bigger telescopes, the experimenters can test theories they could not test before.  Astronomers have been in a lucky position in that they have not had to do too much of the basic research underlying their amazing telescopes.  Much of telescope technology has been supplied by other people.  They have been the beneficiaries of a lot of long-range (and short-range) research from all corners of the world, ranging from the computers that control the telescopes to improved optical systems to improved materials.  Those new telescopes would not exist without a lot of basic research in other fields.
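
To put a rough number on that jump (my own back-of-the-envelope arithmetic, not a figure from any telescope project), a telescope’s light-gathering power scales with the area of its mirror, i.e. with the square of its diameter:

    # Light-gathering power scales with mirror area, i.e. diameter squared.
    d_old, d_new = 4.0, 30.0                     # mirror diameters in metres
    ratio = (d_new / d_old) ** 2
    print(f"A {d_new:.0f} m telescope gathers about {ratio:.0f}x "
          f"the light of a {d_old:.0f} m one.")  # about 56x

So the coming generation of telescopes gathers roughly fifty times the light of the old one, before counting any improvements in detectors or adaptive optics.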

Improvements from Unexpected Directions

Compare this to linguistics.  The competition to linguistics these days is not coming from improved techniques of introspection.  The competition is coming from things that were unimaginable back in the 1960s.  It is coming from genetic engineering that allows us to place light-sensitive genes in mice, so that you can turn nerve cells in the mouse’s brain on and off by shining a light on its head.  The inserted gene controls the behaviour of those nerve cells in a way that we are beginning to understand.  That technique will allow us, over the next few years, to understand the neurocircuitry of animals, and then to use that knowledge to help understand humans.

Already people have done this with fruit flies.  You can change fruit-fly behaviour by genetically engineering them and flicking a light on.  And you can see how they respond; that lets you understand the neurocircuitry in a way that introspection about nouns and verbs and past participles cannot.

Those techniques from neuroscience will displace much of linguistics, and they came from basic research.  They came from things that were unimaginable in the 1960s.  They came from things like Watson and Crick’s work on DNA, which was by no means short-range applied research.  That is the kind of thing you lose if you focus on short-range research.  You lose the possibility of a surprise, of something coming out of the blue from another field and helping you, of another field doing a better job at something than you could hope to do yourself.

Let me give you another analogy.  Back before 1700 or 1800, wealth was land.  If you had land, you could dole some of it out to your vassals, and they would provide services to you.  Money was relatively unimportant.  Investment banks did not exist.  Most industries did not exist.  It was an agricultural economy, and your wealth was strictly proportional to how much dirt you had under your control.

So let us put ourselves into that economy and ask ourselves: how could we make the U.K. richer?  And the answer is of course this: wealth is land, so we should make more land.  So in fact, what people did was to drain swampy areas.  It worked.  The Fens around Cambridge are land that once was tidal marsh.  However, there is only so much land you can create that way.  These short-term approaches always run into some sort of intrinsic limit, and the intrinsic limit to draining marshes is that the water gets pretty deep pretty fast as you go out into the North Sea.

But society found a completely different solution, one that could not easily have been reached by trying to manage research toward better techniques for draining swamps.  We changed the concept of wealth.  Wealth is not land any more, but rather things: wealth became manufactured goods.  And you can manufacture a lot of goods.  It turns out you can even manufacture tools to make agriculture more efficient, so not only can you have things, you can also stop being hungry, because you have these factories that make tools, fertilisers and pesticides.  Even though that stuff is not food, it turns out that it makes your agriculture better.  And so we are actually richer than we were in the medieval period, but not because we have more land.

We are richer because we are not dying of hunger and disease, and because these factories make tools that can be used for all sorts of purposes.  These are results that you would not easily get from incremental research into better techniques for draining swamps.

Long-term Consequences of Short-term Strategies

We can do short-term research for a while.  But then, what we end up with is a shortage of problems and ideas.  Or, more precisely, we will have applied researchers trying to operate without the tools and ideas that they need to solve their problems.

In 10 or 20 years, much of the basic research that we have done so far will be mined out.  At that point, we’ll be in the same difficult position as we were in the war on cancer.  It will be like trying to build an atomic bomb before you know about uranium, or trying to make a GPS system without good atomic clocks, or trying to build a mechanical computer.  We might get ourselves into the applied-research trap, where we work very hard and spend huge amounts of money, without success, trying to use the wrong tools for the job, or trying to solve the wrong problem.

Hopefully, the U.K. budget will get itself back under control in 10 years or so.  Hopefully, the basic knowledge will not run out before then.  We shall see, but it strikes me that this policy change is really just another way of borrowing against the future.  We will be spending down our stock of accumulated basic knowledge.  The collection of answers for which we have not yet figured out the questions will be depleted.  Fortunately, answers to unknown questions are not a finite natural resource: we can always make more, but they seem to come out of unmanaged or lightly managed research.

