The freedom to avoid the obvious task.


Idleness is equated with laziness or incompetence.  It is the modern sin.  Lose your job and people look at you funny, thinking “well, I guess he just wasn’t capable of it…”.  A gap in your resume, between jobs, is considered suspicious.  More than that, we pay management a great deal of hard-earned money to ensure that no one is idle while employed.

This attitude comes from 19th-century factories and early 20th-century time-management practices.  In that world-view, everyone has a well-defined job to do.  Think of a car factory with people hanging doors on the cars: the number of cars you can sell is exactly half (or a quarter) of the number of doors they hang, depending on the model; if the door people stop work, the entire assembly line comes to a halt.  Fifty years ago, there was a very clear demarcation between jobs: Management (or Engineering) did the thinking and the assembly workers hung the doors.  Thinking was not their job.  Idleness — in the sense of not hanging doors — was (and is) equated with theft from the employer.  Someone being paid and not doing any visible work seemed (and seems) not much different from someone who sneaks in at night and takes money from the till.

Modern management has backed off from the time-and-motion studies pioneered by Frank and Lillian Gilbreth.  We (generally) no longer treat workers as automata to be programmed on a second-by-second basis.  Factory-floor teams now (often) have some control over the details of their work.  Further, they are encouraged to think (though preferably without stopping work) and to come up with ways to improve the manufacturing process.  But perhaps it wasn’t so much management changing its view as the type of job changing.  Many of the simple tasks that the Gilbreths could improve by detailed motion planning have now been automated.  The detailed motion plans are generated by manufacturing engineers (and/or computers) and fed into industrial robots or programmable milling machines.  The manufacturing jobs that remain (at least in the US and Europe) tend to be tasks that are not repetitive and stylized.

But, in the last half-century, while factory jobs were becoming less tightly managed, the reverse has happened to the thinking professions.  Computer programming has gone halfway to becoming factory work, and doctors now work to the tick of a clock.  Scientists spend months writing research proposals to satisfy a management structure that demands to know what they are going to discover, and what it will be good for, even before the search begins.

Let’s start with the family doctor or “general practitioner”.  I go to a group practice here in England, run by the National Health Service.  The group manages itself, but does so under some fairly tight requirements imposed from above.  Specifically, the payments are fixed, the number of people they serve is (effectively) fixed, and they are required to see anyone within 48 hours.  As a result, I’ve only once spent more than five minutes in a doctor’s office.  Doctors can spend more time with patients (if they want), but if they do, they have to see fewer patients and take a corresponding cut in pay.  Because the working hours are fixed and the pay per patient is fixed, if you spend 10 minutes per patient (instead of 5) you need to be prepared to have your salary cut by 50%.  (I have no complaints about the doctors here.  In fact I have a lot of sympathy for them: an endless sequence of 5-minute descriptions of sniffles can’t be good for their souls.)
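
The arithmetic behind that last sentence is blunt.  Here is a back-of-envelope sketch in Python; the working day and the fee per consultation below are made-up illustrative numbers, not the NHS’s actual figures:

    # Back-of-envelope: fixed working hours, fixed fee per consultation (illustrative numbers).
    session_minutes = 8 * 60       # assumed length of the working day, in minutes
    fee_per_patient = 25           # assumed fee per consultation, in pounds

    for minutes_per_patient in (5, 10):
        patients_seen = session_minutes // minutes_per_patient
        income = patients_seen * fee_per_patient
        print(f"{minutes_per_patient} min/patient -> {patients_seen} patients, {income} pounds/day")

    # Doubling the time per patient halves the number of patients seen,
    # and so halves the day's income.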

Now, 5 minutes is enough most of the time, for most people.  And the doctors have enough humanity and professional pride so that people who need more attention will get more attention.  But, doctors also have another job to do beyond treating the sick.  They are the first line of warning against epidemics.  They need to notice when something unusual comes along and then spread that knowledge.  But, epidemics don’t come along every day, so this second job will inevitably require doctors to have some idle time on the 364 days a year when nothing unusual is happening.

Consider the current swine-flu epidemic.  According to today’s International Herald Tribune (30 April 2009, page 6, UK edition, “A cough that was heard around the world”), the epidemic started in La Gloria, Mexico in mid-March.  It didn’t really get any attention until about 11 April, when a seriously ill patient turned up in a hospital.  Now, in some sense, that’s not too bad: given the current state of the art, one cannot hope to recognize the very first case of a new disease.  And doctors are busy, as we all know.  If they are anything like the British doctors, they have another patient coming in soon.  Did they have time to worry about a few people who seemed to have severe flu, late in the season?  Did they have time to compare notes over a cup of coffee with the doctors down the hall?  Probably not.

But, suppose that they did.  Let’s suppose that the medical profession had recognized this epidemic just a week earlier.  Let’s suppose that it turns out to be a moderate-sized pandemic like the one I have lived through, the 1968 Hong Kong flu.  That killed about 1 million people over a year, for an average death rate of something like 20,000 per week.

With luck, this epidemic will be controlled by a flu vaccine.  We know how to make one; it is simply a question of assembling it, testing it for safety, and manufacturing billions of doses.  That takes about 6 months these days, so with any luck we should have a vaccine in time for next winter’s flu season.  If the experts make a good set of choices of which strains of virus to include in the vaccine and if the real virus doesn’t mutate in some unexpected way, then we can expect the vaccine to stop the epidemic in its tracks.

But, if the epidemic is like the 1968 one, people will be dying at the rate of 20,000 per week around then.  If the disease had been detected a week earlier, we would have a vaccine a week earlier, and we would save about 20,000 lives.  (A week would save several times more people if this epidemic turns out to be as bad as the 1918 flu.)  That’s not a bad return for allowing doctors a little idle time.
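
To put rough numbers on that (a back-of-envelope sketch, not an epidemiological model; it assumes deaths are spread roughly evenly over the year, as in the 1968 figures above):

    # Back-of-envelope: lives saved by detecting the epidemic one week earlier.
    total_deaths = 1_000_000        # approximate toll of the 1968 Hong Kong flu
    weeks_per_year = 52
    deaths_per_week = total_deaths / weeks_per_year
    print(round(deaths_per_week))   # about 19,000 -- call it 20,000 per week

    # If earlier detection brings the vaccine forward by one week, roughly one
    # week's worth of deaths is averted: about 20,000 lives.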

If you want to be an economist about it, you’d have to compare two scenarios.  One with 20,000 deaths from not detecting the epidemic early.  Against that, you’d need another scenario where doctors are given a bit more time to think and a little less time with other patients.  Doubtless, some patients would die in that scenario who lived in the real world; the picture is not all black and white.  But we can clearly see that there are some advantages to what hospital management would see as “idle doctors.”  There is a clear conflict between managing doctors toward the obvious task (today’s patients) and another task (epidemic detection) that doesn’t normally seem urgent.

Similar arguments apply to scientists.  One argument parallels the doctors’ case: scientists are our early-warning system for all kinds of things.  Holes in the ozone layer, global warming, and asteroid impacts are a few recent ones.  None of these came out of the kind of “useful” research that people would pay for in advance.

For instance, back in the 1970s, no government would have had the imagination to ask for research into the ecological effects of chlorofluorocarbon gases (CFCs), which were known to be entirely inert and non-toxic.  (Chlorofluorocarbons are simple molecules made of chlorine, fluorine, and carbon; typically no more than a dozen atoms in the molecule.)  We now ban them because they destroy the Earth’s ozone layer, which protects us from dangerous ultraviolet light from the Sun.

People had been using these gases in spray cans and air conditioners since the 1930s; it was well established that you could breathe large quantities without harm, and they were no more dangerous than common gases like CO2.  For some of these gases, if you opened a cylinder, you would worry more about suffocating from lack of oxygen than about any serious harm from the gas itself.  And they were inert under any normal conditions, simply not participating in any chemistry.  So, they were harmless and considered a triumph of technology.

Or, so it seemed in the 1970s.  But then people started doing research that, to any manager tasked with watching out for ecological threats, would have looked like a waste of time.  Everyone knew what the ecological threats were back then: smog, unburned hydrocarbons from car exhausts, and pesticides, mostly.  No sensible manager would have let his researchers work on the interactions of chlorine and ozone: there is no chlorine in the atmosphere, he would say.

Remember, in the 1970s we did have an ozone problem: we had too much of it down at ground level.  Pollution from the primitive cars of the day interacted with ultraviolet light in the lower atmosphere to make ozone.  That ozone was down at ground level, where it harmed people and plants directly.  It would oxidize the leaves of plants and gently corrode the insides of your lungs.  So, any sensible results-oriented manager would say “OK, so you found some reactions where chlorine breaks down ozone into oxygen.  Nice try, but you know as well as I do that chlorine is a lot more toxic than ozone.  You can’t possibly imagine spraying seriously toxic gases around to solve an annoyance.  Now, stop that and go off and do something useful.”

The thing is, no one really believed the idea that we could change our planet’s atmosphere.  Sure, we’d polluted things a bit near cities, but cities only covered a tiny part of the planet.  And, anyway, even in cities, pollution was seen as more of a manageable annoyance than something serious.  (Except for some special cases like Los Angeles and Pittsburgh, where the pollutants were trapped by mountains.)  Today’s mainstream concern for the environment would have seemed pretty radical back in the 1970s.

And, even if you could change the atmosphere, who would have imagined looking at CFCs as the source of trouble?  They are essentially inert, and they are produced in tiny quantities compared to many pollutants.  No manager would push for a research program in that direction.  Or, more precisely, no rational manager who was not nearly omniscient would do it.  With 20/20 hindsight, it is tempting to point at the field and say “look there!” now that we know the answer, but it wasn’t at all obvious to real humans in the 1970s.

I was working in science at the time; I was getting my undergraduate and graduate degrees in a part of physics that wasn’t miles away from atmospheric chemistry.  I was in a good position to watch this research happen and understand it.  I can remember my reaction on reading an article in Physics Today that explained the chemistry.  The chemistry was straightforward and clear, but I couldn’t imagine how the hell anyone had been lucky enough to make the connection.  I was in awe.  Not in awe of the people who had done it — I’d met enough scientists by then to know that they were smart people but not really that different from the rest of humanity.  It was really awe (and pride) that somehow we had managed to find the right, important connection in an entirely unexpected place.

There were three factors, all of which were necessary to make CFCs important pollutants:

First, they affect the ozone layer and there actually isn’t a lot of ozone up there.  Ozone is an incredibly good absorber of ultraviolet light, and if you separated out the ozone and brought it down to the surface, it would be a layer just 3 mm thick.  (In reality, that small amount of ozone is mixed with air and spread out over a region of the atmosphere that’s 10 kilometers deep).  It’s hard to imagine that there is a critical part of the atmosphere that is so small.
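
That 3 mm figure is just the standard Dobson-unit bookkeeping.  A rough sketch, assuming a typical total ozone column of about 300 Dobson units (an assumed round number, not a measurement):

    # Back-of-envelope: thickness of the ozone layer if brought down to surface pressure.
    column_dobson_units = 300     # assumed typical total ozone column
    mm_per_dobson_unit = 0.01     # 1 Dobson unit = 0.01 mm of pure ozone at standard conditions
    print(column_dobson_units * mm_per_dobson_unit, "mm")   # -> 3.0 mm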

Second, CFCs are inert in the lower atmosphere.  That means they don’t dissolve in rainwater and there isn’t any chemistry that pulls them out of the atmosphere.  They float around for decades and accumulate.  But, when they get high up into the stratosphere, ultraviolet light breaks them into pieces.  One of the pieces is a chlorine atom.  (I’m simplifying the chemistry quite a bit here…)  It’s hard to imagine that something that is inert and safe at ground level would deposit a nasty payload at high altitudes.  And then it’s hard to imagine why you should care about high altitudes (but that’s where the ozone is…).

Third, the chlorine eats the ozone particularly efficiently.  It’s really a network of catalytic reactions where the chlorine comes back out of the reactions unscathed and unattached.  A single chlorine can then be responsible for destroying many ozone molecules.  While it’s not hard to imagine a catalyst (they are well known in chemistry), good catalysts are not particularly common.
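
The textbook simplification of that catalytic cycle looks something like this (the real stratospheric chemistry involves many more species and reactions):

    Cl  + O3  ->  ClO + O2
    ClO + O   ->  Cl  + O2
    ----------------------
    net:  O3 + O  ->  2 O2     (the Cl atom is regenerated and goes around again)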

CFCs were one of Mother Nature’s booby traps, and I’d contend that we found this one in time because our research was not managed.  Good managers would push research in the direction of things that are likely to be important, and CFCs did not fall into that category.  Beforehand, by any reasonable estimate, they were unlikely to be important.  It wasn’t until someone put all the puzzle pieces into place that they were obviously a problem.

This is a case where science needed (and got) the freedom to avoid the obvious problems.  People were largely unmanaged, so they had the freedom to follow their own ideas.  Some of the researchers involved looked as if they were just idly satisfying their personal curiosity rather than working on problems of importance to society.  But it turned out to be useful in the end.  It just goes to show that you can’t predict the future very well, and if you can’t predict it, you can’t really manage it.