
Performance Reviews from the Inside

(This appeared in Oxford Magazine, No. 239, Fourth Week, Trinity Term 2005, page 5. The "Oxford Magazine" referred to is an internal publication of Oxford University, not published in Miami, and not filled with adverts for gracious living in Oxfordshire.)

Performance reviews can be a good thing, if done right, but they will certainly change the character of Oxford. I spent 15 years in industry, at Bell Laboratories, with a performance review each year. For the first half of that period, Bell was one of the world's best research institutions, doing fundamental curiosity-driven research in speech and signal processing, computer science, chemistry and physics, even astrophysics.

Bell Labs was not a quiet, contemplative place. It was a place that pushed people to do good research with a high impact, get it published rapidly, and get going on the next project. We all felt that we were the best people in the best place to do science. Performance reviews helped to maintain that culture. Later, though, as the environment changed, performance reviews became a strongly negative influence on the research at Bell Labs.

Every year in the 1980s and early 1990s, we wrote a description of what we had accomplished, and these descriptions were debated by management in long, intense meetings. As a result, you got a large or a small raise. A few people (maybe 2% per year) were told that their future at Bell was bleak and that they should start looking for a job elsewhere. More than that, though, performance review controlled your access to scientific resources. High-rated people could hire post-docs or technicians, and had easy access to large amounts of money for equipment. Low-rated people had to be more convincing to buy equipment and might find it harder to travel to conferences. There were both carrots and sticks.

The meetings were somewhat democratic. Ten managers, representing 100 people, would review our descriptions. Your manager would make a case for you. However, most of the other nine knew something about you: it was a tightly knit community with no walls, and the first-level managers kept some research of their own going, so they were not completely separate from the group of researchers. In the end, they'd reach a consensus on most people.

When the system worked, it worked because all the managers came up from the ranks (indeed, the first-level managers kept some research active and thus had never completely left the ranks). The managers were successful scientists who knew that research results are never guaranteed, who knew the effort required to analyse data or to write a paper.

Part of the reason it worked -- which doesn't apply here -- is that the managers needed to approve expenditures over $1000 or $2000, so researchers had to talk to them regularly, and thus the managers had a fairly good idea of what their people were doing. Another reason it worked -- which doesn't entirely match Oxford -- is that Bell Labs research was concentrated in a few fields. That meant that the managers who made the performance review decisions could understand (at least broadly) the technical details of all 100 researchers they evaluated, not just their own ten. Another reason it worked was that competition for money was not severe: managers could afford to give praise to deserving people in other departments without feeling that it would significantly hurt their own people.

So, when the system worked, it worked because the managers:

  1. were technically competent,
  2. were able to understand the research they were evaluating,
  3. understood the process of research because they had done it themselves,
  4. were socially close to the researchers (managers typically even still did some research),
  5. were respected scientists,
  6. were respected by the people they evaluated, and
  7. worked in an environment where competition was not severe.

The system fell apart when money got tight. By the mid 1990s, Bell Labs' parent company (AT&T and then Lucent Technologies) was demanding that the research contribute to the corporate bottom line. Researchers then had to show not only that they were doing first-class research, but also that the end result of the research would benefit the company. Often, it couldn't. A certain amount of intellectual dishonesty therefore crept into the system, as researchers inflated the industrial relevance of their work. Managers played along, because they typically wanted to protect their people from the demands from above. Also, each manager could then pass summaries of these inflated estimates up to their own manager, and thus look personally more relevant to the corporation.

Later (c. 2000), this went further: we were advised to do both basic and applied research, but to report only the latter for performance evaluations. This made a mockery of the evaluations, which then bore even less relationship to the abilities of individual researchers.

Further, as research became more tightly coupled to corporate goals, the kind of research that needed to be done started to change. Points (1), (2), and (3) were eroded because the new research was different from what the managers had done -- more applied, and in slightly different fields. Researchers didn't like being redirected: for many, it meant deciding whether to move into applied research or development, or to stay in basic research. These could be major career changes, as parts of Bell Labs thought of themselves as academic, and people regularly left for professorships at universities. As might be expected, many managers lost the respect (6) of their researchers in the conflict, deservedly or not.

The corporate demands also forced the managers to do more than just manage their department's research: they had to sell their research to other parts of the company and sometimes even external customers. They no longer had time to do research of their own, which further eroded point (3) and distanced them (4) from the researchers.

Finally, layoffs started and competition for good evaluations became too intense. Researchers could not afford to fail, so any risky research had to be forgotten. An empty list of accomplishments for a year's performance review could terminate your career. This effectively ended all interesting research, except among a few people who could risk being fired.

As competition became stronger, managers became too concerned with protecting their own people to support researchers in other departments. Indeed, the managers were threatened too, and their own interests sometimes came into conflict with the interests of their department members. Performance review became a political power game, with goals that changed faster than one could change one's research project; sometimes the goals changed even faster than one could rename one's research project.

I saw broadly similar stories played out in all the other fragments of the old AT&T (AT&T, Avaya, Agere Systems).

Performance review systems from industry may not be appropriate for academia. Bell Labs provides a strong example where trying to force research into a corporate mould succeeded merely in killing the research. Industrial practices are adapted to situations that are well understood: the goals are clear, the intermediate steps are known, and success or failure can be judged reliably and rapidly. Business practices work well when one can control nature. When constructing a building, one makes sure that the next steel beam will fit with the ones already assembled. That predictability allows you to plan many steps in advance, and it allows you to attribute failures to the people running the machines, since you know that the steel will never misbehave.

Research is more of a dialogue with nature. You ask nature a question, and never quite know what you will get in reply. One's ability to plan is limited, just as it is in a conversation; an attempt to launch into a clever and witty anecdote can be foiled by a pre-emptive anecdote from the other side. As a result, to whom do you attribute a failed experiment? It can be human incompetence, but often it is uncooperative Mother Nature. Because of this, any performance review system for research always runs the risk of unjustly penalizing people. In a just system, sometimes the managers should write a good stiff letter to the Author of the Universe instead.

What might be the moral for Oxford?

First, that performance review is not necessarily evil. Under the right conditions, it can lead to more and better research, and an atmosphere of elite teamwork.

Second, that performance review is a very powerful process, and not one that is completely under the control of any level of management. Putting performance review into effect in Oxford will change the university from a loosely coupled system, where everyone does their own thing, into a strongly coupled one. People will start thinking about how they can influence the performance review process for their own ends. Game theory enters. Politics enters.

As Bell Labs went downhill, each group (upper management, middle management, line management, and researchers) had its own goals, and the end result satisfied none of them. It gave one the impression of a dysfunctional family, or of a rowboat with four captains.

Third, if done badly, performance review can help destroy an organisation. It would be foolhardy to enter into such a system without designing it carefully, and doubly foolhardy to enter into one without treating it as a dangerous experiment that may need to be terminated if adverse consequences appear.

