Fraudulent science? Not so much.

The News

We have now been painted as cheats. According to the Telegraph, reporting on a study from the University of Edinburgh, falsification of results is common in science. The Telegraph, in something less than the finest journalistic traditions, picks up the most spectacular bits and exaggerates them: according to the Telegraph article, "The results paint a picture of a profession in which dishonesty and misrepresentation are widespread."

Shit. It's really not true. Among scientists, confusion is common enough. Exaggeration is not rare. Publicity seeking happens. Self-delusion is an occasional state of mind. Especially when people's jobs are on the line, I've seen unwillingness to follow up troublesome clues that might overturn a cherished result. But I've been in various fields of science for about 30 years now, and I've never been convinced that someone I was talking to was intentionally dishonest* about their science.**

[* In the strong sense of making things up.  I’ve met plenty who were more-or-less intentionally exaggerating the importance of their work.  That is a rather weaker kind of dishonesty.  Exaggeration is, I think, to some degree unavoidable: people need to feel they are accomplishing something important.  It’s also driven by the modern science culture, where blowing your horn is a requirement for getting a job.]

[** I was at Bell Labs during the Hendrik Schön fraud, but it was a big place and he didn't travel in my circles.  So, I've met one, but I don't think we spoke enough for him to count.]

The Source

Now, I am not too impressed by the source article behind the Telegraph story, either.  It is a meta-analysis of other investigations on scientific fraud.     I have little respect for meta-analyses, because they tend to assume there are no common errors across the studies they use.    And that’s clearly dumb, because people tend to use the same techniques and ideas over and over again.  If one study gets it wrong somehow, the odds are quite good that everyone else does, and in the same direction for the same reason.   So, I don’t think that meta-analyses are (usually) any better than the underlying research.   If there is disagreement (which there is, among the meta-analyzed papers), there is no reason to think that the meta-analysis will resolve it correctly.

And, more than that, the paper makes much of the fact that more scientists see fraud in others than in themselves.   The paper interprets that as a reporting bias, suggesting that we don’t acknowledge our own fraud.  What they seem not to realize is that there is only one of “me” and many others.

In my career so far, I’ve collaborated with at least a couple of dozen people.  I have carefully read the papers of hundreds more.  So, of course, I have lots of chances to see or guess at fraud elsewhere.   That’s because elsewhere is a big place.  This year, I see the work of five other scientists on a daily basis, and so I have five chances to see fraud in “others” vs. one in me.  Of course I’m more likely to see fraud in others, even neglecting papers I read or people I talk to at conferences.

If you think about it, you’ll see that I’m also more likely to see blue eyes in others, too, for the very same reason.  There is only one of me (who happens to be brown-eyed) and I know dozens of people other than me.   Some of them are blue eyed.   Probably 90% of scientists know someone who is blue-eyed, but only 10% or so would admit to being blue-eyed themselves.  Sheesh!
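The arithmetic behind this base-rate point is simple, and a quick sketch makes it concrete (the numbers below are illustrative assumptions, not figures from the study): if a trait occurs with probability p in any one person, the chance of spotting it in at least one of n colleagues is 1 − (1 − p)^n, which is much larger than your chance p of having it yourself.

```python
# Illustrative only: assumed base rate p and colleague count n,
# not numbers from the Edinburgh study.
p = 0.10   # chance any one person has the trait (fraud, blue eyes, ...)
n = 5      # colleagues whose work I see on a daily basis

p_self = p                       # chance the trait is in "me"
p_others = 1 - (1 - p) ** n      # chance it's in at least one "other"

print(f"P(trait in me)              = {p_self:.2f}")
print(f"P(trait in at least 1 of {n}) = {p_others:.2f}")
```

With these made-up numbers, seeing the trait "in others" is about four times as likely as having it yourself, even though every individual is identical. So the gap between "fraud in others" and "fraud in me" needs no reporting bias at all.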

The Sources behind the Source

Beyond that, the original studies are based on surveys and I don’t have a lot of respect for most surveys either.  In my experience, the questions that are asked are almost always the wrong ones, simply because the real world is more complex than the imaginations of the authors of the surveys.  [The only surveys I really trust are about elections, and that’s because one can test the surveys against a reality: the eventual secret ballot in the voting booth.]

For instance, I remember filling out a survey asking about working conditions for research staff at Oxford.   (Or, maybe I gave up because the questions were silly.  I do that a lot.)  There were questions about how much support I was getting from my project leader.  I found those a bit hard to answer, because I was my own project leader.  I suppose they hadn’t imagined me when they made up the questions, and it was pretty clear that because of that, nothing I could say on the survey would be interpreted correctly.

Or, I remember filling out a survey on bicycle use and accidents (for someone at the University of Bristol, perhaps?).   It wasn’t a bad survey, but it asked about bicycle accidents and the number of times you’d fallen off your bicycle.  To me, it seemed that the survey was written with road users in mind, with a focus on traffic, cars, and streets.

Well, I had fallen off my bicycle, but it was one of those weird accidents.  It was on one of those little English paved walkways between houses.  I’d swerved to avoid the biggest pile of dog excrement I’d ever seen, brushed my handlebar against a fence, and thud.  (Fortunately, perhaps, it wasn’t splut.)   So, I reported it on the survey, but couldn’t help wondering if my answer would be used to support traffic calming measures rather than canine intestinal control.

What to do?

But, dubious though the analysis might be, it points out that there is a certain amount of fraud in the world and a larger amount of sloppy research.  I see a risk that this kind of thing will turn into a fuss that will undermine research freedom and flexibility and lead to a lot of useless and unproductive paperwork.

The best plan I can think of is to do our research as openly as possible so that people can see what we have done and check it or reproduce it.  We are already committed to doing much of that: we release our data and we are already committed to releasing our software in one of our projects.  That’s undoubtedly in the interests of science as a whole.   Open science is like open source software.  If you share data and tools, then everyone can progress faster.

The only thing is, I worry about shooting ourselves in the foot.  Many researchers have no job security; it is tempting to keep tools and data secret so that we maintain our advantages.  If we make it easy for everyone else to do our research, then what’s to say that they won’t get the next research grant, instead of me, and I’ll be filling bags at a supermarket?  [Well, realistically, it’s actually fairly hard to get people to adopt one’s tools.  That’s the saving grace from the economic point of view.  Unfortunately, it means that most of the benefits of open science may be more theoretical than actual.]

There is a good argument to be made that it is the extreme level of competition in science that drives a lot of fraud and bad behaviour.  And it drives a lot of the self-delusion, too.  It's much easier to, somehow, never get around to making those potentially embarrassing checks of your results if you are in a hurry and under pressure.

Decisions, decisions…