Friday 27 August 2010

Harvard misconduct: setting the record straight

Here is my official response to, and interpretation of, the investigation by Harvard University into the circumstances that led to the retraction of an article published in 2002 in the journal Cognition (before my time as Editor-in-Chief, but that is irrelevant). I’m posting this here to avoid any misunderstanding given that my comments to the press will likely be misquoted. At least now, I can point them to this blog, and save myself a lot of phone calls.

The article was multi-authored, but the retraction attributed sole responsibility for the need to retract it to Professor Marc Hauser. Anyone who does not know who that is can simply skip this post.

As Editor of the journal Cognition, I was given access to the results of the investigation into the allegations of misconduct against Marc Hauser as they pertained to the paper published in Cognition in 2002 which has now been retracted. My understanding from those results is the following: the monkeys were trained on what we might call two different grammars (i.e. underlying patterns of sequences of syllables). One group of monkeys was trained on Grammar A, and another group on Grammar B. At test, they were given, according to the published paper, one sequence from Grammar A and another sequence from Grammar B - so for each monkey, one sequence was drawn from the "same" grammar as it had been trained on, and the other sequence was drawn from the "different" grammar. The critical test was whether their response to the "different" sequence differed from their response to the "same" sequence (this would then allow the conclusion, as reported in the paper, that the monkeys were able to discriminate between the two underlying grammars). On investigation of the original videotapes, it was found that the monkeys had only been tested on sequences from the "different" grammar - that is, on underlying grammatical patterns different to those they had been trained on. There was no evidence they had been tested on sequences from the "same" grammar (that is, with the same underlying grammatical patterns).

Why is this important? Because if you just test the monkeys on one underlying pattern, and record how many times they turn around to look towards the hidden loudspeaker (this is how it was done), perhaps they would turn round just as often if they heard **anything** coming from that speaker. So you'd need to include the "same" condition - that is, the sequence of syllables that had the same underlying pattern as the monkey had been trained on - to show that the monkeys *discriminated* between (i.e. turned a different number of times in response to) the different grammars.
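(For readers who think in code rather than in paradigms, here is a minimal sketch of that logic - not the actual study, and with invented numbers - showing why the "same" condition is the essential control: without it, a head turn to a "different" sequence is indistinguishable from a head turn to any sound at all.)

```python
import random

random.seed(0)

def head_turns(condition, n_monkeys=10):
    # Invented orienting probabilities, purely for illustration.
    p = {"same": 0.3, "different": 0.7}[condition]
    return sum(random.random() < p for _ in range(n_monkeys))

turns_different = head_turns("different")
turns_same = head_turns("same")

# The "different" condition on its own is uninterpretable: a head turn
# might simply mean "I heard something coming from the loudspeaker".
print(f"'different' trials: {turns_different}/10 monkeys turned")

# Discrimination is evidenced only by the contrast between conditions.
print(f"'same' control:     {turns_same}/10 monkeys turned")
```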

It would therefore appear that the description of the study in the Cognition paper was incorrect (because the stimuli used during testing were not as described), and that the experiment *as run* did not allow any conclusions to be drawn regarding monkeys' ability to distinguish between different grammatical patterns. Given that there is no evidence that the data, as reported, were in fact collected (it is not plausible to suppose, for example, that each of the two test trials was recorded onto a different videotape, or that somehow all the videotapes from the same condition were lost or mislaid), and given that the reported data were subjected to statistical analyses to show how they supported the paper's conclusions, I am forced to conclude that there was most likely an intention here, using data that appear to have been fabricated, to deceive the field into believing something for which there was in fact no evidence at all. This is, to my mind, the worst form of academic misconduct. However, this is just conjecture; I note that the investigation found no explanation for the discrepancy between what was found on the videotapes and what was reported in the paper. Perhaps, therefore, the data were not fabricated, and there is some hitherto undiscovered or undisclosed explanation. But I do assume that if the investigation had uncovered a more plausible alternative explanation (and I know that the investigation was rigorous in the extreme), it would not have found Hauser guilty of scientific misconduct.

As a further bit of background, it’s probably worth knowing that according to the various definitions of misconduct, simply losing your data does not constitute misconduct. Losing your data just constitutes stupidity.

Tuesday 17 August 2010

Scientific fraud, whisky, coffee, and fine food

Last week was taken up with the Cognitive Science conference in Portland. Much of the conversation centred around the revelation, reported in the Boston Globe, of academic misconduct perpetrated by a famous Harvard professor. I had some small part to play in that revelation, having recently received the official retraction of a paper he had previously published in the journal I edit (although it was published before my time at the journal). The retraction confirmed that there had been an internal investigation at Harvard - a much-needed piece of evidence in the face of Harvard’s initial stone-walling of requests for information. Shortly after the Boston Globe published their piece, Harvard relented and confirmed the investigation. Although I cannot reveal the information I am privy to, I have every confidence that the full details of the misconduct, and the findings of the inquiry, will shortly be made public.

My own view in all this is that we should not be too quick to throw out the science produced by this individual - there is little doubt he contributed some important and useful ideas to the scientific literature. But we should be quick to show that wilful misrepresentation (and misreporting) of data cannot be tolerated: Harvard need, sooner rather than later, to report the sanctions that will be imposed against this individual. It would be a travesty of justice, I believe, if he kept his tenured position and all that happened, perhaps, was that he was barred from receiving federal (government) funding for future research. After all, there are many researchers who fail to receive funding simply because there’s too little to go around. If the worst that happens to him is that he joins the ranks of the very many researchers who, for quite different reasons, also fail to receive federal funding, that sends a dreadful signal to those researchers: fake your data, and if you’re found out, there’s no need to worry, as you’ll just go back to square one - which is in any case where you are now. I concede that if Harvard are prevented, for whatever reason, from firing the man, it may be difficult to come up with appropriate disciplinary action (I dislike the word punishment, but ultimately, the example that has to be set requires just that). But the scientific community is watching, and Harvard may well be judged on how they deal with this.

The talk of scientific fraud did not distract me or various colleagues from talking science, planning science, and drinking some excellent whisky. I am now in Philadelphia, a regular port of call given my collaboration here (for which, as I mentioned in my last post, I just recently received funding - three years’ worth to collaborate with people in Philadelphia, New York, and a city that shares not very much with either of these two: Dundee).

La Colombe, which is practically around the corner from my hotel, continues to be my favourite coffee shop, and recently added to the list of must-visit stores is Di Bruno - I could live there quite easily if they would just let me sleep in a quiet corner...

Saturday 7 August 2010

Samos, Tenerife, Cognition...

Since last posting, my feet have barely touched the ground. I returned from Germany, having enjoyed a personal chauffeur service in a rather nice BMW Z4 (or some such... what would I know? It had just two seats, no roof, a BMW badge, and most importantly of all, a driver familiar with the accelerator and brake pedals; he may even have used the clutch, but I have no evidence of that...), and flew two days later to holiday in Samos with Silvia, where we were the guests of the Honorary (German) Consul. I now regret the German failure to secure the World Cup, as I can attest to the generosity and humour of their Consul on Samos. Oh... I forget. He’s not German. Oh well... I take that back - I have no regrets whatsoever that the Germans failed to secure a World Cup win. Viva España!

Got back from Samos a week later, and flew two days later to Tenerife for a family holiday. Villa, pool, volcano, whales, dolphins, beer... all essential ingredients of a successful holiday. The cetaceans were wild, seen and photographed during a boat trip. The beer was almost as wild.

Now... anyone familiar with my journal duties will no doubt feel aggrieved at what appears to have been way too much time away from the journal’s coalface. So here are the facts lest anyone complain: This year I have processed on average 16 manuscripts each week (split roughly equally between dealing with new submissions and with revisions - so this means accepting or rejecting manuscripts, recommending revisions, or sending out to review). In the past three weeks, during my vacation ... vacation ... I averaged 19 manuscripts in each of those three weeks. I also revised a manuscript of my own (and a student’s) for resubmission to Psych Science, read and commented on a student’s thesis chapter, and learned I had been awarded £600K by one of the UK research councils for another three years’ research in my lab. Needless to say, I write this because of some bizarre sense of guilt at having been away from my desk for so long. But despite the work that I somehow managed in these last three weeks, I do feel rested and relaxed, and even a little tanned. And hey - I have another three days before I fly to the Cognitive Science conference in Oregon...

Other journal statistics that may be of interest - as of today, I have this year triaged (rejected without sending to review) 45% of the new submissions that I have handled. But with an acceptance rate of 20%, that actually means that any author who gets past the triage stage has a roughly one-in-three chance of eventually being accepted into the journal. I’m not sure if this is a good thing or not.
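For the arithmetically inclined, here is the back-of-envelope calculation behind that "one in three" - on the assumption (which I take to be the natural reading) that the 20% acceptance rate is computed over all submissions I have handled, not just those that survive triage:

```python
# Rough check of the "one in three" figure, assuming the 20% acceptance
# rate is over all submissions handled (not just those sent to review).
triage_rate = 0.45                      # rejected without review
acceptance_rate = 0.20                  # accepted, out of all submissions

survive_triage = 1 - triage_rate        # 0.55 of submissions go to review
accept_given_survive = acceptance_rate / survive_triage
print(f"{accept_given_survive:.0%}")    # ~36%, i.e. roughly one in three
```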

On a more practical note: We receive more submissions during the summer months than at any other time, and because I have two fewer associate editors working with me than I should have (Elsevier gave us the money for two more, in view of the increased submission rate), I have inevitably been unable to keep up with the flow of submissions - we all (the other associate editors included) are working as hard as we can. But it is not enough, and this means that it can take a few weeks from when a manuscript enters one of the queues (as a new submission, or as requiring a decision) to when it can be dealt with. Fortunately, most authors are either very patient, or are themselves on vacation and have not noticed. A minority, like me, will be working so hard that they barely have time to notice anything at all. To them, I offer this advice: MORE BEER.