Brooke Gladstone: On Tuesday, one of the authors of a headline-grabbing study published in December in the leading journal Science called for a retraction of his own work. The study claimed that a 20-minute face-to-face conversation with a gay canvasser could very often change someone's mind about gay marriage. So often, noted an editor at Science, that "the magnitude of the shift for the person who answered the door was as large as the difference between attitudes in Georgia and Massachusetts."
The study's astonishing findings were widely reported: in the New York Times, the Washington Post, Vox, and others. Here's Ira Glass from This American Life:
Ira Glass: And the big surprise was, 6 months, 9 months, a year after the canvassers visited, the voters STAYED CHANGED. The researchers were so skeptical that this could be real that they did the entire study a second time, at huge cost, by the way: hundreds of thousands of dollars. And again, same result. Professor Green says he and his colleagues have read 900 papers and they haven't seen anything like this result: anyone who's changed people's views and had it last like this.
Brooke Gladstone: But when two graduate students, David Broockman and Joshua Kalla, tried to replicate that result, they found some worrisome red flags in the data.
That prompted one of the study's co-authors, Donald Green, a Columbia University professor, to ask for the original data from his co-author, UCLA graduate student Michael LaCour. Green wanted to examine the data, which he had not done prior to publication. But LaCour didn't provide it. So Green asked Science to retract the study. Ivan Oransky is the co-founder of Retraction Watch and vice president and global editorial director at MedPage Today. And he says those results should've been the first tip-off.
Ivan Oransky: The idea that you could convince anyone of anything in this day and age is actually quite remarkable.
Brooke Gladstone: But what were the problems with the study that these graduate students found?
Ivan Oransky: Well, they were actually really taken with the study. One of them said: I'm gay, and this is something I think about a lot, and I wanted to believe this study. I also, he said, wanted to replicate it and then do the next step in the experiment. So he starts looking at the data, and they're remarkable. The response rate-
Brooke Gladstone: Something like 90 percent?
Ivan Oransky: Yeah, you don't get 90 percent in surveys. And just the magnitude of what they were able to get: again, from Georgia to Massachusetts. It was pretty dramatic, and so that raised some red flags.
Brooke Gladstone: And then they found that the data were distributed in a pattern that could easily be generated by a computer program?
Ivan Oransky: Correct. And this is how cheaters are caught in science: it looked funny. It looked too perfect. Then they dug further and said, let's look at how this was actually obtained. And when they called the survey company that had allegedly done the work, the company said, we don't have anyone by that name here, and we don't do that kind of work.
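To make "too perfect" concrete: Broockman and Kalla's actual forensics were more involved, but the sketch below, a hypothetical Python illustration rather than their analysis, shows how panel data fabricated by adding computer-generated noise to baseline survey scores can betray itself. All variable names and numbers here are invented; the point is that the wave-to-wave changes fit a normal curve almost exactly, and the test-retest correlation is implausibly clean for real attitude data.

    # A minimal, hypothetical sketch of fabricated panel data; not the
    # actual LaCour dataset or Broockman and Kalla's analysis.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Invented baseline "feeling thermometer" scores, 0-100.
    baseline = rng.integers(0, 101, size=5000).astype(float)

    # Fabricated follow-up wave: baseline plus computer-generated noise.
    followup = baseline + rng.normal(loc=2.0, scale=5.0, size=baseline.size)

    # Real survey responses are lumpy (round numbers, floor and ceiling
    # effects, dropouts); these fabricated changes instead fit a normal
    # curve almost perfectly.
    changes = followup - baseline
    z = (changes - changes.mean()) / changes.std()
    print("KS test against normal:", stats.kstest(z, "norm").pvalue)  # large p-value

    # And the wave-to-wave correlation is suspiciously high for attitudes.
    print("test-retest correlation:", np.corrcoef(baseline, followup)[0, 1])

Running checks like these on a dataset that was supposed to come from thousands of real respondents is roughly the kind of red flag the graduate students describe: the noise is too well-behaved to be human.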
Brooke Gladstone: I think it's pretty clear why Michael LaCour may have cooked this thing up. He was at the beginning of his career, he wanted to make a splash, and there isn't a way to make a bigger splash than to publish a paper in a journal like Science.
Ivan Oransky: LaCour gave us the same statement that he gave lots of other news organizations and hasn't really responded to any questions in a substantive way. So I don't want to read his mind here, but if you look at the timeline, it's pretty clear that here's a guy who's out there looking for a job. And this would've been the time, December-ish, when he would've been out on the road applying for jobs. Which, according to the Princeton University website, he did get, at Princeton. The pressure to publish, it's not far-fetched to say, was something he was feeling.
Brooke Gladstone: The motivation that isn't so clear to me is that of his co-author, Professor Green at Columbia. It strikes me that his failure to look at the original data, and the excuse he gives for it, just doesn't wash.
Ivan Oransky: His rationale for not having looked at the data was essentially this: if you do research involving human subjects, and surveying people, particularly to try to change their minds, counts, then you need what's known as Institutional Review Board approval. And that exists for good reasons; it's been around for about 40 years here in the US and protects human subjects. But somebody had already given IRB approval for this: UCLA. Green could easily have said to Columbia, here's what I want to do, is that OK? Or, give me some kind of appointment at UCLA for 15 minutes so I can look at the data. There are lots of ways around that. But often what happens is that people put their names on papers, and when the proverbial hits the fan, they disown any knowledge of it: well, I didn't do that part of the paper. To be fair, Green is giving a rationale for why he didn't look at it, but he's also the one who's come forward here and done the right thing.
Brooke Gladstone: So let's talk about peer review. As you've noted, it's a volunteer, unpaid position. I think that situations like this make a very strong argument for actually paying peer reviewers, because anybody could've done what those graduate students did, which was to call up the place that generated the data, especially when it looked a little fishy.
Ivan Oransky: If you talk to science editors about it, they say there's no way for us to have found that, no way for peer reviewers to have found that. It rings a bit hollow, because as you point out, anyone could've called the place that allegedly did this survey. Then again, that probably would've gone beyond what's reasonable, given that peer review is unpaid, volunteer work, and reviewers don't have that much time.
Brooke Gladstone: Well in that case they shouldn't be doing the peer reviewing.
Ivan Oransky: Then who should?
Brooke Gladstone: Pay. Them. Money. And pay them by the hour.
Ivan Oransky: I'm not against that. On the other hand, they would still need access to more information than they currently have, and they'd need more time to do it. We saw a case about two years ago, a study in Cell on cloning embryonic stem cells. The reviewers missed all of these really problematic figures in the paper, which ended up having to be corrected. The people who caught that were not the peer reviewers, because the peer reviewers had agreed, at the journal's request, to review it in 24 hours. Now imagine that: those errors were found within days of the paper's publication.
Brooke Gladstone: OK, now let's look at the journalists. We assume that once it's gone through the peer review process, we can trust it. You say, you know, not so fast.
Ivan Oransky: What I tell my students at NYU, where I teach medical journalism, is that you should keep a biostatistician in your back pocket. I think they're wonderful people. They tend to be somewhat lonely people, which of course is not fair, but having worked with them, it was so valuable: they could rip these studies apart, figure out what was wrong, or at the very least give me the questions to ask. Because in this case, really, no one raised very many red flags. That was somewhat amazing to me.
Brooke Gladstone: You think the entire scientific slash journalism establishment had confirmation bias when it came to this study?
Ivan Oransky: Yeah. When studies show something that we sort of believe is true, or want to be true, which is probably more pernicious than believing it's true, we tend to go along with it. This is why, to us, the argument should really be about post-publication peer review. Do some preliminary peer review, but let it out in the world and accept that just because it's published in Science, or Nature, or the New England Journal of Medicine, it probably still has flaws that more eyeballs will find.
Brooke Gladstone: But it's already made its impact on the world. And it's much harder to make a counter-impact with a correction.
Ivan Oransky: John Maddox was the editor-in-chief of Nature. A reporter, trying to be a little provocative, asked him: John, how much of what you publish every week in Nature, which, again, is the world's leading scientific journal, how much of it is wrong? And Maddox said, oh, that's easy! All of it. And what he meant was that over time, all of it would be proven wrong, with a couple of exceptions. The bigger problem in science is not so much the fraud, it's the lack of reproducibility. It's very much an epidemic in science right now. If you look at cancer studies, for example, as many as 90 percent of studies that are licensed into drug companies are not reproducible. That's the much bigger problem. If this gets everyone thinking about that, and if it makes a single reporter think twice about covering a study without any skepticism next time, we've accomplished something.
Brooke Gladstone: Ivan, thank you very much.
Ivan Oransky: Always great to be here.
Brooke Gladstone: Ivan Oransky is vice president and global editorial director at MedPage Today, the co-founder of Retraction Watch, and a doctor who teaches medical journalism at NYU.