No, but you just can't convince some people.
Not being able to make money might be a good thing as far as science is concerned, though. A damning article by Harriet Washington (probably written to promote her forthcoming book), recently published in The American Scholar, has a familiar moral: commercial concerns can get in the way of doing good science. But boy, is it one heck of a story. (A similar sad tale is told, in the context of psychiatric drugs, in a recent book review by Marcia Angell.)
Medical research suffers from severe conflict-of-interest issues,
"Today, medical-journal editors estimate that 95 percent of the academic-medicine specialists who assess patented treatments have financial relationships with pharmaceutical companies, and even the prestigious NEJM gave up its search for objective reviewers in June 1992, announcing that it could find no reviewers that did not accept industry funds."
... from the seedy plague of ghostwriting,
"Many biased medical-journal articles are the work not of physicians or scientists, but of ghostwriters who script them in accordance with the drugmakers’ marketing messages. A medical expert is found who, for a few thousand dollars, is willing to append his or her signature, and then the piece is published without any disclosure of the ghostwriter’s role."
... advertisements passed off as genuine research journals,
"Elsevier, the Dutch publisher of both The Lancet and Gray’s Anatomy, sullied its pristine reputation by publishing an entire sham medical journal devoted solely to promoting Merck products. ... its Australasian Journal of Bone and Joint Medicine, which looks just like a medical journal, and was described as such, was not a peer-reviewed medical journal but rather a collection of reprinted articles that Merck paid Elsevier to publish. At least some of the articles were ghostwritten, and all lavished unalloyed praise on Merck drugs, such as its troubled painkiller Vioxx. There was no disclosure of Merck’s sponsorship."
... and much, much more.
Washington cites a satirical 2003 article published in the British Medical Journal, titled "HARLOT plc: an amalgamation of the world’s two oldest professions" (free registration required to see the full article). HARLOT stands for "How to Achieve positive Results without actually Lying to Overcome the Truth", and was the brainchild of two doctors who effectively catalogued the various ways that one might distort the truth to present a favorable outcome in clinical trials.
Here's where I think the relevance lies for us regular basic-science people. It's also a quick tutorial in bad science - what we know we should not do (by common sense or otherwise) but may be tempted to do in the interest of making our results look nicer or getting a paper published faster. Some tips from HARLOT, if you want to go down that path:
- Cherry-pick your data - selectively ignore negative results.
- Shift the goalposts - use one set of standards for your experiments and another for your controls or comparisons.
- Run "Munchausen's statistical grid" - slice your data in as many ways as you can until you find a statistically significant result (a sketch of how reliably this works follows this list).
- Overinterpret positive results - read more into a favorable finding than the data actually support.
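To see why the "statistical grid" works so dependably, here is a minimal toy sketch in Python of uncorrected subgroup fishing. None of this comes from the HARLOT article itself; the subgroup variables, sample sizes, and seed are invented purely for illustration. The point is simply that once you test enough slices of data with no real effect, chance alone will eventually hand you a "significant" result.

```python
# Toy sketch of "Munchausen's statistical grid": slice null data into many
# subgroups and test each slice. All subgroup names and numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200  # subjects per arm

# Treatment and control outcomes are drawn from the SAME distribution,
# so the true effect is zero by construction.
treatment = rng.normal(size=n)
control = rng.normal(size=n)

# Arbitrary ways of slicing the subjects.
slices = {
    "sex": rng.choice(["M", "F"], size=n),
    "age_band": rng.choice(["<40", "40-60", ">60"], size=n),
    "site": rng.choice(list("ABCD"), size=n),
    "smoker": rng.choice(["yes", "no"], size=n),
}

p_values = []
for variable, labels in slices.items():
    for level in np.unique(labels):
        mask = labels == level
        # Compare treatment vs control within just this slice.
        _, p = stats.ttest_ind(treatment[mask], control[mask])
        p_values.append((p, variable, level))

k = len(p_values)
hits = [(v, lvl, round(p, 3)) for p, v, lvl in p_values if p < 0.05]
print(f"{k} subgroup tests run on data with no real effect")
print(f"chance of at least one false positive at p < 0.05: {1 - 0.95**k:.0%}")
print("'significant' slices found this run:", hits or "none (try another seed)")
```

With eleven uncorrected tests, the odds of at least one spurious hit are already over 40 percent, which is exactly why a subgroup finding dredged up this way is not evidence on its own.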
And that's just what you can do with your data and experimental design. The human element matters too: hiring 'experts' to sell your snake oil, courting celebrity endorsements, and making behind-the-scenes moves like lobbying government agencies and using litigation as a tactical weapon.
Ultimately, the lesson we should take from all this is the one thing formal education cannot really teach: critical thinking and appraisal. Should I trust a given research finding, even if it has been published in a big-name journal?
This affirms my thinking on how one should read a scientific paper. One of my tutors told me something offhandedly that has stuck with me since my freshman year of college: the materials and methods are the most interesting part of a paper. Get to know how an experiment was really done, and look at the actual data as much as possible, instead of just buying the conclusions and interpretations the authors try to feed you. Many students make the mistake of jumping straight from the Introduction to the Results and Discussion, but those are precisely the parts where the authors are trying to spin their research as 'relevant' and 'important'. Also: discussion is cheap. The data are the hardest-won part of a paper; perhaps our attention should be proportional to the effort that went into producing them.
Otherwise we might eventually end up buying that snake oil.