Ivan Oransky, executive editor of Reuters Health, provided excellent evidence yesterday regarding the need to look past abstracts of journal articles if accuracy matters to you:
His own post on Embargo Watch: “More thoughts on ASCO: How the embargo policy can lead to hype”
Gary Schwitzer’s post on the HealthNewsReview blog: “A prime example of the problem with some TV physician-‘journalists’”
Here’s an intriguing abstract that begs for further study: “Accuracy of cancer information on the Internet: A comparison of a Wiki with a professionally maintained database.”
Gilles Frydman makes two excellent points about it:
“It would be interesting to know which 10 types of cancer. Wikipedia pages in long tail are usually pretty lacking in depth.”
“PDQ exist in 2 versions: Patient & Health Prof. Abstract doesn’t say if one or both kinds compared to Wikipedia.”
Anyone out there have access to the full study? Please post what you find.
Mark Hawker says
Also, always be skeptical about the results in the abstract. They often show the most interesting result, which may not even answer the researcher’s initial question.
For example, they use a figure for the “controversial aspects of cancer care” which shows results of 2.9±2.8 and 6.1±6.3. Now, I have no idea what the number represents, but if we take 2.9±2.8 and read it as a range from 0.1 to 5.7, you can see that some articles actually were not controversial at all.
For PDQ, this range runs from -0.2 to 12.4. The researchers state the maximum score is 18, but what does 18 really mean? Is that the level of controversy? If so, the PDQ scores actually vary a lot more than Wikipedia’s.
I’ve not come across inter-observer variability before, but if the range is 0 (no agreement) to 1 (full agreement), a score of 0.53 says they agree half of the time, right? And the test-retest reliability seems to suggest that if the researchers read the same article four times, on one occasion they would classify it differently.
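To make that arithmetic concrete, here is a quick sketch (assuming the ± figures are standard deviations around a mean and the agreement score is a kappa-style statistic; the abstract doesn’t actually say either):

```python
# Sanity-checking the abstract's figures, assuming "x ± y" means
# mean ± standard deviation (the abstract does not say which
# dispersion measure was used).

def implied_range(mean_score, sd):
    """One-standard-deviation interval around the reported mean."""
    return mean_score - sd, mean_score + sd

wiki = implied_range(2.9, 2.8)  # -> (0.1, 5.7)
pdq = implied_range(6.1, 6.3)   # -> (-0.2, 12.4)

print(f"Wikipedia 'controversy' scores span roughly {wiki[0]:.1f} to {wiki[1]:.1f}")
print(f"PDQ 'controversy' scores span roughly {pdq[0]:.1f} to {pdq[1]:.1f}")

# A negative lower bound on a 0-to-18 scale hints that the scores are
# skewed, so mean ± SD is a crude summary here. Note also that a
# kappa-style agreement of 0.53 is conventionally read as "moderate"
# agreement (Landis & Koch), not literally agreeing half of the time.
```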
Finally, consider the researchers’ motives: did they want to prove that Wikipedia was unreliable? If so, did they mix and match measures to justify their hypothesis?
Susannah Fox says
This is why I wanted to continue this conversation on a blog and off Twitter – thanks so much! I’d even extend your rule: always be skeptical of anything that is summarized in a headline, abstract, or tweet.
Indeed, see Joe’s link to the famous Science News Cycle comic (which I first saw on a slide at an H1N1-readiness presentation – perfect).
Joe McCarthy says
I sent some email to Yaacov Lawrence, a coauthor of the study comparing the accuracy of Wikipedia to the NCI’s PDQ, who was referenced as the primary source of the Washington Post blog entry “Wikipedia cancer info. passes muster,” asking for clarification on the issues that Gilles raises.
Meanwhile, I wanted to share my favorite example of the need to be careful in interpreting studies, from a recent Psychology Today article, Science Secret for Happy Marriages: Be More Attractive Than Your Spouse. Daniel Hawes (@DanielHawes) points out numerous caveats and limitations in this – and any – study, and concludes with a rather tongue-in-cheek meta-disclaimer.
Oh, he also links to a very funny PHD Comics strip on The Science News Cycle.
Susannah Fox says
Some other folks have emailed the lead author, too, so hopefully this mini-groundswell of interest will prompt a response. Love the B.A.D.M.S. reference and will suggest it to the Chancellor of S.M.U.G. (http://social-media-university-global.org/)
Joe McCarthy says
I just got off the phone with Dr. Yaacov Lawrence, a radiation oncologist at Jefferson University who is a co-author of the aforementioned study, Accuracy of cancer information on the Internet: A comparison of a Wiki with a professionally maintained database.
He offered several clarifications about the study. First of all, it’s important to note that this was a poster abstract, i.e., the work has not yet been submitted for peer review (though that is planned in the near future). In response to issues raised by Gilles, Dr. Lawrence clarified that they were using the Patient (vs. Health Professional) versions of the articles, and evaluated the following 10 types of cancer:
* anal cancer
* breast cancer
* colon cancer
* lung cancer
* melanoma
* osteosarcoma
* prostate cancer
* small bowel cancer
* testicular cancer
* vulvar cancer
All articles from each source were rated for accuracy by 3 medical students, using a 5-point scale to assess how well each article addressed 10 facts drawn from an oncology textbook.
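For readers who want to picture how that rating scheme might aggregate, here is a rough sketch; the function and the sample numbers below are my own illustration, not the study’s actual data or code:

```python
# Hypothetical reconstruction of the scoring scheme described above:
# 3 raters score how well an article covers each of 10 textbook facts
# on a 1-5 scale, and the grid is averaged into one accuracy score.
from statistics import mean

def article_score(ratings):
    """Average a raters-by-facts grid of 1-5 scores into one number."""
    per_fact = [mean(fact_scores) for fact_scores in zip(*ratings)]
    return mean(per_fact)

# One article, three raters, ten facts each (made-up values):
ratings = [
    [4, 5, 3, 4, 2, 5, 4, 3, 4, 5],  # rater A
    [4, 4, 3, 5, 2, 4, 4, 3, 5, 5],  # rater B
    [3, 5, 4, 4, 3, 5, 3, 3, 4, 4],  # rater C
]
print(f"Mean accuracy score: {article_score(ratings):.2f} out of 5")
```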
Unfortunately, I did not remember to ask about the questions raised by Mark in his comment, but Dr. Lawrence did note that neither web site fared particularly well in the assessment.
In addition to writing up and submitting the results of this study, he and his colleagues are interested in doing larger-scale studies – at least with respect to the number of people conducting the assessments – and so I mentioned that the Health 2.0 community would be an excellent resource for potential collaboration.
Gilles Frydman says
I have read the abstract many times. I have read the great explanations provided by Joe McCarthy and I am even more surprised at the Washington Post blogger who posted about it in the first place!
There are few things more dangerous than quoting ASCO abstracts without assessing them for scientific validity. In this case, at least, the authors do not look like they tried to game the system by delivering an abstract that would automatically catch the press’s attention. They are not pushing a new drug, a new treatment, or anything of commercial value.
In fact, their study could have been very interesting, if its methodology were solid. Unfortunately, using medical students to assess the readability of Wikipedia and PDQ statements may not have been the best way to capture the readability experienced by patients.
Readability testing for patients MUST BE DONE by patients. Participatory medicine principles apply here too! It is absurd to think that medical students can represent the viewpoint of newly diagnosed cancer patients, the obvious target of both the Wikipedia entries and the PDQ statements.
The first thing that jumped out when checking the Wikipedia entries is the lack of structure and standardization of formatting, which makes some entries very hard to understand. For example, sometimes there is a section titled Treatment, sometimes not (the breast cancer page has one; the prostate cancer page does not, and instead titles it Management, a confusing term). Medical information, even if accurate, presented in a confusing way will confuse readers.
The PDQ statements, which I consider very useful for those who have just been diagnosed but useless after that, are based on much science and a huge knowledge base of readability and usability. Their content is delivered piecemeal in a highly structured way, with strict formatting rules. That difference alone is, IMO, more important than all the others combined.
And I have to add, of course, that Wikipedia editors have decided, by choice, to never allow direct links to the online communities of patients specialized in the disease that a Wikipedia page covers. In other words, Wikipedia embraces the knowledge of the crowd only when it comes from its own. You can rest assured that the osteosarcoma group on ACOR is a place to find better, more in-depth, more current information about that disease than either the Wikipedia page or the PDQ statement. But you won’t find a trace of the group in Wikipedia.
Joe McCarthy says
Gilles: thanks for contributing further insights into this issue. I think the authors of the study have succeeded in sparking conversations well beyond their initially intended audience. The discussion on this post alone suggests that this is a promising area for more careful study.
I’m particularly disturbed by one of your claims: that Wikipedia editors have decided never to allow direct links to the online communities of patients specialized in a disease.
I must admit ignorance on this policy, but would like to learn more. Are you aware of any published descriptions of this policy, and/or do you have any correspondence from Wikipedia editors about this? Have you tried posting links [and had them removed], or know of others who have done so?
If not – or even so – in the interest of science, would you be willing to experiment with editing the Wikipedia article for Osteosarcoma to add an external link to the appropriate ACOR page? Or, if you want, reply to this reply with an ACOR link and I’ll give it a whirl.
Tracking an attempted edit, and the interactions (and potential rejection) that ensue, in a [separate] blog post may help to highlight the issue more effectively … and perhaps promote reconsideration and even revision.
Susannah Fox says
Joe,
Thanks for bringing new energy to the conversation here!
John Grohol wrote about Wikipedia’s rules in 2008 on this blog:
Wikipedia’s Arcane Rules Censor Health Information
http://e-patients.net/archives/2008/04/wikipedias-arcane-rules-censor-health-information.html
You may also want to check out the other “Similar Posts” that pop up at the bottom of the page.
Joe McCarthy says
Another highly recommended resource is Howard Rheingold’s work on Critical Thinking (or as he sometimes refers to it, “crap detection”). Howard also chronicled his recent experience with the diagnosis and treatment of anal cancer via his Howard’s Butt blog.
Betsy Aoki just wrote an article summarizing some of this work on the Huffington Post: Critical Thinking, Howard Rheingold and Cancer at the NCCE conference
@Drsteventucker says
I would leave this abstract alone, or certainly only use it as the starting point for a conversation. I am sure one of my esteemed colleagues can point out how many abstracts NEVER go to publication! Also, since when are medical students the proper judges of medical facts noted in textbooks (talk about out of date… maybe at least compare to UpToDate or similar)? The real issue here is cancer/medical hype and poor reporting. It seems like everyone who covered the mortgage market BEFORE the crash must now be a medical reporter, because tough questions are NEVER asked. Medical stories fall into the lifestyle section in most papers, and clearly they lack scrutiny. Even the business side misses the real questions. One trick I like when checking abstracts is to look and see whether the author has recycled it from the year before or from multiple meetings. Often it is the same old same old, or an inflammatory subset analysis.
Susannah Fox says
I just want to review our story so far:
On Tuesday morning, I read a short item in the print Washington Post Health section (yes, I still get a dead-tree version delivered to my home). Of course the Post website makes it nearly impossible to find the article, so I’ll link to the more informal blog post written by Jennifer Huget:
Wikipedia cancer info. passes muster
http://voices.washingtonpost.com/checkup/2010/06/wikipedia_cancer_info_passes_m.html
I tweeted a link to that blog post and to the abstract b/c I wanted to hear what the health geek tribe thought:
Accuracy of cancer information on the Internet: A comparison of a Wiki with a professionally maintained database.
http://abstract.asco.org/AbstView_74_41625.html
The WaPo link was RT’d a dozen times to potentially thousands of readers (for those not on Twitter, that means other people reposted it to their own pages). The abstract link was RT’d 3 times.
A handful of people decided to dig deeper into the story and e-Patient Dave suggested I post something about this to e-patients.net so we could have a place to hash it out (see comments above).
And now we have some answers to the questions originally posed on Twitter because (get ready, this is pretty radical) Joe McCarthy *picked up the phone and talked* with the author of the abstract.
What I love is that while I didn’t have time to run down all the questions that popped into my head as I sat there reading the paper on Tuesday morning, someone else did!
I don’t know about you, but I like this internet thing. There, I said it.
Eve Harris says
I like this internet thing, too 🙂
Telephone, whooda thunk? I tried email to the 1st authors…
I know we’re all advocates for useful and accessible health information for consumers, so I’m a little uncertain why THIS abstract raised so many questions. There were thousands @ ASCO alone. What am I missing?
Susannah Fox says
I think it was a combination of factors:
– recognition of this particular abstract by MSM (WaPo)
– everyone likes talking about Wikipedia’s credibility
– cancer is another hot-button issue
– Twitter makes it easy to quickly exchange views, which then feeds the attention paid to that abstract
Joe McCarthy says
Kathy Gill (@kegill) wrote a great critical analysis of journalism-as-stenography: Obama’s 90 Percent Clean Up Promise: That’s Not What He Said. Although she is focusing on political journalism, her observations about amateurism vs. professionalism and the need for healthy skepticism are more generally applicable and very relevant to this discussion.
She also links to a related – and insightful – article by Doug Rushkoff on There’s More to Being a Journalist Than Hitting the ‘Publish’ Button: For better or worse, the Internet is ‘biased to the amateur and to the immediate.’
Joe McCarthy says
It turns out the ASCO abstract is quoted in the Wikipedia entry on the Reliability of Wikipedia, under the section on science and medicine peer-reviewed data.
I find this very ironic … so much so that I decided to write a post of my own about this and several other examples I’ve encountered recently that lead me to conclude that all models, studies and Wikipedia entries are wrong, some are useful … though I could be wrong about this.
Susannah Fox says
Joe, thanks so much for sticking with this story. The circularity is almost amusing — but not quite, since the implications are serious.