Pharmacist use of social media

The most recent hat tip for alerting me that one of my articles was published goes to @redheadedpharm, who also has one of the most thoughtful pharmacist-authored blogs out there, IMHO.  I should note that by drawing my attention to the article, TRP does not necessarily endorse the contents or see eye-to-eye with me regarding pharmacists, pharmacy, or social media.  And that’s ok; I have to think no rational person just wants an echo chamber.  In fact, I may revisit the whole ‘landscape of pharmacist blogs’ in a future post if I can figure out a way to do so that doesn’t involve generating the hate e-mail and widespread snark that the AJHP article did.*

In any event, I did want to share that the article I assisted Drs. Alkhateeb and Latif with is titled Pharmacist use of social media and was published in the International Journal of Pharmacy Practice.  As you can see to the left, this is a Short Communication that essentially provides a snapshot of social media use by pharmacists in West Virginia.  The most frequently used applications in this group of surveyed pharmacists were YouTube (74%), Wikipedia (72%), Facebook (50%), and blogs (26%); Twitter (12%) and LinkedIn (12%) were also used by respondents.  In a sense, it was a confirmatory study in that it verified some things we thought we knew about pharmacists and social media.  Some of the findings (e.g., 50% use of Facebook) were a little surprising.  Use of Facebook, in particular, was examined in a little more depth; only 15.8% indicated they used it for any professional purpose.  Usage patterns largely reflected those of non-healthcare professionals: these pharmacists used Facebook to keep in touch with colleagues, chat, upload pictures, etc.

@kevinclauson

*It’s interesting how ‘hate e-mail’ can be a touchstone for publication topics.  The pharmacist blog study generated a dubious, top-five-level volume of hate e-mail.  It was among the best-written hate e-mail (which was oddly encouraging), but didn’t come close to the level produced after our Wikipedia paper came out.  To be fair, the sheer number of Wikipedia users and the widespread coverage** it received probably contributed to its you-are-as-bad-as-the-scientists-doing-research-on-puppies outrage.

**Curious fact: of all the interviews I’ve done about our research over the years (e.g., New York Times, Wall Street Journal, CNN, BBC, NPR, New Scientist), the most hardcore fact-checkers were from Good Housekeeping and Fitness Magazine. Seriously.

Wikipedia isn’t good enough for anybody except nurses?

The verdict is in.  The quality of health information in Wikipedia is inadequate as a sole source for pharmacists [1], medical students [2], dentists [3], and patients [4].  However, it is good enough for use by nursing students [5]…well, sort of.

Determinations about adequacy are based on studies that evaluated the freely editable online encyclopedia on characteristics such as reliability, scope, and accuracy.  A clear consensus has emerged from that body of literature: Wikipedia is not a suitable resource for high-level consultation or citation.  The use (and citation in particular) of Wikipedia by healthcare students and professionals seems to irk practitioners and educators all the more when high-quality alternatives exist, suggesting a perception that citing Wikipedia in those cases simply reflects a lack of awareness or laziness.  To be fair, Wikipedia founder Jimmy Wales has been unwavering in his stance that no encyclopedia should be used as a reference source for college-level work or above.

All of this leads to the most recent paper on this topic in Nurse Education Today [5].  It, too, is an assessment of health-related entries in Wikipedia.  However, it is notable for two reasons.  First, it differs from almost all the other articles in that it uses a methodology driven by compiling and analyzing the citations from Wikipedia entries rather than the content itself.  Second, the difference in language between the abstract and conclusion is fascinating.  The abstract closes with:

The quality of the evidence obtained from the 2500 plus references from over 50 Wikipedia pages was of sufficiently sound quality to suggest that, for health related entries, Wikipedia is appropriate for use by nursing students

Whereas the conclusion of the article is:

Whilst it is acknowledged that Wikipedia citations should be treated with some caution, the results of this modest study suggest that Wikipedia does have a role to play as a source of health related evidence for use by nursing students.

While this type of journal article ‘abstract-text dissonance’ is not completely rare [6,7], it is exacerbated here because this article’s findings conflict, at least based on the abstract, with every other study on the topic.  It also magnifies the problems that can occur when people draw conclusions after reading only an abstract.  This has long been an issue for busy clinicians desperately trying to stay current.  However, today biomedical journal abstracts are easily accessible to anyone via PubMed (while the full text usually remains shrouded behind subscription access).  Dissemination of these abstracts is even more rapid, sometimes occurring in real time through tools like Twitter, Facebook, and email.  As healthcare professionals, we need to be careful not to fall into the trap of taking a shortcut and assuming skimming an abstract will allow us to critically evaluate a study.  The onus is also on us to help aspiring e-patients avoid these same missteps.

Overall, the contribution made by the Haigh article may prove more significant as a teaching tool than as a piece of the research puzzle regarding the quality of Wikipedia.
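As an aside, for anyone curious about reproducing a citation-driven assessment along the lines Haigh describes, here is a minimal sketch assuming Python with the requests library and the public MediaWiki API.  It is not the author’s actual method; it simply harvests a page’s external links (a rough proxy for its citations) and flags how many point to DOI or PubMed sources.  The page titles are illustrative stand-ins.

```python
# Minimal sketch (not Haigh's actual method): pull a page's external links
# from the public MediaWiki API as a rough proxy for its citations.
import requests

API = "https://en.wikipedia.org/w/api.php"

def external_citations(title):
    """Return the list of external link URLs for one Wikipedia entry."""
    params = {
        "action": "parse",
        "page": title,
        "prop": "externallinks",
        "format": "json",
        "formatversion": 2,
    }
    data = requests.get(API, params=params, timeout=30).json()
    return data.get("parse", {}).get("externallinks", [])

if __name__ == "__main__":
    # Illustrative stand-ins for the 50+ health-related pages sampled in the study
    for page in ["Otitis media", "Osteosarcoma"]:
        links = external_citations(page)
        scholarly = [u for u in links if "doi.org" in u or "ncbi.nlm.nih.gov" in u]
        print(f"{page}: {len(links)} external links, "
              f"{len(scholarly)} pointing to DOI/PubMed sources")
```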

@kevinclauson

P.S.  If you have gotten this far, it means you have not fallen prey to a similar phenomenon with blog post titles – kudos!


[1] Clauson KA, Polen HH, Boulos MN, Dzenowagis JH. Scope, completeness, and accuracy of drug information in Wikipedia. Ann Pharmacother. 2008 Dec;42(12):1814-21.
[2] Pender MP, Lasserre KE, Del Mar C, Kruesi L, Anuradha S. Is Wikipedia unsuitable as a clinical information resource for medical students? Med Teach. 2009 Dec;31(12):1095-6.
[3] Stillman-Lowe C. Wikipedia comes second. Br Dent J. 2008 Nov 22;205(10):525.
[4] Leithner A, Maurer-Ertl W, Glehr M, Friesenbichler J, Leithner K, Windhager R. Wikipedia and osteosarcoma: a trustworthy patients’ information? J Am Med Inform Assoc. 2010 Jul-Aug;17(4):373-4.
[5] Haigh CA. Wikipedia as an evidence source for nursing and healthcare students. Nurse Educ Today. 2010 Jun 19. [Epub ahead of print]
[6] Ward LG, Kendrach MG, Price SO. Accuracy of abstracts for original research articles in pharmacy journals. Ann Pharmacother. 2004 Jul-Aug;38(7-8):1173-7.
[7] Pitkin RM, Branagan MA, Burmeister LF. Accuracy of data in abstracts of published research articles. JAMA. 1999 Mar 24-31;281(12):1110-1.

Medical Information Resource Deathmatch – A Closer Look

Last month a Letter was published in the peer-reviewed journal Medical Teacher titled “Is Wikipedia unsuitable as a clinical information resource for medical students?” [1].  That paper came on the heels of a Letter published in The Annals of Pharmacotherapy on a related topic, “Evaluation of pharmacist use and perception of Wikipedia as a drug information resource” [2].  The Annals paper had some serious shortcomings (e.g., its survey response rate), which likely contributed to its abbreviated publication form.  Its most eye-opening point was that only one-third of the respondents who used Wikipedia were aware that anyone could edit the entries.  This is perhaps the real value of the Letter and why it was published: it helps illustrate the need for education about appropriate online resources in that group.

The Pender et al. paper also has some methodological aspects that probably limited it to a Letter.  For those without access to Medical Teacher, the results were initially presented as a conference case study.  Once the work of Pender et al. was accepted for publication, it, like the Annals Letter before it, went on to generate quite a bit of interest among academics, clinicians, librarians, and social media enthusiasts.  The unfortunate thing about this interesting topic is that, as with all Letters, the available level of detail was below what the authors envisioned and what readers sought.  However, in this case, the dialogue it has helped stimulate may be as valuable as the research itself.  Because I am in the midst of working on a follow-up to the Wikipedia study we did a couple of years ago [3], I searched for more information about the paper, some of which is discussed below.  (As an aside, the new wiki study is the first I’ve worked on that was partially driven by ‘unsolicited, crowdsourced post-publication peer review’.  More anon.)

In the Pender et al. study, three content experts each evaluated one medical topic according to five criteria (e.g., accuracy, suitability) in Wikipedia, eMedicine, AccessMedicine, and UpToDate.  All criteria used a three-point rating scale.  For example, the accuracy scale was: 1 = numerous important errors, 2 = some minor factual errors, and 3 = no factual errors.  Accuracy scores for Wikipedia on this scale were 3, 2, and 2 across the topics.  Interestingly, eMedicine received the single lowest accuracy rating for a topic (a rating of 1) of any of the resources, although it performed well for the other two topics.  Wikipedia fared even worse for suitability, with all three topics rated ‘1’ (“unsuitable”).  The full scores for the otitis media topic, courtesy of @LKruesi, are detailed below.

 

Data resources (ratings for the otitis media topic)

               Wikipedia   UpToDate   eMedicine   AccessMedicine
Accuracy           2           3           2             3
Coverage           3           3           2             3
Concision          2           1           3             1
Currency           3           3           2             3
Suitability        1           2           1             1
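Purely as an illustration (the study reports the individual ratings and does not aggregate them), the otitis media scores from the table above can be captured and totaled per resource in a few lines:

```python
# Illustrative only: the otitis media ratings from the table above,
# totaled per resource (the study itself does not sum scores this way).
criteria = ["Accuracy", "Coverage", "Concision", "Currency", "Suitability"]

scores = {
    "Wikipedia":      [2, 3, 2, 3, 1],
    "UpToDate":       [3, 3, 1, 3, 2],
    "eMedicine":      [2, 2, 3, 2, 1],
    "AccessMedicine": [3, 3, 1, 3, 1],
}

for resource, ratings in scores.items():
    detail = ", ".join(f"{c}={r}" for c, r in zip(criteria, ratings))
    print(f"{resource:<15} total {sum(ratings):>2}/15  ({detail})")
```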

Two librarians led the project, blinded the resource entries for the content experts, and assessed each resource for accessibility and usability.  They used criteria like cost, ease of finding information, and presentation quality to support their decisions.  Wikipedia did very well here, earning the distinction of being the most accessible and easiest-to-use resource.

Ultimately, I think this study adds to the literature and has already contributed to the wider community by sparking debate and discussion.  I hope this supplemental information helps address some of the questions I have seen about this study, and thanks again to Lisa Kruesi for the spirit of openness and transparency in electing to make the data available.
 
UPDATE (7FEB10): The authors of the Med Teach letter have archived a full version with results and tables here.

@kevinclauson

[1] Pender MP, Lasserre KE, Del Mar C, Kruesi L, Anuradha S. Is Wikipedia unsuitable as a clinical information resource for medical students? Med Teach. 2009;31(12):1095-6.

[2] Brokowski L, Sheehan AH. Evaluation of pharmacist use and perception of Wikipedia as a drug information resource. Ann Pharmacother. 2009;43(11):1912-3. 

[3] Clauson KA, Polen HH, Boulos MN, Dzenowagis JH. Scope, completeness, and accuracy of drug information in Wikipedia. Ann Pharmacother. 2008;42(12):1814-21.