
Please use this identifier to cite or link to this item: http://hdl.handle.net/1807/4631


Title: Reliability of Health Information on the Internet: An Examination of Experts' Ratings
Authors: Craigie, Mark
Loader, Brian
Burrows, Roger
Muncer, Steven
Keywords: Original Paper
Newsgroup
Internet
rating information
reliability
reproducibility of results
statistics
quality control
Issue Date: 17-Jan-2002
Publisher: Gunther Eysenbach; Centre for Global eHealth Innovation, Toronto, Canada
Citation: Mark Craigie, Brian Loader, Roger Burrows, Steven Muncer. Reliability of Health Information on the Internet: An Examination of Experts' Ratings. J Med Internet Res 2002;4(1):e2 <URL: http://www.jmir.org/2002/1/e2/>
Abstract: [This item is a preserved copy and is not necessarily the most recent version. To view the current item, visit http://www.jmir.org/2002/1/e2/ ]
Background: The use of medical experts in rating the content of health-related sites on the Internet has flourished in recent years. In this research, it has been common practice to use a single medical expert to rate the content of the Web sites. In many cases, the expert has rated the Internet health information as poor, and even potentially dangerous. However, one problem with this approach is that there is no guarantee that other medical experts will rate the sites in a similar manner.
Objectives: The aim was to assess the reliability of medical experts' judgments of threads in an Internet newsgroup related to a common disease. A secondary aim was to show the limitations of commonly-used statistics for measuring reliability (eg, kappa).
Method: The participants in this study were 5 medical doctors, who worked in a specialist unit dedicated to the treatment of the disease. They each rated the information contained in newsgroup threads using a 6-point scale designed by the experts themselves. Their ratings were analyzed for reliability using a number of statistics: Cohen's kappa, gamma, Kendall's W, and Cronbach's alpha.
Results: Reliability was absent for ratings of questions, and low for ratings of responses. The various measures of reliability used gave conflicting results. No measure produced high reliability.
Conclusions: The medical experts showed a low agreement when rating the postings from the newsgroup. Hence, it is important to test inter-rater reliability in research assessing the accuracy and quality of health-related information on the Internet. A discussion of the different measures of agreement that could be used reveals that the choice of statistic can be problematic. It is therefore important to consider the assumptions underlying a measure of reliability before using it. Often, more than one measure will be needed for "triangulation" purposes.
Description: Reviewer: Meaney, B
Reviewer: Uebersax, John
URI: http://hdl.handle.net/1807/4631
ISSN: 1438-8871
Other Identifiers: doi:10.2196/jmir.4.1.e2
Rights: Copyright (cc) Retained by author(s) under a Creative Commons License: http://creativecommons.org/licenses/by/2.0/
Appears in Collections: Volume 4 (2002)
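
Note: The abstract above reports that the experts' ratings were analyzed with several agreement statistics (Cohen's kappa, gamma, Kendall's W, Cronbach's alpha) and that these measures can give conflicting answers. As a rough illustration only, not the article's code or data, the following Python sketch computes three of those measures on a made-up 5-thread x 5-rater matrix; the rating values and the use of a mean pairwise kappa as a summary are assumptions for demonstration.

    # Illustrative only: small fabricated ratings matrix, not the study's data.
    import numpy as np
    from itertools import combinations
    from sklearn.metrics import cohen_kappa_score
    from scipy.stats import rankdata

    # rows = newsgroup threads, columns = 5 raters, values on a 1..6 scale
    ratings = np.array([
        [4, 5, 3, 4, 4],
        [2, 2, 3, 1, 2],
        [6, 5, 6, 6, 5],
        [3, 4, 2, 3, 3],
        [5, 5, 4, 6, 5],
    ])
    n_items, n_raters = ratings.shape

    # Cohen's kappa is defined for pairs of raters; average it over all pairs.
    kappas = [cohen_kappa_score(ratings[:, i], ratings[:, j])
              for i, j in combinations(range(n_raters), 2)]
    print("mean pairwise Cohen's kappa:", np.mean(kappas))

    # Kendall's W (coefficient of concordance), no tie correction applied.
    ranks = np.apply_along_axis(rankdata, 0, ratings)  # each rater ranks the threads
    rank_sums = ranks.sum(axis=1)                      # per-thread sum of ranks
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    kendall_w = 12 * s / (n_raters ** 2 * (n_items ** 3 - n_items))
    print("Kendall's W:", kendall_w)

    # Cronbach's alpha, treating the raters as the "items" of a scale.
    rater_vars = ratings.var(axis=0, ddof=1)
    total_var = ratings.sum(axis=1).var(ddof=1)
    alpha = (n_raters / (n_raters - 1)) * (1 - rater_vars.sum() / total_var)
    print("Cronbach's alpha:", alpha)

Because the three statistics answer slightly different questions (chance-corrected pairwise agreement, concordance of rankings, and internal consistency), they can legitimately disagree on the same data, which is the "triangulation" point made in the conclusions.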

Files in This Item:

File        Description    Size       Format
jmir.html                  36.03 kB   HTML

Items in T-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
