Edinburgh Research Explorer

The use of global rating scales for OSCEs in veterinary medicine

Research output: Contribution to journal › Article

Open Access permissions: Open
Documents

    Rights statement: Copyright: © 2015 Read et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

    Final published version, 3 MB, PDF document

    Licence: Creative Commons: Attribution (CC-BY)

Original language: English
Article number: e0121000
Number of pages: 10
Journal: PLoS ONE
Volume: 10
Issue number: 3
DOIs
Publication status: Published - 30 Mar 2015

Abstract

OSCEs (Objective Structured Clinical Examinations) are widely used in the health professions to assess clinical skills competence. Raters use standardized binary checklists (CL) or multi-dimensional global rating scales (GRS) to score candidates performing specific tasks. This study assessed the reliability of CL and GRS scores in the assessment of veterinary students, and is the first study to demonstrate the reliability of GRS within veterinary medical education. Twelve raters from two different schools (6 from the University of Calgary [UCVM] and 6 from the Royal (Dick) School of Veterinary Studies [R(D)SVS]) were asked to score 12 students (6 from each school). All raters assessed all students (video recordings) across 4 OSCE stations (bovine haltering, gowning and gloving, equine bandaging, and skin suturing). Raters scored students using a CL, followed by the GRS. Novice raters (6 R(D)SVS) were assessed independently of expert raters (6 UCVM). Generalizability theory (G theory), analysis of variance (ANOVA) and t-tests were used to determine the reliability of rater scores, assess any between-school differences (by student, by rater), and determine whether there were differences between CL and GRS scores. There was no significant difference in rater performance with use of the CL or the GRS. Scores from the CL were significantly higher than scores from the GRS. The reliability of checklist scores was .42 and .76 for novice and expert raters respectively. The reliability of global rating scale scores was .70 and .86 for novice and expert raters respectively. A decision study (D-study) showed that, once trained using CL, GRS could be utilized to reliably score clinical skills in veterinary medicine with both novice and experienced raters.
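The reliability coefficients in the abstract come from a generalizability (G-theory) analysis, and the D-study projects reliability under a different number of raters. As a minimal illustrative sketch only (not the authors' code, and using made-up scores rather than the study's data), the following estimates variance components for a fully crossed students × raters design from two-way ANOVA mean squares, then computes the relative G coefficient, with an optional rater count for a D-study projection:

```python
import numpy as np

def g_study(scores):
    """Variance components for a fully crossed persons x raters design.

    scores: 2-D array, rows = persons (students), cols = raters.
    Returns (var_person, var_rater, var_residual) estimated from the
    expected mean squares of a two-way ANOVA without replication.
    """
    scores = np.asarray(scores, dtype=float)
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    ss_p = n_r * np.sum((person_means - grand) ** 2)
    ss_r = n_p * np.sum((rater_means - grand) ** 2)
    ss_res = np.sum((scores - grand) ** 2) - ss_p - ss_r

    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))

    var_res = ms_res                        # rater-by-person interaction + error
    var_p = max((ms_p - ms_res) / n_r, 0.0)  # true person (student) variance
    var_r = max((ms_r - ms_res) / n_p, 0.0)  # rater leniency/severity variance
    return var_p, var_r, var_res

def g_coefficient(scores, n_raters=None):
    """Relative G coefficient; passing n_raters makes this a D-study projection."""
    var_p, _, var_res = g_study(scores)
    if n_raters is None:
        n_raters = np.asarray(scores).shape[1]
    denom = var_p + var_res / n_raters
    return var_p / denom if denom > 0 else 0.0

# Perfect rater agreement -> reliability of 1
print(g_coefficient([[1, 1], [2, 2], [3, 3]]))                    # 1.0
# Noisier ratings -> lower reliability
print(round(g_coefficient([[1, 2], [2, 1], [3, 3]]), 3))          # 0.667
# D-study: projected reliability if four raters scored each student
print(round(g_coefficient([[1, 2], [2, 1], [3, 3]], n_raters=4), 3))  # 0.8
```

The D-study projection follows directly from the G coefficient's form: person variance stays fixed while the error term shrinks as `var_res / n_raters`, which is why adding raters (or, analogously, stations) raises the projected reliability.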


ID: 19060739