OBJECTIVE: To assess the ability of the novel prescribing very short answer (VSA) question format, compared with the conventional single best answer (SBA) question format, to identify the sources of undergraduate prescribing errors, and to assess the acceptability of machine marking prescribing VSAs.
DESIGN: A prospective study involving analysis of data generated from a pilot two-part prescribing assessment.
SETTING: Two UK medical schools.
PARTICIPANTS: 364 final year medical students took part. Participation was voluntary. There were no other inclusion or exclusion criteria.
OUTCOMES: (1) Time taken to mark and verify VSA questions (acceptability), (2) differences between VSA and SBA scores, (3) performance in the VSA and SBA formats across different subject areas and (4) types of prescribing error made in the VSA format.
RESULTS: 18 200 prescribing VSA questions were marked and verified in 91 min. The median percentage score for the VSA test was significantly lower than for the SBA test (28% vs 64%, p<0.0001). Significantly more prescribing errors were detected in the VSA format than in the SBA format across all domains, notably in prescribing insulin (96.4% vs 50.3%, p<0.0001), fluids (95.6% vs 55%, p<0.0001) and analgesia (85.7% vs 51%, p<0.0001). Of the incorrect VSA responses, 33.1% were due to the medication prescribed, 6.0% due to the dose, 1.4% due to the route and 4.8% due to the frequency.
CONCLUSIONS: Prescribing VSA questions are an efficient tool for providing detailed insight into the sources of significant prescribing errors that SBA questions do not identify. This makes the prescribing VSA a valuable formative assessment tool for enhancing students' skills in safe prescribing and potentially reducing prescribing errors.