Abstract / Description of output
Comparative judgement in assessment is a process in which repeated comparisons of pairs of items (e.g. assessment answers) are used to build an accurate ranking of all the submissions. In Adaptive Comparative Judgement (ACJ), technology automates this process, presenting pairs of pieces of work over iterative cycles. An online ACJ system was used to present students with work prepared by a previous cohort at the same stage of their studies. Marks given to the work by experienced faculty were compared with the rankings produced by a cohort of veterinary students (n=154). Each student was required to review and judge 20 answers provided by the previous cohort to a free-text short-answer question. The time students spent on the judgement tasks was recorded, and students were asked to reflect on their experiences after completing the task. There was a strong positive correlation between student ranking and faculty marking. A weak positive correlation was found between the time students spent on the judgements and their performance on the part of their own examination that contained questions in the same format. Slightly fewer than half of the students agreed that the exercise was a good use of their time, but 78% agreed that they had learnt from the process. Qualitative data highlighted different levels of benefit, from the simplest aspect of learning more about the topic to an appreciation of the more generic lessons to be learned.
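The abstract does not specify the statistical model behind the ACJ system's ranking step; a common choice for turning pairwise judgements into a ranked list is the Bradley-Terry model. The sketch below is purely illustrative (the function name, data format, and fitting method are assumptions, not the system described in this study): it estimates a "quality" score for each piece of work from a list of (winner, loser) judgements using simple minorization-maximization updates.

```python
from collections import defaultdict

def bradley_terry(comparisons, n_items, iters=200):
    """Illustrative Bradley-Terry fit: estimate item strengths from
    pairwise judgements given as (winner, loser) index pairs.
    P(i beats j) is modelled as p[i] / (p[i] + p[j])."""
    wins = defaultdict(int)        # wins[i] = comparisons item i won
    pair_count = defaultdict(int)  # pair_count[(i, j)] = times i met j
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_count[(min(winner, loser), max(winner, loser))] += 1

    p = [1.0] * n_items
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            # Sum, over every pairing involving i, of n / (p[i] + p[opponent])
            denom = sum(n / (p[a] + p[b])
                        for (a, b), n in pair_count.items()
                        if i in (a, b))
            new_p.append(wins[i] / denom if denom else p[i])
        scale = n_items / sum(new_p)   # renormalize so strengths sum to n_items
        p = [x * scale for x in new_p]
    return p
```

Sorting items by the returned strengths yields the ranking; in an adaptive system the next pair to present would also be chosen from these interim estimates, typically by pairing items whose current strengths are close.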
Keywords / Materials (for Non-textual outputs)
- assessment literacy
- peer assessment