
Feedback

Feedback opportunities of Comparative Judgement: An overview of possible features and acceptance at different user levels.

Roos Van Gasse, Maarten Goossens, Anneleen Mortier, Jan Vanhoof, Peter Van Petegem, Peter Vlerick and Sven De Maeyer

 

In: Joosten-ten Brinke D., Laanpere M. (eds) Technology Enhanced Assessment. TEA 2016. Communications in Computer and Information Science, vol 653. Springer, Cham

TEA

 

ABSTRACT

Given the increasing criticism of common assessment practices (e.g. assessments using rubrics), the method of Comparative Judgement (CJ) is on the rise because of its potential for reliable and valid competence assessment. However, until now the emphasis in digital tools that use CJ has been primarily on efficient algorithms for CJ rather than on providing valuable feedback. The Digital Platform for the Assessment of Competences (D-PAC) project investigates the opportunities and constraints of CJ-based feedback and aims to examine its potential for learning. Reporting on design-based research, this paper describes the features of D-PAC feedback available at different user levels: the user being assessed (assessee), the user assessing others (assessor) and the user who coordinates the assessment (the Performance Assessment Manager, PAM). Interviews conducted with different users in diverse organizations show that both the characteristics of D-PAC feedback and its acceptance at each user level are promising for future use of D-PAC. Although further investigation is needed into the contribution of D-PAC feedback to user learning, the characteristics and user acceptance of D-PAC feedback are promising for extending the summative scope of CJ to formative assessment and professionalization.

 

Comparative Judgment within Online Assessment: Exploring Students' Feedback Reactions

Anneleen Mortier, Marije Lesterhuis, Peter Vlerick and Sven De Maeyer

 

Communications in Computer and Information Science, 2015

CAA

 

ABSTRACT

In recent years, comparative judgement (CJ) has emerged as an alternative method for assessing competences (e.g. Pollitt, 2012). In this method, various assessors independently compare several representations produced by different students and decide each time which of them demonstrates the best performance of the given competence. This study investigated students' attitudes (honesty, relevance and trustworthiness) towards feedback based on this method. Additionally, it studied the importance of specific tips in CJ-based feedback.
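The abstracts above describe the comparison process only verbally. As an illustration, the short Python sketch below shows one common way such pairwise judgements can be turned into a ranking: fitting a Bradley-Terry model with a simple iterative update. This is a hedged, illustrative assumption, not the estimation procedure used by D-PAC or by the studies listed here; the function name bradley_terry and the example data are hypothetical.

from collections import defaultdict

def bradley_terry(judgements, n_iter=100):
    """Estimate a strength score per representation from pairwise judgements.

    judgements: list of (winner_id, loser_id) tuples, one per comparison.
    Returns a dict mapping each id to an estimated strength (higher = better).
    Note: an illustrative sketch; real CJ tools may use other estimators.
    """
    items = {i for pair in judgements for i in pair}
    wins = defaultdict(int)    # comparisons won by each representation
    pairs = defaultdict(int)   # comparisons made per unordered pair
    for winner, loser in judgements:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1

    strength = {i: 1.0 for i in items}
    for _ in range(n_iter):
        new_strength = {}
        for i in items:
            # Standard iterative (minorization-maximization) update for Bradley-Terry.
            denom = sum(
                pairs[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items
                if j != i and frozenset((i, j)) in pairs
            )
            new_strength[i] = wins[i] / denom if denom > 0 else strength[i]
        # Rescale so the average strength stays at 1 (fixes the model's free scale).
        total = sum(new_strength.values())
        strength = {i: s * len(items) / total for i, s in new_strength.items()}
    return strength

# Hypothetical example: three student texts, five pairwise decisions by assessors.
scores = bradley_terry([("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")])
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # ranking, best first

In a CJ-based tool, a ranking like this is what the summative result is derived from; the feedback studied in the papers above is the qualitative information assessors add alongside each comparison.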

 

Supporting students to provide valuable feedback: a comparison of two digital feedback methods

Anneleen Mortier, Marije Lesterhuis, Vincent Donche, Peter Vlerick and Sven De Maeyer

 

Submitted to Studies in Educational Evaluation, 2016

SEE

 

ABSTRACT

Peer evaluation can enhance students' writing skills, both for the student assessors and for the assessed students. Commonly, student assessors provide feedback on one task at a time. Recently, Comparative Judgement (CJ) has been introduced as an alternative assessment method, in which two tasks are compared with each other and feedback can be provided. The summative advantages of this method have been shown in previous studies; however, the question arises whether peer feedback based on comparisons differs from common peer feedback. This study explored peer feedback provided by eleven student assessors using both methods, in terms of tone (positive or negative), depth (indicating or explaining) and content. In addition, focus groups provided insight into which method students preferred when evaluating the work of fellow students. The results indicate that CJ-based feedback is more positive in tone and more often focused on higher-order skills (e.g. structure, content and style). Additionally, student assessors do not clearly prefer one method over the other. Further research on the potential impact of CJ-based assessment on student assessors' skill development is recommended.