
The D-PAC project in short

The project intends to develop and validate a Digital Platform for the Assessment of Competences (D-PAC). D-PAC is a tool for assessing a wide range of competences and can be used within a variety of societal contexts, such as education and HR. The project kicked off on the 1st of January, 2014 and will run until the end of 2017. In the meantime, the latest version of the tool will always be made available online (e-mail: demo@d-pac.be; password: demo).

The code is published online under the GPLv3 open source licence and can be found on GitHub. The D-PAC tool allows the creation and management of assessments to evaluate competences. Assessees upload representations of the competence under assessment, which are then judged by means of ‘comparisons’ with other representations. Based on these comparisons, the tool calculates a ranking of the representations. Feedback on the assessment is generated and made available to the assessees. In addition, data regarding the overall assessment process is available.
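As a rough sketch of the entities this workflow implies, the core records might look like the following. The names are illustrative only; the actual data model is the one in the GitHub repository.

```python
from dataclasses import dataclass

# Illustrative sketch of the entities described above;
# these names are hypothetical, not D-PAC's actual schema.
@dataclass
class Representation:
    assessee: str
    file_url: str            # uploaded evidence of the competence

@dataclass
class Comparison:
    assessor: str
    left: Representation
    right: Representation
    winner: Representation   # the representation judged 'better'
    feedback: str = ""       # assessor's argumentation, reused in reports
```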

The judging process draws on the method of Comparative Judgement (CJ). CJ is based on Thurstone’s ‘Law of Comparative Judgement’ [1]. The CJ approach proposes that people are more reliable when comparing one thing with another than when assigning scores to things [2]. In practice, assessors compare two representations and decide which of the two is ‘better’ regarding the competence. The gathered data are analysed using the Bradley-Terry-Luce (BTL) model, which places the representations on an interval (logit) scale (for more detail, see [3]).
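To make the analysis step concrete, here is a minimal sketch of BTL estimation from raw comparison data, using the majorization-minimization updates of Hunter (2004). This is a generic illustration of the model, not D-PAC’s actual analytics code.

```python
from collections import defaultdict
import math

def btl_scale(comparisons, n_iter=100):
    """Estimate Bradley-Terry strengths from (winner, loser) pairs and
    return abilities on a centred logit scale. Note: the maximum
    likelihood estimate only exists when the comparison graph is
    suitably connected (every item wins and loses at least once)."""
    wins = defaultdict(int)          # W_i: comparisons won by item i
    pair_counts = defaultdict(int)   # n_ij: comparisons between i and j
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        items.update((winner, loser))

    strengths = {i: 1.0 for i in items}
    for _ in range(n_iter):          # MM updates (Hunter, 2004)
        new = {}
        for i in items:
            denom = sum(n / (strengths[i] + strengths[j])
                        for pair, n in pair_counts.items() if i in pair
                        for j in pair if j != i)
            new[i] = wins[i] / denom
        scale = len(items) / sum(new.values())
        strengths = {i: v * scale for i, v in new.items()}

    mean_logit = sum(map(math.log, strengths.values())) / len(strengths)
    return {i: math.log(v) - mean_logit for i, v in strengths.items()}

# Each tuple is one judgement: (preferred representation, the other one).
ranking = btl_scale([("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"), ("B", "A")])
print(sorted(ranking.items(), key=lambda kv: -kv[1]))
```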

The D-PAC project is set up as an interdisciplinary research project that integrates scientific knowledge from multiple fields, namely psychometrics (research focus A), individual feedback research (research focus B), organisational feedback research (research focus C), assessor feedback research (research focus D) and design science research (research focus E). See research.


More about the D-PAC system

The figure below visualizes the D-PAC tool. Users accessing the D-PAC system can have one of three roles: the assessee, the assessor and the performance assessment manager (PAM).

[Figure: annotated overview of the D-PAC system]

D-PAC for the performance assessment manager (PAM)

The performance assessment manager (PAM; anyone who wants to create and manage an assessment within an organisation) creates and manages the performance assessment in D-PAC. When setting up an assessment, the PAM can invite assessees and assessors, decide which representation types will be required or allowed from assessees (see D-PAC for assessee), and determine how many assessors are needed. The PAM can also choose which type of CJ will be used (see analytics module). While making these decisions, the PAM receives advice on their consequences for the reliability and validity of the assessment. The PAM can also select the type of feedback provided to the assessees and assessors. During the entire process, the PAM is able to follow up on the activities of the assessors and assessees, and can intervene and adjust the assessment.
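As an illustration of the kinds of choices listed above, an assessment set-up could be captured in a settings object along these lines. All field names here are hypothetical, not D-PAC’s actual configuration format.

```python
# Hypothetical assessment settings a PAM might configure;
# field names are illustrative, not D-PAC's actual schema.
assessment_config = {
    "title": "Argumentative writing, grade 10",
    "representation_types": ["pdf", "image", "video"],   # allowed uploads
    "assessors_needed": 12,
    "comparisons_per_representation": 10,
    "cj_variant": "adaptive",        # e.g. random vs. adaptive pair selection
    "feedback": {
        "assessees": ["rank", "assessor_comments"],
        "assessors": ["misfit"],
    },
}
```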

After the assessment is completed, the tool provides feedback. The PAM receives feedback on the quality of the assessment process. First, the final ranking of the representations in the assessment is provided. Second, a number of quality indices indicate whether and how the reliability of the performance assessment can be improved (i.e. reliability and misfit statistics; see analytics module). Third, an overview of the argumentation verbalised by the assessors is given, so that an organisation gains insight into the diversity or similarity of the criteria used by the assessors. Lastly, statistics on judgement time are provided. The D-PAC project will examine how such feedback to the PAM is best provided so as to maximize its impact (research focus C).
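As an example of the kind of reliability index meant here, the CJ literature commonly reports a scale separation reliability (SSR), which contrasts the spread of the estimated logit scores with their measurement error. A minimal sketch, assuming the abilities and their standard errors have already been estimated by the BTL analysis:

```python
import statistics

def scale_separation_reliability(abilities, standard_errors):
    """SSR: the proportion of observed score variance that is 'true'
    variance rather than measurement error."""
    observed_var = statistics.pvariance(abilities)
    error_var = sum(se ** 2 for se in standard_errors) / len(standard_errors)
    return (observed_var - error_var) / observed_var

# An SSR close to 1 means assessors separate the representations well;
# a low SSR suggests collecting more comparisons.
print(scale_separation_reliability([-1.2, -0.3, 0.4, 1.1],
                                   [0.35, 0.30, 0.32, 0.40]))
```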


[1] Thurstone, L. L. (1927). A law of comparative judgment. Psychological Review, 34, 273–286.

[2] Jones, I., & Alcock, L. (2012). Summative peer assessment of undergraduate calculus using adaptive comparative judgement. In P. Iannone & A. Simpson (Eds.), Mapping university mathematics assessment practices (pp. 63–74). Norwich, U.K.: University of East Anglia.

[3] Bramley, T. (2007). Paired comparison methods. In P. Newton, J.-A. Baird, H. Goldstein, H. Patrick, & P. Tymms (Eds.), Techniques for monitoring the comparability of examination standards. London, U.K.: Qualifications and Curriculum Authority.