FAQ

Comparative judgement in general

For what type of products does comparative judgement make sense?

Comparative judgement does not make sense for very closed and straightforward tasks, such as closed questions, completion questions or multiple-choice questions. It becomes interesting when the competence is too complex to be captured properly in one or more separate aspects. Think, for example, of different types of writing assignments, reflection reports, action plans, solution strategies, internship videos, visual works, presentations, theses or portfolios.

How large should the group of products be for comparative judgement?

Comparative judgement is less efficient for a small number of products. With up to 10 or 12 products it is perfectly possible to rank them by quality manually. With more than 12 products this becomes difficult, and it is then easier to compare them in pairs. There is no maximum number of products.

How many assessors do you need for comparative judgement?

Two assessors are already sufficient to establish the rank order of products with a high degree of certainty. However, we know that assessors can vary greatly in what they value in products. To ensure that all aspects of the competence are taken into account, it is better to work with at least four assessors.

How many comparisons does each assessor have to make?

The number of comparisons per assessor depends on several factors: the number of products, the number of assessors and how reliable you want the assessment to be. For summative assessment, in which the score is a final judgement of students' competencies, it is good to aim for more judgements per product (guideline: 15 to 20 judgements per product). In a situation with 20 products and 5 assessors, aiming for 15 judgements per product means that the products appear in a comparison 300 times in total (20 x 15). Because each comparison contains two products, 150 comparisons are needed to achieve this (300 : 2). Distributed over 5 assessors, that is 30 comparisons per assessor.
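
The same guideline can be written as a small calculation. Below is a minimal sketch of that arithmetic in Python; the function name and parameters are our own illustration and are not part of the comparing tool.

```python
# Minimal sketch of the guideline arithmetic above (illustrative only,
# not part of the comparing tool).
def comparisons_per_assessor(n_products, n_assessors, judgements_per_product=15):
    total_appearances = n_products * judgements_per_product  # each product appears ~15 times
    total_comparisons = total_appearances / 2                 # every comparison contains two products
    return total_comparisons / n_assessors

print(comparisons_per_assessor(20, 5))  # 20 x 15 = 300 appearances -> 150 comparisons -> 30 per assessor
```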

For formative purposes, the number of comparisons per product matters less; it depends on the purpose of the assessment. If you want to know very accurately where students stand in the ranking, in order to give them targeted feedback, it is best to have a few more comparisons made. If you mainly want students to learn from the peer assessment itself, you can simply adjust the number of comparisons to the time you want students to invest.

Why is comparative judgement often used for peer assessment?

Assessing is difficult, not only for experts but also for students. Comparative judgement, by contrast, is an easy task for students. In addition, they see many examples of weaker and stronger work, from which they can learn a lot. Combined with giving and receiving feedback, comparative judgement creates many learning opportunities in very little time. And in the comparing tool, such a peer assessment is set up quickly!

The comparing tool

How are the pairs put together in the standard algorithm of the comparing tool?

In the comparing tool, the starting point is that pairs are composed randomly. Only one element is not random: each product is compared equally often. For each new pair, the algorithm therefore selects the product that has so far been compared the least.
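
As an illustration, the idea can be sketched in a few lines of Python. This is our own simplified sketch of the principle described above, not the tool's actual code; the function and variable names are hypothetical.

```python
import random

def next_pair(comparison_counts):
    """Pick the least-compared product and pair it with a random other product.

    comparison_counts maps a product id to the number of comparisons it has
    already appeared in. Simplified sketch of the principle only.
    """
    products = list(comparison_counts)
    random.shuffle(products)                                    # random tie-breaking
    least = min(products, key=lambda p: comparison_counts[p])   # least-compared product
    other = random.choice([p for p in products if p != least])  # random opponent
    comparison_counts[least] += 1
    comparison_counts[other] += 1
    return least, other
```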

How are the pairs composed in the "equal views" algorithm of the comparing tool?

In some applications (e.g. a peer assessment), you want every assessor to have seen all the products. The "equal views" algorithm ensures that each assessor sees each product in at least one pair.
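
To illustrate the principle, here is a simplified sketch of how such a set of pairs could be put together for one assessor. Again, this is our own illustration of the idea, not the comparing tool's actual algorithm; the names are hypothetical.

```python
import random

def pairs_for_one_assessor(products):
    """Return pairs for one assessor so that every product appears at least once.

    Shuffles the products and pairs them off two by two; with an odd number of
    products, the last one is paired with a randomly chosen earlier product.
    Simplified sketch of the principle only.
    """
    shuffled = list(products)
    random.shuffle(shuffled)
    pairs = [tuple(shuffled[i:i + 2]) for i in range(0, len(shuffled) - 1, 2)]
    if len(shuffled) % 2:  # odd number of products: reuse one earlier product
        pairs.append((shuffled[-1], random.choice(shuffled[:-1])))
    return pairs
```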

Where do I put my rubrics or criteria?

The starting point of comparative judgement is that you assess holistically: no rubrics or criteria are scored, so you will not find a place to score them in the comparing tool. How do you make sure that assessors look at the right things? Be sure to give them the task goals and the competence being assessed. Rubrics and criteria are particularly useful for helping students structure their feedback. In the comparing tool you can therefore ask assessors to structure their feedback according to your rubrics or criteria.

How is the assessment fed back to students?

As a teacher, you decide that yourself. You can, of course, share the rank order, the score it has led to (if you have calculated one) and the feedback from all the assessors. Whether and when you do this is something you set up yourself in the comparing tool. In addition, you can choose to let students view products (with the corresponding feedback) that were rated better or worse than their own product.

Do you have any questions after reading our Frequently Asked Questions?

Do not hesitate to contact us; we will be happy to help you!