Author: Jay McGrane, K-12 Educator and Freelance Writer
Rubric assessments change the design of a course because they fundamentally change how a learner approaches a project.
Well-constructed marking rubrics show learners exactly what skills they need to acquire for success. Furthermore, rubrics allow learners to assess for themselves how much of the criteria they’ve met at different milestones in the course. Finally, learners can use rubrics to improve projects through self-assessment and clear feedback from instructors.
There are three types of rubrics:

- **Analytic rubrics**, which break a project into separate criteria and score each criterion individually
- **Developmental rubrics**, which describe a continuum of performance, typically for behaviours or soft skills
- **Holistic rubrics**, which assign a single overall judgement to the whole piece of work
Check out DePaul Teaching Commons for examples of these different types of rubrics.
Now that you know the three different types of rubrics, it’s time to discuss the seven no-fail steps to designing game-changing marking rubrics.
First, look at the learning objectives for the particular assessment. Your marking rubric must link to those objectives, or the assessment will not be constructively aligned.
Ideally, multiple assessments could use elements of the same rubric.
Why is this important? With a consistent marking rubric, learners may start to self-assess much more and deepen their knowledge of the content area. You may also see improved learner retention, because learners can begin to make connections across different assessment tasks.
After you’ve linked your rubric assessment to learning objectives, you will need to decide which type of rubric to use: analytic, developmental or holistic. Each type has its own strengths and weaknesses, and the right choice depends on your course design.
Analytic and developmental rubrics work well for giving learners feedback, while holistic rubrics create standardisation between markers more easily. Usually, learning professionals choose analytic or developmental rubrics to improve learner outcomes.
So when should you use a developmental rubric versus an analytic rubric? Developmental rubrics work well for performance, especially behaviours or soft skills. Essentially, they are a continuum, with the hope that learners, given enough feedback, will exit at the high-performing end by the close of the course.
Analytic rubrics often give learners excellent feedback on specific end-products (e.g., essays, websites, graphic designs, construction projects). Marks also tend to vary more between projects with this type of rubric.
Deciding on the criteria that determine whether the learner has met the desired learning objective is hard.
If you’re unsure what criteria you want to see, analyse a high-quality work sample to determine what characteristics make it effective. For performance-based assessments, do the same thing, but analyse a high-quality performance. In business scenarios, think about what qualities improve sales conversion, return on investment or productivity.
Once you’ve determined the qualities of a great work sample, add them to your marking rubric as the criteria.
By this point in the rubric-making process, you should have a list of criteria. If you’re like me, they’re already laid out in a grid format so I can happily think, “I’m a third of the way done. Yay!”
Examples of rubric rating scales include:

- Numeric scales (e.g., 1–4)
- Unacceptable, Acceptable, Good/Solid, Exemplary
- Developing, Acceptable, Proficient
Instructors often tack on the rating scale last, but it communicates your course expectations more quickly than any other part of the rubric. In academic contexts, departments or administrators often set the rating scales, removing the choice from individual instructors. In other instructional design settings, though, you can tweak rating scales to elicit the reaction you want from your learner.
For example, John Oliver takes a crack at Montessori rating scales in this joke: “Well done, Zaden. You got a squirrel on emotional intelligence. But on actual intelligence, you got a frowning walrus which is an F. That’s an F, Zaden. Let’s make that walrus smile.”
Obviously, no one would use a frowning walrus as a rating scale, but choosing to label the second category in a developmental rubric ‘acceptable’ versus ‘developing’ will change how your learner feels about that category. On the one hand, a learner may be more motivated to push toward ‘proficient’ if their current level is labelled ‘developing’ rather than ‘acceptable’. On the other, your adult learners may feel more competent if they see their performance already lands them in the ‘acceptable’ category. Some learners may also prefer more formal rating systems because they resemble what they had in school.
Basically, it all comes down to what you want to communicate. Tweak the rubric rating scale depending on how you want your learner to react to the feedback.
Each level of your rubric needs to have a descriptor. Most descriptors mirror each other across levels, changing only the degree to which the criterion is met.
Here’s an example of one category from A Rubric for Assessing Rubrics:
| Criteria | 1 – Unacceptable | 2 – Acceptable | 3 – Good/Solid | 4 – Exemplary |
| --- | --- | --- | --- | --- |
| Clarity of criteria | Criteria being assessed are unclear, inappropriate and/or have significant overlap | Criteria being assessed can be identified, but are not clearly differentiated or are inappropriate | Criteria being assessed are clear, appropriate and distinct | Each criteria is distinct, clearly delineated and fully appropriate for the assignment(s)/course |
Do you want to assess your whole rubric? Check out A Rubric for Assessing Rubrics: A Tool for Assessing the Quality and Use of Rubrics in Education from Teaching, Learning and Technology.
Instructors often want to simply hand learners a marking rubric and tell them to assess themselves! Unfortunately, learners can’t mark themselves without being taught how to use the rubric to their best advantage. Exemplars help learners see what types of work fall into each category, so have learners mark the exemplars first before marking their own work.
To gamify your course, learners can try to mark an exemplar the same way a virtual, fictitious marker would, then try again until their marks match the fictitious marker’s exactly.
Finally, learners need to think about how to move the lower-scoring exemplars up the rubric. At this point, feedback becomes crucial. If learners use the same rubric multiple times, their self-assessment becomes more reliable, because they begin to understand how to apply the feedback they received on the last assignment.
A rubric becomes a game-changing tool once learners can pinpoint where their performance or project lands on the marking rubric AND THEN change it themselves.
For more on optimising your online courses, read our article, How to prepare your learners for online study.
At Oppida, we believe in creating dynamic learning environments through learning management systems that engage with your learners on a deeper level. Whether you’re at project inception or knee-deep in managing content deliverables, Oppida will tailor learning design support for you. Set up a quick consultation with our founder Bianca Raby and discover how we can help you project manage, design, develop and enhance your online courses at any stage in the course’s lifecycle. Also, sign up for our FREE Designing Digital Learning Course to better understand how to design for digital.
Jay is a K-12 educator and freelance writer with a passion for learning about learning. You’ll find her trying out new teaching strategies in her classroom or reading about them online. When she’s not reading about teaching, she can be found hanging out with her toddler, preferably at the library.
Follow her on LinkedIn.