Your questions about the assessment policy answered
Q: What’s the difference between marking criteria/standards and rubrics?
A: Marking criteria/standards describe distinct levels of quality, achievement or performance. A rubric is defined in the MQ assessment policy as “a brief outline of the assessment criteria.” Rubrics therefore outline the properties or characteristics used to judge the quality of performance.
It is important to be aware that the definition of a rubric used in Turnitin and much of the literature differs from that used in the assessment policy. Turnitin refers to a rubric as a set of descriptors or standards.
Instead, we should refer to the MQ assessment policy, which includes the following statements:
5.1.1 Assessment is made by reference to explicit and pre-determined criteria and standards that reflect the learning outcomes and not by reference to the achievement of other students.
5.6.2 There should be an explicit and logical alignment between learning outcomes, assessment tasks, the task criteria, feedback and the grades associated with different levels or standards of performance.
5.6.3 Assessments should also be reliable, that is, they should consistently and accurately measure learning. This involves making judgements about student learning that are based on a shared understanding of standards of learning and should not be dependent on the individual teacher, location or time of assessment.
Q: How should grade descriptors be aligned with marking scales?
A: The assessment policy states that “Unit convenors may develop criteria and standards for specific assessment tasks, but these must be aligned with the generic grading descriptors provided in schedule 1 of the policy”. It also requires that criteria and marking processes are transparent to students.
Q: Can we collapse grade descriptors in marking scales from the 5 in the grading schedule (F, P, Cr, D, HD) to 4 or fewer, e.g. unsatisfactory, proficient, advanced, outstanding?
A: This is allowed. In his recent workshop ‘Engaging Students with Feedback’, Mitch Parsell suggested using 4 categories for qualitative rubrics.
Q: Can we create additional categories e.g. not evident, developing, adequate, advanced, excellent?
A: Yes, this is possible but you would need to consider whether this is making things unnecessarily complex and whether the performance expectations are transparent for students. The descriptors you use still need to align with the generic grade descriptors provided in Schedule 1 of the policy. For example, “not evident” and “developing” would align with a fail grade. “Adequate” aligns with a pass.
Q: Is it possible to have marking scales with NO descriptors in the cells?
A: No, because students need guidance on what the different levels of performance against each criterion look like.
Q: How do we treat competency criteria such as “applies APA referencing procedures” or “uses appropriate structure and language features”?
A: These are threshold criteria, marked as achieved/not achieved. As an example, this is what Mitch Parsell uses:
Achieved: Appropriate referencing style used consistently
Not achieved: No consistent referencing system used
Q: Is it possible to demonstrate High Distinction or Distinction performance against competency criteria?
A: No. Competency criteria are binary thresholds (achieved/not achieved), so graded levels of performance such as Distinction or High Distinction do not apply to them.
Q: Is it possible to demonstrate High Distinction or Distinction performance in a task that requires replication or description of knowledge as the highest level of cognitive demand?
A: No. If the highest cognitive demand of the task is replication or description of knowledge, students have no opportunity to demonstrate the higher-order performance that the Distinction and High Distinction descriptors require.
Q: How should we grade these competencies? What are the implications for the design of marking scales in Turnitin?
A: Click here to see an example of a marking scale.
Article prepared by Rod Lane – Department of Educational Studies