It’s difficult to determine a clear “standard” for TEDS results, since we know that they are affected by a range of contextual variables that relate to the learning and teaching environment.
Over the years, analysis of TEDS data has demonstrated persistent and consistent differences according to:
• discipline area (Faculty – this is more a reflection of student cohort differences than variation in teaching or curriculum quality);
• class level (100, 200, 300-500, 800-900-level, with 600-700 level yet to be examined); and
• class size (this tends to have more impact on teaching than unit evaluation results, but is evident in both).
Interpreting Your TEDS Results – in Context
Without a measure of the variation attributable to each of these factors, it’s hard for an individual teacher or unit
convenor to “place” their TEDS results in the context of their own teaching environment. However, help is at hand!
Now that we have been running the revised TEDS surveys for several semesters, we have sufficient data to provide descriptive statistics for groups of evaluations within the same context, refined at least to Faculty by Unit Level in
most Faculties. These statistics, based on the distribution of mean (average) scores rather than individual scores within each Faculty/Unit Level category, will enable you to see where your results sit in relation to others who teach in the same context.
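To make the comparison concrete, the placement described above can be sketched in a few lines of Python. All figures here are hypothetical (invented for illustration, not drawn from any TEDS summary table): the idea is simply that your unit's mean score is compared against the distribution of mean scores for other units in the same Faculty/Unit Level category.

```python
import statistics

# Hypothetical mean scores for units in one Faculty/Unit Level category
# (e.g. 100-level units in a single Faculty). Illustrative values only.
unit_means = [3.9, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7]

my_mean = 4.4  # hypothetical mean score for your own unit

# Descriptive statistics of the distribution of unit means
category_mean = statistics.mean(unit_means)
category_sd = statistics.stdev(unit_means)

# Placement: the share of units whose mean is at or below yours
placement = sum(m <= my_mean for m in unit_means) / len(unit_means)

print(f"category mean: {category_mean:.2f}, sd: {category_sd:.2f}")
print(f"your unit's mean is at or above {placement:.0%} of units")
```

Note that this compares means of unit evaluations, not individual student responses, which is why the spread is much narrower than you would see in raw survey data.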
Where to Access
Guidance for interpreting your results in relation to the data summaries, and the summary tables themselves, are available NOW for LEU surveys only at http://staff.mq.edu.au/teaching/evaluation/surveys/compare_leu/.
The TEDS team are working on the LET tables and will inform all staff when these are ready to be accessed.