Learning Analytics can sometimes be construed as big data, or even Big Brother. But it’s really about being able to tell a story about a learner, according to Dr Danny Liu.
How do you define Learning Analytics?
It’s fundamentally about using data to understand learners and optimize their learning. It’s not just about looking at one piece of data, but at how all the metrics fit together and can tell you a story about the learner. The idea is to make the data more accessible – easier for someone to look at, find information in, and act on.
So what sort of metrics could you look at?
We could look at what the learner has been doing in iLearn, for example. If a student has been accessing certain resources, how do their interactions with those resources influence their performance? Or if they have weekly quizzes and their performance is trending downward, then let’s see if that is because they’re just not engaging with the unit, or because the unit is getting more difficult and everyone is struggling. It’s about trying to make sense of what’s happening with learners.
A unit convenor might be able to look at a bunch of data about how students are going in their unit – assessment marks, online accesses, or tutorial attendance – and then pick out some troublesome patterns. Based on those patterns they could get in touch with those students and offer support to help them succeed. Behind this there is always the question: what do students define as success? Is success just getting by, getting an HD, or getting a job at the end? There are multiple levels of success, but at many of those levels, data should be able to help.
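As a minimal sketch of the kind of pattern-spotting described above – flagging students whose weekly quiz scores are declining faster than the cohort, rather than flagging a cohort-wide dip caused by a hard topic – the following toy example could apply. All names, data, and thresholds here are hypothetical, not part of any actual Macquarie system.

```python
from statistics import median

def trend(scores):
    """Least-squares slope of scores over week index (marks per week)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def flag_declining(quiz_scores, margin=2.0):
    """Flag students declining noticeably faster than the cohort median.

    quiz_scores: dict of student id -> list of weekly quiz scores.
    Comparing against the cohort median means a unit-wide decline
    (everyone struggling) does not flag everybody.
    """
    slopes = {sid: trend(s) for sid, s in quiz_scores.items()}
    cohort = median(slopes.values())
    return [sid for sid, m in slopes.items() if m < cohort - margin]

scores = {
    "s1": [80, 78, 79, 81],   # steady
    "s2": [85, 70, 60, 45],   # declining sharply
    "s3": [75, 74, 72, 73],   # roughly steady
}
print(flag_declining(scores))  # -> ['s2']
```

A real system would of course be noisier and combine many more signals; the point is only that the flag is relative to the cohort, not an absolute rule, and that a human still decides what to do with it.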
Are there limits on what this data can or should tell us?
Throughout all of this, we have to remember that students are human beings, and human beings are unpredictable. No analytics system should be fully automated. The system should really only be there to present or visualise the data in a way that people can decide, as human beings, to act upon. You don’t ever want to send students an automated warning message, for example, because there could be extenuating circumstances that you know about as a human and a computer doesn’t. It shouldn’t replace your own judgment; it’s about making things more efficient for the person who needs to understand them, not about making decisions for them.
We also have to be quite careful with learner data in particular because of privacy and ethical issues: the level or type of data you collect, who owns it, how it’s stored, where it’s processed, how it’s analysed, and importantly how, and by whom, it’s acted on. It would be very ethically dubious to collect wellbeing data, for example – how many visits students have made to the doctor or a psychologist. Gym access, library books borrowed – the question we need to ask is: where do you draw the line?
The Learning and Teaching Green Paper talks about incorporating learning analytics into program development and review. How might that work?
Currently there is very little visibility across a program. Imagine if a Program Director could view a map of student activity over the course of a session in multiple units of a program simultaneously. Not just a calendar of assignment due dates, but imagine richer data that tells you how they are interacting with various learning resources. They may be able to see peaks and troughs of activity – perhaps there’s a flurry of activity in week 10 in three concurrent units. The Program Director would then be able to drill down and see what was happening – maybe students are all doing very similar activities across units, which could be better structured at a program level. Visibility of these data could help you identify and decide what needs to be changed.
Are other universities using Learning Analytics?
The shining star of learning analytics internationally, or at least the example that seems to get the most attention, is Purdue University’s Course Signals system. A computer essentially looks at students’ online activities and their academic background, then uses an algorithm to judge whether a student is at risk, almost at risk, or not at risk. Students and staff both see that information as red, yellow and green traffic lights, and the students can self-regulate. How they do that, or what a red light means to a student, is difficult to determine. You don’t want them to think a red light means ‘I should log in twice as much this week’ if logging in doesn’t mean anything for them. And just because the traffic light says you’re yellow or green doesn’t mean you’re going to pass the unit – it only means that, based on the computer’s calculation, you weren’t flagged as at risk. So, the human element is key.
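Course Signals’ actual algorithm is proprietary; the toy sketch below only illustrates the traffic-light idea described above – combining a few weighted indicators (engagement, performance, academic background) into a single score and mapping it to red, yellow or green. Every weight, scale, and threshold here is made up for illustration.

```python
def risk_light(logins_per_week, avg_grade, prior_gpa):
    """Map a few illustrative indicators to a traffic-light risk category.

    Indicators are normalised to roughly 0 (worst) .. 1 (best),
    then combined with arbitrary illustrative weights.
    """
    engagement = min(logins_per_week / 5.0, 1.0)   # cap at 5 logins/week
    performance = avg_grade / 100.0                # grade out of 100
    background = prior_gpa / 4.0                   # GPA on a 4-point scale
    score = 0.4 * performance + 0.35 * engagement + 0.25 * background
    if score < 0.45:
        return "red"      # at risk
    elif score < 0.65:
        return "yellow"   # almost at risk
    return "green"        # not at risk

print(risk_light(logins_per_week=1, avg_grade=40, prior_gpa=2.0))  # -> red
print(risk_light(logins_per_week=5, avg_grade=85, prior_gpa=3.5))  # -> green
```

This also makes the point in the answer above concrete: a “green” is just the output of a formula over the inputs it happens to see, so the human interpretation of what the light means still matters.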
And you’re involved with a couple of current projects here at Macquarie?
Yes, I’m involved with a Teaching Development Grant that Chris Froissard has already posted about on Teche, where we’re looking at learning analytics from within iLearn using basic Moodle data.
I’m also running an Innovation and Scholarship Program grant project that looks at Learning Analytics from a more holistic standpoint. We’re asking what kinds of stories or information we can pull out of iLearn, Echo, student information systems, and the other things that students interact with. We’ll be trying to find a way to present that data in a usable manner to a whole range of people: student support officers, unit convenors, educational designers, program directors, Associate Deans, and students themselves.
Dr Danny Liu is a Lecturer, Academic Practice in Macquarie University’s Learning and Teaching Centre