Last week I provided a summary of learning analytics and presented a paper looking at program-based metrics from the Learning Analytics and Knowledge Conference in Edinburgh (LAK16). This week I present a few more interesting papers, and another observation…
Pedagogical learning analytics
Bos and Brand-Gruwel, in Student differences in regulation strategies and their use of learning resources: implications for learning design, echoed the sentiments of others: learning analytics must move beyond identifying at-risk students, towards improving the learning of all students. The authors divided a blended-learning class of 333 first-year psychology students into three categories: self-regulation, external regulation and no regulation. (Externally regulated students rely on external sources, such as lecturers, to regulate their learning.) The researchers wanted to know whether students in these three categories used online resources differently and whether they achieved different results. The categorisation of students was based on a survey instrument completed by the students.
Interestingly, they found that students in the three categories did not use online resources any differently, but their performance did differ. More surprisingly, self-regulated students did not do as well as externally regulated students. The authors suggested that this may relate to the course design and pointed to the expertise reversal effect, whereby techniques designed to support less experienced learners lose their effectiveness with more experienced learners. Consequently, the authors argue that class averages and simple click-stream data from a Learning Management System may not be sufficient to determine the impact of a course design on the learning of different students, and that more nuanced indicators may need to be developed to help teachers understand their students' learning processes.
Financial implications of an early alert system
Harrison et al., in Measuring financial implications of an early alert system, attempted to quantify the financial benefit accruing to the University of New England (UNE) from student retention as a consequence of the early alert system (EAS) it implemented in 2011. The study discussed previous research attempting to show a link between an EAS and student retention, but did not itself evaluate the impact of UNE's EAS. Instead, it assumed that the EAS supports student retention and calculated the financial implications of retaining students. The authors found that the cost (in foregone tuition fees) of not retaining a student was approximately $4,687 AUD, rising to $7,170 AUD when the comparison is with a graduating student. Significantly, they argue that this $4,687 per student provides a ceiling on the funds an institution could justifiably spend per student on an EAS.
UNE found that increasing undergraduate student retention by 1% translated into an additional $633,000 AUD in revenue in 2013. They also found that students identified by the EAS ended up paying more tuition fees, as they remained enrolled for longer and undertook more units. The authors argue that this provides an overall estimate of the value of student support services. Interestingly, the retention benefits accruing to UNE differed by faculty, suggesting that in some faculties a case could be made for additional support services. The authors concluded that significant financial benefits accrue to institutions that attempt to improve student retention, and that this study can help managers make decisions about their EAS and support services based on sound financial data.
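The arithmetic behind this argument is simple enough to sketch. Below is a minimal Python illustration of the logic as I read it, not the authors' actual model: only the approximate $4,687 foregone-fee figure comes from the paper, while the cohort size and retention gain are hypothetical numbers chosen for illustration.

```python
# Back-of-envelope sketch of the EAS financial argument.
# Only the foregone-fee figure (~$4,687 AUD) comes from Harrison et al.;
# the cohort size and retention gain used below are hypothetical.

FOREGONE_FEES_PER_STUDENT = 4687  # AUD, approx. figure reported in the paper


def retention_benefit(cohort_size, retention_gain,
                      fees_per_student=FOREGONE_FEES_PER_STUDENT):
    """Extra revenue from retaining an additional fraction of a cohort."""
    students_retained = cohort_size * retention_gain
    return students_retained * fees_per_student


def eas_budget_ceiling(students_flagged,
                       fees_per_student=FOREGONE_FEES_PER_STUDENT):
    """Upper bound on EAS spending: an EAS that costs more per retained
    student than the fees it recovers cannot pay for itself."""
    return students_flagged * fees_per_student


# Hypothetical example: a 13,500-student cohort, 1% retention gain.
print(retention_benefit(13500, 0.01))  # 632745.0 (AUD)
```

The point of the ceiling function is the paper's managerial takeaway: per-student EAS spending above the foregone-fee figure cannot be justified on fee revenue alone.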
Another observation from the conference was that there was some confusion between correlation and causation. People were leaping from seeing a pattern in the data to concluding that there was a causal relationship: for example, seeing a relationship between variables X and Y and erroneously concluding that X caused Y (or vice versa) without any further analysis. A number of people pointed this out in the Twitter backchannel during the presentations. One of the keynote presenters, Professor Paul A. Kirschner, shared a website, Spurious correlations, that makes this point elegantly and comically. Essentially, you can find correlations between almost any variables if you look hard enough, for example between drownings from fishing-boat accidents and visitors to Tokyo Disneyland.
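The mechanism behind such spurious correlations is easy to demonstrate: any two series that happen to trend in the same direction over time will produce a high Pearson correlation, causal link or not. Here is a small self-contained sketch; the two series are invented purely for illustration.

```python
# Two invented, causally unrelated yearly series that both happen to
# drift upward: a high Pearson r falls out of the shared trend alone.

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)


# Invented data: no causal connection between A and B,
# but both trend upward across the same eight "years".
series_a = [12, 14, 13, 17, 18, 21, 22, 25]
series_b = [300, 320, 340, 330, 360, 380, 400, 410]

r = pearson(series_a, series_b)
print(round(r, 2))  # strongly correlated despite no causal link
```

A correlation like this says nothing about causation; it merely reflects that both series move with time, which is exactly the trap several presenters fell into.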
If you are interested in reading more on learning analytics you can access the full conference proceedings at the ACM Digital library.