Talk given at the Computers and Learning Research Group (CALRG) annual conference, 12 June 2013, at The Open University, UK.
This presentation briefly reviews learning analytics, using some key examples. It then assesses what the OU is doing, and then sets out some ideas for what the OU could do in future to harness the potential of data about our learners to improve their learning.
Learning Analytics: What it is, where we are, and where we could go
1. Learning Analytics:
What it is, where we are …
Doug Clow
CALRG Conference, 12 June 2013
This work by Doug Clow is licensed under a Creative Commons Attribution 3.0 Unported License.
… and where we could go.
3. Quick Quiz:
Learning Analytics
a) This is the first time I’ve heard of it
b) I know it’s one of the latest buzzwords
c) I’ve heard of some projects
d) I’m doing some projects
e) I have to keep turning down keynote invitations on this subject
cc licensed ( BY ) flickr photo by Swaminathan: http://flickr.com/photos/araswami/2168316216/
5. What is learning analytics?
• "the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs"
– First International Conference on Learning Analytics and Knowledge (LAK11), Banff, Alberta, Feb 27 to Mar 1, 2011.
cc licensed ( BY ) flickr photo by Cris: http://flickr.com/photos/chrismatos/6917786197/
6. • An abundance of data
• Qualitative and quantitative
• To improve learning
7. • Learning analytics
• Learner analytics
• Academic analytics
• Business Intelligence
• Educational Data Mining
(cc) gareth1953 http://www.flickr.com/photos/gareth1953/5477477947/
• Web analytics
• Social network analysis
• Predictive modeling
• Latent semantic analysis
• … and more!
8. • Predictive modeling and data mining (Blackboard)
• Fits students into one of three risk groups
=> traffic light
• Trigger for intervention emails
• Consistent grade performance improvement
• Dramatic retention improvements
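The traffic-light idea above can be sketched in a few lines. This is an illustrative toy, not Blackboard's actual algorithm: the signals, weights and thresholds below are all invented, chosen only to mirror the slide's logic (falling usage and missed assignments push a student towards the red group, which triggers an intervention email).

```python
# Hypothetical traffic-light risk model: combine simple engagement
# signals into a score, then bin students into red/amber/green groups.

def risk_score(logins_last_week, assignments_missed, avg_grade):
    """Higher score = higher assumed risk of non-completion."""
    score = 0
    if logins_last_week < 2:
        score += 2                    # falling VLE usage is a strong signal
    score += assignments_missed * 3   # a missed assignment is highly predictive
    if avg_grade < 50:
        score += 1                    # low marks add some risk
    return score

def traffic_light(score):
    """Map a risk score onto an intervention group."""
    if score >= 5:
        return "red"      # trigger an intervention email now
    if score >= 2:
        return "amber"    # watch list
    return "green"

for student, signals in {
    "A": (7, 0, 72),      # active, submitting, good marks
    "B": (1, 0, 55),      # usage dropping
    "C": (0, 2, 40),      # disengaged and missing assignments
}.items():
    print(student, traffic_light(risk_score(*signals)))
```

A real system would fit the weights from historical completion data rather than hand-pick them; the binning-and-intervention shape stays the same.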
9. Social Network Analysis
• Social Networks Adapting Pedagogic Practice (SNAPP)
• Network visualisations of forum activity data from the VLE
• See patterns
• Spot central and disconnected students
• Identify at-risk students
• Improve teaching
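The slide's "spot central and disconnected" step can be sketched as follows. This is not SNAPP's code; the forum data and student names are invented. It builds a who-replied-to-whom network and uses degree (number of distinct contacts) as the simplest centrality measure.

```python
# Toy forum network: find the most central student and any student
# with no forum connections at all (a candidate at-risk indicator).

from collections import defaultdict

replies = [  # (author, replied_to) pairs harvested from forum threads
    ("amy", "ben"), ("ben", "amy"), ("cat", "amy"),
    ("dan", "ben"), ("amy", "dan"),
]
enrolled = {"amy", "ben", "cat", "dan", "eve"}

contacts = defaultdict(set)
for author, target in replies:       # treat the network as undirected
    contacts[author].add(target)
    contacts[target].add(author)

degree = {s: len(contacts[s]) for s in enrolled}
central = max(degree, key=degree.get)
isolated = [s for s in enrolled if degree[s] == 0]

print("most central:", central)      # candidate broker/helper
print("disconnected:", isolated)     # candidates for a check-in
```

Visualisation tools layer a network diagram on top of exactly this kind of structure, so patterns that are invisible in a flat forum listing become obvious at a glance.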
26. Student Support Tool
• The module team specifies the most important activities in a module
– What activities are important, and when
• Tool shows
– Which students have done the important activities
– General usage indicators, with comparison
cc licensed ( BY ) flickr photo by Mike Baird: http://flickr.com/photos/mikebaird/398077070/
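The core check behind such a support tool might look like this. The data model is an assumption for illustration (activity names, weeks and the `activity_log` structure are invented): the team lists key activities with the week by which each matters, and the tool reports what each student has not yet done.

```python
# Hypothetical data model for a student support tool: which key
# activities, now due, has each student not completed?

key_activities = {            # activity -> week by which it matters
    "intro-quiz": 2,
    "tma01-submit": 6,
    "forum-post-1": 3,
}

activity_log = {              # student -> set of completed activities
    "amy": {"intro-quiz", "forum-post-1", "tma01-submit"},
    "ben": {"intro-quiz"},
    "cat": set(),
}

def missing_activities(student, current_week):
    """Key activities now due that the student has not completed."""
    done = activity_log.get(student, set())
    return sorted(a for a, week in key_activities.items()
                  if week <= current_week and a not in done)

for s in sorted(activity_log):
    print(s, missing_activities(s, current_week=6))
```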
28. A/B Testing
• Find out what works
–At a smaller, more fine-grained level
–In a matter of weeks, not years
• Settle arguments about how to teach
–With data from actual students!
cc licensed ( BY ) flickr photo by LASZLO ILYES: http://flickr.com/photos/laszlo-photo/4093575863/
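One standard way to do the "find out what works" sums is a two-proportion z-test; the student numbers below are invented. It asks whether version B's completion rate really beat version A's, or whether a gap that size is plausibly chance.

```python
# Two-proportion z-test: compare completion rates in two arms of an
# A/B test (illustrative numbers, not real OU data).

from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)       # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))    # standard error
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# 500 students per arm: 300 completed on version A, 345 on version B
z, p = two_proportion_z(300, 500, 345, 500)
print(f"z = {z:.2f}, p = {p:.4f}")   # small p -> unlikely to be chance
```

With a few hundred students per arm, differences of a few percentage points become detectable within a single presentation of a module, which is what makes the "weeks, not years" claim plausible.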
29. Bad reasons for not testing, and why we should test anyway
• "We've never done it before." → Others are doing it. And we have done it, just small-scale and unsystematically.
• "We know what's right already." → No. We think we know what's right.
• "We might find out we were wrong." → Better to find out sooner and improve faster.
• "It'll deskill academics." → It'll help academics do even better.
• "It's unethical." → It's unethical not to. Ask doctors.
• "It'll cost too much." → It's costing us too much already.
• "We don't know how." → There is a way to change our capacity to act: learning.
30. Good reasons for not testing, and what we should do about it
• "Students might not want to be experimented on." → Explain the benefits. Only test on people giving informed consent.
• "It'll make things more complicated." → Design tests carefully to minimise disruption.
• "We don't agree about what's a good outcome: retention, pass rate, progression, attainment, improvement as a human being, ..." → Track multiple outcomes. Work towards consensus; if we can't agree what counts as good, we're in trouble.
• "You can 'game' outcomes, e.g. a too-easy exam gets 'good' results." → Don't do that. Maintain quality procedures.
• "What if we start testing and one version is much, much better?" → If the evidence is strong enough, stop the trial early. (There are sums for this.)
• "We don't have the systems, or the culture, to do systematic testing." → We should build them.
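The "there are sums for this" remark about stopping early can be illustrated with one of the simplest standard rules, Haybittle-Peto: at interim looks you only stop if the evidence is overwhelming (|z| > 3), while the final analysis keeps the usual 1.96 threshold, so repeated peeking barely inflates the false-positive rate. The thresholds are the textbook ones; everything else here is a sketch.

```python
# Minimal sketch of a Haybittle-Peto sequential stopping rule.

def interim_decision(z, is_final_look):
    """Decide whether to stop a trial given the current z-statistic."""
    boundary = 1.96 if is_final_look else 3.0   # stricter at interim looks
    if abs(z) > boundary:
        return "stop: difference is real enough to act on"
    return "continue the trial"

print(interim_decision(2.4, is_final_look=False))  # promising, keep going
print(interim_decision(3.3, is_final_look=False))  # overwhelming: stop early
```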
35. Learning analytics at the OU:
•We’re doing some interesting things
•We’re going to do more
•We could and should do a lot, lot more
Doug Clow
@dougclow
dougclow.org
doug.clow@open.ac.uk
cc licensed ( BY ) flickr photo by Vince Alongi: http://flickr.com/photos/vincealongi/2537227873/
Systematic innovation & improvement
… through evidence-based practice
Editor's Notes
Data mining, academic analytics, learner analytics – the focus here is on the learning, not the management and administration of learning. International profile.
We’ve always done this! Just more and faster – Marx quoting Hegel: a sufficiently large quantitative change is a qualitative one.
Builds on Neil Mercer, content analysis
Speed and length of cycles: instant feedback as you learn, through to govt policy
Surveillance. Openness and transparency key.
Raiders of the Lost Ark – the ark is someplace safe, being studied by top people – in a TOP SECRET crate just like millions of others in a warehouse
Predict students at risk of non-completion. OU data, mainly VLE clickstream. The clickstream is predictive: a drop in your usage matters (the absolute level is not so good a signal). Not submitting a TMA is predictive; a low mark too. Easier to predict non-completion earlier than later (not submitting the first TMA is a worse sign than not submitting the last).
Want faster cycles, larger scale
Speed, scale, quality of response Get it to the learners and teachers