Optimising your learning with data: Are you on the right track?
by Rod Beach
So, you have recently embarked on the adventure of tracking all the learning data you can get your hands on with your learning record store (LRS) or equivalent systems. After all, more learning data means more information – and more information means better decisions, right?
After a couple of months of collecting stats, you sit down with your L&D team to discuss two unexpected findings:
01. The learning logs report shows that in the past month only 30% of learners passed the ‘How to Conduct Product Quality Assurance’ quiz in your quality assurance procedures training.
02. In your LRS, you can see that a particular segment of a training video on the same topic is paused, rewound and replayed by over half of the learners, between timestamps 2:39:00 and 2:43:00.
After some debate, this is what your L&D team concludes:
The training video cannot be up to scratch if people have to watch it several times to ‘get it’. This is supported by the fact that 70% failed the quiz.
Would you agree?
Let’s start with the ‘why?’ of learning data
In our daily practice working with large organisations across Australia and internationally, I speak with many clients who are keen to jump on the ‘big data’ bandwagon in L&D.
This is something I wholeheartedly welcome, having seen first-hand the impact that it can have on learner satisfaction and business outcomes. Our ability to measure the application of learning within the flow of work is both exciting and powerful. However, success in this field requires giving data the respect it deserves at every step, from collection through to interpretation.
I ask our clients, ‘Why do you want to track learning data at this level? What will you do with the data?’ And the broad response I receive sounds something like, ‘The more data we gather, the more we will know. We can think about what we will do with it later.’
I beg to differ.
This watering-can approach will leave you with a plethora of unstructured words, numbers and stats. Instead, I encourage people in L&D to liaise with business units to uncover, for example, the underperformance issues the training is supposed to address. What KPIs could be linked back to training-success measures? What is the current performance level, and where do we need to get to?
What will you measure?
The next question to answer is: what learning performance indicators could you link to the intended performance improvement? For example, could ‘frequency of refresher training completed during a calendar year’ be relevant? Or could ‘speed of rollout of maintenance instructions for new machinery’ make a difference to the number of days the machines stand still on your factory floor?
You can see where I am going with this. To use learning data to its full extent, we first need to think about the information we want it to provide us.
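To make that second example concrete, here is a minimal sketch, using invented numbers, of how you might check whether refresher-training frequency at each site moves in the opposite direction to machine downtime. All figures are hypothetical; a real analysis would pull them from your LRS and maintenance systems.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical per-site figures: average refresher courses completed
# per technician last year, and days of unplanned machine downtime.
refreshers_per_tech = [1.2, 0.8, 2.5, 3.1, 0.5, 1.9]
downtime_days = [14, 18, 7, 5, 21, 10]

# First-pass check: does more refresher training track with less
# downtime? (Correlation suggests a link; it does not prove causation.)
r = correlation(refreshers_per_tech, downtime_days)
print(f"Pearson r = {r:.2f}")  # a negative r would support the link
```

Even a rough check like this turns ‘we think the training helps’ into a testable claim you can take back to the business unit.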
How and where will you measure?
In modern workplaces, there are many data pools to tap into to paint a picture of what is going on. In L&D, these range from LMS completion data and training feedback sheets to xAPI statements and even Google Analytics.
Look at your tech stack and think laterally about which learning data taps you need and want to turn on, depending on your ‘why?’ and ‘what?’ explored above. With modern authoring and publishing tools, for example, you can see which cohorts have taken which learning path or completed a branching video scenario, and which haven't.
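To give a feel for what these data taps actually capture, below is a rough sketch of a single xAPI-style statement recording the video pause from our opening example. The verb and extension URIs follow the public xAPI video profile, but treat the structure as illustrative; the learner details are invented, and your player and LRS will define the exact vocabulary.

```python
import json

# A simplified xAPI statement recording that a learner paused a
# training video. The learner identity is hypothetical; verb and
# extension URIs come from the xAPI video profile.
statement = {
    "actor": {
        "mbox": "mailto:learner@example.com",
        "name": "A. Learner",
    },
    "verb": {
        "id": "https://w3id.org/xapi/video/verbs/paused",
        "display": {"en": "paused"},
    },
    "object": {
        "id": "https://example.com/courses/qa-procedures/video",
        "definition": {"name": {"en": "QA Procedures Training Video"}},
    },
    "result": {
        "extensions": {
            # Playback position in seconds: 2:39:00 into the video
            "https://w3id.org/xapi/video/extensions/time": 9540
        }
    },
}

print(json.dumps(statement, indent=2))
```

Hundreds of statements like this – one per pause, rewind or replay – are what let the team in our opening scenario spot the hotspot between 2:39:00 and 2:43:00.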
Who can help you interpret the learning data?
Large data sets need skilled and trained professionals to interpret them within the business and L&D context. Statistics are notorious for their ability to skew narratives, so it is important to work with learning data analysts and/or project and line managers to create meaningful reports that can help you make decisions.
Upskilling or augmenting L&D departments in learning analytics is, in my view, one of the most effective investments that L&D can make to future-proof their influence and to stay a relevant and integral part of an organisation’s success.
So, what really happened with the quality assurance procedures training?
In the case of the quality assurance training mentioned at the beginning of this article, it turned out that in the month the data was recorded, a temporary product development task force had been working on a special product innovation push, triggered by an increase in faulty product returns.
One of the task force's activities was for the entire team to go through the online training from the QA user/learner perspective. The project team had decided not to review the quiz content in the online learning piece, so most of them skipped it or put in minimal effort to ‘pass’.
As for why the video segment was watched more times than normal: at timestamp 2:39:00, the weak component that had caused the product fault appeared, first during initial assembly and then again during quality assurance.
Mystery solved.
This article originally appeared in Training & Development magazine, September 2021 Vol. 48 No. 3, published by the Australian Institute of Training and Development.