We need to talk about the validation step. It’s incredibly valuable to the solutioning process, yet it’s one of the most frequently skipped steps in learning design.
As a learning strategist, I see it all the time—despite the best of intentions, tight budgets and even tighter timelines squeeze the validation step out of the solutioning process. When validation is undervalued, it chips away at the end result. Small oversights that could’ve been adjusted during validation add up over time and can negatively impact anything from learner performance to L&D’s budget. It’s a small aspect of the learning design process, but it can have a big impact.
Here’s how I think about the validation step, why it matters, and a few best practices to consider when validating learning solutions.
What is the validation step and why is it important?
In learning design, validation is the step between wrapping up production and delivering the solution to the learner audience. As the name implies, this step is all about validating that the solution works for our intended audience and that it actually solves our initial problem statement. It could also be called the pilot or test phase.
No matter what you call it, the idea is to test and learn before the solution is out in the world, when it becomes much more difficult, costly, and time-consuming to make adjustments. If your learning solution misses the mark after delivery, those are significant amounts of time, money, and resources you can’t get back—and because the learning problem still isn’t solved, you’re likely going to have to invest even more.
At Maestro, we follow a Double Diamond framework for learning design that separates the problem-finding process from problem solving. In the problem-finding stage, we focus on learner research and discovering as much information as possible about the audience and the problems they face. A lot goes into the discovery phase, and by the end of it, we have a clear, well-defined understanding of the learner, the problem at hand, and the change we want to create.
We use these findings to inform the problem-solving stage, which involves developing and testing a solution (that’s the validation step!) and then delivering it and evaluating effectiveness. Despite all of the research we do to understand the learner and develop the best possible solution, we still believe in piloting what we’ve created to ensure the experience is fine-tuned to perfection and will resonate with learners to achieve the intended outcomes.
Think of it this way: once your learning solution is out there, you can’t take it back. Take the time to test it out with a small group of learners instead of putting the perceptions of your entire learner audience at stake.
What you learn during validation
If you’ve done your homework through learner research, think of validation as going back to check your work. There likely won’t be any huge surprises during validation. Piloting will help uncover the small details you might not expect, allowing you to tweak and make adjustments before delivery. Validation ensures the learning solution—and the time and money spent on it—is truly going to result in the behavior change you designed it for.
What might you learn during validation? Here’s an example. Let’s say you’re developing a learning experience for an audience of surgeons and nurses. During validation, you might pilot the course with a sample group of four to six learners from the target audience. Several of them tell you that the imagery you’ve used during scenarios isn’t realistic—for example, why aren’t the nurses wearing gloves? It’s a small detail that could be easily overlooked by an SME who is more focused on the core content. But to learners, it’s distracting and erodes credibility. Taking their feedback into account, you update the imagery, continue on to have a great launch, and achieve strong results that help you prove value to stakeholders.
Learning validation best practices
Validation is an important step, but it doesn’t have to be complicated. Whether it’s a survey, a focus group, or observing a test pilot, here are a few best practices for validating learning solutions.
Pinpoint your target audience
Make sure those participating in validation truly represent your target audience. Watch out for bias in your selection pool—top performers are often the first to raise their hands as volunteers, but they may not represent the entire learner audience.
This process is meant to uncover small details that those outside of the learner profile may overlook. For example, does the language used ring true? Does the structure of the program fit with their day-to-day lives? These insights should come from a representative sample of your exact audience—not a proxy, like a manager or L&D leader.
Keep it small
Less is more. You don’t need a large sample size to get valuable insights and feedback on the experience. A focus group of five to ten people, depending on the nature of the content, roles, and what you want to learn, is enough to get representative feedback without too much variance.
You don’t need to survey a large group of learners during validation—what’s more important is to ensure you’re reaching representative groups for all of your target audiences. For example, if your solution is distributed to different geographical locations, validate with a small sample in each market.
Plan ahead and act on what you learn
A little planning goes a long way. Before piloting, think about what you want to learn. Plan your questions in advance to avoid bias or leading questions. It’s also important to consider your learning objectives and KPIs—you’ll want to compare the insights that come out of validation against your objectives so you can optimize accordingly.
After validation, compile your notes and extract key themes. What comes up again and again? Evaluate those themes and decide what needs to change. Insights are just insights until you turn them into action.
Validation leads to more effective learning experiences
Bottom line: don’t skip the validation step! Taking the extra time to test and get feedback on your learning solution will save you time and money in the long run, all while ensuring you’re delivering a solution that will drive real behavior change for learners.