What is an elearning interaction?
Meaningful elearning interactions are purpose-driven and tied directly to learning outcomes. They promote cognitive engagement by giving learners opportunities to apply knowledge, solve problems, make decisions, or practice skills.
Instructional design novices sometimes assume that any learner action, such as clicking to reveal content, constitutes meaningful interactivity. It does not.
Is click-to-reveal always bad for learning?
Not necessarily. Click-to-reveal interactions can be useful when you want to manage cognitive load, reveal information gradually, or work within limited screen space. In those cases, clicking supports the presentation of information.
However, from an instructional perspective, clicking alone does not make an interaction meaningful. An interaction adds value when it asks learners to think, not just trigger more content.
The interaction ideas below require learners to analyze, judge, predict, or diagnose. These are the types of mental actions learners perform in real work settings. Each one includes a short, real-world question to show how the idea might be used in practice.
1. Remove Non-Essential Steps
Interaction Idea: Provide a checklist for a process and ask, “Which items in the checklist are not necessary for completing [process name]?” Or reverse it and ask which items in the checklist are missing.
Example: You’re onboarding a new user to the Employee Time Tracker system.
Review the checklist below. Which items are not necessary for a first-time user to complete their initial setup?
2. Rank Effective Solutions
Interaction Idea: Present a set of solutions with varying levels of effectiveness. Learners compare and rank the solutions according to how well each one solves a specific problem.
Example: A nurse consistently skips a required documentation step. Rank the following responses from most effective to least effective for addressing the issue.
3. Select the Rationale
Interaction Idea: Provide the solution to a complex problem. Ask the learner to select the reasoning that underlies the solution.
Example: After discussing the patient’s symptoms and reviewing the medical history, Dr. Bea identified three possible conditions in the differential diagnosis. When the lab results were returned, she diagnosed the patient with a thyroid disorder. Review the patient’s symptoms, history, and lab results, and select the rationale that best supports this diagnosis.
4. Explain the Mistake
Interaction Idea: Present learners with a poor outcome and a backstory, and ask them to analyze it to uncover one or more assumptions that led to the problem.
Example: A manufacturing team introduced a new digital checklist intended to reduce safety incidents. The checklist was accurate, clearly written, and technically accessible on the shop floor. However, safety incidents did not decrease, and workers frequently skipped the checklist during busy shifts. What assumption about workers or the work environment most likely contributed to this outcome?
5. Make a Constraint-Based Decision
Interaction Idea: Learners must make a decision under realistic limitations, such as time, resources, or other constraints that reflect real-world conditions.
Example: A long-time client has requested several last-minute changes. Your manager is under pressure from executives to deliver the already over-budget project. Given the time and budget constraints, what would you do next?
6. Predict the Consequence
Interaction Idea: Learners are asked to anticipate what will happen before outcomes are revealed. Instead of receiving immediate feedback when responding to a prompt, learners must mentally simulate the effects of an action, decision, or change.
Example: A machine operator bypasses a built-in safety feature to speed up production during a busy shift. If this behavior continues, what is the most likely outcome over the next month?
7. Play Devil’s Advocate
Interaction Idea: After learners commit to a solution, they are asked to argue against their own recommendation to uncover weaknesses or unintended consequences.
Example: A customer complains that a new feature is missing from a software update and is considering canceling their subscription. After choosing the response you think is most appropriate, switch perspectives. From the customer’s point of view, what problems might they have with your approach?
8. See Through a Different Lens
Interaction Idea: Present a scenario in which the learner takes on the role of a supervisor, auditor, or inspector. The learner reviews a situation or work example and identifies violations by applying the standards and perspective of that role.
Example: You are conducting a routine safety inspection in a warehouse. Review the scene below. Which safety issues would you flag before approving the area for operation?
9. Ask the Right Questions
Interaction Idea: Present a situation and ask learners to generate the questions they would need to ask before making a decision.
Example: In your role as a mental health counselor, a client reports increased anxiety and difficulty sleeping over the past month but says “nothing significant has changed.” Before deciding on an intervention or treatment approach, what questions would you need to ask?
10. Provide Feedback
Interaction Idea: After learners analyze a scenario or make a decision, ask them to design the feedback they would give to someone else. Instead of answering questions about what should be done, learners focus on how they would explain, guide, or coach another person.
Example: You are reviewing a sales call recording where the salesperson followed the script closely but missed several cues that the client was hesitant about pricing. What feedback would you give the salesperson to help them improve future calls?
In many cases, learner-generated feedback doesn’t need to be graded. The value comes from articulating reasoning and comparing it to clear criteria or expert examples.
Conclusion
These types of elearning interactions go beyond basic recall to promote higher-order thinking. They mirror the kinds of thinking adults already do at work, which makes learning more transferable and more credible.
What elearning interactions can you add? Share them in the comments below.
Comments

I get so many insights from your emails, posts etc. I’m new in the field so I’m just trying to remember and save everything as much as I can. Thanks!
Oh, thank you for your kind words. How new are you? Did you just get a job, or are you just starting to learn ID? I can give you some recommendations from my site, because I have over 360 articles, so some of the good stuff gets buried. You can email me at cmalamed[@]gmail.com. No brackets. I just do that for bots.
Best,
Connie
Thank you for these suggestions, Connie! I always loved spot-the-mistake images, puzzles, scenarios, etc., where learners review realistic examples (an email, a screenshot, a form, etc.) and have to find what’s wrong or flag the red flags. (It’s a similar concept to #1 on your list.) It gives them practice spotting problems the way they would on the job, instead of just reading the rule and clicking Next.
I think that’s a great way to implement #1 — similar but different. Thank you.
These are great. I like to create reflection exercises, asking people to apply the new concept to a recent situation of their own, and encourage them to print or download their responses as a PDF. I’ve bookmarked this list for the future!
Thanks, Sarah. I’ve done that too, and I think reflection questions are so effective if a person uses them. I never did figure out how to get data on their use, but there is probably a way with xAPI or with vibecoding to get data out of the interaction.
Two things. 1. I wish you’d talked about the cognitive outcomes of each, and when each would therefore make sense. Granted, a non-trivial task. 2. I’m reminded of Tom Reeves’ idea of a two-stage interaction: first, choose the response you’d choose, then choose the underlying rationale. You’d have three choices, and then three rationales for each. Yes, a fair bit of work, but it would keep people from succeeding by just guessing, and it can diagnose some underlying misconceptions. Or, I suppose, you could have the wrong responses remediated with feedback, and only the right one would take the additional step (fewer responses to generate).
I think that’s a really good point, Clark. I did attempt it, but so many of the outcomes seemed similar that I felt it wasn’t helpful. A cognitive scientist could probably help me tease out the subtle and nuanced differences between some of the engagements. The two-step approach does seem superior. Thanks.