Why We Do Our Own Machine Learning
Computer vision is the engine of automated proctoring. It’s what analyzes webcam videos and screen recordings to help instructors determine if an exam violation has occurred. For example, if students leave their computer during an online assessment, the computer vision system will “flag” those portions of the video for the instructor.
Respondus Monitor’s computer vision system differs from those of other proctoring companies in that it doesn’t use off-the-shelf machine learning models trained on generic data sets. Those models might be fine for products like outdoor security cameras or access control systems, but they fall short for exam proctoring.
Respondus Monitor is continuously trained with proctoring data so it can take advantage of the unique characteristics of online testing environments: indoor lighting, webcam-quality video, low movement, face-level camera angles, and so on. There are many advantages to this approach:
1. It reduces false positives
A false positive occurs when the software thinks an event has occurred, but it hasn’t. Over the past year, machine learning has helped reduce the false positive rate for Respondus Monitor by 80%. This means that when Respondus Monitor flags something today, it is more relevant than ever.
2. It keeps computer/bandwidth requirements low
General-purpose computer vision models can detect hundreds of things out of the box. But Respondus Monitor doesn’t need to detect dogs, automobiles, or facial expressions. By focusing machine learning on factors relevant to proctoring, the models can be kept small. And the smaller the models, the better they work on low-bandwidth networks and underpowered computers (like older Chromebooks).
3. It ensures algorithm fairness
Algorithm fairness is the process of ensuring that bias is eliminated from machine learning models. Each new model for Respondus Monitor is tested across age, gender, and skin tone groupings, as well as other characteristics that can impact proctoring results (hairstyles, eyeglasses, head coverings, etc.). This type of testing is best achieved with data from online testing environments, not general-purpose data sets.
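To give a sense of what this kind of testing involves, here is a simplified sketch of comparing flag rates across groups in a labeled test set. The data format, group labels, and disparity threshold below are placeholders for illustration, not the actual evaluation criteria used for Respondus Monitor.

```python
# Simplified sketch of per-group fairness testing for a flagging model.
# The data format, group labels, and 1.2x disparity threshold are placeholders,
# not the actual evaluation criteria used for Respondus Monitor.
from collections import defaultdict

def flag_rate_by_group(results):
    """results: iterable of (group_label, was_flagged) pairs from a labeled test set."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, was_flagged in results:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {group: flagged[group] / totals[group] for group in totals}

def passes_disparity_check(rates, max_ratio=1.2):
    """A model passes if no group's flag rate exceeds the lowest group's rate by more than max_ratio."""
    lowest = min(rates.values())
    if lowest == 0:
        # If the lowest group is never flagged, require all groups to match it.
        return all(rate == 0 for rate in rates.values())
    return all(rate / lowest <= max_ratio for rate in rates.values())

# Example: two groupings with equal flag rates pass the check.
results = [("group_a", True), ("group_a", False),
           ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False)]
rates = flag_rate_by_group(results)
print(rates)                          # {'group_a': 0.5, 'group_b': 0.5}
print(passes_disparity_check(rates))  # True
```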
4. It provides the foundation for new features
By training our own models, we can zero in on issues unique to online proctoring. For example, students often slouch in their chairs after an exam has started -- or unwittingly adjust their screen so only the top portion of their face appears in the webcam video.
A recently added feature prompts students to tilt down their webcam when this problem occurs, thus reducing their likelihood of being flagged by Respondus Monitor.
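As a simplified illustration of how a check like this could work, the idea is to notice when a detected face is cut off by the bottom edge of the frame. The face detector output format and thresholds below are assumptions made for the sketch, not how Respondus Vision is implemented in production.

```python
# Simplified sketch of the "tilt your webcam down" check. Assumes a face detector
# that returns a bounding box as (x, y, width, height) in pixels, with (0, 0) at
# the top-left of the frame. The thresholds are placeholders, not how Respondus
# Vision is tuned in production.
def should_prompt_tilt_down(face_box, frame_height, edge_margin=5, min_aspect=1.1):
    """Prompt when the face appears cut off by the bottom of the frame: the box
    touches the lower edge and is unusually short for a face, suggesting only the
    top portion of the face is visible."""
    x, y, w, h = face_box
    touches_bottom = (y + h) >= frame_height - edge_margin
    too_short = (h / w) < min_aspect  # a full face is normally taller than it is wide
    return touches_bottom and too_short

# Example: in a 480px-tall frame, a wide, truncated face box at the bottom edge triggers the prompt.
print(should_prompt_tilt_down(face_box=(200, 380, 160, 100), frame_height=480))  # True
```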
Another recent feature detects if the lighting in the room is especially bad. A gentle prompt from Respondus Monitor helps students adjust their lighting, which again reduces the chance of getting flagged. It also provides a clearer proctoring video for the instructor. Enhancements like these are possible when you train your models for specific conditions.
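For the lighting check just mentioned, a minimal version could boil down to measuring how dark the webcam frames are overall. The sketch below assumes 8-bit grayscale frames and uses illustrative thresholds, not the actual tuning in Respondus Vision.

```python
# Simplified sketch of a "room is too dark" check on a single webcam frame.
# Assumes an 8-bit grayscale frame as a NumPy array (values 0-255); the thresholds
# are placeholders, not the actual tuning used in Respondus Vision.
import numpy as np

def is_poorly_lit(gray_frame, mean_threshold=50, dark_pixel_ratio=0.6):
    """Return True when average brightness is low or most pixels are near black."""
    mean_brightness = gray_frame.mean()
    dark_fraction = np.mean(gray_frame < 40)  # fraction of pixels darker than 40/255
    return mean_brightness < mean_threshold or dark_fraction > dark_pixel_ratio

# Example: a synthetic dim frame would trigger the lighting prompt.
dim_frame = np.full((480, 640), 30, dtype=np.uint8)
print(is_poorly_lit(dim_frame))  # True
```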
Our computer vision system is called Respondus Vision. It’s fast, focused, and the foundation for new features. It’s how we see the future.