Respondus Monitor: Understanding Proctoring Results

About Respondus Monitor

Respondus Monitor is an add-on feature for LockDown Browser, designed for non-proctored online exams.

Respondus Monitor uses a webcam to record student exam sessions, acting as a deterrent to cheating.

It flags suspicious behavior and uses advanced data analysis to determine which exam sessions require the greatest level of attention.

Class Results

Once your students have completed the exam, view the class results by selecting "Class Results" for that exam from the LockDown Browser Dashboard. The class roster with summary data is shown here:

Respondus Monitor Class Results View

Review Priority is a comprehensive measure that conveys whether a student's exam session warrants a closer look by the instructor. Results appear in Low, Medium, and High categories, with a green-to-red bar graph indicating the risk level.

Use [+] to expand the details for a student:

  1) Summary of key data
  2) List of Flags and Milestones (see explanation below)
  3) Video playback and controls
  4) Timeline with flags (red) and milestones (blue)
  5) Thumbnail images from the video

Frequently Asked Questions

How is Review Priority Determined?

The Review Priority value is derived from three sources of data:

  • the webcam video of the test taker
  • the computing device & network used for the assessment
  • the student's interaction with the assessment itself

The webcam video is analyzed using facial detection technology, which is how flagged events like "Missing from Frame" and "Different person in Frame" are generated. Facial detection is an especially important part of this analysis.

Data from the computing device and network will generate events such as video interruptions, auto-restarts of a webcam session, mouse/trackpad/keyboard/touch usage, attempts to switch applications, and so forth.

Data is also obtained from the student's interaction with the assessment, such as when the exam session starts and ends, when answers are saved, if the student exited the exam early, and so forth.

Using a patent-pending process, the data is then analyzed at two levels. It is first compared to baseline data for all videos analyzed by the Respondus Monitor system. It is then compared to data from other test takers of the same examination. Finally, weights and other adjustments are made to the data, from which the Review Priority value is generated.
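
The exact computation is proprietary, but the two-level comparison can be illustrated with a toy sketch. Everything in the example below (the feature names, the weight values, and the Low/Medium/High thresholds) is invented for illustration and is not the actual Respondus Monitor algorithm:

```python
# Illustrative sketch only: the real Review Priority process is
# patent-pending and proprietary. This toy version just mirrors the
# two-level comparison described above (global baseline, then same-exam
# cohort), followed by a weighted combination.

from statistics import mean, stdev

def z_score(value, population):
    """How far a value sits from a population's mean, in standard deviations."""
    sd = stdev(population)
    return 0.0 if sd == 0 else (value - mean(population)) / sd

def review_priority(session, global_baseline, exam_cohort, weights):
    """Combine per-feature deviations into a single priority score."""
    score = 0.0
    for feature, weight in weights.items():
        # Level 1: compare against baseline data for all analyzed sessions.
        global_dev = z_score(session[feature], global_baseline[feature])
        # Level 2: compare against other test takers of the same exam.
        cohort_dev = z_score(session[feature], exam_cohort[feature])
        score += weight * (global_dev + cohort_dev)
    return score

# Hypothetical features: seconds missing from frame, interruption count.
weights = {"missing_seconds": 0.7, "interruptions": 0.3}
global_baseline = {"missing_seconds": [0, 5, 10, 2, 8], "interruptions": [0, 1, 0, 2, 1]}
exam_cohort = {"missing_seconds": [3, 6, 1, 9], "interruptions": [0, 1, 1, 0]}
session = {"missing_seconds": 42, "interruptions": 3}

score = review_priority(session, global_baseline, exam_cohort, weights)
label = "High" if score > 4 else "Medium" if score > 2 else "Low"
print(f"priority score {score:.1f} -> {label}")
```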

What are Flags and Milestones?

Respondus Monitor generates a list of events from the exam session. "Flags" are events where a problem might exist, whereas "milestones" are general occurrences such as when the exam started or when a question was answered. (A sketch of how this event list might be modeled follows the two lists below.)

Flagged Events*

  • Missing from Frame — the student could not be detected in the video frame for a period of time
  • Different person in Frame — a person other than the one who started the exam may have been detected in the video frame for a period of time
  • Multiple persons in Frame — multiple faces were detected in the video for a period of time
  • An Internet interruption occurred — the video was interrupted as a result of an internet failure
  • Video frame rate lowered due to quality of internet connection — if a poor upload speed is detected, the frame rate of the webcam video is automatically lowered
  • Student exited LockDown Browser early — the student used a manual process to terminate the exam session early; the reason provided by the student is shown
  • Low Facial Detection — facial detection could not be achieved for a significant portion of the exam
  • A webcam was disconnected — the web camera was disconnected from the computing device during the exam
  • A webcam was connected — a web camera was connected to the computing device during the exam
  • An attempt was made to switch to another screen or application — indicates an application-switching swipe or keystroke combination was attempted
  • Video session terminated early — indicates the video session terminated unexpectedly, and that it didn't automatically reconnect before the exam was completed by the student
  • Failed Facial Detection Check — facial detection could not be achieved during the Facial Detection Check portion of the startup sequence
  • Student turned off facial detection alerts — the student selected "Don't show this alert again" when the facial detection alert appeared during the assessment. The student did not receive alerts after this.

Milestone Events*

  • Question X Answered — an answer to the question was entered (or changed) by the student
  • Pre-Exam — the webcam recording that occurs between the environment check and the start of the exam
  • Exam Start — the start of the exam
  • End of Exam — the exam was submitted

* New flags and milestones are added periodically; this list isn't comprehensive.
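
Respondus doesn't publish a schema for these events, but the list can be pictured as a simple timeline of timestamped records. The structure, field names, and sample values below are assumptions for illustration only:

```python
# A minimal sketch of how an exam session's event list might be modeled.
# Respondus does not publish an event schema; this is an assumption.

from dataclasses import dataclass

@dataclass
class Event:
    seconds: int   # offset from the start of the recording
    kind: str      # "flag" (possible problem) or "milestone" (normal occurrence)
    label: str

session_events = [
    Event(0,   "milestone", "Pre-Exam"),
    Event(35,  "milestone", "Exam Start"),
    Event(120, "milestone", "Question 1 Answered"),
    Event(300, "flag",      "Missing from Frame"),
    Event(540, "milestone", "End of Exam"),
]

# Flags render red on the timeline and milestones blue, as in the results view.
for e in session_events:
    color = "red" if e.kind == "flag" else "blue"
    print(f"{e.seconds:>4}s  [{color:4}] {e.label}")
```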

Important Tips

1) Flags aren't cheating. Flagged events and the Review Priority value don't determine whether a student has cheated. Rather, they are tools to help identify suspicious activities, anomalies, or situations where the data is of too low quality to analyze.

2) Facial detection is important. Several flagging events rely heavily on facial detection technology. If the face cannot be detected in the video, it isn't possible to determine whether the test taker is "missing" or "different". If a student's face is turned away from the webcam or heavily cropped in the video (e.g. you can only see the student's eyes and forehead), facial detection rates will drop. Other things that lower facial detection rates include baseball caps, backlighting, very low lighting, hands on the face, and certain eyeglasses.

3) There are more "false positives" than "true positives." Flags that rely on facial detection technology are often incorrect (known as a false positive). If a student is flagged as "missing" but he/she is still visible in the frame, this would be considered a false positive. A "true positive" is a suspicious behavior that is correctly identified by the flagging system. Our goal is to reduce the false positive flags as much as possible, without missing the "true positive" events. It's not a perfect science — yet.
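
To make these terms concrete, here is a small worked example with hypothetical numbers:

```python
# Hypothetical numbers: an instructor reviews 10 sessions flagged
# "Missing from Frame" and confirms that only 3 students actually
# left the frame. The remaining flags are false positives.
flagged = 10
true_positives = 3
false_positives = flagged - true_positives
precision = true_positives / flagged
print(f"{false_positives} false positives out of {flagged} flags "
      f"(precision = {precision:.0%})")
```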

4) Garbage in, garbage out. You can achieve an immediate improvement in the automated flags that rely on facial detection by having students produce better videos. Provide these simple guidelines to help students create higher-quality videos so the flagging system works better.

  • Avoid wearing baseball caps or hats that extend beyond the forehead.
  • If using a notebook computer, place it on a firm surface like a desk or table, not your lap.
  • If the webcam is built into the screen, avoid making screen adjustments after the exam starts. A common mistake is to push the screen back, resulting in only the top portion of the face being recorded.
  • Don't lie down on a couch or bed while taking an exam. There is a greater chance you'll move out of the video frame or change your relative position to the webcam.
  • Don't take an exam in a dark room. If the details of your face don't show clearly during the webcam check, the automated video analysis is more likely to flag you as missing.
  • Avoid backlighting situations, such as sitting with your back to a window. The general rule is to have light in front of your face, not behind your head.
  • Select a distraction-free environment for the exam. Televisions and other people in the room can draw your attention away from the screen. Other people who come into view of the webcam may also trigger flags by the automated system.

5) Continual improvements. Respondus Monitor is the most advanced system for automated exam proctoring. The goal is to provide "meaningful results," not simply a list of flagged events that require instructors to analyze everything themselves. Respondus Monitor is continually being enhanced, so instructors can focus on teaching, not analyzing the videos of exam sessions.