Online Evaluation of Student-Submitted Responses

The diversity and complexity of long-answer questions, scientific equations, and hand-drawn diagrams have traditionally posed challenges to transitioning examinations online. The ability to evaluate these varied forms of student responses is essential for a comprehensive understanding of their grasp of the subject matter. DigiProctor's question creation wizard enables the crafting of a broad spectrum of question types, including Multiple-Choice Questions (MCQs), Multiple-Response Questions (MRQs) with partial marking, True/False, Subjective, Case Studies, and Grouped Comprehension.

Students can take advantage of the DigiProctor (Answer) Upload App to submit lengthy written answers and detailed diagrams. This is particularly beneficial for those who find typing extensive responses within tight time limits challenging. The app allows for efficient uploading of these comprehensive answers, which evaluators can subsequently access and assess online.

Online Evaluation and Result Analytics

Responses to objective questions are automatically evaluated. However, if a question paper includes subjective questions, these require manual marking.

For subjective questions, the system awaits the completion of the evaluation before generating ranked results. Subjective evaluation can be conducted with aggregate marking or detailed step-wise/paragraph-wise marking. All hand-drawn or handwritten responses submitted by candidates are readily accessible to evaluators with a single click.

DigiProctor offers institutions a user-friendly interface for assessing subjective questions, streamlining the evaluation process and enhancing analytical capabilities.

AI Ensuring Test Integrity Image
Refined Online Evaluation of Subjective Questions

Subjective responses are rigorously assessed by internal or external examiners. The Controller of Examinations (COE) appoints these evaluators and reviewers, assigning each a specific set of students whose submissions they will assess. Examiners can scrutinise responses and record their marks and observations online, working independently from any location.

Reviewers, or the Chief Examiner, possess the authority to scrutinise and reassess the marks awarded by examiners, ensuring uniformity and equity throughout the evaluation process.

DigiProctor provides a comprehensive result analysis framework that precisely evaluates the effectiveness of tests and analyses students’ learning outcomes, utilising Bloom’s Taxonomy to facilitate a structured understanding of the cognitive levels attained.

Advanced Online Evaluation of Subjective Questions

Subjective responses uploaded by students undergo a sophisticated online evaluation process. This includes in-depth analytics of both class-wide performance and individual achievements, highlighting proficiency in each designated topic and learning outcomes aligned with Bloom's Taxonomy. A comparative analysis using z-scores or T-scores offers insights into each student's relative performance.
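For readers unfamiliar with the two metrics, the comparison above rests on standard formulas: a z-score expresses a mark as a distance from the class mean in standard deviations, and a T-score rescales that to a mean of 50 and a standard deviation of 10. The sketch below illustrates those formulas only; the function names and sample marks are assumptions, not DigiProctor's implementation.

```python
def z_scores(marks):
    """Standardise marks: z = (x - mean) / standard deviation."""
    n = len(marks)
    mean = sum(marks) / n
    sd = (sum((m - mean) ** 2 for m in marks) / n) ** 0.5  # population SD
    return [(m - mean) / sd for m in marks]

def t_scores(marks):
    """T-score rescales z to a mean of 50 and an SD of 10."""
    return [50 + 10 * z for z in z_scores(marks)]

# Example: the student who scored 90 sits well above the class mean.
print([round(t, 1) for t in t_scores([62, 75, 48, 90, 75])])
# → [44.3, 53.5, 34.4, 64.2, 53.5]
```

A T-score above 60 here marks a student more than one standard deviation above the class mean, which is why the 90 stands out at 64.2.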

The platform extends its analytical capabilities to the tests themselves, assessing test reliability through KR-20 metrics and evaluating individual questions for guessability, difficulty, and anomalies. This enables educators to compare current tests with previous ones, refining the efficacy of future assessments. Students are also empowered to participate in practical or viva-voce segments remotely for vocational assessments, further broadening the scope of DigiProctor's capabilities.
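As a reference point for the KR-20 metric mentioned above, the standard Kuder-Richardson Formula 20 for dichotomously scored items can be sketched as follows. This is an illustration of the textbook formula with invented data, not DigiProctor's internal code.

```python
def kr20(responses):
    """Kuder-Richardson 20 reliability for dichotomous (0/1) item scores.

    responses: one list per student, one 0/1 entry per question.
    Values closer to 1.0 indicate a more internally consistent test.
    """
    n = len(responses)        # number of students
    k = len(responses[0])     # number of items
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n    # proportion correct on item i
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var)

# Four students answering three questions (1 = correct, 0 = incorrect)
print(kr20([[1, 1, 1], [1, 0, 1], [0, 0, 1], [0, 0, 0]]))  # → 0.75
```

KR-20 applies only to items scored right/wrong, which is why it suits objective questions; reliability of subjectively marked items is usually measured differently.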

AI Ensuring Test Integrity Image

Comprehensive Result Analysis

DigiProctor delivers meticulous result analysis, providing detailed insights into test papers, class performance, and individual student metrics. Every test culminates in a ranked list of students based on their test marks and Trust Scores, the latter serving as a measure of each test session's credibility. This sophisticated system allows for a granular evaluation of test integrity and helps educators make informed decisions to enhance educational outcomes.
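One plausible way to combine the two signals is to rank by marks and break ties with the Trust Score. The source does not specify how DigiProctor combines them, so the rule, field names, and data below are illustrative assumptions.

```python
def rank_students(results):
    """Rank by marks (descending); break ties with Trust Score.

    results: list of (name, marks, trust_score) tuples.
    NOTE: the tie-breaking rule is an assumed example, not DigiProctor's
    documented behaviour.
    """
    return sorted(results, key=lambda r: (r[1], r[2]), reverse=True)

ranked = rank_students([("A", 80, 0.9), ("B", 80, 0.7), ("C", 95, 0.8)])
print([name for name, _, _ in ranked])  # → ['C', 'A', 'B']
```

Here students A and B share the same mark, so A's higher Trust Score places them ahead.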