3rd June 2025

This text was contributed to the Roboflow blog by Abirami Vina.

Introduction

Digital transformation is standard practice in many fields, including education. There has been a significant shift towards online learning and assessments, offering advantages like increased access and flexibility for learners worldwide. However, this shift has also raised concerns about academic honesty, particularly in remote exams.

Online proctoring systems play a critical role in ensuring fair and honest evaluations of students' knowledge and skills in remote settings. Some students may be tempted to use their smartphones during these assessments, making it essential to detect and prevent this behavior.

In this guide, we'll build an application that uses computer vision to create an online proctoring system.

Let's get started!

Project Overview

Our goal is to create an online proctoring system that can detect when a person taking an online exam is using a phone. We'll use object detection to detect the phone, a webcam to capture live footage of the person taking the exam, and Roboflow Inference to deploy our system. Let's quickly go through the parts of the system.

Detecting the Phone

To detect a phone being used by a student writing an online exam, we need a trained object detection model. Roboflow Universe is a great place to look for trained models. Roboflow Universe is an exciting community-driven initiative that hosts a vast collection of over 200,000 computer vision datasets, many of which have associated trained models. For this guide, we'll be using the trained model shown below from Roboflow Universe.

Upon signing up for a Roboflow account, reloading the page shown above, and scrolling down, you'll see a section on how to deploy the API for this model, as shown below.

You can read the model ID and version number from the third and fourth lines of the sample code. In this case, the model ID is "phone-finder," and it's the fourth version of the model. This information will be useful when we put together our deployment script.

Deploying the Model

We'll be leveraging a webcam in our online proctoring system because it provides a direct window into the test-taker's environment, enabling real-time monitoring of phone usage. We have a trained model to detect phones, a webcam for capturing the visual data, and we'll use Roboflow Inference to serve the model and receive predictions.

The Roboflow Inference Server is an HTTP microservice interface designed for inference tasks. It's adaptable to various deployment scenarios using Docker and has been fine-tuned to efficiently handle requests from both edge devices and cloud-based setups in a consistent format.

Creating an Online Exam Proctor

Let's put the pieces together now!

Setting Up Roboflow Inference

Roboflow Inference provides a Python library and a Docker interface. Using pip installs the "inference" package directly into your Python environment. This is a lightweight option, well-suited for Python-focused projects.

Alternatively, Docker packages "inference" together with its environment, guaranteeing uniformity across different setups. This makes it an excellent choice for scalable deployments where consistency is paramount.

We'll look at how to set up both, and you can choose one to try.

Setting Up Docker

First, we'll need to install Docker. Docker gives us a containerized environment that ensures the Roboflow Inference Server operates consistently and independently, regardless of the underlying host system. You can refer to the official Docker installation guide.

Once Docker is successfully installed, you are all set to download Roboflow Inference. The specific command to execute depends on the type of machine you are using.

If you are using an x86 CPU, pull the official Roboflow Inference Server Docker image using the following command:

docker pull roboflow/roboflow-inference-server-cpu

Then, run the Docker image using the following command:

docker run --net=host roboflow/roboflow-inference-server-cpu:latest

The Docker image will start to run, as shown below.

Check out the documentation for more options.

Installing Roboflow Inference with Pip

To install Roboflow Inference on a CPU machine, run:

pip install inference

To install Roboflow Inference on a GPU machine, run:

pip install inference-gpu

Quickstart

You can use the following pieces of code to make sure that you have Roboflow Inference set up correctly and have the right parameters, like the API key, dataset ID, and version number.

Docker Quickstart

Remember to have the Docker image running before you try out this code.

import requests

dataset_id = "phone-finder"
version_id = "4"
# Replace URL_FOR_IMAGE with the URL of the image you are testing
image_url = "URL_FOR_IMAGE"
# Replace ROBOFLOW_API_KEY with your Roboflow API key
api_key = "ROBOFLOW_API_KEY"
confidence = 0.5

url = f"http://localhost:9001/{dataset_id}/{version_id}"

params = {
    "api_key": api_key,
    "confidence": confidence,
    "image": image_url,
}

res = requests.post(url, params=params)
print(res.json())
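To interpret the server's reply, you can check the predictions list for a phone class. The sketch below is a minimal example, assuming the response follows the standard Roboflow detection format (a "predictions" array whose entries carry a bounding box, a confidence, and a class name); the sample values are made up for illustration.

```python
# Illustrative response shape (the numeric values here are made up);
# in the real script this dictionary comes from res.json().
results = {
    "predictions": [
        {"x": 320, "y": 240, "width": 80, "height": 160,
         "confidence": 0.91, "class": "phone"}
    ]
}

# Check whether any detection in the list is a phone.
phone_detected = any(p["class"] == "phone" for p in results["predictions"])
print(phone_detected)  # True
```

This is the same check the proctoring loop later in this guide performs on every webcam frame.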

Pip Quickstart

from inference.core.data_models import ObjectDetectionInferenceRequest
from inference.models.yolov5.yolov5_object_detection import (
    YOLOv5ObjectDetectionOnnxRoboflowInferenceModel,
)

model = YOLOv5ObjectDetectionOnnxRoboflowInferenceModel(
    model_id="phone-finder/4", device_id="my-pc",
    # Replace ROBOFLOW_API_KEY with your Roboflow API key
    api_key="ROBOFLOW_API_KEY"
)

request = ObjectDetectionInferenceRequest(
    image={
        "type": "url",
        # Replace URL_FOR_IMAGE with the URL of the image you are testing
        "value": "URL_FOR_IMAGE",
    },
    confidence=0.5,
    iou_threshold=0.5,
)

results = model.infer(request)

print(results)

Online Exam Proctor

Now that we have Roboflow Inference set up, we can connect our model and the webcam together to create an online exam proctoring system. Depending on whether you installed the Python library or the Docker interface of Roboflow Inference, you can go through the corresponding code for the online exam proctor below.

The code starts by setting up the essential dependencies; it then continuously captures frames from the webcam feed and uses the object detection model to check each frame for phone usage. When an unauthorized phone is detected, the system triggers an alert. The results are displayed on the screen, providing a real-time monitoring status.

Implementation Using Docker

Ensure the Docker image is running before you try out this code.

import base64

import cv2
import requests

dataset_id = "phone-finder"
version_id = "4"

# Replace ROBOFLOW_API_KEY with your Roboflow API key
api_key = "ROBOFLOW_API_KEY"
confidence = 0.8

url = f"http://localhost:9001/{dataset_id}/{version_id}"

# Connect to the webcam and fetch frames
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    print("Cannot open camera")
    exit()

while True:
    # Capture frame-by-frame; if the frame is read correctly, ret is True
    ret, frame = cap.read()
    if not ret:
        print("Can't receive frame (stream end?). Exiting ...")
        break

    # Encode the frame as base64 for the inference request
    retval, buffer = cv2.imencode(".jpg", frame)
    img_str = base64.b64encode(buffer)

    params = {
        "api_key": api_key,
        "confidence": confidence,
    }

    res = requests.post(url, params=params, data=img_str, headers={
        "Content-Type": "application/x-www-form-urlencoded"})

    results = res.json()

    # Check the predictions for a phone detection
    if len(results["predictions"]) >= 1 and "phone" in str(results["predictions"]):
        print("phones")
        frame = cv2.putText(frame, "You are using a phone", (50, 50),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2, cv2.LINE_AA)
    else:
        frame = cv2.putText(frame, "Monitoring", (50, 50),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)

    # Display the resulting frame
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) == ord("q"):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()

Implementation Using Pip

import base64

import cv2
from inference.core.data_models import ObjectDetectionInferenceRequest
from inference.models.yolov5.yolov5_object_detection import (
    YOLOv5ObjectDetectionOnnxRoboflowInferenceModel,
)

model = YOLOv5ObjectDetectionOnnxRoboflowInferenceModel(
    model_id="phone-finder/4", device_id="my-pc",
    # Replace ROBOFLOW_API_KEY with your Roboflow API key
    api_key="ROBOFLOW_API_KEY"
)

# Connect to the webcam and fetch frames
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    print("Cannot open camera")
    exit()

while True:
    # Capture frame-by-frame; if the frame is read correctly, ret is True
    ret, frame = cap.read()
    if not ret:
        print("Can't receive frame (stream end?). Exiting ...")
        break

    # Encode the frame as base64 for the inference request
    retval, buffer = cv2.imencode(".jpg", frame)
    img_str = base64.b64encode(buffer)

    request = ObjectDetectionInferenceRequest(
        image={
            "type": "base64",
            "value": img_str,
        },
        confidence=0.5,
        iou_threshold=0.5,
    )

    results = model.infer(request)

    # Check the predictions for a phone detection
    if results.predictions and "phone" in str(results.predictions):
        print("phones")
        frame = cv2.putText(frame, "You are using a phone", (50, 50),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2, cv2.LINE_AA)
    else:
        frame = cv2.putText(frame, "Monitoring", (50, 50),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)

    # Display the resulting frame
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) == ord("q"):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()

Sample Output

Here is an example of the output:

Conclusion

As education and technology continue to evolve, so must our methods for ensuring fairness and honesty in assessments. In this guide, we went over how to create an online proctoring system capable of detecting phone usage during online exams. This approach combines the strengths of object detection, webcam monitoring, and the robust capabilities of Roboflow Inference.

Frequently Asked Questions

Where can you find your Roboflow API key?

To get your API key, navigate to your Roboflow dashboard and, from there, access the Roboflow API tab found in the sidebar navigation of the settings page. Finally, make sure to copy your Private API Key securely, treating it with the same confidentiality as a password, because it provides access to all data and models within your workspace.
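Since the key should be treated like a password, one common pattern (our suggestion, not specific to Roboflow) is to read it from an environment variable rather than hard-coding it in the scripts above, which keeps it out of version control. A minimal sketch:

```python
import os

# Read the key from an environment variable instead of hard-coding it;
# the variable name ROBOFLOW_API_KEY is our own convention here.
api_key = os.environ.get("ROBOFLOW_API_KEY", "")
print("key set" if api_key else "ROBOFLOW_API_KEY is not set")
```

You can then pass `api_key` into the request parameters or the model constructor exactly as shown earlier.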

Where can you find the model ID and version of a model you train?

To access the trained model for object detection, you'll need to identify the model ID and version number associated with the model you intend to use. You can find your model ID and version number within the URL of the dataset version page, which is the page where you initiated training and viewed your results.
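As a quick illustration, the model ID and version are path segments of that URL. The sketch below assumes a Universe-style URL; the workspace name is hypothetical, and your actual URL comes from your own dataset version page.

```python
# Hypothetical dataset version URL; the real one comes from your own
# version page in the Roboflow app or on Universe.
url = "https://universe.roboflow.com/some-workspace/phone-finder/model/4"

# The model ID and version number are path segments of the URL.
parts = url.rstrip("/").split("/")
model_id, version = parts[-3], parts[-1]
print(model_id, version)  # phone-finder 4
```

Combined as `"phone-finder/4"`, these are exactly the values passed to `model_id` in the pip examples above.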
