Not long ago, an AI model learned to lip-read. It watched silent video footage and decoded what people were saying more accurately than professional human lip readers. AlphaFold, an AI system from DeepMind, could look at a protein’s amino acid sequence and predict its 3D shape better than most biology labs, a breakthrough that won its creators the Nobel Prize. And now, one AI model, installed quietly on a laptop, can read the text on your screen, infer the question, and generate an answer without anyone clicking a thing.
This is not the AI you ask for help. This is the AI that helps you before you ask.
Now imagine giving someone a test while this is running. You could be screen-sharing, recording audio, using every tool in the book, and you’d still miss it. That’s how much assessment has changed in the age of ChatGPT.
And that’s why we had to rethink proctoring for our assessments on Equip.
“Our Questions Aren’t Google-able”
That used to be our competitor's claim.
The idea was: if a candidate copied a test question and pasted it into Google, they wouldn’t get a direct answer. Question banks were “safe”.
But today? People are wondering whether Google will survive (though some might claim they were ahead of their time). Because ChatGPT doesn’t just search. It solves. It reasons. It explains. You paste the question in, and you get the answer: clearly, instantly, and often for free.
So how do you design a test in a world where every question can be answered instantly by AI?
You change how you run the test.
AI is Invisible. Literally.
Equip's assessments include proctoring by default, but traditional approaches were built for a very different model of cheating.
Earlier, browser-based cheating tools could be flagged through tab monitoring, input tracking, or screen sharing. But today’s AI tools often operate at the OS or system-process level. They are not browser extensions or standalone apps with a visible UI. Many run as background services or native applications, indistinguishable from legitimate software.
Screen sharing and basic remote monitoring fail because these tools don't generate visual cues or require user interaction—they can read on-screen content via OCR and return answers via overlays or system notifications, all without triggering typical detection methods.
Candidates don’t even need to take a photo or type a prompt. The AI just sees the screen and responds.
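To make that gap concrete, here’s a rough sketch (not Equip’s actual implementation) of roughly everything a proctoring script running inside the test tab can observe. An assistant running as a native process, reading the screen via OCR and drawing its own overlay, never touches the page, so none of these handlers ever fire.

```typescript
// A minimal sketch of browser-level proctoring signals.
// An OS-level assistant that reads the screen and answers in its own overlay
// never interacts with the page, so none of these handlers fire.

type ProctorEvent = { kind: string; at: number };
const events: ProctorEvent[] = [];

const log = (kind: string) => events.push({ kind, at: Date.now() });

// Candidate switches tabs or minimizes the browser.
document.addEventListener("visibilitychange", () => {
  if (document.hidden) log("tab-hidden");
});

// Focus leaves the test window (e.g. clicking into another app).
window.addEventListener("blur", () => log("window-blur"));

// Copying the question or pasting an answer into the page.
document.addEventListener("copy", () => log("copy"));
document.addEventListener("paste", () => log("paste"));

// And that is about it: the browser sandbox cannot enumerate OS processes,
// inspect other windows, or see a system overlay drawn above the page.
```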
Enhanced Proctoring with Two Cameras
That’s why we built auxiliary device proctoring.
Here’s how it works: the candidate starts the test on a laptop, then scans a QR code with their phone. The phone turns into a proctoring camera. It’s placed on the desk, watching the screen, the keyboard, and the hands.
We take a photo every 10 seconds. And that one camera solves a lot:
- AI tools running on the screen? We’ll see them.
- Phone use to cheat? Not possible—the phone is the camera.
- Typing patterns? We’ve got a clear view of the keyboard and hands.
It’s simple. It’s clever. And it works.
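For the curious, here’s a minimal sketch of what the phone’s side of this could look like: the QR code carries a pairing token, and the phone’s browser captures and uploads a frame every 10 seconds. The endpoint and field names below are illustrative, not Equip’s actual API.

```typescript
// Runs in the phone's browser after the QR code is scanned.
// Assumes the QR code encodes a URL like https://example.com/aux?session=<token>,
// where the token ties this camera to the candidate's test on the laptop.
// Endpoint and field names are illustrative, not Equip's real API.

const session = new URLSearchParams(location.search).get("session");

async function startAuxCamera(): Promise<void> {
  // Use the rear camera so the phone can be propped up facing the laptop and desk.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" },
  });

  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");

  // Capture and upload one frame every 10 seconds.
  setInterval(async () => {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext("2d")?.drawImage(video, 0, 0);

    const frame = await new Promise<Blob | null>((resolve) =>
      canvas.toBlob(resolve, "image/jpeg", 0.8)
    );
    if (!frame || !session) return;

    const form = new FormData();
    form.append("session", session);
    form.append("frame", frame, "frame.jpg");
    await fetch("/api/aux-proctoring/frames", { method: "POST", body: form });
  }, 10_000);
}

startAuxCamera();
```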
In the ChatGPT Era, Visibility is Everything
Even before ChatGPT came into the picture, many candidates were being assessed without auxiliary proctoring enabled. But now, if you’re conducting an assessment without a full view of the candidate (their screen, their hands, their environment), you’re not really conducting an assessment at all.
It's not just about catching cheaters. It's about shifting the default. When candidates know that proctoring has evolved as much as AI has, the incentive to cheat drops sharply. They’re more likely to engage honestly when they know the system watching them isn’t stuck in 2020.