TL;DR
For AI to discriminate, it needs to know whom it's discriminating against. We don't give it that information.
1. How we use AI, and how we prevent bias
Bias requires data. For an AI to favor or penalize someone based on gender, race, age, or religion, it needs access to that information: a name, a photo, a voice. Equip's approach is simple: we strip all of that out before the AI ever sees it.
- Resume parsing: When candidates upload their CVs, our AI extracts skills, experience, and qualifications. Candidates review these fields before we store the data, so they are in full control of what is being saved.
- Job fit scoring: Based on the parsed resume fields and the criteria the recruiter sets, we generate a fit score from 0–100 with a detailed explanation of how the AI arrived at that number. We don't pass names, photos, addresses, or any demographic identifiers to the scoring model. The AI evaluates what candidates can do, not who they are (see the first sketch after this list).
- AI Interviews: In one-way AI Interviews, the AI never analyzes the video or audio. We transcribe each response to text and evaluate only what was said: no facial analysis, no voice pattern detection, no accent scoring. In conversational AI Interviews, we send only the audio to the model to generate follow-up questions; scoring is still based purely on the transcript. Along with the score, our AI explains why it scored the way it did, and the recruiter can override that score.
- Proctoring: ML models monitor candidate activity during tests and interviews. We don't save candidate videos. The model runs in the browser and flags violations such as no face detected or multiple faces in the frame. The Trust Score is calculated from these events, not from raw video. Because we share only the metadata of each violation (type, duration, frequency), the AI never has access to the candidate's photo, name, or any other identifier (see the second sketch after this list).
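To make the redaction step concrete, here is a minimal sketch of what "strip identifiers before scoring" can look like in code. The field names, the ScoringInput shape, and the callScoringModel stub are all hypothetical and chosen for illustration; this is not Equip's actual schema or model integration.

```typescript
// Illustrative sketch only: field names, types, and the scoring stub are assumptions.

interface ParsedResume {
  name: string;              // visible to recruiters, never sent to the scoring model
  email: string;
  photoUrl?: string;
  skills: string[];
  yearsOfExperience: number;
  qualifications: string[];
}

// Only the job-relevant subset is allowed to reach the model.
type ScoringInput = Pick<ParsedResume, "skills" | "yearsOfExperience" | "qualifications">;

function redactForScoring(resume: ParsedResume): ScoringInput {
  // Identifiers are dropped here, before the payload leaves the application layer,
  // so the model has no name, photo, or contact details to act on.
  const { skills, yearsOfExperience, qualifications } = resume;
  return { skills, yearsOfExperience, qualifications };
}

// Placeholder for the real LLM call; it only ever receives the redacted input.
async function callScoringModel(
  input: ScoringInput,
  criteria: string[]
): Promise<{ score: number; explanation: string }> {
  const matched = criteria.filter((c) => input.skills.includes(c));
  return {
    score: Math.round((matched.length / Math.max(criteria.length, 1)) * 100),
    explanation: `Matched ${matched.length} of ${criteria.length} criteria: ${matched.join(", ")}`,
  };
}

async function scoreFit(resume: ParsedResume, criteria: string[]) {
  return callScoringModel(redactForScoring(resume), criteria);
}
```

The key design choice is that redaction happens before the model call, so the scoring input simply has no field where a name or photo could appear.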
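Here is a similar sketch for the Trust Score. The violation types, weights, and formula below are assumptions made up for this example; the point is the shape of the input: event metadata only, never video frames, audio, or identifiers.

```typescript
// Illustrative sketch only: violation types, weights, and the formula are assumptions.

type ViolationType = "no_face" | "multiple_faces" | "tab_switch";

interface ViolationEvent {
  type: ViolationType;
  durationSeconds: number;   // how long the violation lasted
  occurredAt: string;        // ISO timestamp of the event
}

// Hypothetical penalty weights per second of violation.
const WEIGHTS: Record<ViolationType, number> = {
  no_face: 0.5,
  multiple_faces: 1.0,
  tab_switch: 0.3,
};

// The score is derived purely from event metadata flagged in the browser.
function trustScore(events: ViolationEvent[]): number {
  const penalty = events.reduce(
    (sum, e) => sum + WEIGHTS[e.type] * e.durationSeconds,
    0
  );
  return Math.max(0, Math.round(100 - penalty));
}
```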
This is different from how a lot of hiring AI has worked historically. Some tools were built to analyze facial expressions and vocal tones, then had to quietly remove those features after public backlash and legal challenges. We never built them in the first place.
2. What about the underlying AI models?
Fair question. Research has shown that large language models can exhibit bias when given demographic information. The major LLM labs, or at least the top ones whose models we use, have worked hard to eliminate these biases, and as of early 2026 we see those efforts bearing fruit.
But even if an AI model has latent biases baked in from its training data, those biases cannot manifest if the model never receives the inputs that would trigger them. You can't discriminate based on someone's name if you don't know their name.
We don't claim "our AI is 100% unbiased." That would be a bold claim no one can honestly make. We claim something more defensible: our AI literally cannot act on demographic information because it doesn't have access to it.
3. Humans make the decisions
AI on Equip scores and ranks candidates. Humans make the final call.
Every candidate score comes with its reasoning. Recruiters see exactly why someone ranked high or low, based on how their skills and experience match the job requirements. It's not a black box where you're left wondering why the machine said what it said.
Even in AI Interviews, every AI-generated score comes with feedback explaining it, and humans can override the score.
4. Why this actually helps you find better candidates
The concern with AI in hiring is usually "what if it rejects good candidates unfairly?" That's valid. But consider the alternative.
Human reviewers are inconsistent. One recruiter values communication skills, another prioritizes technical depth, a third unconsciously favors candidates from familiar universities. When you're reviewing 500 applications, fatigue sets in. Great candidates slip through because someone skimmed too fast at 4pm on a Friday.
A systematic process applies the same criteria to every candidate. It doesn't get tired. It doesn't have a bad day. It won't move a candidate up the list because they went to the same college or grew up in the same city as the hiring manager.
The goal isn't just to avoid bias. It's to surface candidates you might otherwise miss.
5. What we don't do
Several of these features are offered by others in the industry. We have deliberately chosen not to build them:
- Facial analysis: We don't analyze expressions, eye contact, or visual appearance. The video exists so recruiters can watch it later if they want to. The AI only sees the transcript.
- Voice analysis: We don't evaluate tone, accent, speech patterns, or vocal characteristics. A candidate with a regional accent or a stutter gets evaluated on the same basis as everyone else.
- Social media scraping: We don't pull data from LinkedIn or other platforms. Candidates can share their LinkedIn URL, and we display that same URL on their profile; we do nothing with the data behind it.
- Proxy discrimination: We never give the AI zip codes, school names, or other fields that tend to correlate with protected characteristics.
6. What to tell your candidates
You're required to disclose that AI tools are used in your evaluation process. Here's language you can use:
"We use AI tools to help evaluate applications consistently. The AI assesses your skills and experience against job requirements. It does not have access to your name, photo, or other personal identifiers during evaluation. All final hiring decisions are made by our team. You have the right to request an explanation of how AI was used in evaluating your application. Contact Equip at privacy@equip.co to make this request."