You applied for a job, hit submit, and moved on. But did you know your resume, voice sample, video interview, and even your LinkedIn activity could now be living inside one or more AI-powered recruitment systems, being stored, scored, re-used, or even sold behind the scenes? Most job applicants don't know what happens to their data after they apply, and employers don't always explain it plainly. In this article, we show you exactly where your data goes, who can see it, what the real risks are (from bias to breaches), and the practical steps you can take to protect yourself and demand better from hiring teams.
Read on if you’ve ever wondered: “Did that company keep my resume? Did an algorithm judge my face? Can I make them delete my data?”
How does AI actually handle your data in recruitment?
Where your data comes from. Recruiters and automated systems pull data from a surprising number of places: your uploaded CV, application forms, recorded video interviews, chatbot conversations, short-answer assessments, background-check vendors, and publicly available profiles on LinkedIn or social media. Some systems also infer traits from voice cadence or facial expressions in video interviews. If you didn't read the tiny privacy notice before clicking "Apply," that doesn't change the fact that these inputs exist and can be processed by AI. For a practical overview of lawful collection and consent in recruitment, see this GDPR guide for recruitment data.
What the AI does with that data. Once collected, AI systems can do three main things:
(1) screen & rank candidates by matching resume keywords or inferred traits to a job profile;
(2) analyse unstructured inputs (video, audio, essays) for signals like sentiment, language use, or facial micro-expressions; and
(3) route or re-use candidate data — e.g., add you to a talent pool, share details with recruiters or vendors, or feed anonymized data into model retraining.
These are standard features in many applicant tracking systems and interview-analysis tools; a simplified sketch of the screen-and-rank step (1) follows below. If an employer relies solely on automated decision-making, the GDPR and other rules may require extra safeguards or human review.
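This is a minimal, hypothetical Python example; the job profile, weights, and function names are invented for illustration, and commercial rankers layer embeddings, inferred traits, and proprietary scoring on top of this keyword-matching core.

```python
import re

# Hypothetical job profile: keywords and weights a screener might match against.
JOB_PROFILE = {"python": 3.0, "sql": 2.0, "leadership": 1.5, "aws": 1.0}

def tokenize(text):
    """Lowercase the text and extract its word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score_resume(resume_text, profile=JOB_PROFILE):
    """Sum the weights of profile keywords that appear in the resume."""
    tokens = tokenize(resume_text)
    return sum(weight for keyword, weight in profile.items() if keyword in tokens)

def rank_candidates(candidates):
    """Sort (name, resume_text) pairs by descending keyword score."""
    return sorted(candidates, key=lambda pair: score_resume(pair[1]), reverse=True)

candidates = [
    ("Ada", "Python and SQL developer with AWS certification"),
    ("Sam", "Retail team lead with strong leadership experience"),
]
for name, text in rank_candidates(candidates):
    print(f"{name}: {score_resume(text):.1f}")
```

Notice that nothing in this toy scorer can explain why a candidate ranked low beyond "missing keywords," which previews the black-box problem discussed below.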
Where the data is stored and who it’s shared with. Candidate data typically lives on cloud servers owned by ATS vendors or video-interview platforms, and sometimes third-party assessment providers. That means multiple parties (the hiring company, the software vendor, background-check services, and possibly external recruiters or data brokers) may have access. Some companies explicitly share candidate data with partners for talent marketing or reselling; others don't make that obvious. The European Data Protection Supervisor (EDPS) advises that applicants must be informed of processing purposes and third-party sharing before the selection begins.
Transparency gaps and “black box” processing. Many AI hiring tools operate opaquely — they evaluate candidates using proprietary models and vague labels like “cultural fit” or “engagement score.” That’s a problem because you can’t correct, contest, or even fully understand a decision if the model’s rules aren’t disclosed. Regulators are noticing: laws like the GDPR and new local rules require disclosure about automated decision-making and sometimes a human-review backstop. In the U.S., Illinois’ AI Video Interview Act already forces employers to disclose AI use and explain, at a high level, how the system evaluates candidates.
The real risks: bias, breaches, and loss of control
Algorithmic bias: the data problem under a different name. AI models aren't neutral: they learn from past hiring data, and if that history reflects sexism, racism, or other biases, the model often reproduces (or amplifies) those patterns. This plays out across different AI hiring tools; the best-known example is Amazon's scrapped AI recruitment system, which penalized resumes containing the word "women's." That's why audits, diverse training data, and removing obvious demographic proxies (like names or photos) matter, but they're not always implemented. If a model ranks candidates differently because a name suggests a particular gender or race, that's not just unfair; it's illegal in many jurisdictions.
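To make "audits" concrete, here is a minimal sketch, with made-up data and hypothetical function names, of the four-fifths rule: a common rule of thumb US regulators use to flag adverse impact. If any group's selection rate falls below 80% of the highest group's rate, the process deserves scrutiny.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rates from (group, selected) pairs,
    where selected is 1 if the candidate advanced and 0 if rejected."""
    totals, advanced = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        advanced[group] += selected
    return {group: advanced[group] / totals[group] for group in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is under 80% of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate, rate / best >= threshold) for group, rate in rates.items()}

# Made-up screening outcomes for two groups of applicants.
outcomes = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
for group, (rate, ok) in four_fifths_check(outcomes).items():
    print(f"group {group}: selection rate {rate:.2f}, passes four-fifths rule: {ok}")
```

Passing this one check doesn't prove a model is fair; it's a coarse signal, which is why serious audits examine many metrics and the training data itself.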
Real-world breaches and sloppy security. Efficiency is great until a vendor misconfigures a server or uses weak access controls. A recent Paradox AI breach exposed millions of job applicants' records from a major hiring platform used by McDonald's, showing how vulnerable applicant data can be when security practices are weak. That leak contained names, contact details, and application histories: exactly the kind of data that scammers and unscrupulous firms love.
Unintended reuse and third-party sharing. Even if your original application was for one role, companies frequently keep candidate data to build talent pools for future openings. Vendors might aggregate anonymized metrics to improve models, but "anonymized" data can sometimes be re-identified. Worse, some data brokers and recruitment marketplaces buy or harvest candidate records and use them for targeted marketing or reselling. If you're picky about who sees your personal info, this loss of control is a big deal.
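Here's a toy illustration, with entirely invented data, of why "anonymized" candidate records can sometimes be re-identified: if an export keeps quasi-identifiers such as city and graduation year, anyone holding a public profile list can re-link scores to names.

```python
# Entirely made-up data: an "anonymized" score export and a public profile list.
anonymized_scores = [
    {"city": "Leeds", "grad_year": 2019, "score": 42},
    {"city": "Lagos", "grad_year": 2021, "score": 88},
]
public_profiles = [
    {"name": "J. Doe", "city": "Lagos", "grad_year": 2021},
]

# Linkage attack: join the two datasets on their shared quasi-identifiers.
for record in anonymized_scores:
    for profile in public_profiles:
        if (record["city"], record["grad_year"]) == (profile["city"], profile["grad_year"]):
            print(f"{profile['name']} likely scored {record['score']}")
```

Real linkage attacks work the same way at scale, which is why regulators increasingly treat quasi-identifiers as personal data.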
What that actually means for you (in plain terms). Your resume might be used to train a model that will evaluate other applicants; your video could be scanned for facial cues that affect hiring outcomes; your contact info could appear in third-party databases; and, worst case, a breach could expose the data to fraudsters. That's why transparency, audit logs, and candidate rights (like erasure, access, and human review) are not just legal jargon; they're practical protections.
Your Rights & Concrete Actions: Speak Up, Delete, Demand
You've got rights, and they're powerful. Whether you're in the EU or elsewhere, privacy laws like the GDPR give you legal rights: the right to access what data employers hold (Article 15), the right to erase it (Article 17), and the right to demand that decisions be reviewed by a person instead of just an algorithm (Article 22). In parts of the U.S., laws like Illinois' AI Video Interview Act already require employers to disclose AI use and obtain consent before AI-analysed video interviews. Knowing these rights means you can push back, and hiring teams must respond.
How to ask in real words. Don’t get stuck on formal legalese. Here’s a simple email script you can customize and send to recruiters or HR:
Hi [Recruiter Name],
I’m writing to request access to the personal data you hold on me in your AI recruitment systems, specifically any analysis results, scoring, or video assessments. Please also share details on whether my data has been shared with any third parties, and how long it’s retained. If possible, I’d also like to request deletion of my data from your systems once my application process is complete.
Thank you for your transparency.
Best, [Your Name]
That’s grounded in rights under GDPR Article 15 and Article 17, but friendly and easy to send.
Checklist — what to ask or look for.
| Action | What to check or request |
| --- | --- |
| Ask about automated decisions | "Was any AI solely responsible for rejecting or ranking me?" (GDPR Article 22 right) |
| Request transparency | Ask: "Who sees my data? Third-party vendors? Talent pools? Recruiters?" |
| Demand data deletion | "Please delete my data after the process ends; I'm relying on GDPR Article 17 / your state law." |
| Ask for remediation | If you suspect bias, ask for human review or an explanation of "cultural fit" scoring. |
| Follow up | If you don't hear back in 30 days, send a polite reminder citing your legal rights. |
These are practical steps you can take immediately after applying, or at any point afterward.
When to escalate and who to tell. If the company doesn’t respond or denies your request, escalate it:
- Canada: Privacy is handled at both the federal and provincial levels. Federally, contact the Office of the Privacy Commissioner of Canada (OPC) under PIPEDA. If you live in Alberta, British Columbia, or Quebec, you can also contact your provincial privacy office: the Alberta Office of the Information and Privacy Commissioner, the British Columbia Office of the Information and Privacy Commissioner, and the Commission d'accès à l'information du Québec.
- Nigeria: Contact the Nigeria Data Protection Commission (NDPC) under the Nigeria Data Protection Act 2023. They handle complaints about unlawful collection, misuse, or sharing of personal data, including recruitment data. You can submit a complaint directly via their reporting portal.
- EU: Contact your country's Data Protection Authority.
- UK: Reach out to the Information Commissioner's Office (ICO).
- U.S.: Explore state privacy regulators or the FTC.
Regulators take enforcement seriously, especially when AI is used without transparency.
Why this matters to creators like you. If you write about recruitment or run workshops for jobseekers, these are tools you can teach. Templates, checklists, legal grounding, and a friendly tone: that's the kind of practical content that wins trust, clicks, and actually empowers real people.
You’re in charge
The AI systems in recruitment are powerful but not omnipotent. This article equips you with knowledge, language, and confidence to say: “Wait, what’s happening with my data? Can you show it to me? Can you delete it? Is a human reviewing my application?” You don’t need to be a lawyer, but you do need to be a data-aware job candidate.