Preparing leaders and organisations to lead with purpose and clarity

When the Algorithm Said No

What AI Hiring Tools Are Really Learning From Us

A practical playbook for responsible AI hiring in an era of algorithmic decision making

Indrajeet Sengupta | AIKYOS Consulting Partners

The Wake-Up Call

A team member once confided that while we had scaled up hiring through referrals, a target we had agreed on, he was worried about the “sameness of candidates” joining us. Similar companies. Same engineering schools. Identical past managers.

The candidates were excellent. But something seemed amiss.

As we rush to adopt AI in hiring, we need to pause and ask ourselves: What are these systems actually learning from us?

The Amazon Story: A Cautionary Tale

In 2018, Amazon scrapped an AI recruiting tool trained on a decade of hiring data. The problem? That data was dominated by male candidates.

The system learned to penalise resumes containing:

  • The word “women’s”
  • Mentions of women’s colleges
  • Other gender-associated patterns

  • 36% higher performance: businesses in the top quartile for diversity outperform their peers financially (McKinsey)
  • 37% trust gap: only 37% of job seekers trust AI to evaluate them fairly (Josh Bersin)

If Amazon, with all its resources and technical expertise, couldn’t build a bias-free system, what does that tell us about the hundreds of smaller organisations deploying similar tools today?

India's Challenge: When AI Learns Caste

We often think of AI bias as a Western problem: race, gender, age. But India faces something more complex.

MIT Technology Review Investigation: OpenAI's models, widely used in India, consistently associated certain castes with menial labour and others with professional roles.

In one case, ChatGPT automatically “corrected” a candidate’s surname from a Dalit name to an upper-caste one in a job application. An AI system, trained on patterns from our society, changed someone’s identity to fit what it had learned about who belongs in professional roles.

Standard bias tests measure age, disability, race, and religion. But they don’t measure caste bias at all. Researchers at IIT Bombay built a new benchmark called BharatBBQ specifically to detect these patterns.

The Legal Landscape: Landmark Cases Reshaping AI Hiring

The legal landscape is shifting quickly. Here are the landmark cases reshaping AI hiring:

1. The Cisco Case (2020)

California’s civil rights department sued Cisco after a Dalit engineer reported discrimination by supervisors from higher castes. A 2018 survey found 67% of Dalits experienced unfair treatment in American workplaces.

2. The Workday Lawsuit

Derek Mobley applied to over 100 jobs through Workday’s AI system. He received rejections within minutes — sometimes at 2 AM. He claims no human ever reviewed his applications. In May 2025, a court certified this as a collective action, covering everyone over 40 rejected by Workday’s system since 2020.

3. The Eightfold Case (January 2026)

Job seekers discovered that Eightfold, used by Microsoft, PayPal, and a third of Fortune 500 companies, was creating secret scores (0–5) about candidates without their knowledge. The lawsuit argues candidates should have the same rights to review and dispute these AI reports as they do credit reports.

4. The HireVue-Intuit Case (March 2025)

A deaf, Indigenous woman was denied a promotion after an AI video interview. She requested human-generated captions — denied. The system then told her to “practise active listening.” Research shows these systems perform ten times worse for deaf individuals.

"Drawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws." — Court ruling in the Workday case. Translation: If your AI discriminates, you're responsible for that discrimination.

The Real Problem: We Are Teaching AI Our Own Biases

"AI is inherently biased because there is no such thing as unbiased data." — Shervin Khodabandeh, BCG

University of Washington research found that AI models preferred resumes with white-associated names 85% of the time, and Black-associated names only 9% of the time.

Dave Ulrich encourages us to ask “so that…” after every HR initiative. We’re using AI in hiring… so that what? If the answer is “so that we can hire faster,” we are missing the real target. The real answer should be: “So that we can build diverse, high-performing teams that drive business results.”

The Hidden Trap: When Your Referral Network Teaches the Algorithm

Most AI hiring systems are not learning from abstract job requirements. They are learning from who we have actually hired. Most of us have built our teams largely through referrals.

Referrals feel meritocratic. Your top performers recommend people they know and trust. But referrals create patterns that AI amplifies:

Geographic Concentration

Your Bangalore team refers candidates from Bangalore. AI learns that “good candidates” come from Bangalore. Talented people from tier-2 cities get filtered out — not because they lack skills, but because they don’t match the geographic pattern.

Educational Institution Bias

If your team mostly graduated from IITs, IIMs, or NITs, AI learns these as “quality signals.” Brilliant graduates from regional engineering colleges face an invisible barrier.

Social and Professional Networks

People refer people they know from their communities, their colleges, their social circles. First-generation professionals, people from working-class backgrounds, and those without access to elite networks simply don’t appear in the referral pipeline.

Communication Patterns

Referrals often share similar communication styles and English fluency levels. AI trained on this can penalise candidates with regional accents or different professional communication norms, even when these factors have nothing to do with job performance.

The Culture Fit Trap and India's Hidden Patterns

"The 'culture fit' trap: The algorithm isn't detecting whether someone will thrive in your culture. It's detecting whether someone looks like the people you have hired before."

In India, this intersects with caste, religion, and regional identity in ways that are particularly difficult to detect. A candidate’s surname, hometown, college, and professional network often correlate with caste background. AI doesn’t need to explicitly know someone’s caste; it infers it through these patterns.

The result? We believe we’re hiring “the best talent” whilst systematically excluding entire populations who never had access to the networks that feed our hiring pipeline.
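
One way to make this proxy effect visible is a simple “probe” test: train a basic model to predict a sensitive attribute from the same features your screening tool uses. If the probe does much better than chance, those “neutral” fields are leaking identity. Below is a minimal sketch, not a production audit; the file name, column names, and the binary group label are hypothetical placeholders for your own, separately collected audit data.

```python
# Proxy-leakage probe: can "neutral" screening features predict a
# sensitive attribute the hiring model never sees directly?
# Hypothetical data: applicants.csv with a self-reported binary
# group label, collected separately and used for auditing only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

applicants = pd.read_csv("applicants.csv")

# Features the screening model is allowed to use (no caste/gender column).
features = ["hometown", "college", "prior_employer", "referral_source"]
target = "majority_group"  # hypothetical 1/0 label, audit use only

X_train, X_test, y_train, y_test = train_test_split(
    applicants[features], applicants[target],
    test_size=0.3, random_state=0, stratify=applicants[target])

probe = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), features)])),
    ("clf", LogisticRegression(max_iter=1000)),
])
probe.fit(X_train, y_train)

# AUC near 0.5 means little leakage; well above it, the "neutral"
# features are acting as proxies for group identity.
auc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])
print(f"Proxy probe AUC: {auc:.2f}")
```

If the AUC is high, removing the sensitive column from your data has not removed it from your model’s view.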

India's Unique Challenges

Missing from the Data

Marginalised communities are severely underrepresented in the datasets these systems learn from. If you are not in the training data, the system literally cannot see you.

No Regulatory Framework

Unlike Europe’s AI Act or New York’s bias audit requirements, India has no mandatory framework for algorithmic fairness in hiring. The Digital Personal Data Protection Act covers data processing but says nothing about automated hiring decisions.

Name-Based Filtering

Studies show that AI systems in India give fewer interview callbacks to candidates with non-Anglicized names. The systems use name as a proxy for other characteristics correlated with past hires.

7 Actions for Responsible AI Hiring

Your Playbook — what responsible AI hiring looks like in practice:

  1. Regular Bias Audits: Test your AI quarterly against all protected groups. Document everything. The EEOC has made clear that you’re liable for discriminatory outcomes, even if a vendor built the tool. (A starter audit sketch follows this list.)
  2. Audit Your Referral Network First: Before AI learns from your hiring data, examine who your referrals actually represent. Are they geographically diverse? From varied educational backgrounds? If your referral network is homogeneous, your AI will be too.
  3. Question Your Training Data: What patterns exist in your historical hiring? Who hired the candidates? If you have mostly hired from elite institutions or specific geographies, your AI will perpetuate this unless you intervene.
  4. Keep Humans in the Loop: AI should augment hiring managers, not replace them. Ensure a human reviews AI recommendations before any final decision is made.
  5. Be Transparent: Candidates should know when AI is evaluating them. They should have the right to request human review. This isn’t just ethical — it’s increasingly becoming a legal requirement.
  6. Seek Independent Oversight: Bring in third-party experts who understand local discrimination patterns. What works in California might miss caste dynamics in Bangalore.
  7. Actively Expand Your Talent Sources: Recruit from tier-2 and tier-3 cities. Partner with regional colleges. Hire from non-traditional backgrounds. If AI only learns from elite networks, it will only find elite networks.
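
For action 1, the simplest quarterly check is the EEOC’s long-standing four-fifths rule: flag any group whose selection rate falls below 80% of the best-performing group’s rate. A minimal sketch follows; the file and column names are hypothetical placeholders for your own screening log.

```python
# Disparate-impact check using the EEOC four-fifths rule: each group's
# selection rate should be at least 80% of the highest group's rate.
# Hypothetical data: screening_log.csv, one row per candidate, with a
# "group" column and a 1/0 "advanced" column (passed AI screening).
import pandas as pd

log = pd.read_csv("screening_log.csv")

rates = log.groupby("group")["advanced"].mean()  # selection rate per group
impact = rates / rates.max()                     # ratio vs. best group

report = pd.DataFrame({
    "selection_rate": rates.round(3),
    "impact_ratio": impact.round(3),
    "four_fifths_flag": impact < 0.8,            # True = investigate
})
print(report.sort_values("impact_ratio"))
```

The four-fifths rule is a screening heuristic, not a legal safe harbour; a flagged ratio is the start of an investigation, not the end of one.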

Two Futures: Which Will We Choose?

The Optimistic View

AI can help us see our own biases. Done right, it can flag when our decisions follow patterns we’d rather not perpetuate. It can help us reach talent we’ve historically missed.

  • Reveals hidden bias patterns
  • Expands talent reach
  • Enables data-driven fairness
  • Scales inclusive practices

The Concerning View

AI at scale can automate discrimination faster than we can correct it. A biased hiring decision by one manager affects one role. A biased algorithm affects thousands.

  • Amplifies existing biases
  • Operates at unprecedented scale
  • Creates invisible barriers
  • Lacks accountability mechanisms

We're teaching these systems. With every hire we make, every candidate we reject, every referral we accept, the algorithm is watching and learning from us. The question is: What are we teaching it?

Start NOW: Pick One Action

You don’t need to solve everything at once. Start with one action immediately:

  • Request a bias audit of your current AI tools
  • Map the demographics of your referral network (a starter sketch follows this list)
  • Ask your vendor for full transparency on training data and decision logic
  • Implement mandatory human review for all AI rejections in one department
  • Partner with one non-traditional talent source you have never recruited from before
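
If you start with the referral map, a few lines of analysis go a long way. Here is a minimal sketch, assuming an export of referred hires; the file and column names are hypothetical placeholders for your own data.

```python
# Quick referral-network map: what share of referred hires comes from
# each city, college, and referring team, and how concentrated is each?
# Hypothetical data: referrals.csv with one row per referred hire.
import pandas as pd

referrals = pd.read_csv("referrals.csv")

for dim in ["city", "college", "referrer_team"]:
    shares = referrals[dim].value_counts(normalize=True)
    hhi = (shares ** 2).sum()  # Herfindahl index: 1.0 = one source only
    print(f"\n{dim}: top 5 sources (concentration index {hhi:.2f})")
    print(shares.head(5).round(2))
```

A high concentration index on any dimension is exactly the pattern an AI system trained on this data will learn to reward.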

The machine doesn't discriminate on its own. We have taught it. Only we can teach it better.

Remember: The organisations that get this right won’t just avoid lawsuits. They will build teams that actually reflect the diversity of talent available, and they will outperform everyone else.

Sources for Further Reading

Key Research & Reports

  • Amazon AI Gender Bias (2018) — MIT Technology Review
  • OpenAI Caste Bias Investigation (2025) — MIT Technology Review
  • McKinsey Diversity Research — Diversity Wins Report
  • Josh Bersin — AI Hiring Trust Research
  • University of Washington — AI Name Bias Study

Legal Cases

  • Cisco Caste Discrimination Case (2020) — California Civil Rights Department
  • Workday AI Bias Lawsuit — HR Dive
  • Eightfold AI Lawsuit — Fortune
  • HireVue/Intuit ACLU Case — HR Dive