The Invisible Worker: How AI Filters Out Gig Workers, Caregivers, and Returnees — and Why It May Be Discrimination

Have you taken time off to raise children, care for a parent, or pieced together contract work to make ends meet? Many of us have. Sadly, that history might make you invisible to AI-powered hiring systems. Over 70% of companies now use these types of tools when finding or hiring candidates, and the algorithms tend to favor "ideal" candidates with traditional, uninterrupted career trajectories.

This selection process isn’t just unfair; in some cases, it may be illegal. Algorithms can produce systemic discrimination against protected groups, and the employer doesn’t need to intend to filter those groups out. A disparate impact is enough to violate the law.

How AI hiring tools are trained to discriminate — even without meaning to

AI hiring tools sort through resumes, analyze keywords, and build models of ideal career trajectories to select candidates for interviews. The data sets used to train these algorithms often come from past hiring decisions, which can reflect previous biases. Several factors can lead an algorithm to discriminate, including:

  • Data bias (the existing data favors dominant demographics)
  • Developer bias (those creating the code have assumptions that make it into filtering criteria)
  • Proxy bias (seemingly neutral details like universities, clubs, or even zip codes can stand in for economic class or race)

Gaps on resumes, periods of gig work, and part-time roles can all lead an algorithm to filter out a person’s resume. This disproportionately affects women, older workers, caregivers, and economically disadvantaged applicants.

Who gets filtered out by AI hiring systems?

Hiring systems may unfairly filter out certain groups, including:

Caregivers and parents

Resume gaps for maternity or paternity leave, family caregiving, or bereavement might be easy for a human reviewer to understand. An algorithm, however, may not recognize the life circumstances behind those gaps. Bots that lack human empathy can even magnify the “mom penalty” effect.

Gig and contract workers

Multiple short-term roles can look like “job hopping” or a lack of stability. While many people now work as freelancers or independent contractors, or take lucrative gig-work (short-term assignment) options, resume-screening algorithms may hold outdated interpretations of these common positions. Freelancers’ resumes may also lack traditional formatting, such as employer names or conventional job titles, and can be disfavored by bots as a result.

Older workers and returnees

Long careers with early experience may trigger age-related discrimination. Those who took long breaks for caregiving, military service, or illness may be unfairly downgraded, as may workers who couldn’t immediately find a replacement job after leaving a company or being terminated.

Algorithms may also discriminate against people with disabilities or other minorities based on indirect criteria.

The legal framework: Disparate impact under federal law

Even “neutral” hiring systems can still violate the law. If the outcome is discriminatory, it may violate federal statutes such as Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA).

Disparate impact occurs when a neutral practice disproportionately harms a protected group. There is no intent requirement in these laws — only proof of discriminatory effect. Both the companies using these tools and the vendors who created them may share liability if the AI causes unlawful hiring outcomes.
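To make “discriminatory effect” concrete, disparate-impact analysis typically compares selection rates across groups. The sketch below is a minimal illustration with invented numbers (the group labels and counts are hypothetical, not from any real case):

```python
# Hypothetical screening outcomes: how many applicants in each group
# passed an automated resume filter. All numbers are invented.
outcomes = {
    "continuous_career": {"applied": 200, "advanced": 60},
    "resume_gap":        {"applied": 200, "advanced": 24},
}

# Selection rate = share of applicants in a group who advanced.
rates = {g: d["advanced"] / d["applied"] for g, d in outcomes.items()}

# Compare the disadvantaged group's rate to the most-favored group's rate.
# A large gap in this ratio is the kind of pattern disparate-impact
# analysis looks for, even when the filter itself appears "neutral".
impact_ratio = rates["resume_gap"] / max(rates.values())

print(rates)
print(round(impact_ratio, 2))  # applicants with gaps advance at a fraction of the rate
```

In this invented example, applicants with resume gaps advance at 12% versus 30% for those with continuous careers, a ratio of 0.4, even though the filter never asks about caregiving, age, or any protected trait directly.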

Real-world examples and data

Biased hiring caused by algorithms isn’t just theoretical—it’s already happening. Amazon scrapped an AI recruiting tool after realizing that it downgraded resumes including the word “women’s.” A lawsuit against Workday involved plaintiffs who alleged that the company’s AI hiring system discriminated based on race, age, and disability; Workday argued that humans still control decisions, but the plaintiffs maintained that discrimination occurred during AI screening. A University of Washington study found evidence of racial and gender bias in AI resume filters and emphasized the risks of proxies like the college attended or a job seeker’s hobbies.

Nonlinear careers deserve equal consideration

It’s important to normalize resume gaps and non-traditional work. Caregiving might be unpaid labor, but it’s vital work that sustains society. Parents who care for children and those who take care of relatives who are ill provide a critical safety net for families and society at large.

Additionally, gig work and contract roles are now a mainstay of the US workforce. These roles offer opportunities to gain important work-related skills and to earn income on a flexible schedule, which can allow people to care for loved ones or work around their courses. Penalizing these experiences undermines equity and worsens existing inequality. After all, if a system penalizes someone for taking on gig work while pursuing a degree, it effectively favors those whose family wealth meant they didn’t have to work through school.

Essentially, AI may be quietly reinforcing outdated work norms. The result could be that vulnerable populations, including women, people with fewer economic resources, and minorities, are excluded en masse, often without even knowing why they were rejected.

Employers and the companies that create these tools need to be accountable for practices that may save them time and money while discriminating unfairly and possibly illegally. They should:

  • Demand transparency from AI vendors
  • Audit AI systems for bias
  • Avoid proxy filters like school names, leadership terms, or rigid timelines
  • Maintain human oversight of final hiring decisions
  • Consider flexible screening systems that accommodate non-traditional resumes

What can job seekers do if they suspect discrimination?

It can be challenging to spot AI filtering. Possible signs include immediate rejections without feedback or an interview, and an ongoing pattern of being passed over despite strong qualifications.

A few ways to protect your rights include:

  • Keep records: Save screenshots, timestamps, copies of the resumes you submitted, and any confirmation emails.
  • Request information: Ask the employer about their use of automated hiring tools.
  • Get legal help: Connect with an employment lawyer to learn more about your options and to investigate whether your experience constitutes a civil rights violation.

Reach out to Buckley Bala Wilson Mew LLP Today

At Buckley Bala Wilson Mew LLP, we hold employers accountable when they violate their employees’ rights. Today, our work also involves holding companies liable for technology that discriminates and violates the law. With the rise of AI hiring tools, discrimination might be more likely to come at the hands of a bot.

We’re taking a leadership role in confronting AI-related discrimination, bias, and unfairness in the workplace. If you were passed over because of a gap in your resume, a caregiving role, or a non-traditional work history, you might not be the problem — the system might be. Reach out to us for a confidential consultation to learn whether you have a claim.