Did a Bot Derail Your Job Prospects? What You Can Do About Algorithm Bias

So, you applied for that job, and you know you’re fully qualified, but you never got a call back. Perhaps the employer received too many applications or had already found someone to fill the role. Or maybe a computer made the decision, and possibly unfairly. As more employers turn to artificial intelligence to screen applications, these tools can sometimes reflect, or even exaggerate, existing human biases.

This isn’t just a hypothetical anymore, either. As far back as 2018, reports revealed that Amazon had built an AI resume-screening tool that penalized resumes containing the word “women’s.” That type of screening is unlawful discrimination, and employers who use these tools without care may find themselves in violation of laws like Title VII, the ADEA, and the ADA. But if you’re on the losing end of unfair and illegal AI employment practices, what are your rights? As AI becomes a standard step in job applications, it will be important to know.

In one high-profile case, EEOC v. iTutorGroup, job applicants were rejected automatically because of their age.

What is algorithm bias, and how does it affect job seekers?

Algorithm bias happens when an algorithm produces results that disadvantage people based on protected characteristics like age, race, gender, or disability. Importantly, no one necessarily instilled that bias in the technology on purpose. It’s often an unfortunate reflection of existing human bias, combined with the technology’s failure to recognize when it’s discriminating.

Even prior to AI, people often submitted applications and never heard back. It’s not uncommon to apply without ever knowing if your resume even made it to a human’s hands. Add AI to the mix, and things get even more confusing.

Here are some ways an algorithm might unfairly discriminate.

Age discrimination in tech

Tech is often thought of as a young person’s realm. Jokes abound about having to help parents or grandparents with everything from a television remote to an iPad. But many older workers are completely comfortable with technology and use it well. Still, if a bias exists in the real world, it may be even worse with AI. While a human reviewer might see that a person over 60 who has worked with computers for 30 years would have no problem handling a tech-related role, an algorithm might dismiss them out of hand.

In EEOC v. iTutorGroup, the company’s software automatically filtered out female applicants aged 55 or older and male applicants aged 60 or older.

One-way video interviews may be unfair to minorities

When considering bias in the hiring process, most people likely think of resume review. However, AI might be reviewing your video submissions as well. One-way video interviews are already unpopular with applicants, who find them awkward and dehumanizing. Even worse, there is evidence that some systems might discriminate based on race. For example, in one study of facial-analysis software, images of Black NBA players were interpreted as angrier or more contemptuous than those of white players. While not directly related to hiring, this study has concerning implications for AI video interview screening.

Failing to account for resume gaps: The “Mom Penalty”

Resume gaps aren’t necessarily a red flag. Parents often step back from their careers to care for their children. Adult children may take time off to care for an elderly family member. Many human reviewers understand that these gaps don’t indicate anything negative about an applicant and are an entirely normal part of life. AI, on the other hand, may screen these applicants out automatically, as the sketch below illustrates. This form of discrimination would disproportionately impact female applicants, who are statistically more likely to take time off to care for newborn babies and elderly parents.
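To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (the dates and the twelve-month cutoff are invented for illustration; this is not any real vendor’s system) of the kind of crude rule that produces this penalty: rejecting any resume with an employment gap longer than a year, no matter the reason.

    from datetime import date

    def has_long_gap(jobs, max_gap_months=12):
        """jobs: list of (start_date, end_date) tuples, sorted by start date."""
        for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
            gap = (next_start.year - prev_end.year) * 12 + (next_start.month - prev_end.month)
            if gap > max_gap_months:
                return True
        return False

    # A parent who took 18 months of caregiving leave is filtered out
    # before a human ever sees the resume.
    resume = [(date(2015, 1, 1), date(2020, 6, 1)),
              (date(2022, 1, 1), date(2024, 1, 1))]
    print(has_long_gap(resume))  # True -> automatically rejected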

There are plenty of other ways that AI could discriminate against applicants. In some cases, the discrimination might not even be apparent to anyone on the surface, allowing it to subtly tip the scales against certain applicants.

Why do these biases exist?

AI biases are often just an unflattering mirror. The bots take in data from past human decisions, so if the people were biased, the AI will be too. In fact, it can be even worse than humans. For instance, a human might know that most engineers are male, but if an excellent female candidate with the credentials and skills they were looking for crossed their path, they might not think twice about the fact that the candidate is a woman. AI, on the other hand, might dismiss her without ever getting to her degrees and work experience, solely based on her name or her participation in a women’s STEM organization.

Algorithms can also create feedback loops that reinforce their own assumptions: once they exclude someone, they will continue to exclude similar people, without recognizing that the category driving the decision is irrelevant to the person’s ability to do the job. The pattern need not even involve a protected characteristic. An employer might coincidentally reject three applicants named Sam, and the algorithm might conclude that it should reject everyone named Sam. In many cases, though, the discrimination does involve a protected category, precisely because past hiring practices carried those biases.
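Here is a simplified, hypothetical sketch of that mirror effect (the training data, features, and model below are invented for illustration; real screening systems are far more complex). A model trained on biased past decisions learns to reject based on a proxy feature, such as membership in a women’s organization, rather than on qualifications.

    from sklearn.linear_model import LogisticRegression

    # Past decisions: features are [years_experience, womens_org_member].
    # Experience is identical across groups; only the proxy feature
    # separates who was historically hired from who was rejected.
    X_past = [[10, 0], [8, 0], [12, 0], [9, 0],   # hired
              [10, 1], [8, 1], [12, 1], [9, 1]]   # rejected
    y_past = [1, 1, 1, 1, 0, 0, 0, 0]             # 1 = hired, 0 = rejected

    model = LogisticRegression().fit(X_past, y_past)

    # A highly experienced candidate who belongs to a women's STEM group:
    print(model.predict([[15, 1]]))  # [0] -> rejected on the proxy alone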

Some employers implement tools without ever checking for fairness, leaving the algorithm to make illogical and potentially illegal decisions.

What are the legal protections for workers and applicants?

Under Title VII, job applicants are protected from discrimination based on race, color, religion, sex, and national origin. Those protections apply even when the discrimination is coming from an algorithm. Other laws provide protections based on age (the ADEA) and disability status (the ADA).

Even a tool that is “neutral” on its face may violate the law if it has a discriminatory impact. To prove that you are a victim of illegal bias, you don’t need to show intent, only that the hiring practice, human or algorithmic, produces outcomes that disproportionately harm protected groups. Lawsuits involving unfair AI hiring practices are already happening.
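As a rough illustration of what “disproportionate” can mean, regulators have long used the EEOC’s four-fifths guideline: if one group’s selection rate is less than 80% of the most-favored group’s rate, that can signal adverse impact. The numbers below are invented; this sketch only shows the arithmetic.

    # Invented numbers: 200 applicants in each age group, and how many
    # made it past an automated screen.
    applicants = {"under_40": 200, "over_40": 200}
    advanced = {"under_40": 100, "over_40": 30}

    rates = {g: advanced[g] / applicants[g] for g in applicants}
    impact_ratio = min(rates.values()) / max(rates.values())

    print(rates)                   # under_40: 0.5, over_40: 0.15
    print(round(impact_ratio, 2))  # 0.3 -- well below the 0.8 threshold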

How to spot signs of algorithmic discrimination

While the law provides certain rights, proving discrimination can be more challenging. You might suspect that you were screened by a bot, but that doesn’t mean you’ll have evidence. If you notice any of these signs, a bot might have been involved in rejecting your application:

  • You’re instantly rejected with no interview or human contact.
  • You’ve applied multiple times but never get follow-ups, even when qualified.
  • You’re asked to complete a video interview and receive no feedback.
  • You notice job ads disappearing or changing once you enter personal information (which could suggest targeted screening—though it may be difficult to prove discrimination based on this alone).

If the bot is making those decisions based on your inclusion in a protected group, you could be the victim of illegal discrimination. Protected groups include:

  • Workers ages 40+
  • Job seekers with disabilities
  • Women
  • Racial or ethnic minority applicants flagged by name, accent, or location

Some steps you might take if you suspect that automated tools were used include:

  • Request more information from the company; specifically, ask whether it uses automated tools to screen applicants.
  • Save documents, screenshots, emails, and timestamps related to the application.
  • Track patterns by keeping a log to see if you’re consistently rejected without explanation.
  • Contact our lawyers at Buckley Bala Wilson Mew to share your suspicions.

How job seekers can push for change

The first step in protecting yourself and pushing for better hiring practices is to know your rights. You should have an equal opportunity in the job market, regardless of who—or what—is making hiring decisions.

You can also ask about transparency during interviews, share your experiences with legal advocates or watchdog groups, and push lawmakers to regulate AI hiring tools and demand transparency in how employers use them.

Employers should audit their tools, test for bias, and use human oversight. Workers shouldn’t have to fight invisible discrimination alone.

Contact Buckley Bala Wilson Mew LLP

Algorithms are going to be involved in more and more decisions. While they might work behind the scenes, they have real-world consequences. If you believe that an AI tool rejected you in a way that violates anti-discrimination laws, you need to speak to an employment lawyer right away. Contact Buckley Bala Wilson Mew LLP today to discuss your potential case.