AI expert: job sites should prove they are not ‘exacerbating inequality’

  • Data science consultant Cathy O’Neil said hiring discrimination begins with job sites like LinkedIn, Monster.com, Facebook, and ZipRecruiter.
  • In an interview with Business Insider, O’Neil said that by focusing on demographic data, the algorithms used by job sites often prevent qualified candidates from ever seeing open positions.
  • According to O’Neil, algorithms discriminate by definition. Instead of trying to make them “color-blind,” companies must continuously check whether they are disproportionately filtering out candidates from marginalized groups.

Data science consultant Cathy O’Neil makes a living helping companies validate their algorithms. And when it comes to how algorithms and artificial intelligence can create bias in hiring, she said the biggest problem doesn’t even lie with the employers themselves.

A new law in Illinois aimed at helping job seekers understand how AI tools are used to assess them in video interviews recently re-opened the debate about the role of AI in recruiting. But O’Neil believes the law targets the bias too late in the process.

“The problem actually lies before the application is received. The problem is in the pipeline for connecting job seekers to jobs,” said O’Neil, founder and CEO of O’Neil Risk Consulting & Algorithmic Auditing.

That pipeline begins with sites like LinkedIn, Monster.com, Facebook, and ZipRecruiter, where algorithms can play an important role in determining which candidates see which job postings and filtering out those who are considered unqualified.

“[Algorithms are] supposed to discriminate – they’re trying to differentiate between someone who will be good at the job and someone who will not be good at the job,” said O’Neil, adding, “the question is whether it’s legal or illegal discrimination.”

O’Neil has written extensively on the role algorithms play in perpetuating inequality, both in her book Weapons of Math Destruction and on her blog mathbabe.org. In an interview with Business Insider, she talked about how bias shows up in the hiring process and what employers – as well as platforms like LinkedIn – should be doing to stamp it out.

AI hiring tools are far from perfect

Federal laws like the Civil Rights Act of 1964 and the Americans with Disabilities Act prohibit workplace discrimination based on race, religion, gender, national origin, disability status, genetic information, and other protected categories.

While algorithms can speed up the process of narrowing the pool of job candidates, they are often unable to find the most qualified people and instead disproportionately filter out large numbers of candidates in precisely these protected categories.

“I really don’t think most hiring algorithms are that powerful,” said O’Neil, arguing that in many cases they are no better than “random number generators” at identifying qualified candidates.

In 2018, Amazon discontinued a tool designed to automate the hiring process using artificial intelligence because it was biased against women. Researchers have also shown how AI tools that analyze video interviews are often biased against people with disabilities.

But it’s not just employers who have run into trouble with biased AI hiring tools; it’s also a problem for the companies helping them find candidates.

A 2019 study found that Facebook ads for jobs at taxi companies – even when they were targeted at a broad audience – were shown to an audience that was 75% black. Research by ProPublica and the New York Times found that Facebook allowed employers to explicitly exclude users from seeing ads based on age.

Bad data in, bad data out

There are several reasons why algorithms can discriminate against certain groups. One is the “bad data in, bad data out” problem.

Programmers “train” an algorithm by showing it a huge set of historical data. In the case of a job site, they feed it information about previous candidates and ask it to look for patterns among the people who ultimately got hired. The algorithm then flags new candidates with the same qualities. This can cause problems if the data set is already skewed.

“If they’re trained on historical data, which they all are, they’ll be just as racist and sexist and classist as human society,” said O’Neil.

This is exactly what happened at Amazon. Because men had mostly applied for (and obtained) jobs in the past, the tool concluded that men were preferable and, as a result, penalized women’s résumés.
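What happened at Amazon can be sketched in a few lines. The toy model below is purely illustrative – synthetic data, hypothetical features, and a simple logistic-regression screener, not any real platform’s system – but it shows how a skewed hiring history becomes a skewed score:

```python
# "Bad data in, bad data out" on synthetic data: a screener trained on a
# biased hiring history learns the bias as if it were a qualification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

experience = rng.uniform(0, 10, n)  # legitimate signal
group = rng.integers(0, 2, n)       # demographic flag, irrelevant to the job

# Skewed history: experienced group-0 candidates were always hired, while
# equally experienced group-1 candidates were hired only 30% of the time.
hired = ((experience > 5) & ((group == 0) | (rng.random(n) < 0.3))).astype(int)

model = LogisticRegression().fit(np.column_stack([experience, group]), hired)

# Two candidates with identical experience; only the demographic flag differs.
print(model.predict_proba([[7.0, 0]])[0, 1])  # high hire score
print(model.predict_proba([[7.0, 1]])[0, 1])  # markedly lower score
```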

Big data means biased noise

A second problem gets at why O’Neil sees job sites as particularly problematic: they take into account information that may have nothing to do with a candidate’s ability to do a job, rather than focusing only on relevant details.

Sites like Facebook, LinkedIn, ZipRecruiter, and Monster.com use a variety of demographic information to train their algorithms. These algorithms then help determine which job advertisements are displayed for which candidates and which candidates appear in the search results of the recruiters.

Businesses collect as much data as possible because they believe it will give them “a bigger picture of the person,” said O’Neil, “but what it really does is take in all kinds of distracting and biased information.”

Even if this information does not explicitly concern a candidate’s race or gender, it can still lead to racist or sexist outcomes.
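Her point about proxies is easy to demonstrate as well. In the sketch below – again synthetic and hypothetical, with a “zip code flag” standing in for any feature that correlates with a protected attribute – the demographic column is withheld from the model entirely, and the gap survives anyway:

```python
# A "color-blind" model on synthetic data: the protected attribute is
# never shown to it, but a correlated proxy (zip code) leaks it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)
# Residential segregation: zip code tracks group membership 90% of the time.
zip_flag = np.where(rng.random(n) < 0.9, group, 1 - group)
experience = rng.uniform(0, 10, n)
hired = ((experience > 5) & ((group == 0) | (rng.random(n) < 0.3))).astype(int)

# Train only on experience and zip code; the group column is dropped.
model = LogisticRegression().fit(np.column_stack([experience, zip_flag]), hired)

print(model.predict_proba([[7.0, 0]])[0, 1])  # candidate from a group-0 zip
print(model.predict_proba([[7.0, 1]])[0, 1])  # candidate from a group-1 zip: lower
```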

How businesses – and job sites – can reduce bias

Some job sites have tried to combat this problem by not collecting or considering information that could bias their algorithms.

ZipRecruiter said in a statement to Business Insider that its algorithms do not “take into account any explicit markers (e.g. age, race, gender, etc.) or implicit markers (e.g. last name, specific residential address, etc.) of status within a protected class.” ZipRecruiter also prevents its models from taking gendered language in job titles or postings into account.

In a similar statement, LinkedIn said, “We proactively identify potentially discriminatory language and review/block the publication of these jobs.” In addition, advertisers who place job ads must “certify that they are not using the ad to discriminate on the basis of gender or other protected characteristics.”

O’Neil said these steps do not necessarily address the problem, however.

“Saying ‘we don’t collect this information, so we’re color-blind’ doesn’t convince me,” she said. “There is no way to get rid of proxies – everything is a proxy for race, because race affects everything in our country.”

Rather than trying to make AI hiring tools “color-blind” by blocking explicitly or implicitly biased data points, O’Neil said, companies need to be more deliberate about the information the tools consider when filtering applicants.

“We should define what it means to be qualified and then ignore other things,” she said.

One example she cited is the use of “blind auditions” by major orchestras, which reduced gender bias by having musicians audition behind a curtain. By deciding that “being qualified” really means “sounding good,” the orchestras were able to structure the hiring process to highlight candidates’ qualifications while remaining blind to other factors such as appearance, last name, or hometown.

“This is something that none of these AI hiring algorithms do,” said O’Neil.
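A rough data-side analog of the blind audition would be to decide up front which columns measure qualification and let models see nothing else. The sketch below is illustrative only; the column names are hypothetical:

```python
# Blind-audition idea applied to candidate data (hypothetical columns):
# fix the definition of "qualified" first, then drop everything else.
import pandas as pd

QUALIFICATION_COLUMNS = [
    "years_experience",
    "skills_test_score",
    "relevant_certifications",
]

def blind_features(candidates: pd.DataFrame) -> pd.DataFrame:
    """Drop names, addresses, photos, and every other potential proxy,
    keeping only the columns that define 'qualified' for this job."""
    return candidates[QUALIFICATION_COLUMNS]
```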

Her ultimate concern, however, isn’t how these tools were developed, although that still matters. O’Neil’s main point is that companies should be more transparent about how they operate and what the end results are. That means continuously testing their algorithms to see which candidates end up seeing which jobs, and then correcting any unwanted bias.
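What would such continuous testing look like in practice? One possible sketch – the log format is invented, and the EEOC’s four-fifths rule serves here as one concrete pass/fail metric, not a test O’Neil specifically prescribes – is an audit of which groups actually get shown a posting:

```python
# Audit ad-delivery logs for disparate impact using the four-fifths rule:
# flag any group shown the job at under 80% of the best-served group's rate.
from collections import Counter

def selection_rates(shown_log):
    """shown_log: iterable of (group, was_shown) pairs, was_shown in {0, 1}."""
    shown, total = Counter(), Counter()
    for group, was_shown in shown_log:
        total[group] += 1
        shown[group] += was_shown
    return {g: shown[g] / total[g] for g in total}

def four_fifths_check(shown_log):
    rates = selection_rates(shown_log)
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Example: group B sees the ad half as often as group A and fails the check.
log = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60
print(four_fifths_check(log))  # {'A': True, 'B': False}
```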

LinkedIn should “be forced to prove that what they do doesn’t exacerbate inequality,” said O’Neil.

LinkedIn has taken small steps in that direction, telling Business Insider, “We make sure our recruiter search results are gender-balanced and provide gender-based reporting to help employers understand the dynamics in their job and sourcing funnels.” The company also prohibits advertisers from targeting job ads based on age.

However, gender and age are just two of the many dimensions along which people face discrimination. And while LinkedIn and ZipRecruiter both said they do not tolerate discrimination against any protected class on their platforms, neither provided information about how they test the outcomes of their algorithms to make sure that is actually the case.

Facebook and Monster.com didn’t immediately respond to questions about bias and the use of algorithms on their platforms.
