Research by scholars at the Cornell University Department of Computing and Information Science found that how many hiring algorithms work to prevent bias and ensure fairness is unclear and subjective.

Cornell Study Shows Companies Prefer to Keep Hiring Algorithms a Black Box

To save time and attempt to eliminate human bias, many companies have entrusted at least part of their hiring processes to outside vendors that use machine-learning algorithms to weed out applicants. However, with little known about how these algorithms work, they, too, may be perpetuating bias. New research from a Cornell University Computing and Information Science team found that companies prefer obscurity over transparency when it comes to this emerging technology.

“Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices,” by Manish Raghavan, Solon Barocas, Jon Kleinberg and Karen Levy, found that tech companies have been able to define, and therefore address, algorithmic bias subjectively. For starters, terms like “bias” and “fairness,” as they relate to these algorithms, have not been universally defined. Therefore, tech companies can be vague about how they handle these issues.

As part of the study, the researchers looked into 19 companies that create algorithmic pre-employment screenings. These screenings typically include video interviews, questions and games. The researchers looked at company sites to find information on how these algorithms work, scouring websites for webinars, pages or other documents that lay out practices and logistics surrounding the algorithms.

They found that very few companies share any information on what they specifically do to prevent employment bias. Even those that do mention “bias” and “fairness” on their sites fail to explain exactly how they mitigate bias and achieve fairness.

Raghavan told the Cornell Chronicle that these definitions are about as vague as the term “free-range” as applied to animal products marketed as ethically sourced. Such products need only meet minimum standards to be labeled “free-range,” which may not line up with the commonly pictured image of animals grazing happily on acres of grass.

“In the same way, calling an algorithm ‘fair’ appeals to our intuitive understanding of the term while only accomplishing a much narrower result than we might hope for,” he told the Chronicle.

The study also says that under Title VII, employers bear legal responsibility in the outcomes of their hiring practices. Therefore, an employer can be held liable for the effects of an algorithm it uses, regardless of what the tech vendor claims the algorithm does.

These algorithms rest on the premise, backed by research, that certain traits correlate with desirable outcomes. Machine learning discovers relationships between traits and outcomes, but exactly how those connections are made is sometimes obscure.

The study asks: “When the expert is unable to explain why, for example, the cadence of a candidate’s voice is indicative of higher job performance, or why reaction time predicts employee retention, should a vendor rely on these features?”
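The correlation-driven approach the study describes can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's actual model: the two features (stand-ins for traits like reaction time or voice cadence) and the past-hire labels are synthetic, and the logistic model is the simplest possible example of learning a trait-outcome correlation.

```python
# Toy sketch of correlation-based screening: a model learns weights linking
# applicant traits to a past hiring outcome. The features and labels here are
# synthetic; no real vendor model is implied.
import numpy as np

rng = np.random.default_rng(0)

# 200 hypothetical applicants, 2 hypothetical traits (e.g. reaction time,
# voice cadence), standardized
X = rng.normal(size=(200, 2))

# Synthetic "this past hire worked out" labels, correlated with the traits
y = (X @ np.array([0.8, -0.5]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Fit a simple logistic model by gradient descent
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))          # predicted probability of "good hire"
    w -= 0.1 * X.T @ (p - y) / len(y)       # gradient step on logistic loss

# The model reports weights, but a weight only says *that* a trait correlated
# with the outcome in past data -- not *why*, which is the explanatory gap
# the study highlights.
print(w)
```

The point of the sketch is that the fitted weights are the entire "explanation" such a model offers: they quantify a historical correlation without giving any causal reason a trait should predict job performance.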

Vendors also often outsource their use of facial-recognition technology to third-party companies. Recent coverage of racial bias in this technology makes its use a fraught issue. Additionally, the study says, AI emotion-recognition technology could disadvantage people with disabilities.

The researchers told the Chronicle that they maintain algorithms have the potential to prevent human bias. The question, they said, is whether, and how, they can be perfected.

The Cornell researchers will present their findings in January at the Association for Computing Machinery Conference on Fairness, Accountability and Transparency in Barcelona.

