ADP on Creating the Case for Ethical Artificial Intelligence Solutions

Originally published at ADP. ADP ranked No. 14 on The DiversityInc Top 50 Companies for Diversity list in 2022.


Artificial intelligence (AI) is a transformative force with the potential to reshape how organizations of all sizes operate. While there’s a pressing need for ethical artificial intelligence solutions to assist organizations with sourcing, interviewing, candidate selection and career movement decisions, such solutions come with risks that must be addressed. For example, an AI solution should help to ensure that every candidate receives equal consideration regardless of their race, color, national origin, religion or sex.

To mitigate the risks associated with AI solutions, organizations need to prioritize ethics as an essential component of their implementation plans. This means that AI must reflect an organization’s commitment to ethical business practices and help facilitate regulatory compliance.

Here are some guiding questions to help organizations prioritize ethics in deploying AI solutions.

How Can AI Solutions Support Ethical Business Practices?

Before you adopt an AI solution to help with talent selection and management, document how the solution will help your organization act ethically. If you’ve already deployed a solution, focus on how it has helped you do so to date. If your organization uses a third-party solution, ask the provider to explain how their technology supports ethical hiring decisions.

To get the answers you need, you’ll have to analyze your organization’s data as well. For example, according to your hiring statistics, would your organization be better positioned to meet its diversity and inclusion goals for new hires if it used AI?

“If we look at ethnicity just as one example — the mix of the available workforce in various regions of the country — it’s going to differ,” says Jack Berkowitz, ADP’s Chief Data Officer. “That doesn’t mean that you shouldn’t be able to strive for diversity that works for your company. Today’s technology can help organizations easily evaluate their metrics against those aggregated from others in their region. Think of the actions you can take if you know where your company stands and how it compares to nearby peers and competitors.”
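The comparison Berkowitz describes can be sketched in a few lines. The following is a minimal, illustrative example, not ADP's actual methodology: the group labels, shares and gap threshold are all made up, and a real evaluation would use the aggregated benchmark data a provider supplies.

```python
# Hypothetical sketch: compare a company's workforce mix against a
# regional aggregate. Both inputs map group label -> share of workforce.
# Labels and the 5-point threshold are illustrative only.

def compare_to_region(company_mix, regional_mix, threshold=0.05):
    """Return groups where the company trails the regional mix
    by more than `threshold` (absolute share)."""
    gaps = {}
    for group, regional_share in regional_mix.items():
        company_share = company_mix.get(group, 0.0)
        gap = regional_share - company_share
        if gap > threshold:
            gaps[group] = round(gap, 3)
    return gaps

# Made-up numbers for demonstration.
company = {"group_a": 0.70, "group_b": 0.20, "group_c": 0.10}
region = {"group_a": 0.55, "group_b": 0.25, "group_c": 0.20}

print(compare_to_region(company, region))  # flags group_c, which trails by 0.10
```

Knowing which groups trail the regional benchmark, and by how much, is the kind of standing you can then act on.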

What Mechanisms Exist to Detect Biased Decision-Making?

Before its initial deployment, and throughout its operation, every AI solution requires a degree of human oversight to ensure it upholds ethical hiring practices and functions as designed. Without such oversight, errors and bias can creep in.

For example, a solution that does not recognize degrees from foreign institutions might reject qualified candidates, regardless of their experience or command of the English language. Uncovering unintended bias in an AI solution requires a willingness to scrutinize its performance frequently. How often does a human review the solution's output for errors? A monthly review could catch a mistake quickly, perhaps even while a position remains unfilled, allowing your organization to re-engage with rejected candidates.
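One concrete check such a periodic review could include is the "four-fifths" adverse-impact ratio from U.S. EEOC guidance, which flags any group whose selection rate falls below 80% of the highest group's rate. The sketch below is illustrative, not a compliance tool: the counts are invented, and a real review would pull selection data from the AI solution's own logs.

```python
# Illustrative four-fifths (adverse-impact) check for a monthly review.
# `stats` maps group label -> (selected, applicants); counts are made up.

def selection_rate(selected, applicants):
    return selected / applicants if applicants else 0.0

def adverse_impact_flags(stats, ratio_floor=0.8):
    """Flag groups whose selection rate is below `ratio_floor`
    (80% by default) of the highest group's selection rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in stats.items()}
    top = max(rates.values())
    if not top:
        return {}
    return {g: round(r / top, 2) for g, r in rates.items()
            if r / top < ratio_floor}

monthly_stats = {
    "group_a": (30, 100),  # 30% selection rate
    "group_b": (12, 60),   # 20% selection rate -> ratio 0.67, flagged
}
print(adverse_impact_flags(monthly_stats))
```

A check like this does not prove bias on its own, but it tells the reviewer exactly where to look first.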

Furthermore, is there a channel for employees or candidates to raise concerns about how the solution operates? A web-based submission form is often the easiest and most practical option.

“We can use the data from AI to make better decisions, but we have to remain vigilant, recognizing what’s going into the system and using it in a way that makes sense for our organizations without bias,” says Meg Ferrero, ADP’s Vice President and Assistant General Counsel.

Is There a Cross-Divisional Team to Critique and Support the Solution?

Ensuring ethical artificial intelligence solutions function as designed and do not present a compliance risk requires a multidisciplinary approach. For example, incorporating privacy by design, which prioritizes privacy at every stage of an organization’s operations, may require input from data privacy professionals. Similarly, complying with employment law will require assistance from suitably qualified employment counsel.

Operational leaders should also provide feedback on the performance of employees that the solution identified as candidates. If your organization’s hiring activity increases, the social climate changes or the regulatory environment evolves, the team should increase its oversight of the solution.

Ethical artificial intelligence can transform your organization’s hiring process, enabling human resources professionals to focus more time on nurturing potential candidates and less time on the administrative elements of their role. However, critical errors, such as mistakes in how a solution scans resumes and captures keywords, could violate your organization’s commitment to ethical hiring practices.

Ethical use of AI, therefore, requires businesses to ensure that errors and implicit or explicit bias do not result in hiring decisions that a reasonable person would view as unethical and contrary to the organization’s values. “People will not use technology they don’t trust. We need data to power AI. That’s how we gain insight,” says Jason Albert, ADP’s Global Chief Privacy Officer. “They are not going to trust the technology if they don’t have some say in how their data is being used or understand how it is being protected.”
