The small agency tasked with enforcing workplace civil rights is readying itself for a big role in policing artificial intelligence. And a Republican-appointed commissioner has made it his mission to ensure the agency is ahead of the issue.
“The difference with AI is the scalability of it,” said Keith Sonderling, a member of the Equal Employment Opportunity Commission. “Before, you have one person potentially making a biased hiring decision. AI, because it can be done at scale, can impact hundreds of thousands or millions of applicants.”
A Society for Human Resource Management survey last year suggested that 79 percent of employers' AI use in the workplace already focuses on hiring and recruitment. And while Sonderling is in the minority of the five-person commission, he’s been one of its driving forces with respect to AI, taking an interest since 2021 and often speaking out publicly about employers’ obligations in using the new technology.
Like EEOC Chair Charlotte Burrows, he’s emphasized that existing civil rights laws still apply to AI. He wants the EEOC and the human resources sector to take a leading role in showing how the government can deal with the new technology in different settings — and he wants to figure it out quickly.
“You’re dealing with civil rights,” Sonderling, who’s also a former acting head of the Labor Department's Wage and Hour Division, said. “The stakes are going to be higher.”
In a conversation with POLITICO, the commissioner discussed how taking on AI has shaped his role on the EEOC, the commission’s new Silicon Valley focus, and whether you’ll know if a robot unlawfully rejects you from your next job.
This interview has been edited for length and clarity.
The EEOC is a small agency, and all of a sudden you're managing a pretty major part of this technological revolution and its implementation. To what extent has the introduction of new AI been disruptive to the EEOC?
It's having a tremendous impact. A critical function of my job as a commissioner is to make all of the parties aware. What I’ve been doing is saying, “Whatever use of AI you're using, here are the laws that are going to apply. Here are the standards that the EEOC is going to hold you to if we have an investigation.”
And you know, for a lot of people who are unfamiliar with the EEOC, with employment law, raising that awareness can have a significant impact on compliance. Just because the enforcement hasn't started yet doesn't mean the agency doesn't have a role.
The difference with AI is the scalability of it. Before, you have one person potentially making a biased hiring decision.
AI, because it can be done at scale, can impact hundreds of thousands or millions of applicants.
Who are you talking with most about AI? What are those conversations like?
Since I started looking at this in early 2021, I’ve kept an open door: anyone can reach out to us to discuss it, because the ecosystem now with AI is much different from what the EEOC is used to.
Before AI, the EEOC was very familiar with four groups, the ones we have jurisdiction over: employers, employees, unions and staffing agencies. That's been our world since the 1960s.
But now with [AI] technology coming in, we have all these different groups: venture capitalists and investors who want to invest in technology to change the workplace, highly sophisticated computer programmers and entrepreneurs who want to build these products. And then you have companies who are looking to deploy these [products] and employees who are going to be subject to this technology.
At the end of the day, nobody wants to invest in a product that's going to violate civil rights. Nobody wants to build a product that violates civil rights. Nobody's going to want to buy and use a product that violates civil rights, and no one's gonna want to be subjected to a product that's going to violate their civil rights.
It’s just a much different scenario now for agencies like ours, which didn't really have that technological, innovative component before this technology came into use.
The second part is on the Hill. A lot of legislators are not familiar with how this technology works. I think it's pretty important that individual agencies like the EEOC are constantly working with and providing that assistance to the Hill.
Does the EEOC have the resources to deal with the emergence of AI? Especially given, as you said, the possibility of discrimination being scaled up?
I always do qualify — it's not going to just automatically discriminate by itself. It's on the design of the systems and the use of the systems.
Right now, we know how to investigate employment decisions. We know how to investigate bias in employment. And it doesn't matter if it's coming from an AI tool or if it's coming from a human.
Whether we can ever have the skills and the resources to actually investigate the technology and the algorithms themselves — [that] would be a much broader discussion for Congress, for all agencies. Congress [would be the one] to give us more authority, or more funding to hire more investigators or tech-specific experts — that is something all agencies would welcome. Or if Congress is going to create a new agency to work side by side with existing agencies, that's really its prerogative: which direction to go to equip these law enforcement agencies to deal with the changing technology.
But right now, I feel very confident that if we got any kind of discrimination, whether it’s AI or by human, we can get to the bottom of it. We can use the long-standing laws.
OK, speaking as an employee — because I know one of the places we’re seeing AI the most is in hiring decisions — is there any way for me to know right now if I didn’t get a job because of AI-driven hiring discrimination?
Without consent requirements, without employers saying, “You’re going to be subject to this tool, and here's what the tool is going to be doing during the interview,” you have no idea, right? I mean, you just have no idea what's being run in an interview. Especially now with interviews going online, you're on Zoom. You have no idea what's going on in the background, if your face is being analyzed, if your voice is being analyzed.
But take a step back: this is how it's been for a long time. You don't know who's making an employment decision, generally. You don't know what factors are in play when a human makes an employment decision, or what's actually in their brain.
We've been dealing with the black box of human decisionmaking since we've been around, since the 1960s. You don't really know what factors are going into lawful or unlawful employment decisions or when there is bias. Those are hard to discern to begin with.
It’s the same thing with AI now. That’s why you're seeing some of these proposals saying you need consent, you need to have the employees understand what their rights are, if they're being subjected to an algorithmic interview.
Should employers be disclosing if they’re using these tools?
That's something for them to decide.
You can make an analogy: Should employers be required to have pay transparency? The federal government does not require pay transparency in job advertising, but you've seen a lot of states push for pay transparency laws. And what you've seen is a lot of employers voluntarily disclosing pay in states where they don't have to. It becomes more of a policy decision for multi-state, multinational employers that are going to have to start dealing with this patchwork of AI regulatory laws.
With the pay transparency analogy, you’re starting to see a lot of companies across states saying, “We’re going to do it everywhere.” And you may see that down the road with these AI tools. That's more of a business decision, a state and local policy decision, than it is a question for the EEOC.
Right now, AI vendors aren’t accountable for hiring decisions made by their products that might violate the law. It’s all on employers. Do you see that changing?
It's another complicated question. Of course, there's no definitive answer, because it's never been tested in significant litigation in the courts.
From the EEOC's perspective, from a law enforcement perspective, we're going to hold the employer liable if somebody is terminated because of bias, whether or not it was AI that terminated them for bias. From our perspective, liability is going to be the same either way.
But that does not in any way minimize the potential debate about vendors' liability with some of these state or foreign law proposals, or private litigation. We just haven’t seen that yet.
Should all federal agencies be doing more on AI?
Providing more guidance and doing more to help employers who are willing to comply is really all we can do. Every agency needs to be doing that, no matter what the context is. [The Department of Housing and Urban Development], with AI being used in housing, should put out information for vendors and housing developments using this, and it should also put out information for those who are going to be applying for housing. Same in finance and credit, and at OSHA and Wage and Hour for how it's going to affect compensation — all existing agencies can be doing more where the technology is already being used.
Regardless of the legislation of this technology moving forward on the Hill, there's still use cases right now. And there's still long-standing laws in the various agencies on how it is going to apply. A lot of agencies are doing that, like the EEOC, like the [Consumer Financial Protection Bureau], the [Federal Trade Commission].
Is there a political divide on that?
It's bipartisan. Ensuring that violations of the law don't happen is a good thing.
Less enforcement on this is a good thing, because it means employees aren’t having their rights violated and employers aren't violating these laws. Everyone can agree on that. Where the political debate lies is: should we lead with enforcement and set our guidance through the court system?
I’ve always said we should lead with compliance first. Nobody wants people to be harmed.