Can companies police the biases found in artificial intelligence?
AUDIE CORNISH, HOST:
Artificial intelligence has seeped into almost every corner of our lives, including how people are hired for work. AI is used to screen and evaluate applicants, but there's a problem with that. Research has shown that AI can produce biased results, especially against women and minorities. That's something that Kenneth Chenault, chairman and managing director at the venture capital firm General Catalyst, is trying to address with his Data and Trust Alliance. Chenault is the co-chair of the organization. He joins me now. Welcome to ALL THINGS CONSIDERED.
KENNETH CHENAULT: Thank you. Great to be here.
CORNISH: Now, you have announced that your organization has signed up major companies like CVS, Deloitte, Humana, Meta, Walmart to work together to detect and combat algorithmic bias. Help us understand how data is being used in the human resources process. If you're a top executive, what is HR handing you in the way of data that somehow is, like, helpful in finding talent?
CHENAULT: You look at different indicators. What are people's experiences? What schools did they go to? How long were they in a job? Where do they live? So there are literally hundreds of variables that go into determining if someone is selected for an interview. And given the sheer volume that major companies have to deal with in sorting through applications, you want to winnow down the pool to say, who are the prospects? The problem is if you have some criteria that are, in fact, biased. And what I would like to hope is that most companies do not create standards that contain bias. But given some of the algorithms that can be put in place, there are situations where there are unintended biases that unfairly disadvantage certain groups.
CORNISH: It's interesting. You're saying unintended, but when I think of some of the criteria, you could easily see how this happens, right? If a company decides, draw a circle around everybody who went to this kind of school, did this kind of X, Y, it's sort of like bad data in, bad data out.
CHENAULT: That's right. But what I'm saying is that is something upfront that companies can screen out. But then there are other variables that when they're connected, they create bias. And so what I think is very, very important here is you had 10-plus companies, large companies, coming together, not just issuing a principle but, in fact, agreeing to take concrete actions to ensure that they're preventing unintended bias.
CORNISH: So I understand this involves a questionnaire. Talk about how that works and if you get a sense that companies are willing to use it.
CHENAULT: Well, here's the point. Every company that is involved in this has agreed to follow every step, and this is something that we will measure with these companies.
CORNISH: At the end of the day, is this really work that can be outsourced to a machine?
CHENAULT: Yes, it can. I think there are a number of process steps that absolutely can be outsourced. But what is important is that we have the right controls and approaches and processes in place. The reality is that no matter what business companies are in, they're all becoming data enterprises. And so what we have to do is we have to use this data responsibly. That's the focus.
CORNISH: Kenneth Chenault is chairman and managing director of the venture capital firm General Catalyst. He's also a former chairman and CEO of American Express. Thank you so much for speaking with us.
CHENAULT: Thank you very much.
Transcript provided by NPR, Copyright NPR.