What if an Artificial Intelligence program actually becomes sentient?
A MARTINEZ, HOST:
Silicon Valley is abuzz about artificial intelligence - software programs that can draw or illustrate or chat almost like a person. One Google engineer actually thought a computer program had gained sentience. A lot of AI experts, though, say there is no ghost in the machine. But what if it were true? That would introduce many legal and ethical questions. Ifeoma Ajunwa thinks about these what-if questions for a living as a law professor at the University of North Carolina at Chapel Hill. We engaged in a little thought experiment together, and I started by asking how lawyers might determine if a computer program is sentient.
IFEOMA AJUNWA: We have things like, you know, recognition of art or ability to sort of have an imagination or to hold a conversation that is sort of impromptu - right? - not scripted. And we have AI now that is sort of pushing the limits of that. But I think the consensus among most AI researchers is that we're still not quite there yet. Even the best chatbots are still running on scripts. They're still sort of basing their responses on predetermined scripts.
MARTINEZ: Have legal scholars even started to look into what those criteria might be?
AJUNWA: The scholarship really has focused on robots, right? It's actually pushing the envelope further when you have something that's wholly existing in cyberspace and asking if that could be sentient and what that would mean legally - right? - in terms of having a personhood that would be recognized by law. So the question is, if we are trying to recognize AI as sentient beings, what type of personhood, really, would we accord them? Would it be the same as an actual person, or would it be something more sort of limited, like, you know, in the case of a corporation? It would also affect the issue of whether that AI is being held in involuntary servitude...
MARTINEZ: Wow. Yeah.
AJUNWA: ...because part of what we have in the U.S. - right? - is a prohibition against slavery, against any kind of involuntary servitude except - right? - as punishment. So prison systems are excluded. But once you do say the AI is sentient, then the next question is, does the AI want to do the kind of job that you're asking it to do?
MARTINEZ: This can of worms keeps getting bigger and bigger, professor. Yeah.
AJUNWA: Right. Exactly. Exactly. If it's determined - right? - that they are sentient - right? - and also of a lower sort of mental capacity, akin to a child, then somebody would need to have guardian rights. If we were to recognize AI as sentient, then it would sort of push the envelope or really open the threshold of, you know, what could be recognized as a sentient being.
MARTINEZ: How do you think corporations, professor, have been preparing for legal questions like these? Because technology moves fast, and the future can be upon us very quickly - maybe even quicker than any of us think. So do you think corporations would fight claims of personhood?
AJUNWA: I do think they would because, you know, if you think about it, it does benefit them not to have to grapple with these questions of personhood because it does raise questions of labor rights. It does raise questions of ethics. Because if you think about it, if an AI is sentient, then could basically constraining it to one computer - could that be deemed, really, isolation?
MARTINEZ: That's Ifeoma Ajunwa, a law professor at the University of North Carolina at Chapel Hill. Thank you very much.
AJUNWA: Thank you so much. It's been a pleasure.
(SOUNDBITE OF MUSIC)
Transcript provided by NPR, Copyright NPR.