Navigating trust in AI
14 March 2024
An interview with Dr Brennan Jacoby, founder of Philosophy at Work, an organisation helping businesses think their best.
“Last year, the conversation was ‘Gee whiz,’” Chris Padilla, IBM’s VP of government and regulatory affairs, said in an interview with The Washington Post. “Now, it’s ‘What are the risks? What do we have to do to make AI trustworthy?’”
But what does it mean for artificial intelligence to be trustworthy? Can technology ever be trustworthy? To help answer these questions, I spoke to the philosopher and trust expert, Dr Brennan Jacoby.
“The philosophical literature on trust provides a foundation for understanding this question. Trust is not merely about predictability and reliability; it is about social norms and betrayal. When we trust, we are not just vulnerable to being let down – we risk being betrayed,” argues Jacoby.
This distinction between being let down and being betrayed is critical.
“Reliance and trust both involve predictive expectations – that is, expectations about what something or someone will do. But trust also involves normative expectations – beliefs about what others ought to do. When our expectations are betrayed, our sense of trust is shattered, not merely because our prediction was wrong, but because our normative expectations were violated,” explains Jacoby.
“For instance, if the bench you were sitting on breaks, it would be odd to say ‘How could the bench do that to me? We had an understanding!’, as that implies the bench has failed to meet a social norm. The bench has not betrayed you. It was merely unreliable.”
This discussion leads to a crucial question: can technology, like AI, betray us, or is our interaction with it just a matter of reliance? Answering it turns on the notion of trustworthiness.
In the philosophy of trust, ‘trustworthiness’ usually applies to people, not machines. But this may become a grey area as AI continues to blur the person-machine distinction. Just look at the 2023 study in which an AI chatbot’s responses to patient questions were rated as more empathetic than doctors’ answers – a better bedside manner, in effect.
Moreover, trust is context-sensitive: the very behaviour one individual or group reads as a sign of good character may be what makes another group sceptical. “As a result, a key component of trustworthiness is responsiveness to individuals and context,” explains Jacoby.
“Tech can be responsive, but developers must teach it what is salient – and the teaching of the tech has, notoriously, been impacted by human biases.”
“If AI can learn social norms well enough to become a responsible member of the moral community, then it has the potential to be (un)trustworthy. Otherwise, it is conceptually confused to call AI ‘trustworthy’.”
So, for AI to be trustworthy, it may need to be given a context-specific moral education.
Jacoby continues, “Here the distinction between descriptive facts and normative values becomes significant for computer science. What moral education should we feed into AI? Whose morality should it reflect?”
The challenge is whether and how we can train AI to be moral. Human moral wisdom is often an amalgam of experience drawn from grey situations, and it is hard enough for people to make a moral choice when faced with ethically challenging situations. This is the thinking behind the Ethics Lab that Dr Jacoby’s collective, Philosophy at Work, delivers, helping groups of professionals develop practical skill in ethical reasoning.
“Both the situation and our response can be vague. We may rely on gut instincts. Understanding our ethical reasoning is challenging enough. How do we translate that into something that AI will understand and can apply? That said, if we can teach AI practical wisdom, it would help the technology to be closer to something we could call trustworthy,” says Jacoby.
The integration of AI into life and society, then, is not just a technical challenge but an ethical one: it is about developing AI that solves problems in ways that are responsible and aligned with our ever-evolving values. So let’s hope we hear more from experts such as Dr Brennan Jacoby.
You can find out more about Dr Jacoby, Philosophy at Work’s thinking skills workshops and Ethics Labs at www.philosophyatwork.co.uk.
James Boyd-Wallis is co-founder of the Appraise Network.