Past Seminars 2024
Wednesday 6 March
Trust, Explanation and AI
Sam Baron (Philosophy, University of Melbourne)
The use of AI systems for decision-making is widespread. Many of these systems are opaque: no one understands how they work. This has led to a call for explainable AI. One of the reasons cited in favour of explainability is trust: explainability is thought to be necessary for trusting AI. I argue against this claim: for a range of different types of trust, either explanation is not necessary or, where it is, the type of trust that calls for explainability is not appropriate for AI.