Trust and Explanation in Artificial Intelligence Systems: A Healthcare Application in Disease Detection and Preliminary Diagnosis
Retno Larasati
This event took place on 22nd November 2022 at 11:30am (11:30 GMT)
Knowledge Media Institute, Berrill Building, The Open University, Milton Keynes, United Kingdom, MK7 6AA
The way in which Artificial Intelligence (AI) systems reach conclusions is not always transparent to end-users, whether experts or non-experts. This raises serious concerns about the trust people would place in such systems if they were adopted in real-life contexts. These concerns become even greater when individuals’ well-being is at stake, as in the case of AI technologies applied to healthcare. There are also the problems of over-trust and under-trust to address: non-expert users have been shown to over-trust or under-trust AI systems even when they have very little knowledge of a system’s technical competence. Over-trust can have dangerous societal consequences when trust is placed in systems of low or unclear technical competence, while under-trust can hinder the adoption of AI systems in our everyday lives. This research studies the extent to which explanations and interactions can help non-expert users properly calibrate their trust in AI systems, specifically AI for disease detection and preliminary diagnosis: reducing trust when users tend to over-trust an unreliable system, and increasing trust when the system can be shown to work well. The research makes three fundamental contributions to knowledge. First, it informs how to construct explanations that non-expert users can make sense of (meaningful explanations). Second, it contextualises current AI explanation research in healthcare, informing how explanations should be designed for AI-assisted disease detection and preliminary diagnosis systems (Explanation Design Guidelines). Third, it provides preliminary insights into the importance of the interaction modality of explanations in influencing trust. These preliminary findings can inform and promote future research on XAI by shifting the focus from explanation content design to explanation delivery and interaction design.
This replay can only be watched on Facebook - https://fb.watch/rPK-C-1c7Q/
The webcast was open to 300 users