A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction


Journal article


Siddharth Mehrotra, Chadha Degachi, Oleksandra Vereschak, Catholijn M. Jonker, Myrthe L. Tielman
ACM Journal on Responsible Computing, In Submission, 2024

Cite
APA
Mehrotra, S., Degachi, C., Vereschak, O., Jonker, C. M., & Tielman, M. L. (2024). A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction. ACM Journal on Responsible Computing, In Submission.


Chicago/Turabian
Mehrotra, Siddharth, Chadha Degachi, Oleksandra Vereschak, Catholijn M. Jonker, and Myrthe L. Tielman. “A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction.” ACM Journal on Responsible Computing, In Submission (2024).


MLA
Mehrotra, Siddharth, et al. “A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction.” ACM Journal on Responsible Computing, vol. In Submission, 2024.


BibTeX

@article{mehrotra2024a,
  title = {A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction},
  year = {2024},
  journal = {ACM Journal on Responsible Computing},
  volume = {In Submission},
  author = {Mehrotra, Siddharth and Degachi, Chadha and Vereschak, Oleksandra and Jonker, Catholijn M. and Tielman, Myrthe L.}
}

Abstract:
Appropriate trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, and uncertainty communication. However, a comprehensive understanding of the field is lacking, due both to the diversity of perspectives arising from the various backgrounds that influence it and to the absence of a single definition of appropriate trust. To investigate this topic, this paper presents a systematic review identifying current practices for building appropriate trust, different ways to measure it, the types of tasks used, and the challenges associated with it. We also propose a Belief, Intentions, and Actions (BIA) mapping to study commonalities and differences among the concepts related to appropriate trust by (a) describing the existing disagreements on defining appropriate trust, and (b) providing an overview of the concepts and definitions related to appropriate trust in AI from the existing literature. Finally, the challenges identified in studying appropriate trust are discussed, and our observations are summarized as current trends, potential gaps, and research opportunities for future work. Overall, the paper provides insights into the complex concept of appropriate trust in human-AI interaction and presents research opportunities to advance our understanding of this topic.
