Modelling Trust (ongoing PhD research)
Trust is essential to cooperation, which produces positive-sum outcomes that strengthen society and benefit its individual members. As intelligent robots and other AI agents take on increasing roles in human society, they need to be trustworthy, so it is important to understand how appropriate trust can contribute to the success of human society. This motivates me to study trust through a social as well as a formal lens. Taking this approach forward in AI agent research, this project examined the effect of (dis)similarity between a human's values and an agent's values on the human's trust in that agent.
Siddharth Mehrotra, Catholijn M Jonker, Myrthe L Tielman
AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021, pp. 777--783
AAMAS: Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, Doctoral Consortium, 2021, pp. 1826--1828