Modelling Trust (ongoing PhD research)


Trust can define the way people interact with technology (Li et al., 2008). It can be viewed as: (1) a set of specific beliefs dealing with benevolence, competence, integrity, and predictability (trusting beliefs); (2) the willingness of one party to depend on another in a risky situation (trusting intention); or (3) the combination of these elements (Siau et al., 2018). Trust matters in any interaction, but especially when the other party is a piece of technology that does not reason the way a human does. AI systems therefore need to understand how humans come to trust them, and what they can do to elicit appropriate trust.

Trust is essential to cooperation, which produces positive-sum outcomes that strengthen society and benefit its individual members. As intelligent robots and other AI agents take on ever larger roles in human society, they need to be trustworthy, and we need to understand how appropriate trust contributes to the success of human-agent interaction. This motivates me to study trust through a social as well as a formal lens. Taking this approach into AI agent research, this project examined the effect of (dis)similarity between a human's values and an agent's values on that human's trust in the agent.
[Figure: A conversational AI agent reasoning with human values, based on the Schwartz Theory of Basic Human Values.]
We designed five agents with distinct value profiles, so that for any given human some agents are more similar and some less similar to that human's own value profile. Our study shows that value similarity between an agent and a human is positively related to how much that human trusts the agent: an agent whose values resemble the human's is trusted more, which can be decisive in risk-taking scenarios. A sketch of how such profiles can be compared follows below.
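
To make the setup concrete, here is a minimal sketch of how a human's Schwartz value profile could be scored against several candidate agent profiles. The ten value dimensions are Schwartz's; everything else (the cosine metric, the random example profiles, and the `value_similarity` helper) is an illustrative assumption, not the instrument or analysis used in the study.

```python
# Illustrative sketch (not the study's actual method): comparing a human's
# Schwartz value profile to candidate agent profiles via cosine similarity.
import numpy as np

# The ten basic values from Schwartz's Theory of Basic Human Values.
SCHWARTZ_VALUES = [
    "self-direction", "stimulation", "hedonism", "achievement", "power",
    "security", "conformity", "tradition", "benevolence", "universalism",
]

def value_similarity(human: np.ndarray, agent: np.ndarray) -> float:
    """Cosine similarity between two value-importance profiles (assumed metric)."""
    return float(np.dot(human, agent) /
                 (np.linalg.norm(human) * np.linalg.norm(agent)))

# Hypothetical example: one human profile against five agent profiles whose
# value weights vary, mirroring the five-agent design described above.
rng = np.random.default_rng(seed=42)
human_profile = rng.random(len(SCHWARTZ_VALUES))
agent_profiles = rng.random((5, len(SCHWARTZ_VALUES)))

for i, agent in enumerate(agent_profiles):
    print(f"agent {i}: value similarity = {value_similarity(human_profile, agent):.2f}")
```

Under the study's finding, agents with higher similarity scores to a human's own profile would be expected to receive higher trust ratings from that human.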

Publications


Trust should correspond to Trustworthiness: a Formalization of Appropriate Mutual Trust in Human-Agent Teams


Carolina Centeio Jorge, Siddharth Mehrotra, Catholijn M. Jonker, Myrthe L. Tielman

Proceedings of the 22nd International Workshop on Trust in Agent Societies (held at AAMAS), London, UK, D. Wang, R. Falcone, J. Zhang (eds.), CEUR Workshop Proceedings, vol. 3022, CEUR-WS.org, 2021


Shaping a multidisciplinary understanding of team trust in human-AI teams: a theoretical framework


Anna-Sophie Ulfert, Eleni Georganta, Carolina Centeio Jorge, Siddharth Mehrotra, Myrthe Tielman

European Journal of Work and Organizational Psychology, April 2023, p. 14


More Similar Values, More Trust? The Effect of Value Similarity on Trust in Human-Agent Interaction


Siddharth Mehrotra, Catholijn M Jonker, Myrthe L Tielman

AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021, pp. 777-783


Modelling Trust in Human-AI Interaction 🎓


Siddharth Mehrotra

AAMAS: Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, Doctoral Consortium, 2021, pp. 1826-1828
