Trust can define the way people interact with technology (Li et al., 2008). It can be viewed as: (1) a set of specific beliefs dealing with benevolence, competence, integrity, and predictability (trusting beliefs); (2) the willingness of one party to depend on another in a risky situation (trusting intention); or (3) the combination of these elements (Siau et al., 2018). Trust is an important component of any interaction, but especially of interactions with technology that does not reason the way humans do. AI systems therefore need to understand how humans come to trust them, and what they can do to elicit appropriate trust.
Trust is essential to cooperation, which produces positive-sum outcomes that strengthen society and benefit its individual members. As intelligent robots (and other AI agents) take on increasing roles in human society, they should be trustworthy, and it is important to understand how appropriate trust can contribute to the success of human society. This motivates me to look at trust through a social as well as a formal lens. Taking this approach forward in AI agent research, this project examined the effect of (dis)similarity between a human's and an agent's values on that human's trust in the agent.
We designed five agents with varying value profiles, so that for any given human some agents are more similar and others less similar to that human's value profile. Our study shows that value similarity between an agent and a human is positively related to how much that human trusts the agent. An agent whose values resemble the human's will be trusted more, which can be decisive in any risk-taking scenario.
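The study does not fix a particular similarity metric in this summary, but the idea of comparing value profiles can be sketched as follows. This is a minimal illustration, assuming profiles are numeric importance ratings over a shared set of values and using cosine similarity as one plausible (hypothetical) measure:

```python
import math

def value_similarity(human_profile, agent_profile):
    """Cosine similarity between two value profiles.

    Each profile is a list of numeric importance ratings over the
    same ordered set of values (e.g. safety, fairness, efficiency).
    Returns a score in (0, 1] for non-negative ratings; higher
    means more similar profiles.
    """
    dot = sum(h * a for h, a in zip(human_profile, agent_profile))
    norm_h = math.sqrt(sum(h * h for h in human_profile))
    norm_a = math.sqrt(sum(a * a for a in agent_profile))
    return dot / (norm_h * norm_a)

# Hypothetical ratings over (safety, fairness, efficiency, autonomy)
human = [5, 4, 2, 3]
similar_agent = [5, 5, 2, 3]       # close to the human's priorities
dissimilar_agent = [1, 2, 5, 5]    # inverted priorities

# The similar agent scores higher, matching the study's finding that
# value similarity is positively related to trust.
print(value_similarity(human, similar_agent))
print(value_similarity(human, dissimilar_agent))
```

Any other distance over value rankings (e.g. rank correlation) would serve the same illustrative purpose; the key point is that agents can be ordered by their similarity to a given human's profile.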