Towards Hybrid Intelligence workflows: integrating interface design and scalable deployment – #6562

by Janet Rafner, Christian Bantle, Dominik Dellermann, Matthias Söllner, Michael A. Zaggl and Jacob Sherson

Full article: Poster Paper #6562

Using a Virtual Reality house-search task to measure Trust during Human-Agent Interaction – #2664

by Esther Kox, Jonathan Barnhoorn, Lucía Rábago Mayer, Arda Temel and Tessa Klunder

How can we observe how people respond to consequential errors by an artificial agent in a realistic yet highly controllable environment? We created a threat-detection house-search task in virtual reality in which participants form a Human-Agent Team (HAT) with an autonomous drone. By simulating risk, we amplify the feeling of reliance and the importance of trust in the agent. This paradigm allows for ecologically valid research that provides more insight into crucial human-agent team dynamics such as trust and situational awareness.

Full article: Demo Paper #2664

A “Mock App Store” Interface for Virtual Privacy Assistants – #2111

by Sarah E. Carter, Ilaria Tiddi, and Dayana Spagnuelo

Privacy assistants aim to provide meaningful privacy recommendations to users. Here, we describe a web-based testing environment for smartphone privacy assistants called the “Mock App Store” (MAS). The MAS was developed to test a particular privacy assistant, the value-centered privacy assistant (VcPA), which assists users in selecting applications based on their value profile. While the MAS was designed with the VcPA in mind, it could also be utilized to test other state-of-the-art privacy assistant technology.

Full article: Demo Paper #2111

Landmarks in Case-based Reasoning: From Theory to Data – #81

by Wijnand van Woerkom, Davide Grossi, Henry Prakken and Bart Verheij

Widespread application of uninterpretable machine learning systems for sensitive purposes has spurred research into elucidating the decision-making process of these systems. These efforts have their background in many different disciplines, one of which is the field of AI & law. In particular, recent works have observed that machine learning training data can be interpreted as legal cases. Under this interpretation the formalism developed to study case law, called the theory of precedential constraint, can be used to analyze the way in which machine learning systems draw on training data – or should draw on them – to make decisions. These works predominantly stay on the theoretical level, hence in the present work the formalism is evaluated on a real-world dataset. Through this analysis we identify a significant new concept which we call landmark cases, and use it to characterize the types of datasets that are more or less suitable to be described by the theory.
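To make the idea of reading training data as precedents concrete, here is a minimal sketch (not the paper's implementation; factor names and case values are hypothetical): under the theory of precedential constraint, a decided case forces its outcome on any new case that is at least as strong for that outcome, i.e. it contains all the factors that favoured the winning side and no additional factors favouring the losing side.

```python
# Hedged sketch of a-fortiori forcing under precedential constraint.
# Factors are partitioned into pro-plaintiff and pro-defendant sets;
# outcomes are "pi" (plaintiff) or "delta" (defendant).
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    pro_pi: frozenset          # factors favouring the plaintiff
    pro_delta: frozenset       # factors favouring the defendant
    outcome: str | None = None # "pi", "delta", or None for an undecided case

def forces(precedent: Case, new: Case) -> bool:
    """True if the precedent forces its own outcome on the new case."""
    if precedent.outcome == "pi":
        return (precedent.pro_pi <= new.pro_pi
                and new.pro_delta <= precedent.pro_delta)
    if precedent.outcome == "delta":
        return (precedent.pro_delta <= new.pro_delta
                and new.pro_pi <= precedent.pro_pi)
    return False

def constrained_outcomes(case_base: list[Case], new: Case) -> set[str]:
    """Outcomes forced on the new case by at least one precedent."""
    return {p.outcome for p in case_base if forces(p, new)}

# A labelled training example is treated as a decided case (hypothetical data):
case_base = [
    Case(frozenset({"f1", "f2"}), frozenset({"g1"}), outcome="pi"),
    Case(frozenset({"f1"}), frozenset({"g1", "g2"}), outcome="delta"),
]
new_case = Case(frozenset({"f1", "f2", "f3"}), frozenset())
print(constrained_outcomes(case_base, new_case))  # {'pi'}: at least as strong for the plaintiff
```

A dataset in which a few decided cases force the outcomes of many others would, in this reading, be well described by the theory; how far real datasets exhibit such structure is what the paper's notion of landmark cases is used to characterize.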

Full article: Paper #81

Can AI reduce motivated reasoning in news consumption? – #59

by Magdalena Wischnewski and Nicole Krämer

The notion of trust plays a central role in understanding the interaction between humans and AI. However, research from social and cognitive psychology in particular has shown that individuals’ perceptions of trust can be biased. In this empirical investigation, we focus on the single and combined effects of attitudes towards AI and motivated reasoning in shaping such biased trust perceptions in the context of news consumption. In doing so, we rely on insights from works on the machine heuristic and motivated reasoning. In a 2 (author) x 2 (congruency) between-subjects online experiment, we asked N = 477 participants to read a news article purportedly written either by an AI or a human author. We manipulated whether the article presented pro or contra arguments on a polarizing topic in order to elicit motivated reasoning. We also assessed participants’ attitudes towards AI in terms of competence and objectivity. Through multiple linear regressions, we found that (a) increased perceptions of AI as objective and ideologically unbiased increased trust perceptions, whereas (b), in cases where participants were swayed by their prior opinion to trust content more when they agreed with it, the AI author reduced such biased perceptions. Our results indicate that it is crucial to account for attitudes towards AI and motivated reasoning to accurately represent trust perceptions.
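For readers wanting to see what such an analysis looks like in practice, here is a minimal sketch (not the authors' code; the file name and column names are hypothetical) of fitting a multiple linear regression to a 2 (author) x 2 (congruency) between-subjects design with a measured attitude covariate:

```python
# Hedged sketch: trust as a function of the two manipulated factors,
# their interaction, and perceived AI objectivity, one row per participant.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trust_study.csv")  # hypothetical data file
# author:       "AI" or "human"                     (manipulated)
# congruent:    1 if the article matched the participant's prior opinion, else 0 (manipulated)
# ai_objective: perceived objectivity of AI          (measured covariate)
# trust:        trust in the article                 (dependent variable)

model = smf.ols("trust ~ C(author) * congruent + ai_objective", data=df).fit()
print(model.summary())
```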

Full article: Paper #59