by Esther Kox, Jonathan Barnhoorn, Lucía Rábago Mayer, Arda Temel and Tessa Klunder
How can we observe how people respond to consequential errors by an artificial agent in a realistic yet highly controllable environment? We created a threat-detection house-search task in virtual reality in which participants form a Human-Agent Team (HAT) with an autonomous drone. By simulating risk, we amplify the feeling of reliance and the importance of trust in the agent. This paradigm allows for ecologically valid research that provides more insight into crucial human-agent team dynamics such as trust and situational awareness.
by Sarah E. Carter, Ilaria Tiddi, and Dayana Spagnuelo
Privacy assistants aim to provide meaningful privacy recommendations to users. Here, we describe a web-based testing environment for smartphone privacy assistants called the “Mock App Store” (MAS). The MAS was developed to test a particular privacy assistant, the value-centered privacy assistant (VcPA), which assists users in selecting applications based on their value profile. While the MAS was designed with the VcPA in mind, it could also be utilized to test other state-of-the-art privacy assistant technology.
by Wijnand van Woerkom, Davide Grossi, Henry Prakken and Bart Verheij
Widespread application of uninterpretable machine learning systems for sensitive purposes has spurred research into elucidating the decision-making process of these systems. These efforts have their background in many different disciplines, one of which is the field of AI & law. In particular, recent works have observed that machine learning training data can be interpreted as legal cases. Under this interpretation the formalism developed to study case law, called the theory of precedential constraint, can be used to analyze the way in which machine learning systems draw on training data – or should draw on them – to make decisions. These works predominantly stay on the theoretical level, hence in the present work the formalism is evaluated on a real world dataset. Through this analysis we identify a significant new concept which we call landmark cases, and use it to characterize the types of datasets that are more or less suitable to be described by the theory.
The notion of trust plays a central role in understanding the interaction between humans and AI. Research from social and cognitive psychology in particular, however, has shown that individuals’ perceptions of trust can be biased. In this empirical investigation, we focus on the single and combined effects of attitudes towards AI and motivated reasoning in shaping such biased trust perceptions in the context of news consumption. In doing so, we rely on insights from works on the machine heuristic and motivated reasoning. In a 2 (author) x 2 (congruency) between-subjects online experiment, we asked N = 477 participants to read a news article purportedly written either by AI or a human author. We manipulated whether the article presented pro or contra arguments on a polarizing topic, to elicit motivated reasoning. We also assessed participants’ attitudes towards AI in terms of competence and objectivity. Through multiple linear regressions, we found that (a) perceiving AI as more objective and ideologically unbiased increased trust perceptions, whereas (b) when participants’ prior opinions swayed them to trust content more if they agreed with it, the AI author reduced such biased perceptions. Our results indicate that it is crucial to account for attitudes towards AI and motivated reasoning to accurately represent trust perceptions.