
Identifying Context-Specific Values via Hybrid Intelligence – #7426


by Enrico Liscio, Catholijn M. Jonker and Pradeep K. Murukannaiah

Poster Paper 7426

Landmarks in Case-based Reasoning: From Theory to Data – #81

by Wijnand van Woerkom, Davide Grossi, Henry Prakken and Bart Verheij

Widespread application of uninterpretable machine learning systems for sensitive purposes has spurred research into elucidating the decision-making process of these systems. These efforts have their background in many different disciplines, one of which is the field of AI & law. In particular, recent works have observed that machine learning training data can be interpreted as legal cases. Under this interpretation, the formalism developed to study case law, called the theory of precedential constraint, can be used to analyze the way in which machine learning systems draw on training data – or should draw on them – to make decisions. These works predominantly stay on the theoretical level; in the present work, we therefore evaluate the formalism on a real-world dataset. Through this analysis we identify a significant new concept, which we call landmark cases, and use it to characterize the types of datasets that are more or less suitable to be described by the theory.
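
The core of the theory of precedential constraint is an a-fortiori test: a precedent forces its outcome on any new fact situation that is at least as strong for that outcome. The sketch below is a rough Python illustration of that test, not the paper's exact formalization; the factor names and the example case are hypothetical:

```python
# A minimal sketch of the a-fortiori test at the heart of the theory of
# precedential constraint. A "case" is a set of factors favouring an outcome,
# a set opposing it, and the outcome itself. Factor names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    pro: frozenset      # factors favouring the outcome
    con: frozenset      # factors opposing the outcome
    outcome: str        # e.g. "plaintiff" or "defendant"

def forces(precedent: Case, new_pro: frozenset, new_con: frozenset) -> bool:
    """True if the precedent forces its outcome on the new fact situation:
    the new case has at least the precedent's pro factors and at most its
    con factors, so it is a-fortiori at least as strong for the outcome."""
    return precedent.pro <= new_pro and new_con <= precedent.con

# Training data read as case law; informally, a case that constrains many
# other data points this way would be a candidate "landmark" case.
p = Case(pro=frozenset({"f1", "f2"}), con=frozenset({"f3"}), outcome="plaintiff")
print(forces(p, new_pro=frozenset({"f1", "f2", "f4"}), new_con=frozenset()))  # True
```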

Full article: Paper 81

Can AI reduce motivated reasoning in news consumption? – #59

by Magdalena Wischnewski and Nicole Krämer

The notion of trust plays a central role in understanding the interaction between humans and AI. Research from social and cognitive psychology has shown, however, that individuals’ perceptions of trust can be biased. In this empirical investigation, we focus on the single and combined effects of attitudes towards AI and motivated reasoning in shaping such biased trust perceptions in the context of news consumption. In doing so, we rely on insights from work on the machine heuristic and motivated reasoning. In a 2 (author) x 2 (congruency) between-subjects online experiment, we asked N = 477 participants to read a news article purportedly written either by an AI or a human author. We manipulated whether the article presented pro or contra arguments on a polarizing topic, to elicit motivated reasoning. We also assessed participants’ attitudes towards AI in terms of competence and objectivity. Through multiple linear regressions, we found that (a) increased perceptions of AI as objective and ideologically unbiased increased trust perceptions, whereas (b) where participants were swayed by their prior opinion to trust content more when they agreed with it, the AI author reduced such biased perceptions. Our results indicate that accounting for attitudes towards AI and motivated reasoning is crucial to accurately represent trust perceptions.
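
For illustration, an analysis of this kind of 2 x 2 design could be set up as an ordinary least squares regression with an author-by-congruency interaction. The sketch below uses statsmodels on simulated data; all variable names, coefficients, and the data itself are hypothetical stand-ins, not the study's materials:

```python
# A hedged sketch of the kind of multiple linear regression the abstract
# reports (2 author x 2 congruency between-subjects design, N = 477).
# Simulated data for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 477
df = pd.DataFrame({
    "author": rng.choice(["AI", "human"], n),       # purported article author
    "congruent": rng.choice([0, 1], n),             # article agrees with prior opinion
    "ai_objectivity": rng.normal(0, 1, n),          # perceived AI objectivity (centred)
})
# Simulated trust outcome: a congruency bias that is weaker for the AI author.
df["trust"] = (0.3 * df["ai_objectivity"]
               + 0.4 * df["congruent"] * (df["author"] == "human")
               + rng.normal(0, 1, n))

# The interaction term tests whether the AI author attenuates motivated reasoning.
model = smf.ols("trust ~ author * congruent + ai_objectivity", data=df).fit()
print(model.summary())
```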

Full article: Paper 59

Open, multiple, adjunct. Decision support at the time of relational AI – #37

by Federico Cabitza and Chiara Natali

In this paper, we consider some key characteristics that AI should exhibit to enable hybrid agencies that include subject-matter experts and their AI-enabled decision aids. We hint at the design requirements of guaranteeing that AI tools are open, multiple, continuous, cautious, vague, analogical and, most importantly, adjunct with respect to decision-making practices. We argue that adjunction, especially, is an important condition to design for. Adjunction entails the design and evaluation of human-AI interaction protocols aimed at improving AI usability, while also guaranteeing user satisfaction and human and social sustainability. It does so by boosting people’s cognitive motivation to interact analytically with AI outputs, reducing overreliance on AI and improving performance.

Full article: Paper 37

POMDP-based adaptive interaction through physiological computing – #18

by Gaganpreet Singh, Raphaëlle N. Roy and Caroline P. C. Chanel

In this study, a formal framework aiming to drive the interaction between a human operator and a team of unmanned aerial vehicles (UAVs) is experimentally tested. The goal is to enhance human performance by controlling the interaction between agents based on online monitoring of the operator’s mental workload (MW) and performance. The proposed solution estimates MW via a classifier applied to cardiac features. The classifier output is introduced as a human MW state observation in a Partially Observable Markov Decision Process (POMDP), which models the human-system interaction dynamics and aims to control the interaction to optimize the human agent’s performance. Based on the current belief state about the operator’s MW and performance, along with the mission phase, the POMDP policy controls which task should (or should not) be suggested to the operator, assuming the UAVs are capable of supporting the human agent. The framework was evaluated in an experiment in which 13 participants performed two search-and-rescue missions (with and without adaptation) under varying workload levels. In accordance with the literature, when the adaptive approach was used, participants reported significantly lower MW, physical and temporal demand, frustration, and effort, and their flying score also improved significantly. These findings demonstrate how such POMDP-based adaptive interaction control can improve performance while reducing operator workload.
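
The pipeline hinges on maintaining a belief over the operator's hidden workload state from noisy classifier outputs. Below is a minimal sketch of such a POMDP belief update, not the authors' implementation; the transition and observation matrices are hypothetical placeholders:

```python
# A Bayesian belief update over a hidden mental-workload (MW) state, driven
# by a noisy cardiac-feature classifier, as in a POMDP filter. All numbers
# are illustrative placeholders.

import numpy as np

STATES = ["low_MW", "high_MW"]
T = np.array([[0.9, 0.1],          # P(next state | current state)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],          # P(classifier output low/high | true state)
              [0.3, 0.7]])

def belief_update(belief: np.ndarray, obs: int) -> np.ndarray:
    """Predict with the transition model, then correct with the observation."""
    predicted = T.T @ belief
    corrected = O[:, obs] * predicted
    return corrected / corrected.sum()

belief = np.array([0.5, 0.5])
for obs in [1, 1, 0]:              # classifier outputs: high, high, low workload
    belief = belief_update(belief, obs)
    print(dict(zip(STATES, belief.round(3))))
# A POMDP policy would map this belief (plus the mission phase) to the
# decision of suggesting a task to the operator or not.
```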

Full article: Paper 18

Neural Prototype Trees for Interpretable Fine-grained Image Recognition – #17
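
by Meike Nauta, Ron van Bree and Christin Seifert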

Interpretable machine learning addresses the black-box nature of deep neural networks. Visual prototypes have been suggested for intrinsically interpretable image recognition, as an alternative to post-hoc explanations that only approximate a trained model. Aiming for better interpretability and fewer prototypes so as not to overwhelm a user, we propose the Neural Prototype Tree (ProtoTree), a deep learning method that includes prototypes in a hierarchical decision tree to faithfully visualize the entire model. In addition to global interpretability, a path in the tree explains a single prediction. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this learned prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it’s a hummingbird! We tune the accuracy-interpretability trade-off using ensembling and pruning. We apply pruning without sacrificing accuracy, resulting in a small tree with only 8 learned prototypes along a path to classify a bird from 200 species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the CUB-200-2011 and Stanford Cars data sets. Code is available at https://github.com/M-Nauta/ProtoTree. Full paper published at CVPR 2021.
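
To make the routing idea concrete: each internal node scores whether its learned prototype is present in the image's feature map, and that score routes the input towards a leaf class distribution. The toy sketch below uses soft routing under assumed shapes and names; the authors' actual implementation is in the linked repository:

```python
# A simplified sketch of ProtoTree-style routing (see the real code at
# https://github.com/M-Nauta/ProtoTree). Each internal node holds a trainable
# prototypical part; its presence score routes the input left (absent) or
# right (present). Shapes and names here are illustrative.

import torch

class ProtoNode:
    def __init__(self, n_channels: int, left=None, right=None, logits=None):
        self.prototype = torch.randn(n_channels)   # trainable prototypical part
        self.left, self.right = left, right        # children; None at a leaf
        self.logits = logits                       # class logits at a leaf

    def presence(self, feature_map: torch.Tensor) -> torch.Tensor:
        # feature_map: (C, H, W). Score the best spatial match to the prototype,
        # mapped to (0, 1] so it reads as "is the prototype present?".
        dists = ((feature_map - self.prototype[:, None, None]) ** 2).sum(0)
        return torch.exp(-dists.min())

    def route(self, feature_map: torch.Tensor) -> torch.Tensor:
        if self.left is None:                      # leaf: return class distribution
            return torch.softmax(self.logits, -1)
        p = self.presence(feature_map)             # soft decision at this node
        return (1 - p) * self.left.route(feature_map) + p * self.right.route(feature_map)

# Tiny 3-node tree over 4 hypothetical classes, applied to a random feature map.
def leaf():
    return ProtoNode(8, logits=torch.randn(4))

root = ProtoNode(8, left=leaf(), right=leaf())
print(root.route(torch.randn(8, 7, 7)))           # convex mix of leaf distributions
```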

Full article: Paper 17