Neural Prototype Trees for Interpretable Fine-grained Image Recognition – #17

Interpretable machine learning addresses the black-box nature of deep neural networks. Visual prototypes have been suggested for intrinsically interpretable image recognition, as an alternative to post-hoc explanations that only approximate a trained model. Aiming for better interpretability and fewer prototypes so as not to overwhelm a user, we propose the Neural Prototype Tree (ProtoTree), a deep learning method that includes prototypes in a hierarchical decision tree to faithfully visualize the entire model. In addition to global interpretability, a path in the tree explains a single prediction. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this learned prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it’s a hummingbird! We tune the accuracy-interpretability trade-off using ensembling and pruning. We apply pruning without sacrificing accuracy, resulting in a small tree with only 8 learned prototypes along a path to classify a bird from 200 species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the CUB-200-2011 and Stanford Cars data sets. Code is available at https://github.com/M-Nauta/ProtoTree. Full paper published at CVPR 2021.
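To make the routing idea concrete, here is a minimal sketch of how a single prototype node in such a tree might direct a sample: the presence score of the prototype in the image's feature map sends the sample to the right child with that weight and to the left child with the complementary weight. All names and the similarity function are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prototype_similarity(feature_map, prototype):
    """Presence score in [0, 1]: how strongly a learned prototype
    matches the best spatial patch of a conv feature map (H, W, D)."""
    dists = np.linalg.norm(feature_map - prototype, axis=-1)  # (H, W)
    return np.exp(-dists.min())  # near 1 when the prototype appears somewhere

def route(feature_map, node):
    """Soft routing through a binary prototype tree.

    Internal nodes hold a prototype and two children; leaves hold a
    class distribution. The prediction is the presence-weighted mix
    of the left and right subtree predictions.
    """
    if "prototype" not in node:  # leaf: return its class distribution
        return node["dist"]
    p = prototype_similarity(feature_map, node["prototype"])
    return (p * route(feature_map, node["right"])
            + (1 - p) * route(feature_map, node["left"]))
```

Because each leaf distribution sums to 1 and the routing weights are convex, the final prediction is itself a valid class distribution; a single prediction can then be explained by reading off the prototypes along the most-weighted path.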

Full article: Paper 17

Moral Reflection with AI – Necessary or Redundant? – #2486

by Aishwarya Suresh Iyer

Moral support with AI has been gaining traction. Its proponents claim that some of the more problematic behavioural patterns of humans can be resolved with the help of AI, such as the inability to extend moral concern to global-level problems like climate change and the refugee crisis. They offer a variety of ways of doing so: through the provision of more information, by helping people work through the procedural aspects of moral decision-making, or by helping them work through their normative positions. I disagree with the solution being offered because I do not see this as a problem that can be solved at the individual level: the problems they want to fix are deep, systemic, institutional, socio-political problems, which may not be fixed by a moral-support-with-AI system.

Full article: Poster Paper 2486

Annotating sound events through interactive design of interpretable features – #7726

Professionals of all domains of expertise expect to share in the benefits of the machine learning (ML) revolution, but realisation is often slowed down by a lack of familiarity with ML concepts and tools, as well as the low availability of annotated data for supervised methods. Inspired by the problem of assessing the impact of human-generated activity on marine ecosystems through passive acoustic monitoring [1], we are developing Seadash, an interactive tool for event detection and classification in multivariate time series.
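The abstract does not describe Seadash's internals, but the general idea of an interpretable, expert-tunable feature for sound event detection can be sketched as follows: compute a human-readable quantity (here, energy in a frequency band) over sliding windows and flag windows that exceed a threshold the annotator controls. The function names, band, and threshold are assumptions for illustration only.

```python
import numpy as np

def band_energy(window, rate, low, high):
    """Energy of a window inside a frequency band (Hz) -- an
    interpretable feature a domain expert can inspect and tune."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / rate)
    return spectrum[(freqs >= low) & (freqs < high)].sum()

def detect_events(signal, rate, low, high, threshold, win=256):
    """Flag non-overlapping windows whose band energy exceeds a
    user-chosen threshold; returns (start, end) sample ranges."""
    events = []
    for start in range(0, len(signal) - win + 1, win):
        energy = band_energy(signal[start:start + win], rate, low, high)
        if energy > threshold:
            events.append((start, start + win))
    return events
```

Because the feature and threshold are directly meaningful (energy in a band the expert chose), the annotator can iterate on them interactively instead of training an opaque detector, which matches the spirit of annotating events through interpretable feature design.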

Full article: Poster Paper 7726

Learning to Cooperate with Human Evaluative Feedback and Demonstrations – #19

by Mehul Verma and Erman Acar

Cooperation is a widespread phenomenon in nature that has also been a cornerstone in the development of human intelligence. Understanding cooperation, therefore, on matters such as how it emerges, develops, or fails is an important avenue of research, not only in a human context, but also for the advancement of next-generation artificial intelligence paradigms which are presumably human-compatible. With this motivation in mind, we study the emergence of cooperative behaviour between two independent deep reinforcement learning (RL) agents provided with human input in a novel game environment. In particular, we investigate whether evaluative human feedback (through interactive RL) and expert demonstration (through inverse RL) can help RL agents learn to cooperate better. We report two main findings. Firstly, we find that the amount of feedback given has a positive impact on the accumulated reward obtained through cooperation. That is, agents trained with a limited amount of feedback outperform agents trained without any feedback, and the performance increases even further as more feedback is provided. Secondly, we find that expert demonstration also helps agents’ performance, although with more modest improvements compared to evaluative feedback. In conclusion, we present a novel game environment to better understand the emergence of cooperative behaviour and show that providing human feedback and demonstrations can accelerate this process.
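One common way to inject evaluative human feedback into RL, which may be close in spirit to the setup described above, is reward shaping: blend the environment reward with a scalar approval signal before the value update. The sketch below shows this on a tabular Q-learning step; the blending weight `beta` and all function names are illustrative assumptions, not the paper's method.

```python
def shaped_reward(env_reward, human_feedback, beta=0.5):
    """Blend the environment reward with scalar human feedback
    (e.g. +1 approval / -1 disapproval); beta weights the human signal."""
    return env_reward + beta * human_feedback

def q_update(q, state, action, reward, next_state,
             actions=(0, 1), alpha=0.1, gamma=0.99):
    """One tabular Q-learning step using the (possibly shaped) reward."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q
```

Under this view, "more feedback" simply means more transitions carry a non-zero human term, steering the learned values toward cooperative actions faster than the environment reward alone would.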

Full article: Paper 19