Interactively Providing Explanations for Transformer Language Models – #1704

by Felix Friedrich, Patrick Schramowski, Christopher Tauchmann and Kristian Kersting

Full article: Poster Paper 1704

Moral Reflection with AI: Necessary or Redundant? – #2486

by Aishwarya Suresh Iyer

Moral support with AI has been gaining traction. Its proponents claim that AI can help resolve some of the more problematic patterns in human moral behaviour, such as our inability to extend moral concern to global-level problems like climate change and the refugee crisis. They offer a variety of ways of doing so: by providing more information, by helping people work through the procedural aspects of moral decision-making, or by helping them work through their normative positions. I disagree with the solution being offered, because I do not see this as a problem that can be solved at the individual level: the problems such systems aim to fix are deep, systemic, institutional, socio-political problems, which are unlikely to be fixed by a moral support with AI system.

Full article: Poster Paper 2486

Annotating sound events through interactive design of interpretable features – #7726

Professionals across all domains of expertise expect to share in the benefits of the machine learning (ML) revolution, but realisation is often slowed down by a lack of familiarity with ML concepts and tools, as well as the low availability of annotated data for supervised methods. Inspired by the problem of assessing the impact of human-generated activity on marine ecosystems through passive acoustic monitoring [1], we are developing Seadash, an interactive tool for event detection and classification in multivariate time series.
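To make the idea of interpretable features for acoustic event detection concrete, here is a minimal sketch. It is not Seadash's actual pipeline; the feature choices, thresholds, and function names are illustrative assumptions only. It computes two features an expert can read and tune directly (per-frame RMS energy and spectral centroid) and flags candidate events with a simple threshold rule.

    # Hypothetical sketch, not Seadash's implementation: interpretable
    # per-frame features on an acoustic signal plus a threshold detector.
    import numpy as np

    def frame_features(signal, frame_len=1024, hop=512, sample_rate=16000):
        """Per-frame RMS energy and spectral centroid."""
        features = []
        for start in range(0, len(signal) - frame_len + 1, hop):
            frame = signal[start:start + frame_len]
            rms = np.sqrt(np.mean(frame ** 2))
            spectrum = np.abs(np.fft.rfft(frame))
            freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
            centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
            features.append((start, rms, centroid))
        return features

    def detect_events(features, rms_threshold=0.1):
        """Return frame offsets whose energy exceeds an expert-chosen threshold."""
        return [start for start, rms, _ in features if rms > rms_threshold]

    if __name__ == "__main__":
        # Synthetic example: background noise with a short 440 Hz tone burst.
        rng = np.random.default_rng(0)
        signal = 0.02 * rng.standard_normal(16000)
        signal[8000:9000] += 0.5 * np.sin(2 * np.pi * 440 * np.arange(1000) / 16000)
        print(detect_events(frame_features(signal)))

The point of such hand-readable features is that a domain expert can inspect and adjust them interactively, rather than relying on an opaque learned representation.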

Full article: Poster Paper 7726

Learning to Cooperate with Human Evaluative Feedback and Demonstrations – #19

by Mehul Verma and Erman Acar

Cooperation is a widespread phenomenon in nature that has also been a cornerstone in the development of human intelligence. Understanding cooperation, therefore, on matters such as how it emerges, develops, or fails is an important avenue of research, not only in a human context, but also for the advancement of next-generation artificial intelligence paradigms which are presumably human-compatible. With this motivation in mind, we study the emergence of cooperative behaviour between two independent deep reinforcement learning (RL) agents provided with human input in a novel game environment. In particular, we investigate whether evaluative human feedback (through interactive RL) and expert demonstration (through inverse RL) can help RL agents learn to cooperate better. We report two main findings. Firstly, we find that the amount of feedback given has a positive impact on the accumulated reward obtained through cooperation. That is, agents trained with a limited amount of feedback outperform agents trained without any feedback, and the performance increases even further as more feedback is provided. Secondly, we find that expert demonstration also helps agents' performance, although with more modest improvements compared to evaluative feedback. In conclusion, we present a novel game environment to better understand the emergence of cooperative behaviour and show that providing human feedback and demonstrations can accelerate this process.
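For readers unfamiliar with interactive RL, the following is a minimal sketch of the general idea of blending an evaluative human signal into the learning update, here in a plain tabular Q-learning agent. It is not the authors' implementation: the blending weight beta, the class name, and all hyperparameters are illustrative assumptions.

    # Minimal sketch (assumed names and parameters): Q-learning where a human
    # evaluative signal h (e.g. +1 or -1, 0 when absent) shapes the reward.
    import numpy as np

    class FeedbackQAgent:
        def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95, beta=0.5):
            self.q = np.zeros((n_states, n_actions))
            self.alpha, self.gamma, self.beta = alpha, gamma, beta

        def act(self, state, epsilon=0.1):
            # Epsilon-greedy action selection.
            if np.random.rand() < epsilon:
                return np.random.randint(self.q.shape[1])
            return int(np.argmax(self.q[state]))

        def update(self, state, action, env_reward, human_feedback, next_state):
            # Blend the environment reward with the human evaluative signal.
            shaped = env_reward + self.beta * human_feedback
            target = shaped + self.gamma * np.max(self.q[next_state])
            self.q[state, action] += self.alpha * (target - self.q[state, action])

    # Example step: a "+1" human signal on top of a zero environment reward.
    agent = FeedbackQAgent(n_states=4, n_actions=2)
    agent.update(state=0, action=1, env_reward=0.0, human_feedback=1.0, next_state=2)

In the paper's setting the agents are deep RL agents in a two-player game, but the same principle applies: human feedback acts as an additional reward signal that can speed up the emergence of cooperative behaviour.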

Full article: Paper 19