
Thursday, 16 June (Day 2)

Registration

Paper Presentations
-
Session #4

Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making (#3)
-
Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer and Abraham Bernstein

Neural Prototype Trees for Interpretable Fine-grained Image Recognition (#17)
-
Meike Nauta, Ron van Bree and Christin Seifert

Privacy Risk of Global Explainers (#69)
-
Francesca Naretto, Anna Monreale and Fosca Giannotti

Abstracting Minds: Computational Theory of Mind for Human-Agent Collaboration (#70) **
-
Emre Erdogan, Frank Dignum, Rineke Verbrugge and Pinar Yolum

Discovering the Rationale of Decisions (#12)
-
Cor Steging, Silja Renooij and Bart Verheij

Coffee Break

Paper Presentations
-
Session #5

Estimating Rapport in Conversations: An Interpretable and Dyadic Multi-Modal Approach (#48) *
-
Gustav Grimberg, Thomas Janssoone, Chloé Clavel and Justine Cassell

Challenges of the Adoption of AI in High-Risk, High-Consequence, Time-Compressed Decision-Making Environments (#96) *
-
Bart van Leeuwen, Richard Gasaway and Gerke Spaling

How Can This Be Split Into Parts? Training Intelligent Tutors on User Simulators Using Reinforcement Learning (#46)
-
Paul Bricman and Matthia Sabatelli

An experiment in measuring understanding (#28) *
-
Luc Steels, Lara Verheyen and Remi van Trijp

Lunch

Keynote by Wendy Mackay
-
Creating Human-Computer Partnerships
Despite incredible advances in hardware, much of today’s software remains stuck in assumptions that date back to the 1970s. As software becomes ever more ‘intelligent’, users often find themselves in a losing battle, unable to explain what they really want. Their role can easily shift from generating new content to correcting or putting up with the system’s errors. This is partly due to assumptions from AI that treat human users primarily as a source of data for their algorithms (the so-called “human-in-the-loop”), while traditional Human-Computer Interaction practitioners focus on creating the “user experience” with simple icon and menu interfaces, without considering the details of the user’s interaction with an intelligent system. I argue that we need to develop methods for creating human-computer partnerships that take advantage of advances in machine learning, but also leave the user in control. I illustrate how we use generative theory, especially instrumental interaction and reciprocal co-adaptation, to create interactive intelligent systems that are discoverable, appropriable and expressive. Our goal is to design robust interactive systems that augment rather than replace human capabilities, and are actually worth learning over time.

Paper Presentations
-
Session #6

Exosoul: ethical profiling in the digital world (#39)
-
Costanza Alfieri, Paola Inverardi, Patrizio Migliarini and Massimiliano Palmiero

Effective Task Allocation in Ad Hoc Human-Agent Teams (#53)
-
Sami Abuhaimed and Sandip Sen

Coffee Break

Paper Presentations
-
Session #7

Estimating Value Preferences in a Hybrid Participatory System (#31) **
-
Luciano Cavalcante Siebert, Enrico Liscio, Pradeep Kumar Murukannaiah, Lionel Kaptein, Shannon Spruit, Jeroen Van den Hoven and Catholijn Jonker

Legitimacy of what?: a call for democratic AI design (#35)
-
Jonne Maas and Juan Durán

Monitoring AI systems: A Problem Analysis, Framework and Outlook (#23)
-
Annet Onnes

Conference Dinner
-
Travel to the dinner location, 17:00-18:00