Thursday, 16. June (Day 2)
---
Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making (#3)
- Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer and Abraham Bernstein
Neural Prototype Trees for Interpretable Fine-grained Image Recognition (#17)
- Meike Nauta, Ron van Bree and Christin Seifert
Privacy risk of global explainers (#69)
- Francesca Naretto, Anna Monreale and Fosca Giannotti
Abstracting Minds: Computational Theory of Mind for Human-Agent Collaboration (#70) **
- Emre Erdogan, Frank Dignum, Rineke Verbrugge and Pinar Yolum
Discovering the Rationale of Decisions (#12)
- Cor Steging, Silja Renooij and Bart Verheij
Coffee Break
Estimating Rapport in Conversations: An Interpretable and Dyadic Multi-Modal Approach (#48) *
- Gustav Grimberg, Thomas Janssoone, Chloé Clavel and Justine Cassell
Challenges of the adoption of AI in High Risk High consequence time compressed decision-making environments (#96) *
- Bart van Leeuwen, Richard Gasaway and Gerke Spaling
How Can This Be Split Into Parts? Training Intelligent Tutors on User Simulators Using Reinforcement Learning (#46)
- Paul Bricman and Matthia Sabatelli
An experiment in measuring understanding (#28) *
- Luc Steels, Lara Verheyen and Remi van Trijp
Lunch
Keynote by Wendy Mackay: Creating Human-Computer Partnerships
- Despite incredible advances in hardware, much of today’s software remains stuck in assumptions that date back to the 1970s. As software becomes ever more ‘intelligent’, users often find themselves in a losing battle, unable to explain what they really want. Their role can easily shift from generating new content to correcting or putting up with the system’s errors. This is partly due to assumptions from AI that treat human users primarily as a source of data for their algorithms (the so-called “human-in-the-loop”), while traditional Human-Computer Interaction practitioners focus on creating the “user experience” with simple icon and menu interfaces, without considering the details of the user’s interaction with an intelligent system. I argue that we need to develop methods for creating human-computer partnerships that take advantage of advances in machine learning, but also leave the user in control. I illustrate how we use generative theory, especially instrumental interaction and reciprocal co-adaptation, to create interactive intelligent systems that are discoverable, appropriable and expressive. Our goal is to design robust interactive systems that augment rather than replace human capabilities, and are actually worth learning over time.
Exosoul: ethical profiling in the digital world (#39)
- Costanza Alfieri, Paola Inverardi, Patrizio Migliarini and Massimiliano Palmiero
Effective Task Allocation in Ad Hoc Human-Agent Teams (#53)
- Sami Abuhaimed and Sandip Sen
Coffee Break
Estimating Value Preferences in a Hybrid Participatory System (#31) **
- Luciano Cavalcante Siebert, Enrico Liscio, Pradeep Kumar Murukannaiah, Lionel Kaptein, Shannon Spruit, Jeroen Van den Hoven and Catholijn Jonker
Legitimacy of what?: a call for democratic AI design (#35)
- Jonne Maas and Juan Durán
Monitoring AI systems: A Problem Analysis, Framework and Outlook (#23)
- Annet Onnes
Conference Dinner
- Moving to location 17:00-18:00