(Eco)systemic challenges in AI

Tuesday, 14. June, HG-02A33

Workshop organisers: Bogdana Rakova, Maria Perez-Ortiz, Roel Dobbe and Ana Valdivia

Time Activity

14:00-14:15 Introduction

14:15-15:00 Interactive activity

15:00-16:00 What are the (eco)systemic challenges in AI?

16:00-16:15 Coffee break

16:20-17:10 Keynotes: Paola Ricaurte and TBC

17:10-17:30 Discussion

Human-Centered Design of Symbiotic Hybrid Intelligence

Tuesday, 14. June, HG-02A24

As virtual agents and social robots with adaptive and learning capabilities enter our work and leisure environments, new opportunities arise to develop Hybrid (human-AI) Intelligent systems. Both humans and AI agents learn, adapt, and develop over time, and it is consequently a challenge to imagine what these co-evolving HI systems will look like. This workshop brings together researchers who want to explore how to design co-evolving symbiotic HI systems from a human-centered perspective, using multidisciplinary methods, models and tools. At the workshop we will, among other things, apply storyboards, scenario writing and pattern engineering to identify interesting HI patterns with accompanying research challenges (to be further processed in a position paper with a research roadmap).

Workshop organisers: Emma van Zoelen, Mark Neerincx, Luisa Damiano, Andreas Dengel and Karel van den Bosch

Schedule

Time   Activity
09:00  Start of workshop day, introduction to the program
09:30  Presentation 1: Symbiosis and Co-Evolution in Hybrid Human-Machine Systems
10:15  Presentation 2: The FATE project: Applied AI Research in a Human-Centered Manner
11:00  Coffee Break
11:15  Presentation 3: Human-Machine Teaming in the Fire Brigade
12:00  Presentation 4: Design Patterns
12:45  Lunch Break
14:00  Work Session part 1
15:00  Coffee Break with Visual presentations
15:30  Work Session part 2
16:30  Final Presentations and Discussions
17:30  End of the workshop

HI ESDiT Collaboration on AI, Human Values and the Law

Tuesday, 14. June, HG-02A33

Workshop organisers: Sven Nyholm, Birna van Riemsdijk and Bart Verheij

Time   Activity  Speaker  Commentator
09:00  Opening
09:15  1         Matthew  Pinar
09:50  2         Cindy    Ilaria
10:25  3         Sven     Bart
11:00  Break
11:15  4         Pinar    Matthew
11:50  5         Ilaria   Cindy
12:25  6         Bart     Sven
13:00  Closing

Knowledge Representation for Hybrid-Intelligence

Tuesday, 14. June, HG-01A33

As artificial intelligence (AI) technologies play a larger role in our daily lives than ever, designing intelligent systems that can work with humans effectively (instead of replacing them) is becoming a central research theme: hybrid intelligence (HI). While many fields in AI are shaped by this demand in parallel (such as multiagent systems, computer vision and computational linguistics), this is not yet pronounced enough in knowledge representation (KR) circles, a major sub-discipline of AI. This workshop, Knowledge Representation for Hybrid Intelligence (KR4HI), aims to be the first international workshop to fill this gap and welcomes work that focuses on the use of knowledge representation in various scenarios of hybrid intelligence. The workshop is co-located with the first international conference on hybrid human-artificial intelligence.

Workshop organisers: Erman Acar, Thomas Bolander, Ana Ozaki and Rafael Penaloza

Schedule

9:15 Welcome
9:30 Keynote: Ufuk Topcu (University of Texas). Title: Autonomous systems in the intersection of reinforcement learning, controls, and formal methods
10:30 Coffee break
10:45 Full paper presentations 1
11:35 Break
11:50 Full paper presentations 2
12:40 Lunch Break
14:00 Keynote: Bart Verheij (Uni Groningen)
15:00 Break
15:10 Full Paper Presentations 3
16:00 Coffee Break
16:10 Extended Abstract Presentations 1
16:55 Break
17:05 Extended Abstract Presentations 2
17:35 Open Discussion
18:00 Closing

Common Ground Theory and Method Development workshop

Monday, 13. June, HG-02A16

Numerous disciplines contribute to hybrid intelligence work environments, leading to different basic understandings of what exactly human-centered AI means. These understandings are not necessarily rooted in explicit theories, but result from theories in use that lead to a set of methods and instruments that are applied in R&D projects and transferred to practice. The aim of the workshop is to identify a common ground for human-centricity in hybrid work settings from the perspective of different disciplines and research communities involved in specific job design with hybrid intelligence. Therefore the workshop invites (1) theoretical outlines of human-centered hybrid-intelligent work settings, (2) methods, instruments, and standards as theories in use, (3) use cases describing human-centered AI in the workplace. The workshop will conclude with reflections on a special joint issue to discuss the Common Ground Theory. Submissions from tandems of researchers and practitioners are highly appreciated in this third line.

Workshop organisers: Uta Wilkens, Annette Kluge, Verena Nitsch, Steffen Kinkel and Daniel Lupp

Schedule

Time           Activity
09:00 – 09:40  Welcome and opening for research line 1
09:40 – 10:20  Research line 2
10:20 – 10:40  Research line 3
10:40 – 11:00  Coffee Break
11:00 – 12:00  Fishbowl discussion on common ground theory
12:00 – 12:30  Agreement for further development in special issue

Imagining the AI Landscape after the AI Act

Monday, 13. June, HG-01A33

In April 2021, the European Commission published a proposal, the AI Act (AIA), for regulating the use of AI systems and services in the Union market. If adopted, the AIA will have a significant impact in the EU and beyond.

This workshop aims to analyze how this new regulation will shape the AI technologies of the future. Do we already have the technology to comply with the proposed regulation? How can the privacy, fairness, and explainability requirements of the AIA be operationalized? To what extent does the AIA protect individual rights? How can a process that effectively certifies AI be delivered?

This workshop will bring together researchers and practitioners from academia and industry, as well as anyone else with an interest in law and technology, to exchange ideas on the multi-faceted effects of the AI Act proposal. Paper submissions with an interdisciplinary orientation are particularly welcome, e.g., works at the boundary between AI, human-computer interaction, law, and ethics.

Workshop organisers: Francesca Pratesi, Desara Dushi, Cecilia Panigutti and Francesca Naretto

Schedule

Time           Activity
09:15 – 09:30  Welcome and Overview of the workshop
09:30 – 10:10  Fireside chat – Virginia Dignum
10:10 – 10:50  Fireside chat – Mireille Hildebrandt
10:50 – 11:00  Coffee break
11:20 – 12:26  Paper presentations – Session 1
12:26 – 13:00  Open mike
13:00 – 14:00  Lunch
14:00 – 15:30  Group activity
15:30 – 15:45  Coffee break
15:45 – 16:47  Paper presentations – Session 2
16:47 – 17:15  Open mike
17:15 – 17:30  Closing remarks

Heterodox Methods for Interpretable and Efficient Artificial Intelligence

Monday, 13. June, HG-01A32

In this workshop we will discuss (machine learning) architectures where human involvement in the design of the model and its data ingestion process allows for both more energy-efficient and more interpretable outcomes. Examples of such systems range from pure grammatical inference methods and probabilistic programming, where the model (family) is entirely constructed by human hands and only very specific model parameters are learned from data, to various types of interpretable neural network approaches where the specific workings of the output system are much less defined a priori. The goal is to spread knowledge about lesser-known approaches to learning from data that use an increased level of human involvement, require less training data, and are tailored to achieve interpretable results in a more efficient way.

Schedule
09:00 – 09:15  Welcome and introductory information
09:15 – 10:00  Invited Talk: Prof. Ole-Christoffer Granmo, The Tsetlin Machine – From Arithmetic to Logic-based AI
10:00 – 11:30  Paper session 1:
    Azqa Nadeem, Sicco Verwer & Shanchieh Jay Yang: Suffix-based Finite Automata for Learning Explainable Attacker Strategies
    Petter Ericson & Anna Jonsson: Grammatical Inference: Strengths and Weaknesses
    Enrique Valero-Leal, Pedro Larrañaga & Concha Bielza: Extending MAP-independence for Bayesian network explainability
    Marco Virgolin, Eric Medvet, Tanja Alderliesten & Peter A.N. Bosman: Less is More: A Call to Focus on Simpler Models in Genetic Programming for Interpretable Machine Learning
11:30 – 12:30  Panel 1: Transparent models and explaining uncertainty
12:30 – 14:00  Lunch
14:00 – 14:45  Invited Talk: Dr. Anil Yaman, On the Emergence of Collective Intelligence
14:45 – 15:45  Paper session 2:
    Leila Methnani, Andreas Antoniades & Andreas Theodorou: The AWKWARD Real-Time Adjustment of Reactive Planning
    Krist Shingjergji, Deniz Iren, Felix Böttger, Corrie Urlings & Roland Klemke: Interpretable Explainability for Face Expression Recognition
    Ronald Siebes, Victor de Boer, Roberto Reda & Roderick van der Weerdt: Learning and Reasoning over Smart Home Knowledge Graphs
15:45 – 16:45  Unconference/excursion (if the weather permits)
16:45 – 17:45  Panel 2: Transparency and efficiency in practice

Representation, sharing and evaluation of multimodal agent interaction

Monday, 13. June, HG-02A24

Interaction is a real-world event that takes place in time and in physical or virtual space. By definition, it only exists when it happens. This makes it difficult to observe and study interactions, to share interaction data, to replicate or reproduce them, and to evaluate agent behaviour in an objective way. Interactions are also extremely complex, covering many variables whose values change from case to case. The physical circumstances are different, the participants are different, and past experiences have an impact on the actual event. In addition, the eye(s) of the camera(s) and/or the experimenters are another factor with impact, and the manpower needed to capture such data is high. Finally, privacy issues make it difficult to simply record and publish interaction data freely.

It is therefore not a surprise that interaction research progresses slowly. This workshop aims to bring together researchers with different research backgrounds to explore how interaction research can become more standardised and scalable. The goal of this workshop is to explore how researchers and developers can share experiments and data in which multimodal agent interaction plays a role, and how these interactions can be compared and evaluated. Especially within real-world physical contexts, modelling and representing situations and contexts for effective interactions is a challenge. We therefore invite researchers and developers to share with us how and why they record multimodal interactions, whether their data can be shared or combined with other data, how systems can be trained and tested, and how interactions can be replicated. Machine learning communities such as vision and NLP have made rapid progress by creating competitive leaderboards based on benchmark datasets. While such benchmarks are great for training unimodal perception models, they are clearly not sufficient for research involving interaction, where multiple modalities must be considered.

Workshop organisers: Piek Vossen, Catha Oertel, André Pereira, Dan Balliet, Hayley Hung and Sean Andrist

Schedule

Time           Activity
09:15 – 09:30  Welcome
09:30 – 11:00  Oral papers
11:00 – 11:15  Break
11:15 – 12:15  Interaction session
12:15 – 13:00  Lunch
13:00 – 14:00  Oral papers
14:00 – 15:00  Interaction analysis & discussion
15:00 – 15:15  Break
15:15 – 16:15  Panel on sharing multimodal interaction data
16:15 – 17:15  Keynote by Dan Bohus (Microsoft)