

Day 1 - Monday, 13. June

Full day workshop, webpage

Interaction is a real-world event that takes place in time and in physical or virtual space. By definition, it only exists while it happens. This makes it difficult to observe and study interactions, to share interaction data, to replicate or reproduce them, and to evaluate agent behaviour in an objective way. Interactions are also extremely complex, covering many variables whose values change from case to case: the physical circumstances differ, the participants differ, and past experiences shape the actual event. In addition, the eye(s) of the camera(s) and/or the experimenters themselves influence the event, and the manpower needed to capture such data is high. Finally, privacy issues make it difficult to simply record and publish interaction data freely.

It is therefore no surprise that interaction research progresses slowly. This workshop aims to bring together researchers with different research backgrounds to explore how interaction research can become more standardised and scalable. The goal of this workshop is to explore how researchers and developers can share experiments and data in which multimodal agent interaction plays a role, and how these interactions can be compared and evaluated. Especially within real-world physical contexts, modelling and representing situations and contexts for effective interactions is a challenge. We therefore invite researchers and developers to share how and why they record multimodal interactions, whether their data can be shared or combined with other data, how systems can be trained and tested, and how interactions can be replicated. Machine learning communities such as vision and NLP have made rapid progress by creating competitive leaderboards based on benchmark datasets. But although this is great for training unimodal perception models, such datasets are not sufficient for research involving interaction, where multiple modalities must be considered.


Full day workshop, webpage

In this workshop we will discuss (machine learning) architectures in which human involvement in the design of the model and its data-ingestion process allows for both more energy-efficient and more interpretable outcomes. Examples of such systems range from pure grammatical inference methods and probabilistic programming, where the model (family) is entirely constructed by human hands and only very specific model parameters are learned from data, to various types of interpretable neural network approaches, where the specific workings of the resulting system are much less defined a priori. The goal is to spread knowledge about lesser-known approaches to learning from data that use an increased level of human involvement, require less training data, and are tailored to achieve interpretable results more efficiently.


  • Silja Renooij, Utrecht University
  • Petter Ericson, Umeå University
  • Victor de Boer, Vrije Universiteit Amsterdam
  • Anna Jonsson, Umeå University
  • Adam Dahlgren Lindström, Umeå University
  • Andrew Lensen, Te Herenga Waka—Victoria University of Wellington       
  • Ronald Siebes, Vrije Universiteit Amsterdam    

Full day workshop, website

In April 2021, the European Commission published a proposal, the AI Act (AIA), for regulating the use of AI systems and services in the Union market. If adopted, the AIA will have a significant impact in the EU and beyond.

This workshop aims at analyzing how this new regulation will shape the AI technologies of the future. Do we already have the technology to comply with the proposed regulation? How can the privacy, fairness, and explainability requirements of the AIA be operationalized? To what extent does the AIA protect individual rights? How can a process be delivered that effectively certifies AI?

This workshop will bring together researchers and practitioners from academia, industry and anyone else with an interest in law and technology to exchange ideas on the multi-faceted effects of the AI Act proposal. Paper submissions with an interdisciplinary orientation are particularly welcome, e.g., works at the boundary between AI, human-computer interaction, law, and ethics.

Accepted submission types include regular papers, short papers, working papers and extended abstracts.


  • Desara Dushi, Vrije Universiteit Brussel, Belgium
  • Francesca Naretto, Scuola Normale Superiore, Italy
  • Cecilia Panigutti, Scuola Normale Superiore, Italy
  • Francesca Pratesi, ISTI – CNR, Italy

Half day workshop, webpage

Numerous disciplines contribute to hybrid intelligence work environments, leading to different basic understandings of what exactly human-centered AI means. These understandings are not necessarily rooted in explicit theories, but result from theories-in-use that lead to a set of methods and instruments applied in R&D projects and transferred to practice. The aim of the workshop is to identify a common ground for human-centricity in hybrid work settings from the perspective of the different disciplines and research communities involved in specific job design with hybrid intelligence. The workshop therefore invites (1) theoretical outlines of human-centered hybrid-intelligent work settings, (2) methods, instruments, and standards as theories-in-use, and (3) use cases describing human-centered AI in the workplace. The workshop will conclude with reflections on a special joint issue to discuss the Common Ground Theory. For this third category, submissions from tandems of researchers and practitioners are highly appreciated.


Day 2 - Tuesday, 14. June

Full day workshop, webpage

As artificial intelligence (AI) technologies play a greater role in our daily lives than ever, designing intelligent systems that can work with humans effectively (instead of replacing them) is becoming a central research theme: hybrid intelligence (HI). While many fields in AI are shaped by this demand in parallel (such as multiagent systems, computer vision, and computational linguistics), it is not yet sufficiently pronounced in knowledge representation (KR) circles, a major sub-discipline of AI. This workshop aims to be the first international workshop to fill this gap. It is called knowledge representation for hybrid intelligence (KR4HI), and it welcomes work that focuses on the use of knowledge representation in various scenarios of hybrid intelligence. The workshop is to be co-located with the first international conference on hybrid human-artificial intelligence.


  • Erman Acar, Leiden University & Vrije Universiteit Amsterdam
  • Thomas Bolander, Technical University of Denmark
  • Ana Ozaki, University of Bergen
  • Rafael Peñaloza, University of Milano-Bicocca

Half day workshop, morning, webpage

The workshop aims to support collaboration between two naturally connected current NWO Gravitation projects: HI and ESDiT. The HI project, for “Hybrid Intelligence”, investigates how artificial intelligence (AI) and human intelligence can be combined to augment human intellect. The ESDiT project, for “Ethics of Socially Disruptive Technologies”, explores how emerging technologies challenge our understanding of ethics and morally important concepts. The workshop is a follow-up of a panel at the 4TU ethics conference in October 2021.


  • Sven Nyholm (Utrecht University)
  • Birna van Riemsdijk (University of Twente)
  • Bart Verheij (University of Groningen)

Half day workshop, afternoon, webpage

As AI systems are increasingly embedded in the most diverse functions of our homes and cities (from vacuum-cleaning our homes to controlling traffic and energy flows), they require a certain degree of autonomy in order to perform their work. However, what happens when the values of those systems collide? Choices in (value) conflict resolution are not only technical but also ethical, social, and philosophical in nature. In more critical contexts, such systems may need to interact on the basis of less information and a greater variety of disagreements.

During the workshop, we will delve into this topic, as rethinking value conflicts in smart systems requires re-imagining situations and finding fresh perspectives on how to handle complicated value conflicts. Furthermore, we will showcase a variety of conflicts that can be found within this domain through a curated project exhibition.


  • Sietze Kuilman, TU Delft
  • Maria Luce Lupetti, TU Delft
  • Luciano Siebert, TU Delft
  • Nazli Cila, TU Delft

Full day workshop, webpage

As virtual agents and social robots with adaptive and learning capabilities enter our work and leisure environments, new opportunities arise to develop Hybrid (human-AI) Intelligent systems. Both humans and AI agents learn, adapt, and develop over time, and it is consequently a challenge to imagine what these co-evolving HI systems will look like. This workshop brings together researchers who want to explore how to design co-evolving symbiotic HI systems from a human-centered perspective, using multidisciplinary methods, models, and tools. At the workshop we will, among other things, apply storyboards, scenario writing, and pattern engineering to identify interesting HI patterns with accompanying research challenges (to be further processed in a position paper with a research roadmap).


  • Emma van Zoelen (Delft University of Technology, TNO)
  • Mark Neerincx (Delft University of Technology, TNO)
  • Luisa Damiano (IULM University of Milan)
  • Andreas Dengel (DFKI)
  • Karel van den Bosch (TNO)
  • Mani Tajaddini (Delft University of Technology)
  • Tjeerd Schoonderwoerd (TNO)
  • Maaike de Boer (TNO)
  • Marieke Peeters (Mooncake-AI)
  • Annette ten Teije (VU Amsterdam)
  • Ruben Verhagen (Delft University of Technology)
  • Joachim de Greeff (TNO)

Half day, afternoon, webpage

Where’s the non-human in Human-centered Artificial Intelligence?

For too long, algorithms have been perceived as simply abstract and value-neutral artifacts. However, this is far from true: algorithms are embedded in the world we live in and are deeply interconnected with our social, cultural, political, economic and environmental systems, both in their underpinnings and in their impacts. The aim of this interdisciplinary workshop at HHAI is to put a spotlight on the need to introduce broader socio-technical and socio-ecological perspectives into the field of Artificial Intelligence (AI). We will leverage a Social-Ecological-Technical Systems (SETS) approach to design an algorithmic impact assessment process for a particular algorithmic system.


  • Bogdana Rakova (Mozilla Foundation)
  • Ana Valdivia (King’s College London)
  • Roel Dobbe (Delft University of Technology)
  • María Pérez-Ortiz (University College London)