
Representation, sharing and evaluation of multimodal agent interaction

Monday, 13 June, HG-02A24 (full-day workshop)

Interaction is a real-world event that takes place in time and in physical or virtual space. By definition, it exists only while it happens. This makes it difficult to observe and study interactions, to share interaction data, to replicate or reproduce them, and to evaluate agent behaviour objectively. Interactions are also extremely complex, involving many variables whose values change from case to case: the physical circumstances differ, the participants differ, and past experiences shape the actual event. In addition, the eye(s) of the camera(s) and the presence of experimenters influence what is recorded, and the manpower needed to capture such data is high. Finally, privacy issues make it difficult to simply record and publish interaction data freely.


It is therefore not surprising that interaction research progresses slowly. This workshop aims to bring together researchers from different backgrounds to explore how interaction research can become more standardised and scalable. The goal is to explore how researchers and developers can share experiments and data in which multimodal agent interaction plays a role, and how these interactions can be compared and evaluated. Especially in real-world physical contexts, modelling and representing situations and contexts for effective interaction is a challenge. We therefore invite researchers and developers to share how and why they record multimodal interactions, whether their data can be shared or combined with other data, how systems can be trained and tested, and how interactions can be replicated. Machine-learning communities such as vision and NLP have made rapid progress by creating competitive leaderboards based on benchmark datasets. While such datasets are well suited to training unimodal perception models, they are not sufficient for research on interaction, where multiple modalities must be considered.

Workshop organisers: Piek Vossen, Catha Oertel, André Pereira, Dan Balliet, Hayley Hung and Sean Andrist

Schedule

09:15 – 09:30   Welcome
09:30 – 11:00   Oral papers
11:00 – 11:15   Break
11:15 – 12:15   Interaction session
12:15 – 13:00   Lunch
13:00 – 14:00   Oral papers
14:00 – 15:00   Interaction analysis & discussion
15:00 – 15:15   Break
15:15 – 16:15   Panel on sharing multimodal interaction data
16:15 – 17:15   Keynote by Dan Bohus (Microsoft)
