Using a Virtual Reality house-search task to measure Trust during Human-Agent Interaction – #2664

by Esther Kox, Jonathan Barnhoorn, Lucía Rábago Mayer, Arda Temel and Tessa Klunder

How can we observe how people respond to consequential errors by an artificial agent in a realistic yet highly controllable environment? We created a threat-detection house-search task in virtual reality in which participants form a Human-Agent Team (HAT) with an autonomous drone. By simulating risk, we amplify the feeling of reliance and the importance of trust in the agent. This paradigm allows for ecologically valid research that provides more insight into crucial human-agent team dynamics such as trust and situational awareness.

Full article: Demo Paper #2664

A “Mock App Store” Interface for Virtual Privacy Assistants – #2111

by Sarah E. Carter, Ilaria Tiddi, and Dayana Spagnuelo

Privacy assistants aim to provide meaningful privacy recommendations to users. Here, we describe a web-based testing environment for smartphone privacy assistants called the “Mock App Store” (MAS). The MAS was developed to test a particular privacy assistant, the value-centered privacy assistant (VcPA), which assists users in selecting applications based on their value profile. While the MAS was designed with the VcPA in mind, it could also be utilized to test other state-of-the-art privacy assistant technology.
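
The abstract does not spell out how the VcPA turns a value profile into recommendations, but the basic idea can be sketched. The Python fragment below is a minimal, hypothetical illustration (the App, ValueProfile, and recommend names are ours, not the VcPA's API): apps whose requested permissions conflict least with the user's stated values are ranked first, and the conflicting permissions are kept so the interface can explain each suggestion.

```python
# Hypothetical sketch of value-profile-based app filtering; not the actual VcPA.
from dataclasses import dataclass, field


@dataclass
class App:
    name: str
    permissions: set[str] = field(default_factory=set)


@dataclass
class ValueProfile:
    # Permissions the user considers incompatible with their values.
    disallowed_permissions: set[str] = field(default_factory=set)

    def flags(self, app: App) -> set[str]:
        """Return the app's permissions that conflict with this profile."""
        return app.permissions & self.disallowed_permissions


def recommend(apps: list[App], profile: ValueProfile) -> list[tuple[App, set[str]]]:
    """Rank apps by how few value conflicts they raise, keeping the flagged
    permissions so the interface can explain each recommendation."""
    scored = [(app, profile.flags(app)) for app in apps]
    return sorted(scored, key=lambda pair: len(pair[1]))


if __name__ == "__main__":
    store = [
        App("WeatherNow", {"location", "contacts"}),
        App("NotesLite", {"storage"}),
    ]
    privacy_minded = ValueProfile(disallowed_permissions={"contacts", "microphone"})
    for app, conflicts in recommend(store, privacy_minded):
        print(app.name, "conflicts:", sorted(conflicts))
```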

Full article: Demo Paper #2111

pEncode: A Tool for Visualizing Pen Signal Encodings in Real-time – #6439

by Felix Céard-Falkenberg, Konstantin Kuznetsov, Alexander Prange, Michael Barz, and Daniel Sonntag

Many features have been proposed for encoding the input signal from digital pens and touch-based interaction. They are widely used for analyzing and classifying handwritten texts, sketches, or gestures. Although they are well defined mathematically, many features are non-trivial and therefore difficult for a human to understand. In this paper, we present an application that visualizes a subset of 114 digital pen features in real time while drawing. It provides an easy-to-use interface that allows application developers and machine learning practitioners to learn how digital pen features encode their inputs, helps in the feature selection process, and enables rapid prototyping of sketch and gesture classifiers.
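
The paper's full set of 114 features is not reproduced here, but two of the simplest and most common pen features, path length and average speed, show the flavour of what such encodings compute from raw (x, y, timestamp) samples. The sketch below is illustrative only; the function names and the Sample layout are our own, not taken from pEncode.

```python
import math
from typing import Sequence, Tuple

# One pen sample: x and y coordinates plus a timestamp in seconds.
Sample = Tuple[float, float, float]


def stroke_length(stroke: Sequence[Sample]) -> float:
    """Total path length of a stroke, a common geometric pen feature."""
    return sum(
        math.hypot(x1 - x0, y1 - y0)
        for (x0, y0, _), (x1, y1, _) in zip(stroke, stroke[1:])
    )


def average_speed(stroke: Sequence[Sample]) -> float:
    """Path length divided by stroke duration, a simple kinematic feature."""
    duration = stroke[-1][2] - stroke[0][2]
    return stroke_length(stroke) / duration if duration > 0 else 0.0


if __name__ == "__main__":
    stroke = [(0.0, 0.0, 0.00), (3.0, 4.0, 0.05), (6.0, 8.0, 0.10)]
    print(f"length={stroke_length(stroke):.1f}, speed={average_speed(stroke):.1f}")
```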

Full article: Demo Paper #6439

SpellInk: Interactive correction of spelling mistakes in handwritten text – #5349

by Konstantin Kuznetsov, Michael Barz, and Daniel Sonntag

Despite the current dominance of typed text, writing by hand remains the most natural means of written communication and information keeping. Still, digital pen input provides a limited user experience and lacks flexibility, as most manipulations are performed on a digitized version of the text. In this paper, we present our prototype that enables spellchecking for handwritten text: it allows users to interactively correct misspellings directly in a handwritten script. We plan to study the usability of the proposed user interface and its acceptance by users. We also aim to investigate how user feedback can be used to incrementally improve the underlying recognition models.
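
The abstract does not detail the recognition or correction pipeline, so the sketch below illustrates only the spell-checking step, assuming the handwriting recognizer has already produced tokens. It uses Python's standard difflib against a toy vocabulary; the names and the vocabulary are placeholders, not the prototype's actual components.

```python
import difflib

# A small vocabulary stands in for the recognizer's language model output.
VOCABULARY = ["meeting", "tomorrow", "morning", "project", "deadline"]


def suggest_corrections(tokens: list[str], vocabulary: list[str]) -> dict[str, list[str]]:
    """Map each token that is not in the vocabulary to close candidates,
    so the interface can offer them for in-place correction."""
    suggestions = {}
    for token in tokens:
        if token.lower() not in vocabulary:
            candidates = difflib.get_close_matches(token.lower(), vocabulary, n=3, cutoff=0.7)
            if candidates:
                suggestions[token] = candidates
    return suggestions


if __name__ == "__main__":
    recognized = ["Meting", "tomorow", "morning"]  # output of handwriting recognition
    print(suggest_corrections(recognized, VOCABULARY))
```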

Full article: Demo Paper #5349

What would you like to visit next? – #3267

by Dou Liu, Claudia Alessandra Libbi and Delarame Javdani Rikhtehgar

Conversational agents have recently been incorporated into Virtual Heritage to provide a more immersive and interactive user experience. However, existing chatbot guides lack the capacity to leverage rich background knowledge graphs (KGs) to provide better interactions between visitors and cultural collections. In this paper, we present a KG-driven conversational museum guide that answers visitors' questions and recommends relevant art objects in a virtual exhibition, while modelling user interest to offer personalised information and guidance.
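
Neither the knowledge graph nor the question-answering pipeline is specified in this summary, so the following is a hedged sketch of the general idea using rdflib: a small illustrative graph of art objects is queried with SPARQL to answer a visitor's question. The schema (ex:creator and the example paintings) is invented for the example and is not the guide's actual data model.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/museum/")

# A toy knowledge graph of two art objects (schema and data are illustrative).
g = Graph()
g.add((EX.NightWatch, RDF.type, EX.Painting))
g.add((EX.NightWatch, EX.creator, EX.Rembrandt))
g.add((EX.NightWatch, RDFS.label, Literal("The Night Watch")))
g.add((EX.Milkmaid, RDF.type, EX.Painting))
g.add((EX.Milkmaid, EX.creator, EX.Vermeer))
g.add((EX.Milkmaid, RDFS.label, Literal("The Milkmaid")))

# Answer a visitor question such as "Who made The Night Watch?" by matching
# the object's label and following its creator edge.
question = """
    SELECT ?artist WHERE {
        ?object rdfs:label "The Night Watch" .
        ?object ex:creator ?artist .
    }
"""
for row in g.query(question, initNs={"rdfs": RDFS, "ex": EX}):
    print(row.artist)
```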

Full article: Demo Paper #3267

Björn: An Interrogation Simulator – #2545

by Roel Leenders, Pietro Camin, Ella Velner and Mariët Theune

This work presents a conversational agent (CA) that functions as a prototype for simulating interrogations. The solution implements a cognitive model that focuses on the interpersonal relationships between the CA and the user. This model can adjust the interpersonal stance of the CA based on the sentiment and phrasing of the user’s utterances. As a result, the CA updates the friendliness and truthfulness of its responses accordingly.
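
The summary does not describe the cognitive model's internals, so the sketch below only illustrates the general mechanism it names: the agent's friendliness drifts with the sentiment of the user's utterances, and a friendlier stance makes a truthful reply more likely. All names, values, and thresholds are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class StanceModel:
    """Toy interpersonal-stance tracker: friendliness drifts with the
    sentiment of the user's utterances and gates how much the agent reveals."""
    friendliness: float = 0.0  # roughly in [-1, 1]

    def update(self, sentiment: float) -> None:
        # Move part of the way toward the observed sentiment of the utterance.
        self.friendliness += 0.5 * (sentiment - self.friendliness)

    def respond(self, truthful_answer: str, evasive_answer: str) -> str:
        # A friendlier stance makes a truthful reply more likely.
        return truthful_answer if self.friendliness > 0.2 else evasive_answer


if __name__ == "__main__":
    model = StanceModel()
    model.update(sentiment=-0.8)      # hostile opening question
    print(model.respond("I was at the office.", "I don't remember."))
    for _ in range(3):
        model.update(sentiment=0.9)   # rapport-building turns
    print(model.respond("I was at the office.", "I don't remember."))
```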

Full article: Demo Paper #2545

Karen, the Interrupting Customer – #1040

by Mirre van den Bos, Gergana Dzhondzhorova, Ioana Frincu, Ella Velner, Thomas Beelen and Mariët Theune

Karen is a conversational agent taking the role of an angry customer in a retail context. While the user (a retail employee) tries to convince Karen to follow the rules, the agent interrupts the user and reacts verbally and nonverbally to the user's sentiments.

Full article: Demo Paper #1040