A workshop of SIGCHI Italy (ACM SIGCHI Italian Chapter)
in collaboration with
CINI AIIS Lab (Artificial Intelligence and Intelligent Systems Lab)
July 7th, 2020
COVID-19 note: Given the current situation, the workshop will be held online.
Compared with traditional technologies, AI-based technologies raise different expectations from a user’s perspective. Beyond the established principles of user-centered design, there are further important aspects that are peculiar to this type of system and that need to be considered during design.
Intelligent system components usually exhibit probabilistic behaviors, depending on nuances of tasks and settings, which can confuse users, erode their confidence, and lead to the abandonment of AI technology. AI designs also show high variability: they differ in capabilities and interaction styles, impacting user engagement and usability. High-profile reports of failures, ranging from the humorous and embarrassing (e.g., autocompletion errors) to more serious situations in which users cannot effectively understand or control an AI system (e.g., collaboration with semi-autonomous cars), can harm users. These factors, among others, show that designers and developers need proper knowledge, as well as proper methodologies and techniques, to create effective intelligent systems that better satisfy users. They also highlight the need for end users to have control over the system: this can be achieved, on the one hand, by granting transparency of the system behaviour and, on the other, by empowering end users to configure that behaviour.
Given this scenario, it is important that future AI specialists become aware of the potential ethical and practical issues of this type of system, and that they acquire the theoretical competences and methodological skills to properly design them. This requires adopting a perspective that considers users and their needs, letting them understand and control AI-based technologies.
There is already some discussion on the need for guidelines for designers (for example, Google People+AI, Microsoft Guidelines for Human-AI Interaction, and IBM research on the intersection of HCI and AI), as well as some interesting research venues (explainability, algorithm aversion/appreciation, uncanny valley, etc.). Yet, we still lack a rationalized syllabus explicitly prepared for AI technical courses that might guide the teaching of HCI concepts to AI technical students. The emphasis should not be only on specific interaction techniques (such as gestures or voice), but rather on understanding how HCI methods and principles can help design “human-in-the-loop” AI systems, which implies considering who AI systems are built for and evaluating how well those systems are working.
The workshop aims to bootstrap a working group that will prepare a syllabus for a course teaching HCI skills to designers of AI interactive systems. Therefore, target participants are both AI researchers with an interest in interaction aspects and HCI researchers with an interest in interaction with AI systems.
During the workshop, a short brainstorming activity will elicit the important topics that the syllabus needs to cover. Parallel group sessions working on a first draft will follow. The syllabus should mandate a set of core topics, i.e., topics valid across different study programs, and suggest additional material to accommodate the specificities of the different programs in which the course could be offered. The draft of the syllabus will be finalized in a few working sessions after the workshop, with the goal of publishing it (for example, as a paper in the Interactions magazine or at a major HCI or AI conference) for wider feedback.
Paper submission: Monday, 15 June 2020
Decision to authors: Monday, 22 June 2020
Workshop date: Tuesday, 7 July 2020