Human-Computer Interaction Task Classification via Visual-Based Input Modalities

Anas Samara, Leo Galway, Raymond Bond, Hui Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Enhancing computers with the ability to perceive and recognise users' feelings and abilities, as well as aspects of the task at hand, is a key element in the creation of Intelligent Human-Computer Interaction. Many studies have focused on predicting users' cognitive and affective states, along with other human factors such as usability and user experience, in order to achieve high-quality interaction. However, a complementary approach is needed that empowers computers to perceive more about the task being conducted by the user. This paper presents a study of user-driven, task-based classification, in which the classification algorithm uses features from visual-based input modalities: facial expressions captured via webcam and eye-gaze behaviour captured via eye-tracker. Within the experiments presented herein, the dataset employed by the model comprises four different computer-based tasks. Using a Support Vector Machine-based classifier, the average classification accuracy achieved across 42 subjects is 85.52% when utilising facial-based features as the input feature vector, and 49.65% when using eye gaze-based features. Furthermore, combining both types of features achieves an average classification accuracy of 87.63%.
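The fusion approach described above can be sketched as feature-level concatenation of the two modalities followed by an SVM classifier. The sketch below uses synthetic stand-ins for the facial and eye-gaze feature vectors (the feature dimensions, the RBF kernel, and the data itself are assumptions for illustration, not the paper's actual pipeline):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two visual modalities (dimensions are
# assumptions): e.g. facial-expression descriptors per frame and
# eye-gaze behaviour statistics per trial.
n_samples, n_tasks = 200, 4
facial = rng.normal(size=(n_samples, 17))   # hypothetical facial features
gaze = rng.normal(size=(n_samples, 6))      # hypothetical gaze features
labels = rng.integers(0, n_tasks, size=n_samples)

# Make one facial dimension weakly task-dependent so the toy classifier
# has a learnable signal (purely for this synthetic example).
facial[:, 0] += labels

# Feature-level fusion: concatenate both modalities into one vector.
fused = np.hstack([facial, gaze])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

# The paper specifies only "a Support Vector Machine-based classifier";
# the RBF kernel here is an assumption.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

The same classifier can be trained on `facial` or `gaze` alone to compare single-modality accuracy against the fused representation, mirroring the three conditions reported in the abstract.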
Original language: English
Title of host publication: Ubiquitous Computing and Ambient Intelligence (11th International Conference, UCAmI 2017): Proceedings
Place of Publication: Switzerland
Publication status: Published - 15 Jun 2017

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743

Bibliographical note

11th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI) 2017 ; Conference date: 15-06-2017


Keywords

  • Task Classification
  • Visual-Based Input Modalities
  • Intelligent HCI


