TY - GEN
T1 - Human-Computer Interaction Task Classification via Visual-Based Input Modalities
AU - Samara, Anas
AU - Galway, Leo
AU - Bond, Raymond
AU - Wang, Hui
N1 - 11th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI) 2017 ; Conference date: 15-06-2017
PY - 2017/6/15
Y1 - 2017/6/15
N2 - Enhancing computers with the facility to perceive and recognise users' feelings and abilities, as well as aspects related to the task, is a key element in the creation of Intelligent Human-Computer Interaction. Many studies have focused on predicting users' cognitive and affective states and other human factors, such as usability and user experience, to achieve high-quality interaction. However, there is a need for a complementary approach that empowers computers to perceive more about the task being conducted by the user. This paper presents a study that explores user-driven task-based classification, whereby the classification algorithm uses features from visual-based input modalities, i.e. facial expressions captured via webcam and eye gaze behaviour captured via eye-tracker. Within the experiments presented herein, the dataset employed by the model comprises four different computer-based tasks. Using a Support Vector Machine-based classifier, the average classification accuracy achieved across 42 subjects is 85.52% when utilising facial-based features as the input feature vector, and 49.65% when using eye gaze-based features. Furthermore, combining both types of features achieved an average classification accuracy of 87.63%.
KW - Task Classification
KW - Visual-Based Input Modalities
KW - Intelligent HCI
M3 - Conference contribution
T3 - Lecture Notes in Computer Science
BT - Ubiquitous Computing and Ambient Intelligence (11th International Conference, UCAmI 2017): Proceedings
PB - Springer
CY - Switzerland
ER -