Abstract
QUB-PHEO introduces a visual-based, dyadic dataset with the potential to advance human-robot interaction (HRI) research in assembly operations and intention inference. The dataset captures rich multimodal interactions between two participants, one acting as a 'robot surrogate,' across a variety of assembly tasks that are further broken down into 36 distinct subtasks. With rich visual annotations (facial landmarks, gaze, hand movements, object localization, and more) for 70 participants, QUB-PHEO is offered in two versions: full video data for 50 participants and extracted visual cues for all 70. Designed to improve machine learning models for HRI, QUB-PHEO enables deeper analysis of subtle interaction cues and intentions in collaborative assembly. The dataset is available at https://github.com/exponentialR/QUB-PHEO, subject to an End-User License Agreement (EULA).
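As a rough illustration of how the extracted visual-cue version of the dataset might be consumed downstream, the sketch below loads per-frame annotation records for one participant and one subtask. The directory layout, file names, participant/subtask identifiers, and field names here are assumptions for illustration only; the actual structure is documented in the repository linked above.

```python
import json
from pathlib import Path

# Hypothetical layout: one JSON file of per-frame visual cues per
# participant/subtask pair (the real layout is defined by the QUB-PHEO repo).
DATA_ROOT = Path("QUB-PHEO/visual_cues")  # assumed directory name

def load_cues(participant_id: str, subtask: str) -> list[dict]:
    """Load per-frame annotation records for one participant and subtask."""
    path = DATA_ROOT / participant_id / f"{subtask}.json"
    with path.open() as f:
        return json.load(f)

if __name__ == "__main__":
    # "P01" and "tower_handover" are illustrative identifiers only.
    frames = load_cues("P01", "tower_handover")
    for frame in frames[:5]:
        # Assumed fields mirroring the annotation types named in the abstract:
        # gaze vector, hand keypoints, and object bounding boxes.
        print(frame.get("gaze"), frame.get("hand_landmarks"),
              frame.get("object_boxes"))
```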
| Original language | English |
| --- | --- |
| Pages (from-to) | 157050-157066 |
| Number of pages | 17 |
| Journal | IEEE Access |
| Volume | 12 |
| DOIs | |
| Publication status | Published - 23 Oct 2024 |
Keywords
- computer vision
- dyadic interaction
- human-robot interaction
- multi-cue dataset
- multi-view dataset
- task-oriented interaction
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering