QUB-PHEO: a visual-based dyadic multi-view dataset for intention inference in collaborative assembly

Samuel Adebayo*, Sean Mcloone, Joost C. Dessing

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

QUB-PHEO introduces a visual-based, dyadic dataset with the potential to advance human-robot interaction (HRI) research in assembly operations and intention inference. The dataset captures rich multimodal interactions between two participants, one acting as a 'robot surrogate,' across a variety of assembly tasks that are further broken down into 36 distinct subtasks. With rich visual annotations (facial landmarks, gaze, hand movements, object localization, and more) for 70 participants, QUB-PHEO is offered in two versions: full video data for 50 participants and visual cues for all 70. Designed to improve machine learning models for HRI, QUB-PHEO enables deeper analysis of subtle interaction cues and intentions, promising contributions to the field. The dataset is available at https://github.com/exponentialR/QUB-PHEO subject to an End-User License Agreement (EULA).
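The abstract does not specify the on-disk layout or annotation schema, so the following is only a minimal, hypothetical Python sketch of how per-frame visual cues (here, a gaze point) might be paired with one camera view. All paths and column names (VIDEO_PATH, CUES_PATH, 'frame', 'gaze_x', 'gaze_y') are assumptions for illustration; consult the QUB-PHEO repository documentation for the actual format.

    import cv2
    import pandas as pd

    # Hypothetical paths -- the real QUB-PHEO directory structure is documented
    # in the dataset repository, not in this abstract.
    VIDEO_PATH = "P01/cam_front/subtask_01.mp4"
    CUES_PATH = "P01/cam_front/subtask_01_cues.csv"

    # Per-frame visual-cue annotations (gaze, landmarks, hand keypoints, boxes).
    cues = pd.read_csv(CUES_PATH)

    cap = cv2.VideoCapture(VIDEO_PATH)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Assumes one annotation row per frame, keyed by a 'frame' column.
        row = cues.loc[cues["frame"] == frame_idx]
        if not row.empty:
            gx, gy = row.iloc[0]["gaze_x"], row.iloc[0]["gaze_y"]
            # Overlay the gaze point as a quick sanity check of the alignment.
            cv2.circle(frame, (int(gx), int(gy)), 5, (0, 255, 0), -1)
        frame_idx += 1
    cap.release()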

Original language: English
Pages (from-to): 157050-157066
Number of pages: 17
Journal: IEEE Access
Volume: 12
DOIs
Publication status: Published - 23 Oct 2024

Keywords

  • computer vision
  • dyadic interaction
  • human-robot interaction
  • multi-cue dataset
  • multi-view dataset
  • task-oriented interaction

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering

