Deep learning of dyadic interaction visual cues for human-robot collaboration in assembly tasks

  • Samuel Adebayo

Student thesis: Doctoral Thesis (Doctor of Philosophy)

Abstract

This thesis examines the integration of multiple visual cues in dyadic interactions through deep learning to improve the interactivity and intuitiveness of human-robot interaction, particularly in assembly tasks. Serving as a preliminary effort, this body of work seeks to address critical deficiencies in current dyadic human-robot interaction methodologies, which typically depend on inadequate or isolated visual cues and fail to capture the dynamic and subtle human intentions essential for effective human-robot collaboration, especially within the framework of Industry 5.0. With a focus on dyadic interaction, this research establishes a foundational framework for transferring insights from human-to-human and human-to-robot-surrogate interactions to facilitate effective human-robot collaboration, with novel contributions including a gaze estimation model for subtle cue recognition and a multi-view dataset supporting advanced multi-view and intention-aware HRI frameworks.

Thesis embargoed until 31 December 2025.
Date of Award: Dec 2024
Original language: English
Awarding Institution
  • Queen's University Belfast
Sponsors: Engineering and Physical Sciences Research Council
Supervisors: Seán McLoone & Joost C. Dessing

Keywords

  • Dyadic Interaction
  • deep learning
  • computer vision
  • machine learning
  • task recognition
  • action recognition
  • assembly task
  • QUB-PHEO
  • gaze estimation
