Abstract
Optimising human-robot interaction in the performance of collaborative tasks is a challenging problem. An important aspect of this problem is the robot’s understanding of human intentions. Empowering robots with accurate intention inference capabilities sets the stage for more natural, safe and efficient interactions and greater confidence in the Human-Robot Interaction domain. Intentions can be deduced by observing human cues as people interact with the environment, but there is currently no clear-cut method for doing so effectively. Here, we present a novel method for intention inference based on the integration of three visual cues, namely hand movement, eye fixation and object interaction, coupled with a bidirectional LSTM neural network for classification of human intention. Experimental studies evaluating our approach against two-visual-cue alternatives confirm the utility of our approach.
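The abstract describes the classifier only at a high level: three time-aligned visual-cue streams (hand movement, eye fixation, object interaction) fed to a bidirectional LSTM that outputs an intention class. The PyTorch sketch below is a minimal illustration of that kind of architecture, not the authors' implementation; all feature dimensions, the hidden size and the number of intention classes are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class CueBiLSTMClassifier(nn.Module):
    """Bidirectional LSTM over per-frame visual-cue features.

    Each time step concatenates hand-movement, eye-fixation and
    object-interaction features (dimensions here are illustrative).
    """

    def __init__(self, hand_dim=3, gaze_dim=2, obj_dim=8,
                 hidden_dim=64, num_intentions=4):
        super().__init__()
        input_dim = hand_dim + gaze_dim + obj_dim
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_intentions)

    def forward(self, hand, gaze, obj):
        # hand/gaze/obj: (batch, time, feature) tensors, time-aligned
        x = torch.cat([hand, gaze, obj], dim=-1)
        out, _ = self.bilstm(x)           # (batch, time, 2*hidden_dim)
        return self.head(out[:, -1, :])   # intention logits from final step

# Usage with random, time-aligned cue sequences (batch of 2, 30 frames)
model = CueBiLSTMClassifier()
hand = torch.randn(2, 30, 3)   # e.g. 3-D hand position per frame (assumed encoding)
gaze = torch.randn(2, 30, 2)   # e.g. 2-D fixation point per frame (assumed encoding)
obj = torch.randn(2, 30, 8)    # e.g. object-interaction feature vector (assumed encoding)
logits = model(hand, gaze, obj)
print(logits.shape)  # torch.Size([2, 4])
```

A two-cue baseline of the kind the paper compares against could be obtained from the same sketch by dropping one of the three input streams.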
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 174-179 |
| Number of pages | 6 |
| Journal | IFAC-PapersOnLine |
| Volume | 55 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - 10 Aug 2022 |
| Event | 6th IFAC International Conference on Intelligent Control and Automation Sciences, Cluj-Napoca, Romania, 13 Jul 2022 - 15 Jul 2022 (Conference number: 6), https://icons2022.utcluj.ro/ |
Student theses
- Deep learning of dyadic interaction visual cues for human-robot collaboration in assembly tasks. Adebayo, S. O. (Author), McLoone, S. (Supervisor) & Dessing, J. (Supervisor), Dec 2024. Student thesis: Doctoral Thesis › Doctor of Philosophy.