Personal profile

Research Focus

Dr Mohsen Azarmi is a Research Fellow at Queen’s University Belfast, working within the School of Electronics, Electrical Engineering and Computer Science on intelligent and sustainable computing initiatives. His research focuses on computer vision, machine learning, and multimodal artificial intelligence for intelligent transportation systems, with a particular emphasis on pedestrian intention prediction and human behaviour modelling in complex urban environments.

He completed his PhD in Transport Studies at the University of Leeds, where his work centred on developing context-aware deep learning models for predicting pedestrian behaviour in autonomous driving scenarios. His research integrates visual perception, temporal modelling, and multi-sensor data fusion to enhance the safety and reliability of autonomous vehicles.

Dr Azarmi has contributed to several major international research projects, including the EU Horizon-funded Hi-Drive programme and the EPSRC-funded MAVIS project on explainable AI. His work spans both theoretical and applied domains, including real-world deployment of perception systems, digital twin modelling, and explainable AI for safety-critical decision-making.

He has authored and co-authored multiple peer-reviewed publications in leading journals and conferences such as IEEE Transactions on Intelligent Transportation Systems and the IEEE Intelligent Vehicles Symposium. His recent work explores the use of vision-language foundation models and interactive visual analytics to improve transparency and interpretability in AI systems for autonomous vehicles.

Dr Azarmi has extensive experience working with large-scale multimodal datasets (e.g., Waymo, nuScenes) and developing advanced perception pipelines using deep neural networks. He is also actively involved in interdisciplinary collaboration, academic mentoring, and international research dissemination, including tutorial delivery at IEEE VIS.

His broader research vision is to develop trustworthy, interpretable, and human-centric AI systems that enable safer and more intelligent transportation technologies.

Research Interests

  • Computer Vision for Autonomous Systems
  • Multimodal Learning & Sensor Fusion
  • Temporal Modelling & Behaviour Understanding
  • Explainable AI (XAI) & Visual Analytics
  • Vision-Language Foundation Models
  • AI for Intelligent Transportation Systems
  • Real-World Deployment of AI Systems

Achievements

  • Published research in leading venues, including IEEE Transactions on Intelligent Transportation Systems and IEEE Intelligent Vehicles Symposium
  • Developer of PIP-Net, a deep learning framework for pedestrian intention prediction
  • Recipient of a fully funded PhD scholarship under the EU Horizon Hi-Drive project
  • Awarded Best Paper (ICAISV 2023) and Best Presentation (University of Leeds 2024)
  • Recipient of the Alan Turing Institute International Placement Grant (2025)
  • Graduated with Distinction and received the Top Dissertation Award (MSc in AI & Robotics)

Teaching

  • Teaching Assistant at the University of Leeds, supporting undergraduate and postgraduate modules in Data Science.
  • Delivered tutorials and workshops on Explainable AI and computer vision applications to interdisciplinary audiences at IEEE VIS 2025.

Expertise related to UN Sustainable Development Goals

In 2015, UN member states agreed to 17 global Sustainable Development Goals (SDGs) to end poverty, protect the planet and ensure prosperity for all. This person’s work contributes towards the following SDG(s):

  1. SDG 11 - Sustainable Cities and Communities
