MVRMLM 2024: multimodal video retrieval and multimodal language modelling

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

As video content continues to proliferate and many video archives lack suitable metadata, video retrieval, particularly through example-based search, has become increasingly crucial. Existing metadata often fails to meet the needs of specific types of searches, especially when videos contain elements from different modalities, such as visual and audio. Consequently, developing video retrieval methods that can handle multimodal content is essential. In designing our novel video retrieval framework, Multi-modal Video Search by Examples (MVSE), we focused on accuracy (precision and recall), efficiency (retrieval time in seconds), interactivity, and extensibility, with key components including advanced data processing and a user-friendly interface aimed at enhancing search effectiveness and user experience. With the advent of Large Language Models (LLMs), the interaction between multimodal data, including image and audio, has been transformed, marking a significant leap towards the broader goal of artificial general intelligence. This workshop aims to bring together experts from diverse domains to explore novel approaches to multimodal data search, understanding, and interaction.

Original language: English
Title of host publication: ICMR '24: Proceedings of the 14th Annual ACM International Conference on Multimedia Retrieval
Publisher: Association for Computing Machinery
Pages: 1345-1346
Number of pages: 2
ISBN (Electronic): 9798400706196
DOIs
Publication status: Published - 07 Jun 2024
Event: 14th Annual ACM International Conference on Multimedia Retrieval 2024 - Phuket, Thailand
Duration: 10 Jun 2024 - 14 Jun 2024

Conference

Conference: 14th Annual ACM International Conference on Multimedia Retrieval 2024
Abbreviated title: ICMR'24
Country/Territory: Thailand
City: Phuket
Period: 10/06/2024 - 14/06/2024
