Abstract
As the proliferation of video content continues and many video archives lack suitable metadata, video retrieval, particularly through example-based search, has become increasingly crucial. Existing metadata often fails to meet the needs of specific types of searches, especially when videos contain elements from different modalities, such as visual and audio. Consequently, developing video retrieval methods that can handle multi-modal content is essential. In designing our novel video retrieval framework, Multi-modal Video Search by Examples (MVSE), we focused on accuracy (precision and recall), efficiency (retrieval time in seconds), interactivity, and extensibility, with key components including advanced data processing and a user-friendly interface aimed at enhancing search effectiveness and user experience. With the advent of Large Language Models (LLMs), the interaction between multimodal data, including image and audio, has been transformed, marking a significant leap towards the broader goal of artificial general intelligence. This workshop aims to bring together experts from diverse domains to explore novel approaches to multimodal data search, understanding, and interaction.
| Original language | English |
|---|---|
| Title of host publication | ICMR '24: Proceedings of the 14th Annual ACM International Conference on Multimedia Retrieval |
| Publisher | Association for Computing Machinery |
| Pages | 1345-1346 |
| Number of pages | 2 |
| ISBN (Electronic) | 9798400706196 |
| DOIs | |
| Publication status | Published - 07 Jun 2024 |
| Event | 14th Annual ACM International Conference on Multimedia Retrieval 2024 - Phuket, Thailand. Duration: 10 Jun 2024 → 14 Jun 2024 |
Conference
| Conference | 14th Annual ACM International Conference on Multimedia Retrieval 2024 |
|---|---|
| Abbreviated title | ICMR'24 |
| Country/Territory | Thailand |
| City | Phuket |
| Period | 10/06/2024 → 14/06/2024 |