March 17, 2025


V$^2$Dial: Unification of Video and Visual Dialog via Multimodal Experts


By Adnen Abdessaied and 3 other authors


Abstract: We present V$^2$Dial – a novel expert-based model specifically geared towards simultaneously handling image and video input data for multimodal conversational tasks. Current multimodal models primarily focus on simpler tasks (e.g., VQA, VideoQA, video-text retrieval) and often neglect the more challenging conversational counterparts, such as video and visual/image dialog. Moreover, prior work on the two conversational tasks evolved separately despite their apparent similarities, limiting their broader applicability. To this end, we propose to unify both tasks with a single model that, for the first time, jointly learns the spatial and temporal features of images and videos by routing them through dedicated experts and aligns them using matching and contrastive learning techniques. Furthermore, we systematically study the domain shift between the two tasks by investigating whether, and to what extent, these seemingly related tasks can mutually benefit from each other's training data. Extensive evaluations on the widely used video and visual dialog datasets AVSD and VisDial show that our model achieves new state-of-the-art results across four benchmarks in both zero-shot and fine-tuning settings.
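The abstract's core idea – routing spatial and temporal features through dedicated experts and aligning them with contrastive learning – can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' actual architecture: the "experts" are stand-in linear projections, images (single frame) use only the spatial expert while videos additionally pass through the temporal expert, and alignment uses a generic symmetric InfoNCE-style loss.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # feature dimension (arbitrary for this sketch)

def make_expert(d_in, d_out):
    # Hypothetical "expert": a single random linear projection.
    W = rng.normal(scale=d_in ** -0.5, size=(d_in, d_out))
    return lambda x: x @ W

spatial_expert = make_expert(D, D)   # processes per-frame spatial features
temporal_expert = make_expert(D, D)  # processes cross-frame temporal features

def encode_visual(frames):
    """Route features through dedicated experts (illustrative routing rule):
    images (1 frame) use only the spatial expert; videos (>1 frame)
    additionally go through the temporal expert."""
    spatial = spatial_expert(frames).mean(axis=0)
    if frames.shape[0] > 1:
        temporal = temporal_expert(frames).mean(axis=0)
        return (spatial + temporal) / 2.0
    return spatial

def info_nce(visual, text, tau=0.07):
    """Symmetric contrastive alignment between visual and text embeddings.
    Matched pairs sit on the diagonal of the similarity matrix."""
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    logits = v @ t.T / tau
    idx = np.arange(len(v))
    log_p_v2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t2v = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(log_p_v2t[idx, idx].mean() + log_p_t2v[idx, idx].mean()) / 2.0

# A toy batch: one image (1 frame) and one video (8 frames),
# paired with stand-in text embeddings for their dialog turns.
image = rng.normal(size=(1, D))
video = rng.normal(size=(8, D))
visual = np.stack([encode_visual(image), encode_visual(video)])
text = rng.normal(size=(2, D))
loss = info_nce(visual, text)
print(loss >= 0.0)  # the per-pair cross-entropy terms are non-negative
```

In a trained model the expert projections and text encoder would be learned jointly so that this loss pulls matched visual-text pairs together and pushes mismatched pairs apart; the routing rule here only illustrates why images and videos can share one model while still getting dedicated spatial and temporal processing.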

Submission history

From: Adnen Abdessaied
[v1]
Mon, 3 Mar 2025 21:27:38 UTC (6,254 KB)
[v2]
Fri, 14 Mar 2025 12:29:29 UTC (6,183 KB)
