
Modality Curation: Building Universal Embeddings
for Advanced Multimodal Information Retrieval

*Equal Contributions, †Corresponding Author
1Northeastern University, 2Kuaishou Technology

We develop UNITE, a universal multimodal embedder that produces a unified representation of arbitrary multimodal content.
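For intuition, the sketch below shows what retrieval with such a unified embedder can look like: every input, whether text, image, video, or a combination, is mapped to a single normalized vector, and retrieval reduces to cosine similarity in that shared space. The UniteEncoder class, its encode() method, and the backbone's pooled-output interface are illustrative assumptions, not the released UNITE API.

    # Hypothetical sketch of a unified multimodal embedding interface; UniteEncoder,
    # encode(), and the backbone interface are assumptions for illustration only.
    import torch
    import torch.nn.functional as F

    class UniteEncoder(torch.nn.Module):
        """Wraps an LMM backbone and maps any multimodal input to one unit-norm vector."""

        def __init__(self, backbone, embed_dim=1024):
            super().__init__()
            self.backbone = backbone                        # assumed to pool inputs to (B, hidden_size)
            self.proj = torch.nn.Linear(backbone.hidden_size, embed_dim)

        @torch.no_grad()
        def encode(self, inputs):
            hidden = self.backbone(inputs)                  # (B, hidden_size) pooled states
            return F.normalize(self.proj(hidden), dim=-1)   # (B, embed_dim) unit-norm embeddings

    # Retrieval in the shared space is then plain cosine similarity, e.g.
    #   scores = text_query_embeddings @ video_candidate_embeddings.t()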

Abstract

Multimodal information retrieval (MIR) faces inherent challenges due to the heterogeneity of data sources and the complexity of cross-modal alignment. While previous studies have identified modality gaps in feature space, a systematic approach for addressing these challenges has remained unexplored. In this work, we introduce UNITE, a universal framework that tackles these challenges through two critical yet underexplored aspects: data curation and modality-aware training configurations. Our work provides the first comprehensive analysis of how modality-specific data properties influence downstream task performance across diverse scenarios. Moreover, we propose Modal-Aware Masked Contrastive Learning (MAMCL) to mitigate competition among instances of different modalities. Our framework achieves state-of-the-art results on multiple multimodal retrieval benchmarks, outperforming existing methods by notable margins. Through extensive experiments, we demonstrate that strategic modality curation and tailored training protocols are pivotal for robust cross-modal representation learning. This work not only advances MIR performance but also provides a foundational blueprint for future research in multimodal systems.

Method


Overview of UNITE: (a) Model architecture using an LMM as the backbone, supporting multimodal inputs (text, images, videos, and their combinations). (b) Similarity matrix after applying MAMCL, which enables focused contrastive learning by restricting comparisons to samples that share the same target modality, thereby reducing inter-modal interference.
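A minimal sketch of this masking idea is given below, assuming a standard in-batch InfoNCE setup: logits for candidates whose modality differs from the query's target modality are masked out before the softmax, so only same-modality samples compete as negatives. The tensor shapes, temperature value, and integer modality labels are illustrative assumptions, not the paper's exact training configuration.

    import torch
    import torch.nn.functional as F

    def mamcl_loss(query_emb, cand_emb, cand_modality, temperature=0.05):
        # query_emb:     (B, D) L2-normalized query embeddings
        # cand_emb:      (B, D) L2-normalized candidate embeddings; cand_emb[i] is the positive for query i
        # cand_modality: (B,) integer modality id per candidate (e.g. 0=text, 1=image, 2=video, 3=fused)
        logits = query_emb @ cand_emb.t() / temperature                  # (B, B) similarity matrix

        # A query's target modality is the modality of its own positive candidate.
        target_modality = cand_modality

        # Mask out candidates whose modality differs from the query's target modality,
        # so cross-modal instances never act as in-batch negatives.
        same_modality = target_modality.unsqueeze(1) == cand_modality.unsqueeze(0)
        logits = logits.masked_fill(~same_modality, float("-inf"))

        # Diagonal entries are the positives and always share the target modality,
        # so they survive the mask; apply standard cross-entropy over the masked logits.
        labels = torch.arange(query_emb.size(0), device=query_emb.device)
        return F.cross_entropy(logits, labels)

Compared with an unmasked in-batch contrastive loss, the only change is the masking step, which realizes the restricted comparisons illustrated in panel (b).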

Results

Fine-grained Retrieval


Performance comparison on the fine-grained video-text benchmark (CaReBench) and image-text benchmarks (ShareGPT4V, Urban1K, DOCCI). UNITE achieves the best overall performance.

Instruction-based Retrieval


Performance comparison on instruction-based retrieval benchmarks (left: MMEB; right: WebVid-CoVR). UNITE achieves leading performance across tasks, even surpassing models with larger parameter counts.

Citation


    @article{kong2025modality,
      title={Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval},
      author={Kong, Fanheng and Zhang, Jingyuan and Liu, Yahui and Zhang, Hongzhi and Feng, Shi and Yang, Xiaocui and Wang, Daling and Tian, Yu and W, Victoria and Zhang, Fuzheng and Zhou, Guorui},
      journal={arXiv preprint arXiv:2505.19650},
      year={2025}
    }