"X$^{2}$2-VLM: All-in-One Pre-Trained Model for Vision-Language Tasks."

Yan Zeng et al. (2024)

DOI: 10.1109/TPAMI.2023.3339661

access: closed

type: Journal Article

metadata version: 2024-04-15