
Commit dbb24f4

Merge pull request #62 from ComplexData-MILA/gh-actions/auto-update-publications-1739606546
[Automatic PR] Automatically add papers from authors
2 parents a9b9a60 + 62d31d9 commit dbb24f4

2 files changed: +26 −0 lines changed

+25
@@ -0,0 +1,25 @@
---
title: 'AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding'
venue: ''
names: Ahmed Masry, Juan A. Rodriguez, Tianyu Zhang, Suyuchen Wang, Chao Wang, Aarash
  Feizi, Akshay Kalkunte Suresh, Abhay Puri, Xiangru Jian, Pierre-André Noel, Sathwik
  Tejaswi Madhusudhan, M. Pedersoli, Bang Liu, Nicolas Chapados, Y. Bengio, Enamul
  Hoque, Christopher Pal, I. Laradji, David Vázquez, Perouz Taslakian, Spandana Gella,
  Sai Rajeswar
tags:
- ''
link: https://arxiv.org/abs/2502.01341
author: Aarash Feizi
categories: Publications

---

*{{ page.names }}*

**{{ page.venue }}**

{% include display-publication-links.html pub=page %}

## Abstract

Aligning visual features with language embeddings is a key challenge in vision-language models (VLMs). The performance of such models hinges on having a good connector that maps visual features generated by a vision encoder to a shared embedding space with the LLM while preserving semantic similarity. Existing connectors, such as multilayer perceptrons (MLPs), often produce out-of-distribution or noisy inputs, leading to misalignment between the modalities. In this work, we propose a novel vision-text alignment method, AlignVLM, that maps visual features to a weighted average of LLM text embeddings. Our approach leverages the linguistic priors encoded by the LLM to ensure that visual features are mapped to regions of the space that the LLM can effectively interpret. AlignVLM is particularly effective for document understanding tasks, where scanned document images must be accurately mapped to their textual content. Our extensive experiments show that AlignVLM achieves state-of-the-art performance compared to prior alignment methods. We provide further analysis demonstrating improved vision-text feature alignment and robustness to noise.
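
The abstract describes the connector as mapping each visual feature to a weighted average of the LLM's text embeddings. Below is a minimal sketch of that idea, assuming a learned linear projection into the LLM embedding space followed by a softmax over similarities with a frozen embedding table; the class name, dimensions, and weighting details are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AlignConnector(nn.Module):
    """Sketch of a vision-to-text alignment connector in the spirit of AlignVLM.

    Assumption: visual features are projected into the LLM embedding dimension,
    scored against the LLM's frozen text-embedding matrix, and the output is a
    softmax-weighted average (convex combination) of those text embeddings.
    """

    def __init__(self, vision_dim: int, llm_embed_matrix: torch.Tensor):
        super().__init__()
        vocab_size, llm_dim = llm_embed_matrix.shape
        self.proj = nn.Linear(vision_dim, llm_dim)        # vision -> LLM embedding space
        self.register_buffer("embed", llm_embed_matrix)   # frozen [vocab, llm_dim] table

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: [batch, num_patches, vision_dim]
        h = self.proj(visual_feats)            # [B, N, llm_dim]
        logits = h @ self.embed.T              # similarity to every vocab embedding: [B, N, vocab]
        weights = logits.softmax(dim=-1)       # weights sum to 1 per visual token
        return weights @ self.embed            # weighted average of text embeddings: [B, N, llm_dim]


# Hypothetical usage with toy sizes
embed_matrix = torch.randn(1000, 256)                        # stand-in for an LLM embedding table
connector = AlignConnector(vision_dim=128, llm_embed_matrix=embed_matrix)
aligned = connector(torch.randn(2, 16, 128))                 # -> [2, 16, 256]
```

Because the output is a convex combination of existing token embeddings, it stays within the region of the embedding space the LLM already interprets, which is the in-distribution property the abstract emphasizes.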

records/semantic_paper_ids_ignored.json

+1
@@ -287,6 +287,7 @@
 "d1b480b7c13f340583e4506442d47bb3125c2d26",
 "d2005a41d025a23738d15918d90e42404ebea4b0",
 "d2ed783705fa0ad3ceec2a22fb1592b8d2b6cb38",
+"d4549b190095fe3e01b1ae2d11a524abb35fde5f",
 "d4550863c9b4102472a2326ab994aafdb13de7b9",
 "d4663195deddf3d9995294ec2cbff17f0e6431d0",
 "d4afdd502234473bedeb8f8edaf356dac4a46635",
