AI Learns Vision and Sound Connection

Source: news.mit.edu

Published on May 22, 2025

Researchers from MIT and elsewhere have developed a new approach that improves an AI model's ability to learn by making connections between sight and sound, similar to how humans learn. For example, a person can watch someone playing the cello and recognize that the cellist's movements are generating the music they hear.

This development could be useful in journalism and film production, where the model could help with curating multimodal content through automatic video and audio retrieval. In the longer term, it could improve a robot’s ability to understand real-world environments, where auditory and visual information are often closely connected.

CAV-MAE Sync

The researchers created a method that helps machine-learning models align corresponding audio and visual data from video clips without human labels. They adjusted how their original model is trained so that it learns a finer-grained correspondence between a particular video frame and the audio that occurs at that moment. They also made architectural tweaks that help the system balance two distinct learning objectives, which improves performance.

These improvements boost the accuracy of their approach in video retrieval tasks and in classifying the action in audiovisual scenes. For instance, the new method could automatically and precisely match the sound of a door slamming with the visual of it closing in a video clip.
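
To make the retrieval task concrete, the sketch below shows how such matching could work once audio and video clips share an embedding space: an audio query is compared against candidate video embeddings by cosine similarity. The helper function and its random inputs are illustrative assumptions, not the authors' evaluation code.

```python
import torch
import torch.nn.functional as F

def retrieve_videos(audio_query, video_embeddings, top_k=3):
    """Rank candidate video embeddings by cosine similarity to an audio query."""
    audio_query = F.normalize(audio_query, dim=-1)             # (embed_dim,)
    video_embeddings = F.normalize(video_embeddings, dim=-1)   # (n_videos, embed_dim)
    scores = video_embeddings @ audio_query                    # cosine similarity per video
    return torch.topk(scores, k=top_k)

# Example: with a trained model, a "door slam" audio embedding should rank the
# door-closing clip first; here random vectors stand in for real embeddings.
audio_query = torch.randn(256)
video_embeddings = torch.randn(100, 256)
values, indices = retrieve_videos(audio_query, video_embeddings)
print(indices)  # indices of the best-matching video clips
```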

Andrew Rouditchenko, an MIT graduate student and co-author of a paper on this research, said the team is building AI systems that can take in audio and visual information at once and process both modalities seamlessly. He added that integrating this audio-visual technology into tools like large language models could open up new applications.

Model Training

This work builds upon a machine-learning method the researchers developed a few years ago, which provided an efficient way to train a multimodal model to simultaneously process audio and visual data without human labels. The researchers feed this model, called CAV-MAE, unlabeled video clips, and it encodes the visual and audio data separately into representations called tokens. Using the natural audio from the recording, the model automatically learns to map corresponding pairs of audio and visual tokens close together within its internal representation space.
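
As a rough illustration of that idea, the sketch below encodes each modality separately and uses a contrastive loss to pull matching audio-visual pairs close together in a shared embedding space. The simple linear encoders, mean pooling, dimensions, and temperature are illustrative assumptions, not the CAV-MAE architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyAudioVisualEncoder(nn.Module):
    def __init__(self, audio_dim=128, video_dim=768, embed_dim=256):
        super().__init__()
        # Each modality is tokenized and embedded by its own encoder.
        self.audio_proj = nn.Linear(audio_dim, embed_dim)
        self.video_proj = nn.Linear(video_dim, embed_dim)

    def forward(self, audio_tokens, video_tokens):
        # audio_tokens: (batch, n_audio_tokens, audio_dim)
        # video_tokens: (batch, n_video_tokens, video_dim)
        a = self.audio_proj(audio_tokens).mean(dim=1)  # pool to a clip-level audio embedding
        v = self.video_proj(video_tokens).mean(dim=1)  # pool to a clip-level video embedding
        return F.normalize(a, dim=-1), F.normalize(v, dim=-1)

def contrastive_loss(audio_emb, video_emb, temperature=0.07):
    # Matching audio-video pairs (the diagonal) are pulled together;
    # mismatched pairs within the batch are pushed apart.
    logits = audio_emb @ video_emb.t() / temperature
    targets = torch.arange(audio_emb.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Random "tokens" stand in for real audio spectrogram patches and video patches.
encoder = ToyAudioVisualEncoder()
audio = torch.randn(8, 64, 128)   # 8 clips, 64 audio tokens each
video = torch.randn(8, 196, 768)  # 8 clips, 196 visual patch tokens each
audio_emb, video_emb = encoder(audio, video)
print(contrastive_loss(audio_emb, video_emb))
```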

They found that using two learning objectives balances the model's learning process, which enables CAV-MAE to understand the corresponding audio and visual data while improving its ability to retrieve video clips that match user queries. However, CAV-MAE treats the audio and visual samples from a clip as one unit, so a 10-second video clip and the sound of a door slamming are mapped together even if that audio event happens during just one second of the video.
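
A toy example of why that clip-level treatment blurs timing: averaging ten one-second windows into a single representation dilutes an event that occurs in only one of them. The numbers below are made up purely for illustration.

```python
import torch

audio_windows = torch.zeros(10, 4)   # ten 1-second windows, 4-dim toy features
audio_windows[3] = torch.tensor([5.0, 0.0, 0.0, 0.0])  # a "door slam" in second 3 only

clip_embedding = audio_windows.mean(dim=0)  # one clip-level representation, CAV-MAE-style
print(clip_embedding)    # the event is diluted to 0.5 in one dimension
print(audio_windows[3])  # a per-window representation keeps the event distinct
```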

In their improved model, called CAV-MAE Sync, the researchers split the audio into smaller windows before the model computes its representations of the data, so it generates separate representations that correspond to each smaller window of audio. During training, the model learns to associate one video frame with the audio that occurs during just that frame. Lead author Edson Araujo said that this helps the model learn a finer-grained correspondence, which improves performance later when the information is aggregated.
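
The sketch below illustrates that finer-grained pairing in a simplified form, assuming one audio window per sampled video frame and using only within-clip negatives; the shapes, projections, and loss are assumptions for illustration rather than the paper's actual setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyFineGrainedAligner(nn.Module):
    def __init__(self, audio_dim=128, frame_dim=768, embed_dim=256):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, embed_dim)
        self.frame_proj = nn.Linear(frame_dim, embed_dim)

    def forward(self, audio_windows, video_frames):
        # audio_windows: (batch, n_windows, audio_dim) -- one feature per short audio window
        # video_frames:  (batch, n_windows, frame_dim) -- one sampled frame per window
        a = F.normalize(self.audio_proj(audio_windows), dim=-1)
        v = F.normalize(self.frame_proj(video_frames), dim=-1)
        # Similarity between every frame and every audio window within each clip.
        return torch.einsum("bnd,bmd->bnm", v, a)

def frame_level_contrastive_loss(sim, temperature=0.07):
    # Each frame's positive is the audio window recorded at the same moment
    # (the diagonal); the clip's other windows act as negatives.
    b, n, _ = sim.shape
    logits = sim.reshape(b * n, n) / temperature
    targets = torch.arange(n).repeat(b)
    return F.cross_entropy(logits, targets)

aligner = ToyFineGrainedAligner()
audio_windows = torch.randn(4, 10, 128)  # 4 clips, 10 one-second audio windows each
video_frames = torch.randn(4, 10, 768)   # 4 clips, 10 frames sampled to match the windows
sim = aligner(audio_windows, video_frames)
print(frame_level_contrastive_loss(sim))
```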

The model incorporates a contrastive objective, where it learns to associate similar audio and visual data, and a reconstruction objective, where it aims to reconstruct specific audio and visual data from masked inputs. In CAV-MAE Sync, the researchers introduced two new types of data representations, or tokens, to improve the model's learning ability: dedicated "global tokens" that help with the contrastive learning objective and dedicated "register tokens" that help the model focus on important details for the reconstruction objective.
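
One way such tokens could be wired in, shown as a loose sketch rather than the published architecture: learnable global and register tokens are prepended to the patch sequence; the global-token output feeds a contrastive objective, while the register tokens give the network extra scratch space so the patch outputs used for reconstruction can stay focused on detail. The transformer, token counts, and loss weighting below are assumptions.

```python
import torch
import torch.nn as nn

class ToyTokenAugmentedEncoder(nn.Module):
    def __init__(self, embed_dim=256, n_global=1, n_register=4, depth=2):
        super().__init__()
        self.global_tokens = nn.Parameter(torch.zeros(1, n_global, embed_dim))
        self.register_tokens = nn.Parameter(torch.zeros(1, n_register, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.n_global, self.n_register = n_global, n_register

    def forward(self, patch_tokens):
        b = patch_tokens.size(0)
        g = self.global_tokens.expand(b, -1, -1)
        r = self.register_tokens.expand(b, -1, -1)
        x = self.encoder(torch.cat([g, r, patch_tokens], dim=1))
        global_out = x[:, : self.n_global]                   # feeds the contrastive objective
        patch_out = x[:, self.n_global + self.n_register :]  # feeds the reconstruction objective
        return global_out, patch_out

# The two objectives would then be combined with some weighting, e.g.
# loss = contrastive_term + lambda_recon * reconstruction_term (weights assumed).
encoder = ToyTokenAugmentedEncoder()
patches = torch.randn(2, 196, 256)
global_out, patch_out = encoder(patches)
print(global_out.shape, patch_out.shape)  # torch.Size([2, 1, 256]) torch.Size([2, 196, 256])
```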

Araujo said that giving the model a bit more wiggle room to perform these two tasks, contrastive and reconstructive, more independently benefited overall performance. The researchers' enhancements improved the model's ability to retrieve videos based on an audio query and to predict the class of an audiovisual scene, like a dog barking or an instrument playing. Its results were more accurate than their prior work, and it also performed better than more complex methods that require larger amounts of training data.

In the future, the researchers want to incorporate new models that generate better data representations into CAV-MAE Sync, which could improve performance. They also want to enable their system to handle text data, which would be an important step toward generating an audiovisual large language model.