VideoCC: Learning Audio-Video Modalities from Image Captions


Arsha Nagrani    Paul Hongsuck Seo    Chen Sun    Cordelia Schmid
Anja Hauth    Santiago Manen    Bryan Seybold
Google Research




Abstract

There has been a recent explosion of large-scale image-text datasets, as images with alt-text captions can be easily obtained online. Obtaining large-scale, high-quality data for video in the form of text-video and text-audio pairs, however, is more challenging. To close this gap we propose a new video mining pipeline which transfers captions from image captioning datasets to video clips with no additional manual effort. Using this pipeline, we create a new large-scale, weakly labelled audio-video captioning dataset consisting of millions of paired clips and captions. We show that training a multimodal transformer-based model on this data achieves competitive performance on video retrieval and video captioning, matching or even outperforming HowTo100M pretraining with 20x fewer clips. We also show that our mined clips are suitable for text-audio pretraining, and achieve state-of-the-art results for the task of audio retrieval.
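The mining step described above can be pictured as nearest-neighbour caption transfer: a captioned seed image is matched against video frames by visual similarity, and the caption is assigned to a clip around the best-matching frame. The sketch below is a minimal, hypothetical illustration of that idea, not the released pipeline; the embedding inputs, similarity threshold, and clip length are placeholder assumptions, and random vectors stand in for a real visual encoder.

```python
# Minimal sketch of caption transfer via frame-image similarity (assumed
# setup: precomputed, L2-normalized visual embeddings; threshold and clip
# length are illustrative, not the paper's values).
import numpy as np

def mine_clips(caption_image_pairs, frame_embs, frame_times,
               sim_threshold=0.8, clip_seconds=10.0):
    """Pair captions with video clips whose frames match the seed image.

    caption_image_pairs: list of (caption, image_embedding) tuples.
    frame_embs: (num_frames, dim) array of L2-normalized frame embeddings.
    frame_times: (num_frames,) array of frame timestamps in seconds.
    Returns a list of (caption, clip_start, clip_end) triples.
    """
    mined = []
    for caption, image_emb in caption_image_pairs:
        image_emb = image_emb / np.linalg.norm(image_emb)
        sims = frame_embs @ image_emb            # cosine similarity per frame
        best = int(np.argmax(sims))
        if sims[best] >= sim_threshold:          # keep only confident matches
            start = max(0.0, frame_times[best] - clip_seconds / 2)
            mined.append((caption, start, start + clip_seconds))
    return mined

# Toy usage: random embeddings stand in for a real visual encoder.
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 512))
frames /= np.linalg.norm(frames, axis=1, keepdims=True)
pairs = [("a dog catching a frisbee", frames[42] + 0.01 * rng.normal(size=512))]
print(mine_clips(pairs, frames, np.arange(100, dtype=float)))
```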

Resources

  • The dataset has been released here.

Publication

A. Nagrani, P.H. Seo, B. Seybold, A. Hauth, S. Manen, C. Sun, C. Schmid
ECCV, 2022