Arsha Nagrani
anagrani at google dot com

I am a senior research scientist at Google AI Research, where I work on machine learning for video understanding. I did my PhD with Andrew Zisserman in the VGG group at the University of Oxford, where I was fortunate enough to be funded by a Google PhD Fellowship. My thesis won the ELLIS PhD award.

Before that I did my undergrad at the University of Cambridge, where I worked with Roberto Cipolla and Richard Turner.

CV  /  Google Scholar  /  LinkedIn  /  Twitter  /  GitHub  /  Thesis


My research focuses on self-supervised and multi-modal machine learning techniques for video recognition, including the use of sound and text to learn better visual representations. Recently, I have also become interested in computer vision for wildlife conservation. For a full list of publications please see Google Scholar.

Learning Audio-Video Modalities from Image Captions
Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago Manen, Chen Sun, Cordelia Schmid
arXiv, 2022  

Mining audiovisual clips for text captions by leveraging image-to-image similarity, giving SOTA video retrieval and captioning.

End-to-end Generative Pretraining for Multimodal Video Captioning
Paul Hongsuck Seo, Arsha Nagrani, Anurag Arnab, Cordelia Schmid
CVPR, 2022  

New unsupervised pretraining framework for multimodal video captioning that leverages future utterances in unlabelled videos.

Masking Modalities for Cross-modal Video Retrieval
Valentin Gabeur, Arsha Nagrani, Chen Sun, Karteek Alahari, Cordelia Schmid
WACV, 2022  

Video encoder pretraining using appearance, sound, and transcribed speech, by masking out an entire modality and predicting it using the others.
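The masked-modality objective can be caricatured in a few lines: drop one modality entirely and regress it from the others. This is a toy numpy sketch with synthetic features, where a single linear map stands in for the real cross-modal encoder; all names and shapes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features for three modalities of one video clip (dim 4 each).
appearance, audio, speech = rng.normal(size=(3, 4))

def predict_masked(other_modalities, W):
    """Mask one modality entirely and regress it from the
    concatenation of the remaining ones; W stands in for the encoder."""
    return W @ np.concatenate(other_modalities)

W = 0.1 * rng.normal(size=(4, 8))                 # hypothetical learned projection
pred = predict_masked([appearance, speech], W)    # audio is the masked modality
loss = np.mean((pred - audio) ** 2)               # reconstruction objective
```

In the actual model the prediction is produced by a transformer over the remaining modalities' tokens; the point here is only the training signal: no labels, just one modality predicting another.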

Automated audiovisual behavior recognition in wild primates
Max Bain, Arsha Nagrani, Daniel Schofield, Sophie Berdugo, Joana Bessa, Jake Owens, Kimberley J. Hockings, Tetsuro Matsuzawa, Misato Hayashi, Dora Biro, Susana Carvalho, Andrew Zisserman
Science Advances, 2021  

Fully automated, audio-visual pipeline to detect and track two audio-visually distinctive actions in wild chimpanzees: buttress-drumming and nut-cracking.

Attention Bottlenecks for Multimodal Fusion
Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, Chen Sun
NeurIPS, 2021  
arXiv / project page / code / Google AI blog

Fully transformer-based multimodal fusion model that gets SOTA on video classification. Attention bottlenecks at multiple layers force cross-modal information to be condensed, improving performance at a lower computational cost.
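A minimal sketch of the bottleneck idea, assuming plain single-head dot-product attention over synthetic tokens (this is not the paper's architecture, which uses full transformer layers with learned projections): each modality attends only to its own tokens plus a small set of shared bottleneck tokens, so cross-modal exchange has to squeeze through the bottleneck.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
video = rng.normal(size=(6, d))       # video patch tokens
audio = rng.normal(size=(4, d))       # audio spectrogram tokens
bottleneck = rng.normal(size=(2, d))  # small set of shared fusion tokens

def attend(queries, keys_values):
    """Single-head dot-product attention with no learned projections."""
    scores = queries @ keys_values.T / np.sqrt(queries.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ keys_values

# Each modality sees only its own tokens plus the bottleneck tokens.
video_out = attend(video, np.vstack([video, bottleneck]))
audio_out = attend(audio, np.vstack([audio, bottleneck]))
# The bottleneck is updated from each modality, then averaged across them.
bottleneck = 0.5 * (attend(bottleneck, np.vstack([video, bottleneck]))
                    + attend(bottleneck, np.vstack([audio, bottleneck])))
```

Because the bottleneck has far fewer tokens than either modality, the pairwise attention cost also drops relative to full cross-attention over all tokens.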

Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval
Max Bain, Arsha Nagrani, Gul Varol, Andrew Zisserman
ICCV, 2021  
arXiv / code, models / WebVid dataset

End-to-end encoder for video retrieval that uses only self-attention blocks, allowing flexible joint training on variable-length videos and images.

Composable Augmentation Encoding for Video Representation Learning
Chen Sun, Arsha Nagrani, Yonglong Tian, Cordelia Schmid
ICCV, 2021  
arXiv / project page / models

Encoding augmentations along with data views gives SOTA on video self-supervised learning benchmarks.

Audio-Visual Synchronisation in the Wild
Honglie Chen, Weidi Xie, Triantafyllos Afouras, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman
BMVC, 2021  
arXiv / VGGSound-sync data, project page

Transformer model for audio-visual synchronisation that works well on non-speech classes in the wild.

Localizing Visual Sounds the Hard Way
Honglie Chen, Weidi Xie, Triantafyllos Afouras, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman
CVPR, 2021  
arXiv / VGG-SS dataset, project page

Localizes sounding objects without any supervision by mining hard negatives from within the same image. Gives SOTA on Flickr SoundNet and the new VGG-SS dataset.
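Roughly, the contrastive objective treats confidently silent regions of the same image as hard negatives for the audio. A toy numpy sketch with synthetic unit-norm embeddings (quantile thresholds and all values are purely illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
audio = rng.normal(size=d)
audio /= np.linalg.norm(audio)                       # clip-level audio embedding
pixels = rng.normal(size=(7, 7, d))
pixels /= np.linalg.norm(pixels, axis=-1, keepdims=True)  # per-location visual embeddings

sim = pixels @ audio                  # (7, 7) audio-visual similarity map
pos = sim >= np.quantile(sim, 0.9)    # confidently "sounding" locations
hard_neg = sim <= np.quantile(sim, 0.5)  # confidently silent locations, SAME image

tau = 0.07                            # temperature, a common contrastive choice
pos_term = np.exp(sim[pos] / tau).sum()
neg_term = np.exp(sim[hard_neg] / tau).sum()
loss = -np.log(pos_term / (pos_term + neg_term))
```

The "hard way" is exactly the middle two lines: instead of contrasting only against other images, low-similarity regions of the current image itself enter the denominator, sharpening the localization map.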

With a Little Help from my Temporal Context: Multimodal Egocentric Action Recognition
Evangelos Kazakos, Jaesung Huh, Arsha Nagrani, Andrew Zisserman, Dima Damen
BMVC, 2021  
arXiv / code, models / project page

Uses a language model to learn a sequence of actions as temporal context for egocentric action recognition.

Look Before you Speak: Visually Contextualized Utterances
Paul Hongsuck Seo, Arsha Nagrani, Cordelia Schmid
CVPR, 2021  
arXiv / project page

Predicting future utterances in a video based on previous dialogue and video frames without manual labels gives SOTA on standard QA datasets.

Slow-Fast Auditory Streams For Audio Recognition
Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, Dima Damen
ICASSP, 2021   (Outstanding Paper Award)
arXiv / code, models / project page

Two-stream audio recognition model that gets SOTA on VGG-Sound and EPIC-Kitchens-100.

Playing a Part: Speaker Verification at the Movies
Andrew Brown*, Jaesung Huh*, Arsha Nagrani*, Joon Son Chung, Andrew Zisserman
ICASSP, 2021  
arXiv / VoxMovies dataset, project page

We investigate the performance of speaker recognition in movies, where actors often intentionally disguise their voices to play a character.

Condensed Movies: Story Based Retrieval with Contextual Embeddings
Max Bain, Arsha Nagrani, Andrew Brown, Andrew Zisserman
ACCV, 2020   (Oral Presentation)
project page, CMD dataset / challenge

A large-scale story understanding dataset that contains the key scenes from movies with semantic captions. Basis of the CMD Challenge at ICCV 2021.

Spot the conversation: speaker diarisation in the wild
Joon Son Chung*, Jaesung Huh*, Arsha Nagrani*, Triantafyllos Afouras, Andrew Zisserman
INTERSPEECH, 2020
project page, VoxConverse dataset / challenge

Breaking up multispeaker videos into "who spoke when". Based on this work we are hosting a new speaker diarisation track at the VoxCeleb Speaker Recognition Challenge.
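The full pipeline combines audio-visual active speaker detection with learned speaker embeddings; the final clustering step can be caricatured with a greedy online scheme over toy embeddings (all values below are synthetic and the threshold is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-segment speaker embeddings: two synthetic speakers taking turns.
spk_a, spk_b = rng.normal(size=(2, 16))
segments = np.stack([spk_a, spk_a, spk_b, spk_a, spk_b, spk_b])
segments = segments + rng.normal(scale=0.05, size=segments.shape)  # noise
segments /= np.linalg.norm(segments, axis=1, keepdims=True)

# Greedy online clustering: open a new speaker when no centroid is close.
labels, centroids = [], []
for emb in segments:
    sims = [float(emb @ c) for c in centroids]
    if sims and max(sims) > 0.7:          # cosine threshold (illustrative)
        labels.append(int(np.argmax(sims)))
    else:
        centroids.append(emb)
        labels.append(len(centroids) - 1)
```

The output `labels` is the "who spoke when" answer at segment level: segments with the same label are attributed to the same speaker.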

Uncertainty-Aware Weakly Supervised Action Detection from Untrimmed Videos
Anurag Arnab, Chen Sun, Arsha Nagrani, Cordelia Schmid
ECCV, 2020

Action localisation in movies using video-level labels only.

Speech2Action: Cross-modal Supervision for Action Recognition
Arsha Nagrani, Chen Sun, David Ross, Rahul Sukthankar, Cordelia Schmid, Andrew Zisserman
CVPR, 2020
project page, data / slides

Action recognition in movies using the speech alone.

Disentangled Speech Embeddings using Cross-modal Self-supervision
Arsha Nagrani*, Joon Son Chung*, Samuel Albanie*, Andrew Zisserman
ICASSP, 2020
project page / some code

Disentanglement of speech embeddings into content and identity with only accompanying facetrack as supervision. Based on this work we are hosting a new self-supervised track at the VoxCeleb Speaker Recognition Challenge.

VoxCeleb: Large-scale speaker verification in the wild
Arsha Nagrani, Joon Son Chung, Weidi Xie, Andrew Zisserman
Computer Speech and Language, 2020
project page, data / code & models / challenge

Overview of the VoxCeleb1 and VoxCeleb2 datasets including various updates and splits, and new models for speaker recognition.

Count, Crop and Recognise: Fine-Grained Recognition in the Wild
Max Bain, Arsha Nagrani, Daniel Schofield, Andrew Zisserman
ICCV Workshops, 2019   (Oral Presentation)
project page / slides

Recognition of wild chimpanzees using full-body and full-frame CNN methods. We also release an 'in the wild' chimpanzee video recognition dataset.

Chimpanzee face recognition from videos in the wild using deep learning
Daniel Schofield*, Arsha Nagrani*, Andrew Zisserman, Misato Hayashi, Tetsuro Matsuzawa, Dora Biro, Susana Carvalho
Science Advances, 2019
project page
Press: New Scientist, MIT Tech Review, TechXplore, Verdict, Digital Trends, Oxford News

Face detection, tracking, and recognition of wild chimpanzees from long-term video records using deep CNNs. We also show a brief application for social network analysis.

EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition
Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, Dima Damen
ICCV, 2019
project page / video / code and models

We propose a novel architecture for combining modalities in videos for action recognition, using a temporal binding window that allows a range of temporal offsets between modalities.

Use What You Have: Video Retrieval Using Representations From Collaborative Experts
Yang Liu*, Samuel Albanie*, Arsha Nagrani*, Andrew Zisserman
BMVC, 2019
project page / code & models / challenge

We fuse information from different embedding experts for the task of video retrieval, achieving SOTA results on 5 different datasets. This work is also the basis for the Video Pentathlon at CVPR 2020.

Utterance-level Aggregation For Speaker Recognition In The Wild
Weidi Xie, Arsha Nagrani, Joon Son Chung, Andrew Zisserman
ICASSP, 2019   (Oral Presentation)
project page / code & models

A NetVLAD layer in a deep CNN works well for speaker recognition on long, noisy speech utterances.
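A rough numpy sketch of NetVLAD-style aggregation over frame-level features, showing how a variable-length utterance becomes a fixed-length embedding (random centroids stand in for the learned cluster centres; the real layer learns the assignment weights jointly with the CNN):

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, K = 100, 32, 8                    # frames, feature dim, clusters
features = rng.normal(size=(T, D))      # frame-level features from the CNN
centroids = rng.normal(size=(K, D))     # learned cluster centres (random here)

# Soft-assign each frame to the clusters (softmax over cluster logits).
logits = features @ centroids.T                         # (T, K)
assign = np.exp(logits - logits.max(axis=1, keepdims=True))
assign /= assign.sum(axis=1, keepdims=True)

# Aggregate weighted residuals per cluster, then normalise.
residuals = features[:, None, :] - centroids[None, :, :]     # (T, K, D)
vlad = (assign[:, :, None] * residuals).sum(axis=0)          # (K, D)
vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12  # intra-normalisation
embedding = vlad.ravel()
embedding /= np.linalg.norm(embedding)  # fixed-length utterance embedding
```

Whatever the value of `T`, the output has length `K * D`, which is what makes the layer suitable for utterances of arbitrary duration.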

Emotion Recognition in Speech using Cross-Modal Transfer in the Wild
Samuel Albanie*, Arsha Nagrani*, Andrea Vedaldi, Andrew Zisserman
ACM Multimedia, 2018
project page / code

We use the redundant (common) signal in both audio (speech) and vision (faces) to learn speech representations for emotion recognition without manual supervision.

VoxCeleb2: Deep Speaker Recognition
Joon Son Chung*, Arsha Nagrani*, Andrew Zisserman
INTERSPEECH, 2018

Speaker recognition in the wild using deep CNNs. The VoxCeleb datasets also form the basis of the VoxCeleb Speaker Recognition Challenge.

Learnable PINs: Cross-Modal Embeddings for Person Identity
Arsha Nagrani*, Samuel Albanie*, Andrew Zisserman
ECCV, 2018
project page

We learn a joint embedding of faces and voices using cross-modal self-supervision from YouTube videos.

Seeing Voices and Hearing Faces: Cross-modal biometric matching
Arsha Nagrani, Samuel Albanie, Andrew Zisserman
CVPR, 2018   (Spotlight)
project page / video / blog post

Can you recognise someone’s face if you have only heard their voice? Or recognise their voice if you have only seen their face?

From Benedict Cumberbatch to Sherlock Holmes: Character Identification in TV series without a Script
Arsha Nagrani, Andrew Zisserman
BMVC, 2017   (Oral Presentation)
project page

VoxCeleb: a large-scale speaker identification dataset
Arsha Nagrani*, Joon Son Chung*, Andrew Zisserman
INTERSPEECH, 2017   (Oral Presentation, Best Student Paper Award)
data / challenge

We use face recognition and active speaker detection to automatically create a large scale speaker identification dataset from YouTube videos.

Teaching/Invited Talks

"Multimodality for Video Understanding", Google Research India AI Summer School, 2020 [slides]
"Learning joint representations for visual and language tasks", Online Multimodal Knowledge Discovery Tutorial, ICDM 2020
"Applications of Machine Learning", Oxford University MPLS DTC on Statistics and Data Mining, 2020 [slides]

The End-of-End-to-End: A Video Understanding Pentathlon @ CVPR 2020
report / challenge / workshop / recording

VoxSRC: VoxCeleb Speaker Recognition Challenge @ INTERSPEECH
[2021] report / challenge / workshop / data
[2020] report / challenge / workshop / data
[2019] report / challenge / workshop / data

WICV: Women in Computer Vision Workshop @ CVPR
[2020] website / twitter
[2019] report / website / twitter

Reviewer: CVPR, ECCV, ICCV, BMVC, NeurIPS, ICML, AAAI, IEEE Access

This guy is good at website design.