Arsha Nagrani
anagrani at google dot com

I am a Research Scientist at Google Research, focusing on machine learning for video understanding. I completed my PhD with Andrew Zisserman in the VGG group at the University of Oxford, where I was fortunate to be funded by a Google PhD Fellowship.

Before that, I did my undergrad at the University of Cambridge, where I worked with Roberto Cipolla and Richard Turner.

CV  /  Google Scholar  /  LinkedIn  /  Twitter  /  GitHub  /  Thesis

profile photo
Research

My research focuses on self-supervised and multi-modal machine learning techniques for video recognition, including the use of sound and text to learn better visual representations. Recently, I have also become interested in computer vision for wildlife conservation.

mbt Attention Bottlenecks for Multimodal Fusion
Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, Chen Sun
Preprint, 2021  
arXiv / project page

A fully transformer-based audiovisual fusion model that achieves SOTA on video classification. Attention bottlenecks at multiple layers force cross-modal information to be condensed, improving accuracy at lower computational cost.
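For the curious, here is a minimal sketch of the bottleneck idea (illustrative PyTorch, not the paper's implementation; layer names and sizes are placeholders, and the real model also has the usual MLPs, layer norms and residual connections):

import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    """One fusion layer: each modality self-attends over its own tokens plus a small
    set of shared bottleneck tokens, so cross-modal information is exchanged only
    through the bottleneck rather than via full pairwise cross-attention."""
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.video_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, video_tokens, audio_tokens, bottleneck):
        # Video stream: self-attention over [video tokens ; bottleneck tokens].
        v_in = torch.cat([video_tokens, bottleneck], dim=1)
        v_out, _ = self.video_attn(v_in, v_in, v_in)
        video_tokens, b_v = torch.split(v_out, [video_tokens.size(1), bottleneck.size(1)], dim=1)

        # Audio stream: self-attention over [audio tokens ; bottleneck tokens].
        a_in = torch.cat([audio_tokens, bottleneck], dim=1)
        a_out, _ = self.audio_attn(a_in, a_in, a_in)
        audio_tokens, b_a = torch.split(a_out, [audio_tokens.size(1), bottleneck.size(1)], dim=1)

        # The two modality-specific bottleneck updates are averaged and passed on,
        # condensing whatever cross-modal information the next layer gets to see.
        return video_tokens, audio_tokens, 0.5 * (b_v + b_a)

# Toy usage: a handful of fusion tokens shared between the two streams.
video = torch.randn(2, 196, 768)       # e.g. patch tokens from RGB frames
audio = torch.randn(2, 100, 768)       # e.g. patch tokens from a spectrogram
bottleneck = torch.randn(2, 4, 768)    # small bottleneck (learned in practice)
video, audio, bottleneck = BottleneckFusionLayer()(video, audio, bottleneck)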

frozen_in_time Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval
Max Bain, Arsha Nagrani, Gul Varol, Andrew Zisserman
ICCV, 2021  
arXiv

An end-to-end encoder for video retrieval built entirely from self-attention blocks, allowing flexible joint training on variable-length videos and images.

CATE Composable Augmentation Encoding for Video Representation Learning
Chen Sun, Arsha Nagrani, Yonglong Tian, Cordelia Schmid
ICCV, 2021  
arXiv / project page / models

Encoding augmentations along with data views gives SOTA on video self-supervised learning benchmarks.

localize_sound Localizing Visual Sounds the Hard Way
Honglie Chen, Weidi Xie, Triantafyllos Afouras, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman
CVPR, 2021  
arXiv / data, project page

Localizes sounding objects without any supervision using hard negative mining from within the image. Gives SOTA on Flickr SoundNet and a new VGG-SS dataset.
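A rough sketch of the within-image hard negative idea (illustrative PyTorch, not the paper's code; the thresholds, the pooling of other images, and the exact loss form are all simplifications):

import torch
import torch.nn.functional as F

def hard_way_loss(visual_feats, audio_embed, pos_thresh=0.65, neg_thresh=0.4, tau=0.07):
    """visual_feats: (B, C, H, W) spatial CNN features; audio_embed: (B, C) clip embeddings."""
    B, C, H, W = visual_feats.shape
    v = F.normalize(visual_feats.flatten(2), dim=1)        # (B, C, HW), unit-norm per region
    a = F.normalize(audio_embed, dim=1)                    # (B, C), unit-norm per clip

    # Cosine similarity between every image region and every audio clip in the batch.
    sim = torch.einsum('icn,jc->ijn', v, a)                # (B_img, B_audio, HW)
    paired = sim[torch.arange(B), torch.arange(B)]         # (B, HW): matching image & audio

    pos_mask = (paired > pos_thresh).float()               # confidently sounding regions
    hard_neg_mask = (paired < neg_thresh).float()          # silent regions of the SAME image

    pos = (pos_mask * torch.exp(paired / tau)).sum(-1) + 1e-6
    hard_neg = (hard_neg_mask * torch.exp(paired / tau)).sum(-1)

    # Easy negatives: regions of the other images in the batch, pooled per pair.
    off_diag = 1.0 - torch.eye(B, device=sim.device)
    easy_neg = (torch.exp(sim.mean(-1) / tau) * off_diag).sum(-1)

    # Contrastive objective: pull sounding regions towards the paired audio,
    # push both the within-image hard negatives and the other images away.
    return (-torch.log(pos / (pos + hard_neg + easy_neg))).mean()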

disent Look Before you Speak: Visually Contextualized Utterances
Paul Hongsuck Seo, Arsha Nagrani, Cordelia Schmid
CVPR, 2021  
arXiv / project page

Predicting future utterances in a video based on previous dialogue and video frames without manual labels gives SOTA on standard QA datasets.

disent Slow-Fast Auditory Streams For Audio Recognition
Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, Dima Damen
ICASSP, 2021   (Outstanding Paper Award)
arXiv / code, models / project page

A two-stream audio recognition model that achieves SOTA on VGG-Sound and EPIC-Kitchens-100.

disent Playing a Part: Speaker Verification at the Movies
Andrew Brown*, Jaesung Huh*, Arsha Nagrani*, Joon Son Chung, Andrew Zisserman
ICASSP, 2021  
arXiv / data, project page

We investigate the performance of speaker recognition in movies, where actors often intentionally disguise their voice to play a character.

disent Condensed Movies: Story Based Retrieval with Contextual Embeddings
Max Bain, Arsha Nagrani, Andrew Brown, Andrew Zisserman
ACCV, 2020   (Oral Presentation)
project page, data

A large-scale story understanding dataset that contains the key scenes from movies with semantic captions.

disent Spot the conversation: speaker diarisation in the wild
Joon Son Chung*, Jaesung Huh*, Arsha Nagrani*, Triantafyllos Afouras, Andrew Zisserman
INTERSPEECH, 2020
project page, data

Breaking up multispeaker videos into "who spoke when". Based on this work we are hosting a new speaker diarisation track at the VoxCeleb Speaker Recognition Challenge.

disent Uncertainty-Aware Weakly Supervised Action Detection from Untrimmed Videos
Anurag Arnab, Chen Sun, Arsha Nagrani, Cordelia Schmid
ECCV, 2020
arXiv

Action localisation in movies using video-level labels only.

disent Speech2Action: Cross-modal Supervision for Action Recognition
Arsha Nagrani, Chen Sun, David Ross, Rahul Sukthankar, Cordelia Schmid, Andrew Zisserman
CVPR, 2020
project page, data / slides

Action recognition in movies using the speech alone.

disent Disentangled Speech Embeddings using Cross-modal Self-supervision
Arsha Nagrani*, Joon Son Chung*, Samuel Albanie*, Andrew Zisserman
ICASSP, 2020
project page / some code

Disentanglement of speech embeddings into content and identity, using only the accompanying face track as supervision. Based on this work, we are hosting a new self-supervised track at the VoxCeleb Speaker Recognition Challenge.

vox Voxceleb: Large-scale speaker verification in the wild
Arsha Nagrani, Joon Son Chung, Weidi Xie, Andrew Zisserman
Computer Speech and Language, 2020
project page, data / code & models / challenge

Overview of the VoxCeleb1 and VoxCeleb2 datasets including various updates and splits, and new models for speaker recognition.

ccr Count, Crop and Recognise: Fine-Grained Recognition in the Wild
Max Bain, Arsha Nagrani, Daniel Schofield, Andrew Zisserman
ICCV Workshops, 2019   (Oral Presentation)
project page / slides

Recognition of wild chimpanzees using full-body and full-frame CNN methods. We also release an 'in the wild' chimpanzee video recognition dataset.

chimpanzees-facial-recognition Chimpanzee face recognition from videos in the wild using deep learning
Daniel Schofield*, Arsha Nagrani*, Andrew Zisserman, Misato Hayashi, Tetsuro Matsuzawa, Dora Biro, Susana Carvalho
Science Advances, 2019
project page
Press: New Scientist, MIT Tech Review, TechXplore, Verdict, Digital Trends, Oxford News

Face detection, tracking, and recognition of wild chimpanzees from long-term video records using deep CNNs. We also show a brief application for social network analysis.

clean-usnob EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition
Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, Dima Damen
ICCV, 2019
project page / video / code and models

We propose a novel architecture for combining modalities in videos for action recognition, using a temporal binding window that allows a range of temporal offsets between modalities.

clean-usnob Use What You Have: Video Retrieval Using Representations From Collaborative Experts
Yang Liu*, Samuel Albanie*, Arsha Nagrani*, Andrew Zisserman
BMVC, 2019
project page / code & models / challenge

We fuse information from different embedding experts for the task of video retrieval, achieving SOTA results on 5 different datasets. This work also forms the basis for the Video Pentathlon at CVPR 2020.

clean-usnob Utterance-level Aggregation For Speaker Recognition In The Wild
Weidi Xie, Arsha Nagrani, Joon Son Chung, Andrew Zisserman
ICASSP, 2019   (Oral Presentation)
project page / code & models

A NetVLAD layer on top of a deep CNN works well for speaker recognition on long, noisy speech utterances.
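A minimal sketch of NetVLAD-style aggregation (illustrative PyTorch only; the cluster count and feature dimension are placeholders, and the model in the paper differs in detail):

import torch
import torch.nn as nn
import torch.nn.functional as F

class NetVLAD(nn.Module):
    """Soft-assigns a variable number of frame-level features to K learned clusters
    and pools the residuals into a single fixed-length utterance descriptor."""
    def __init__(self, num_clusters=8, dim=512):
        super().__init__()
        self.assign = nn.Linear(dim, num_clusters)                    # soft-assignment logits
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim) * 0.02)

    def forward(self, x):                                             # x: (B, N, D), N can vary
        soft = F.softmax(self.assign(x), dim=-1)                      # (B, N, K)
        residuals = x.unsqueeze(2) - self.centroids                   # (B, N, K, D)
        vlad = (soft.unsqueeze(-1) * residuals).sum(dim=1)            # (B, K, D)
        vlad = F.normalize(vlad, dim=-1)                              # intra-normalisation
        return F.normalize(vlad.flatten(1), dim=-1)                   # (B, K*D) descriptor

# Toy usage: utterances of different lengths map to descriptors of the same size.
short = NetVLAD()(torch.randn(4, 120, 512))   # 120 frame-level features
long = NetVLAD()(torch.randn(4, 900, 512))    # 900 frame-level features
assert short.shape == long.shape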

clean-usnob Emotion Recognition in Speech using Cross-Modal Transfer in the Wild
Samuel Albanie*, Arsha Nagrani*, Andrea Vedaldi, Andrew Zisserman
ACM Multimedia, 2018
project page / code

We use the redundant (common) signal in both audio (speech) and vision (faces) to learn speech representations for emotion recognition without manual supervision.

safs_small VoxCeleb2: Deep Speaker Recognition
Joon Son Chung*, Arsha Nagrani*, Andrew Zisserman
INTERSPEECH, 2018
data

Speaker recognition in the wild using deep CNNs. The VoxCeleb datasets also form the core of the VoxCeleb Speaker Recognition Challenge.

fast-texture Learnable PINs: Cross-Modal Embeddings for Person Identity
Arsha Nagrani*, Samuel Albanie*, Andrew Zisserman
ECCV, 2018
project page

We learn a joint embedding of faces and voices using cross-modal self-supervision from YouTube videos.

prl Seeing Voices and Hearing Faces: Cross-modal biometric matching
Arsha Nagrani, Samuel Albanie, Andrew Zisserman
CVPR, 2018   (Spotlight)
project page / video / blog post

Can you recognise someone’s face if you have only heard their voice? Or recognise their voice if you have only seen their face?

blind-date From Benedict Cumberbatch to Sherlock Holmes: Character Identification in TV series without a Script
Arsha Nagrani, Andrew Zisserman
BMVC, 2017   (Oral Presentation)
project page

clean-usnob VoxCeleb: a large-scale speaker identification dataset
Arsha Nagrani*, Joon Son Chung*, Andrew Zisserman
INTERSPEECH, 2017   (Oral Presentation, Best Student Paper Award)
data / challenge

We use face recognition and active speaker detection to automatically create a large scale speaker identification dataset from YouTube videos.

Teaching/Invited Talks

"Multimodality for Video Understanding", Google Research India AI Summer School, 2020 [slides]
"Learning joint representations for visual and language tasks", Online Multimodal Knowledge Discovery Tutorial, ICDM 2020 "Applications of Machine Learning", Oxford University MPLS DTC on Statistics and Data Mining, 2020 [slides]

Service
video-pent The End-of-End-to-End: A Video Understanding Pentathlon @ CVPR 2020
report / challenge / workshop / recording

voxsrc VoxSRC: VoxCeleb Speaker Recognition Challenge @ INTERSPEECH
[2021] challenge / workshop / data
[2020] report / challenge / workshop / data
[2019] report / challenge / workshop / data

wicv WICV: Women in Computer Vision Workshop @ CVPR
[2020] website / twitter
[2019] report / website / twitter

review Reviewer: CVPR, ECCV, ICCV, BMVC, NeurIPS, ICML, AAAI, IEEE Access

This guy is good at website design.