

Contribution talk
Workshop on Visualization for Deep Learning, 2017

Skip-Frame Embeddings for Feature Adaptation and Visualization
Zain Shah

Abstract:

We present an unsupervised method for visualizing the generalization and adaptation capabilities of pre-trained features on video. Like the skip-gram method for unsupervised learning of word-vector representations, we exploit temporal continuity in the target media, namely that neighboring video frames are qualitatively similar. By enforcing this continuity in the adapted feature space, we can adapt features to a new target task, such as house price prediction, without supervision. The resulting domain-specific embeddings can be easily visualized for qualitative introspection and evaluation.
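The core idea, adapting pre-trained features by enforcing that temporally neighboring frames stay close in the adapted space, can be sketched with a small numpy example. This is a minimal illustration, not the authors' implementation: the `feats` array is a synthetic stand-in for pre-trained frame features, the adaptation is a single linear map, and the contrastive hinge term (to keep the embedding from collapsing to a point) is an assumed design choice with illustrative hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for pre-trained features of T video frames (D dims):
# a smooth random walk, so neighboring frames are similar (temporal continuity).
T, D, K = 200, 32, 8
feats = np.cumsum(rng.normal(scale=0.1, size=(T, D)), axis=0)

W = rng.normal(scale=0.1, size=(D, K))  # linear adaptation into K-dim embeddings
idx = (np.arange(T) + T // 2) % T       # a fixed distant "negative" frame per frame
margin = 1.0

def loss_and_grad(W):
    z = feats @ W                       # adapted embeddings, shape (T, K)
    # Attraction: neighboring frames should stay close in the adapted space.
    d_pos = z[1:] - z[:-1]
    loss_pos = (d_pos ** 2).sum() / (T - 1)
    # Repulsion: distant frames pushed at least `margin` apart (avoids collapse).
    d_neg = z - z[idx]
    dist = np.linalg.norm(d_neg, axis=1)
    hinge = np.maximum(0.0, margin - dist)
    loss_neg = (hinge ** 2).mean()
    # Backpropagate both terms by hand into z, then into W.
    gz = np.zeros_like(z)
    gz[1:] += 2.0 * d_pos / (T - 1)
    gz[:-1] -= 2.0 * d_pos / (T - 1)
    safe = np.where(dist > 1e-8, dist, 1.0)
    coeff = (-2.0 * hinge / safe / T)[:, None]  # zero where hinge is inactive
    gz += coeff * d_neg
    np.add.at(gz, idx, -coeff * d_neg)
    return loss_pos + loss_neg, feats.T @ gz

losses = []
for _ in range(300):
    loss, grad = loss_and_grad(W)
    losses.append(loss)
    W -= 0.002 * grad                   # plain gradient descent
```

After training, the adapted embeddings `feats @ W` can be projected to 2-D (e.g. with PCA or t-SNE) for the kind of qualitative introspection the abstract describes.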
