Invited Talk in Workshop: Uncertainty and Robustness in Deep Learning
Some Thoughts on Generalization, Robustness, and Their Application with CLIP
Alec Radford
Abstract:
Out-of-distribution (OOD) generalization is a very difficult problem. Rather than tackling it head-on, this talk argues that, given the current strengths and weaknesses of deep learning, we should pursue an alternative approach that tries to sidestep the problem altogether. If we can develop scalable pre-training methods that leverage large and highly varied data sources, there is hope that many examples (which would have been OOD for standard ML datasets) will have at least some relevant training data, removing the need for elusive OOD capabilities.
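The broad-pre-training bet the abstract describes is embodied in CLIP-style zero-shot classification: an image embedding is scored against text embeddings of candidate class prompts by cosine similarity. Below is a minimal sketch of that scoring step, using toy vectors as stand-ins for real encoder outputs; the function name, temperature value, and embeddings are illustrative assumptions, not the actual CLIP implementation.

```python
import numpy as np

def zero_shot_probs(image_emb, text_embs, temperature=0.01):
    """Score one image embedding against class-prompt embeddings
    via cosine similarity, then softmax into probabilities."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy vectors standing in for encoder outputs (illustrative, not real CLIP weights).
image_emb = np.array([0.9, 0.1, 0.0])
text_embs = np.array([
    [1.0, 0.0, 0.0],   # e.g. "a photo of a dog"
    [0.0, 1.0, 0.0],   # e.g. "a photo of a cat"
])
probs = zero_shot_probs(image_emb, text_embs)
print(probs.argmax())  # index of the prompt that best matches the image
```

Because the prompts are free-form text, any category describable in language can be scored this way without task-specific training data, which is what lets broad pre-training stand in for OOD generalization.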