Poster in Workshop: ICML 2024 Workshop on Foundation Models in the Wild
VFA: Vision Frequency Analysis of Foundation Models and Human
Javad Bayazi · Md Rifat Arefin · Jocelyn Faubert · Irina Rish
Keywords: [ Human Alignment ] [ Scaling Law ] [ Out-of-Distribution Generalization ] [ Critical Frequency Band Masking ] [ Foundation Models ]
Machine learning models often struggle with distribution shifts in real-world scenarios, whereas humans adapt robustly. Models that align more closely with human perception may therefore achieve better out-of-distribution generalization. In this study, we investigate how various characteristics of large-scale computer vision models influence both their alignment with human perception and their robustness. Our findings indicate that increasing model and data size, along with incorporating rich semantic information and multiple modalities, significantly enhances models' alignment with human perception and their overall robustness. Our empirical analysis demonstrates a strong correlation between out-of-distribution accuracy and human alignment.
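To make the kind of analysis described above concrete, the sketch below illustrates two generic building blocks it implies: a band-pass frequency mask applied to an image in the Fourier domain (in the spirit of the "critical frequency band masking" keyword), and a Pearson correlation between per-model out-of-distribution accuracy and a human-alignment score. This is not the authors' code; the function names, band limits, and all numbers are illustrative assumptions, with synthetic data standing in for real model evaluations.

```python
# Minimal sketch (not the paper's implementation): frequency-band masking of an
# image and a correlation between OOD accuracy and human-alignment scores.
# All data below is synthetic and names are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr


def bandpass_mask(image: np.ndarray, low: float, high: float) -> np.ndarray:
    """Keep only spatial frequencies whose normalized radius (cycles/pixel,
    in [0, 0.5]) lies in [low, high); zero out the rest and invert the FFT."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]            # vertical frequency of each row
    fx = np.fft.fftfreq(w)[None, :]            # horizontal frequency of each column
    radius = np.sqrt(fx**2 + fy**2)            # radial frequency of each FFT bin
    keep = (radius >= low) & (radius < high)   # boolean band-pass region
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * keep))


# Example: retain only a mid-frequency band of a random grayscale image.
img = np.random.rand(224, 224)
mid_band = bandpass_mask(img, low=0.05, high=0.15)

# Correlate OOD accuracy with a human-alignment score across models
# (the values here are made up purely to show the computation).
ood_accuracy = np.array([0.42, 0.55, 0.61, 0.70, 0.76])
human_alignment = np.array([0.30, 0.44, 0.52, 0.63, 0.71])
r, p = pearsonr(ood_accuracy, human_alignment)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```

In an actual study, the masked images would be fed to each model (and shown to human observers) to measure accuracy per frequency band, and the correlation would be computed over real model scores rather than the placeholder arrays used here.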