

Workshop

ML Interpretability for Scientific Discovery

Subhashini Venugopalan · Michael Brenner · Scott Linderman · Been Kim

Fri 17 Jul, 6:50 a.m. PDT

Keywords:  Interpretability    Science    Discovery  

ML has shown great promise in modeling and predicting complex phenomena in many scientific disciplines, such as predicting cardiovascular risk factors from retinal images, understanding how electrons behave at the atomic level [3], and identifying patterns of weather and climate phenomena. Further, models are able to learn directly (and better) from raw data as opposed to human-selected features. The ability to interpret such models and find significant predictors could provide new scientific insights.

Traditionally, the scientific discovery process has been based on careful observation of natural phenomena, followed by systematic human analysis (hypothesis generation and experimental validation). ML interpretability has the potential to bring a radically different yet principled approach. While general interpretability relies on 'human parsing' (common sense), scientific domains have semi-structured and highly structured bases for interpretation. Thus, despite differences in data modalities and domains, be it the brain sciences, the behavioral sciences, or the material sciences, there is a need for a common set of tools that address a similar flavor of problem: one of interpretability or of fitting models to a known structure.

This workshop aims to bring together members of the ML and physical sciences communities to introduce exciting problems to the broader community and stimulate the production of new approaches towards solving open scientific problems.

Timezone: America/Los_Angeles

Schedule