Position: Significant impact of numerical precision in scientific machine learning
Abstract
The machine learning community has focused on computational efficiency, often leveraging lower-precision formats such as FP16 rather than the standard FP32. In contrast, little attention has been paid to higher-precision formats such as FP64, despite their critical role in scientific domains like materials science, where even small numerical differences can lead to significant inaccuracies in predicted physicochemical properties. This need for high precision extends to the emerging field of machine learning for scientific tasks, yet it has not been thoroughly investigated. According to several studies and our own experiments, models trained with FP32 show insufficient accuracy compared to those trained with FP64, indicating that higher precision is as crucial in scientific machine learning as it is in traditional scientific computing. This precision issue limits the potential of scientific machine learning to replace traditional scientific computing in practical research. Our position paper not only highlights these precision-related issues but also recommends reporting comparisons between FP32 and FP64 results and encourages the release of FP64 models. We believe these efforts can enable machine learning to contribute meaningfully to the natural sciences, ensuring both scientific reliability and practical applicability.
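To make the precision gap concrete, the following is a minimal illustrative sketch (not taken from the paper's experiments): FP32 carries roughly 7 decimal digits of precision (machine epsilon ≈ 1.19e-7), so a perturbation smaller than that, such as a tiny energy difference between two atomic configurations, vanishes entirely under FP32 arithmetic, while FP64 (≈ 16 digits) resolves it. The variable names and the 1e-8 perturbation are hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical perturbation, e.g. a small energy difference between two
# atomic configurations, well below FP32 machine epsilon (~1.19e-7).
delta = 1e-8

x32 = np.float32(1.0) + np.float32(delta)  # rounds back to exactly 1.0
x64 = np.float64(1.0) + np.float64(delta)  # the perturbation survives

print(x32 - np.float32(1.0))  # 0.0: the difference is lost in FP32
print(x64 - np.float64(1.0))  # ~1e-8: preserved in FP64

print(np.finfo(np.float32).eps)  # FP32 machine epsilon, ~1.19e-07
print(np.finfo(np.float64).eps)  # FP64 machine epsilon, ~2.22e-16
```

In a property-prediction pipeline, such sub-epsilon differences accumulate across many operations, which is why FP32-trained models can fall short of the accuracy that scientific applications require.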