$O(N^2)$ Universal Antisymmetry in Fermionic Neural Networks
Tianyu Pang · Shuicheng Yan · Min Lin
Event URL: https://openreview.net/forum?id=d23FsidO9s0
The fermionic neural network (FermiNet) is a recently proposed wavefunction Ansatz used in variational Monte Carlo (VMC) methods to solve the many-electron Schrödinger equation. FermiNet employs permutation-equivariant architectures, on top of which a Slater determinant is applied to induce antisymmetry. FermiNet has been proven to have universal approximation capability with a single determinant; that is, given sufficient parameters, it can represent any antisymmetric function. However, the asymptotic computational bottleneck is the Slater determinant itself, which scales as $O(N^3)$ for $N$ electrons. In this paper, we replace the Slater determinant with a pairwise antisymmetry construction, which is easy to implement and reduces the computational cost to $O(N^2)$. Furthermore, we formally prove that the pairwise construction built upon permutation-equivariant architectures can universally represent any antisymmetric function.
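For intuition, the mechanism behind pairwise antisymmetry can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the paper's actual construction: it assumes each electron has already been mapped to a scalar feature h[i] by some permutation-equivariant network (so exchanging two electrons exchanges the corresponding entries of h), and it forms a Vandermonde-style product of pairwise differences, which is antisymmetric and costs $O(N^2)$ to evaluate.

import numpy as np

def pairwise_antisymmetric_psi(h):
    """Vandermonde-style pairwise antisymmetry (illustrative sketch only).

    h: array of shape (N,), one scalar feature per electron, assumed to
    come from a permutation-equivariant network, so exchanging two
    electrons exchanges the corresponding entries of h.

    Any transposition of two entries of h flips the overall sign of the
    product below, so psi is antisymmetric. Forming all pairwise
    differences costs O(N^2), versus O(N^3) for a Slater determinant.
    """
    diffs = h[:, None] - h[None, :]          # (N, N) matrix of h[i] - h[j]
    i, j = np.triu_indices(h.shape[0], k=1)  # index pairs with i < j
    return np.prod(diffs[i, j])              # product over (h[i] - h[j])

# Sanity check: swapping two electrons flips the sign of psi.
h = np.random.randn(5)
psi = pairwise_antisymmetric_psi(h)
h_swapped = h.copy()
h_swapped[[0, 3]] = h_swapped[[3, 0]]
assert np.isclose(pairwise_antisymmetric_psi(h_swapped), -psi)

Note that this scalar Vandermonde sketch only shows how pairwise factors induce antisymmetry at $O(N^2)$ cost; by itself it can represent only a restricted family of antisymmetric functions, whereas the paper proves universality for its pairwise construction built on permutation-equivariant architectures.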
Author Information
Tianyu Pang (Sea AI Lab)
Shuicheng Yan (Sea AI Lab)
Min Lin (Sea AI Lab)
More from the Same Authors
- 2021: Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks
  Xiao Yang · Yinpeng Dong · Tianyu Pang
- 2022 Poster: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
  Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan
- 2022 Spotlight: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
  Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan
- 2021 Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning
  Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian
- 2019 Poster: On the Spectral Bias of Neural Networks
  Nasim Rahaman · Aristide Baratin · Devansh Arpit · Felix Draxler · Min Lin · Fred Hamprecht · Yoshua Bengio · Aaron Courville
- 2019 Oral: On the Spectral Bias of Neural Networks
  Nasim Rahaman · Aristide Baratin · Devansh Arpit · Felix Draxler · Min Lin · Fred Hamprecht · Yoshua Bengio · Aaron Courville
- 2019 Poster: Improving Adversarial Robustness via Promoting Ensemble Diversity
  Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu
- 2019 Oral: Improving Adversarial Robustness via Promoting Ensemble Diversity
  Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu
- 2018 Poster: Max-Mahalanobis Linear Discriminant Analysis Networks
  Tianyu Pang · Chao Du · Jun Zhu
- 2018 Oral: Max-Mahalanobis Linear Discriminant Analysis Networks
  Tianyu Pang · Chao Du · Jun Zhu