Poster
(How) Can Transformers Predict Pseudo-Random Numbers?
Tao Tao · Darshil Doshi · Dayal Singh Kalra · Tianyu He · Maissam Barkeshli
East Exhibition Hall A-B #E-1206
We study whether Transformers, a particular class of AI models, can learn to predict the outputs of pseudo-random number generators (PRNGs): sequences of numbers that appear random but follow hidden mathematical rules. We find that, given sufficient model capacity and enough example sequences, Transformers can successfully predict new and unseen PRNGs by inferring the underlying rules. The models develop their own prediction strategies, which involve breaking the numbers down into smaller prime factors and using these factors to simplify the sequences. Our research shows how modern AI systems can discover and apply complex mathematical rules without being explicitly programmed to do so, helping us understand both their capabilities and limitations.
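To make the prime-factor idea concrete, here is a minimal sketch assuming a linear congruential generator (LCG) as the PRNG and hypothetical constants (a=7, c=11, m=120); the abstract does not specify the generator family, so these choices are illustrative only. By the Chinese Remainder Theorem, a sequence modulo m is fully determined by its residues modulo the prime-power factors of m, so tracking those smaller residue sequences is one way "breaking the numbers into prime factors" can simplify prediction.

```python
def lcg(a, c, m, x0, n):
    """Generate n terms of a linear congruential generator:
    x_{k+1} = (a * x_k + c) mod m. Constants here are hypothetical."""
    xs, x = [], x0
    for _ in range(n):
        x = (a * x + c) % m
        xs.append(x)
    return xs

def prime_power_factors(m):
    """Trial-division factorization of m, returned as a set of prime powers."""
    factors, p = {}, 2
    while p * p <= m:
        while m % p == 0:
            factors[p] = factors.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return {p ** e for p, e in factors.items()}

m = 120  # = 2^3 * 3 * 5, a hypothetical modulus
seq = lcg(a=7, c=11, m=m, x0=1, n=6)
# The full sequence mod 120 is equivalent to three simpler sequences
# mod 8, mod 3, and mod 5 (CRT decomposition).
for q in sorted(prime_power_factors(m)):
    print(q, [x % q for x in seq])
```

Each residue sequence modulo a prime-power factor is itself a smaller LCG, so a model that internally performs this decomposition only has to solve several easier sub-problems.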