Poster in Workshop: 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML)
Breaking the Structure of Multilayer Perceptrons with Complex Topologies
Tommaso Boccato · Matteo Ferrante · Andrea Duggento · Nicola Toschi
Recent advances in neural network (NN) architectures have demonstrated that complex topologies have the potential to surpass the performance of conventional feedforward networks. Nonetheless, previous studies investigating the relationship between network topology and model performance have yielded inconsistent results, limiting their applicability beyond the specific settings examined. In this study, we examine the utility of directed acyclic graphs (DAGs) for modeling intricate relationships among the neurons of a NN. We introduce a novel algorithm for the efficient training of DAG-based networks and assess their performance relative to multilayer perceptrons (MLPs). Through experiments on synthetic datasets with varying levels of difficulty and noise, we observe that complex networks built on pertinent graphs outperform MLPs in accuracy, particularly in high-difficulty scenarios. We also examine the theoretical underpinnings of these observations and discuss the potential trade-offs of employing complex networks. Our research offers valuable insights into the capabilities and constraints of sophisticated NN architectures, contributing to the ongoing pursuit of more powerful and efficient deep learning models.
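The abstract does not detail the DAG-based architecture, so the following is a minimal sketch, assuming each node of the DAG acts as a single neuron that aggregates its graph predecessors and that nodes are evaluated in topological order. The class name `DAGNetwork`, the per-edge scalar weights, and the ReLU activation are illustrative assumptions, not the authors' implementation or training algorithm.

```python
# A minimal sketch of a neural network whose connectivity is given by a DAG.
# ASSUMPTIONS (not stated in the abstract): one neuron per node, one scalar
# weight per edge, ReLU on hidden nodes, linear output nodes.
import torch
import torch.nn as nn


class DAGNetwork(nn.Module):
    def __init__(self, edges, in_nodes, out_nodes):
        super().__init__()
        self.edges = edges                      # list of (src, dst) pairs
        self.in_nodes, self.out_nodes = in_nodes, out_nodes
        nodes = {n for e in edges for n in e}
        self.order = self._topological_sort(nodes, edges)
        # one learnable weight per edge, one bias per non-input node
        self.w = nn.ParameterDict({f"{s}_{d}": nn.Parameter(torch.randn(()) * 0.1)
                                   for s, d in edges})
        self.b = nn.ParameterDict({str(n): nn.Parameter(torch.zeros(()))
                                   for n in nodes if n not in in_nodes})

    @staticmethod
    def _topological_sort(nodes, edges):
        # place a node once all of its predecessors have been placed
        preds = {n: [s for s, d in edges if d == n] for n in nodes}
        order, placed = [], set()
        while len(order) < len(nodes):
            for n in nodes:
                if n not in placed and all(p in placed for p in preds[n]):
                    order.append(n)
                    placed.add(n)
        return order

    def forward(self, x):                       # x: (batch, len(in_nodes))
        h = {n: x[:, i] for i, n in enumerate(self.in_nodes)}
        for n in self.order:
            if n in h:                          # input node, value already set
                continue
            pre = sum(self.w[f"{s}_{d}"] * h[s]
                      for s, d in self.edges if d == n)
            z = pre + self.b[str(n)]
            h[n] = z if n in self.out_nodes else torch.relu(z)
        return torch.stack([h[n] for n in self.out_nodes], dim=1)


# Example: a tiny diamond-shaped DAG with two inputs and one output.
net = DAGNetwork(edges=[(0, 2), (1, 2), (0, 3), (1, 3), (2, 4), (3, 4)],
                 in_nodes=[0, 1], out_nodes=[4])
y = net(torch.randn(8, 2))                      # -> shape (8, 1)
```

Because every parameter is reached through ordinary tensor operations, such a network can be trained end-to-end with standard backpropagation; the paper's contribution of an efficient training algorithm for DAG-based networks is not reproduced here.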