## Aren Martinian, L&S Math & Physical Sciences

### Free Probability in Infinite Depth Neural Networks

In the past few years, neural networks have gone from obscure to ubiquitous. The technology is strikingly versatile but conceptually ill-understood: there is a large gap between practice and theory, and much has yet even to be conjectured. One example is the overfitting paradox. Overfitting is usually a problem when programmers model a complex system, such as the brain, because they must base their model on finitely many examples of that system's behavior. Traditionally, a program that perfectly replicates these examples fails to capture the underlying system. Surprisingly, large neural networks do not in general suffer from this deficiency.
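
The classical picture described above can be sketched with a toy example (an assumed illustration, not part of the project): a polynomial with as many parameters as data points replicates noisy training samples exactly, yet strays from the function that generated them.

```python
import numpy as np

# Toy illustration of classical overfitting (an assumed example, not the project's code).
# Fit ten noisy samples of sin(pi * x) with a cubic and with a degree-9 polynomial;
# the degree-9 fit has enough parameters to pass through every training point.

rng = np.random.default_rng(1)
x_train = np.linspace(-1, 1, 10)
y_train = np.sin(np.pi * x_train) + 0.1 * rng.standard_normal(10)

low = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)
high = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)  # interpolates the data

x_test = np.linspace(-1, 1, 200)
y_true = np.sin(np.pi * x_test)
err_low = np.max(np.abs(low(x_test) - y_true))    # error against the true function
err_high = np.max(np.abs(high(x_test) - y_true))
# The interpolating fit memorizes the noise; its off-sample error is typically larger.
```

The overfitting paradox is that large neural networks, despite having far more parameters than training examples, typically behave unlike the degree-9 polynomial here.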

Recent developments suggest that free probability, traditionally used to understand large random matrices, can be used to explain the ways in which large neural networks typically behave. Our project would use free probability to explain the overfitting paradox by describing the average behavior of highly trained neural networks.
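
A minimal sketch of the kind of random-matrix statement free probability builds on (an illustrative example, not the project's results): by Wigner's semicircle law, the eigenvalues of a large symmetric random matrix, suitably scaled, concentrate on the interval [-2, 2].

```python
import numpy as np

# Illustrative sketch (not the project's code): Wigner's semicircle law, a basic
# random-matrix fact that free probability generalizes. After scaling by 1/sqrt(n),
# the eigenvalues of a large symmetric Gaussian matrix concentrate on [-2, 2].

rng = np.random.default_rng(0)
n = 1000
a = rng.standard_normal((n, n))
h = (a + a.T) / np.sqrt(2 * n)  # symmetrize and scale: entry variance ~ 1/n

eigs = np.linalg.eigvalsh(h)
print(eigs.min(), eigs.max())  # both endpoints land near -2 and 2
```

Free probability extends this kind of spectral description to sums and products of independent random matrices, which is what makes it a natural tool for the large random weight matrices appearing in deep networks.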

#### Message To Sponsor

Thank you so much for giving us the opportunity to conduct research at Berkeley! Ben and I were able to prove concrete results that we hope to write up in a formal paper and submit for publication. We also discovered the trials and tribulations of research, including the substantial amount of time devoted to searching the literature and deciding which problem to tackle, activities in which we previously had little experience. Although applied mathematics is not my personal area of interest, I developed a love for random matrix theory through this project.

**Major:** Mathematics

**Mentor:** Federico Pasqualotto

**Sponsor:** Anselm MPS