703 Thackeray
Abstract or Additional Information
Scaling laws play a central role in understanding the behavior, limitations, and design of modern machine learning systems. This talk presents two contributions that illustrate this principle in distinct settings. The first introduces a higher-order regularization framework for hypergraph learning, motivated by an analysis of large-data asymptotics; the same analysis yields a precise characterization of well-posedness and ill-posedness in terms of how hypergraph connectivity scales with the sample size. The second part focuses on multiple operator learning, proposing an architecture with strong empirical performance and provable expressivity. In particular, for Lipschitz continuous operator families, bounds are derived on the network width, depth, and sparsity required to achieve a desired approximation accuracy. Together, these results underscore how scaling considerations can inform both the theoretical foundations and the practical design of learning systems across a range of modern applications.
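As a purely schematic illustration of what such an expressivity guarantee asserts (the symbols W, D, S, K, and the operator family below are placeholders, not quantities stated in the talk), the result can be phrased as: for every target accuracy there exists a network whose width, depth, and number of nonzero weights are controlled in terms of that accuracy, while the worst-case approximation error over the family stays below it.

\[
\forall\, \varepsilon > 0,\ \forall\, \mathcal{G} \in \mathfrak{F}:\quad
\exists\, \Phi_\varepsilon \ \text{with}\ 
\mathrm{width}(\Phi_\varepsilon) \le W(\varepsilon),\quad
\mathrm{depth}(\Phi_\varepsilon) \le D(\varepsilon),\quad
\mathrm{nnz}(\Phi_\varepsilon) \le S(\varepsilon),
\]
\[
\sup_{u \in K}\ \big\| \Phi_\varepsilon(u) - \mathcal{G}(u) \big\| \le \varepsilon,
\]

where \(\mathfrak{F}\) denotes a Lipschitz continuous family of operators and \(K\) a compact set of admissible inputs.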