The Power and Limitations of Convexity in Data Science

We have a special talk scheduled, given by one of our candidates for the Tenure-Track Assistant Professor Position in Mathematics of Deep Learning.

Tea and coffee will be served beforehand in Thackeray 705 at 11:30 AM.

Monday, January 23, 2023 - 12:00 to 13:00

704 Thackeray Hall

Speaker Information
Dr. Oscar Leong (he/him/his)
von Karman Instructor in Computing and Mathematical Sciences
California Institute of Technology

Abstract or Additional Information

Optimization is a fundamental pillar of data science. Traditionally, the art and challenge in optimization lay primarily in problem formulation to ensure desirable properties such as convexity. In contemporary data science, however, optimization is practiced differently: the dominant paradigm in high-dimensional problems is to apply scalable local search methods to nonconvex objectives. This shift has raised a number of foundational mathematical challenges at the interface between optimization and data science, centered on the dichotomy between convexity and nonconvexity.

In this talk, I will discuss some of my work addressing these challenges in regularization, a technique used to encourage structure in solutions to statistical estimation and inverse problems. Even setting aside computational considerations, we currently lack a systematic understanding, from a modeling perspective, of what types of geometries should be preferred in a regularizer for a given data source. In particular, given a data distribution, what is the optimal regularizer for such data, and what properties govern whether the data are amenable to convex regularization? Using ideas from star geometry, Brunn-Minkowski theory, and variational analysis, I show that we can characterize the optimal regularizer for a given distribution and establish conditions under which this optimal regularizer is convex. Moreover, I describe results establishing the robustness of our approach, such as convergence of optimal regularizers as the sample size increases and statistical learning guarantees, with applications to several classes of regularizers of interest.