Interpretable modeling, representation, and learning with transport transforms

We have a special talk scheduled for one of our candidates for the Tenure-Track Assistant Professor position in Mathematics of Deep Learning.

Tea and coffee will be served at this event in Thackeray 705 at 1:30 PM. 

Thursday, February 2, 2023 - 14:00 to 15:00

704 Thackeray Hall

Speaker Information
Shiying Li
Post-Doctoral Research Associate
University of North Carolina-Chapel Hill

Abstract or Additional Information

Data or patterns (e.g., signals and images) emanating from physical sensors and biomedical systems often live in high-dimensional spaces and exhibit complicated non-linear structures. The known and unknown non-linearities present in high-dimensional data pose challenges for modeling, and hence for constructing effective learning algorithms for downstream tasks. Though many 'black-box' models and algorithms exist and perform very well in various applications in terms of accuracy, they often come with high computational cost and results that are difficult to interpret.

In this talk, I will show situations where simple models are as accurate as more complicated models while having the advantage of being more interpretable and computationally efficient. The modeling combines a template-deformation-based assumption, which naturally describes morphological variations within data, with a representation of data in an embedding space via certain invertible non-linear transforms motivated by optimal transport theory. I will describe situations in which the data geometry is simplified (convexified) in the embedding space, where the computational complexity can be reduced significantly. Such modeling also gives rise to simple machine learning algorithms that can incorporate meaningful invariances, have few or no hyperparameters, and are robust to out-of-distribution samples (generalizability). I will show applications in various image and signal classification problems.
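To give a flavor of the embedding idea the abstract alludes to: in one dimension, a transport transform (the cumulative distribution transform, CDT, from the optimal transport literature) maps a signal to the optimal transport map between a reference density and the signal viewed as a density. A key linearizing property is that translating a signal shifts its transform by a constant. The sketch below is purely illustrative and not the speaker's implementation; the function name `cdt`, the uniform reference, and the grid choices are assumptions for the example.

```python
import numpy as np

def cdt(signal, x):
    """Illustrative 1D transport transform of a nonnegative signal on grid x,
    with respect to a uniform reference on the same domain (a sketch, not the
    speaker's method)."""
    s = np.asarray(signal, dtype=float)
    # cumulative trapezoid rule -> CDF of the signal viewed as a density
    F = np.concatenate(([0.0], np.cumsum(0.5 * (s[1:] + s[:-1]) * np.diff(x))))
    F /= F[-1]                          # normalize total mass to 1
    u = np.linspace(0.0, 1.0, len(x))   # uniform reference CDF values
    # transport map f = F_s^{-1} o F_r, via inverse interpolation of the CDF
    return np.interp(u, F, x)

x = np.linspace(-10.0, 10.0, 2001)
bump = np.exp(-0.5 * x**2)              # Gaussian "template"
shifted = np.exp(-0.5 * (x - 3.0)**2)   # the same template translated by 3

# translation in signal space becomes a constant shift in transform space,
# so the difference of transforms recovers the translation (up to discretization)
shift_estimate = np.median(cdt(shifted, x) - cdt(bump, x))
print(shift_estimate)
```

In transform space, the non-linear deformation (here, translation) acts linearly, which is the sense in which the data geometry is "convexified": classes that are non-convex in signal space can become convex, and therefore separable by simple linear methods, in the embedding space.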