We have a special talk scheduled for one of our candidates for the Tenure-Track Assistant Professor position in Mathematics of Deep Learning.
Tea and coffee will be served at this event in Thackeray 705 at 11:30 AM.
704 Thackeray Hall
Abstract
Deep neural networks (DNNs) have become a popular tool in many fields thanks to their universal approximation properties. This talk delves into the design of efficient DNN algorithms and architectures, as well as their use in solving optimal control problems. DNN architectures are notoriously difficult to design and train, often suffering from exploding or vanishing gradients. These networks also admit a large number of equivalent optimal solutions in the parameter space. A neural network architecture based on fractional derivatives will be introduced to address the former challenge. Additionally, the notion of bias ordering, with theoretical guarantees, will be introduced to narrow the parameter search space.

The second half of the talk will focus on applications of DNNs to high-dimensional optimal control problems. Traditional methods for solving these problems often suffer from the curse of dimensionality, where computational complexity grows exponentially with the dimension of the problem. Approximating the value function of the control problem with a DNN can effectively mitigate this issue. A key challenge in training is discovering the relevant parts of the state space; to address this, techniques from control theory will be employed to devise a self-supervised training algorithm. Several numerical experiments, including applications to PDE-constrained optimization, will be presented.