Thackeray Hall 704

### Abstract or Additional Information

Anderson acceleration (AA) is an extrapolation technique, introduced in 1965, that recombines the most recent iterates and update steps of a fixed-point iteration to improve the convergence of the sequence. Although AA has been used successfully for many years to improve nonlinear solver behavior on a wide variety of problems, a theory explaining the often-observed accelerated convergence was lacking. In this talk, we give an introduction to AA, then present a convergence proof showing how AA improves the linear convergence rate by a gain factor associated with an underlying optimization problem, while also introducing higher-order terms into the residual error bound. We then discuss improvements to AA motivated by this convergence theory, present numerical results for the algorithms applied to several application problems, including Navier-Stokes, Boussinesq, and nonlinear Helmholtz systems, and examine the effect of AA on superlinearly and sublinearly converging iterations.
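To make the idea concrete, here is a minimal sketch of depth-`m` Anderson acceleration in Python/NumPy. It is an illustrative implementation of the standard scheme, not the specific algorithm or variants presented in the talk: at each step it keeps a short history of iterates and residuals, solves a small least-squares problem for the mixing coefficients (written here in the usual difference formulation), and combines the stored evaluations of the fixed-point map accordingly. The function names and parameters are chosen for the example.

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, max_iter=100):
    """Anderson acceleration of the fixed-point iteration x <- g(x).

    Keeps up to m previous residuals f_k = g(x_k) - x_k and, at each
    step, solves a small least-squares problem for the coefficients
    that recombine the stored g-evaluations. A minimal sketch, not an
    optimized or robust implementation.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    G = [g(x)]        # history of g-evaluations g(x_k)
    F = [G[0] - x]    # history of residuals f_k = g(x_k) - x_k
    x = G[0]          # first step is a plain fixed-point update
    for _ in range(1, max_iter):
        gx = g(x)
        f = gx - x
        G.append(gx)
        F.append(f)
        if len(F) > m + 1:            # drop the oldest history entry
            G.pop(0)
            F.pop(0)
        if np.linalg.norm(f) < tol:
            return x
        # Difference formulation of the constrained least-squares
        # problem: gamma = argmin || f - dF @ gamma ||_2, where the
        # columns of dF (dG) are consecutive residual (g-value)
        # differences over the stored history.
        mk = len(F) - 1
        dF = np.column_stack([F[i + 1] - F[i] for i in range(mk)])
        dG = np.column_stack([G[i + 1] - G[i] for i in range(mk)])
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = gx - dG @ gamma           # accelerated update
    return x
```

As a quick usage example, applying this to the linearly convergent iteration `x <- cos(x)` drives the residual to the fixed point near 0.739 in only a handful of iterations, compared with dozens for the plain iteration.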