The School of Mechanical and Materials Engineering Seminar Series Presents “Multifidelity, domain decomposition, and stacking for improving training for physics-informed networks” Presented by Dr. Amanda Howard
About the event
Multifidelity, domain decomposition, and stacking for improving training for physics-informed networks
Presented by Dr. Amanda Howard, Mathematician, Pacific Northwest National Laboratory
Abstract:
Physics-informed neural networks and operator networks have shown promise for effectively solving equations that model physical systems. However, these networks can be difficult or impossible to train accurately for some systems of equations. One way to improve training is to use a small amount of data; however, such data is expensive to produce. We will introduce our novel multifidelity framework for stacking physics-informed neural networks and operator networks, which facilitates training by progressively reducing prediction errors when no data is available. In stacking, we successively build a chain of networks in which the output of one step acts as a low-fidelity input for training the next, gradually increasing the expressivity of the learned model. Finally, we will discuss the extension to domain decomposition using the finite basis method, including applications to the newly developed Kolmogorov-Arnold Networks.
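The stacking idea described above can be sketched in a few lines. This is a hypothetical toy illustration, not Dr. Howard's implementation: simple least-squares fits stand in for the physics-informed networks, and each stage receives the previous stage's prediction as a low-fidelity input feature, so the chain's error can only stay the same or shrink at each step.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(6.0 * x)  # stand-in "high-fidelity" target solution

def fit_stage(features, target):
    """Least-squares fit; stands in for training one network in the chain."""
    coef, *_ = np.linalg.lstsq(features, target, rcond=None)
    return coef

# Stage 0: a deliberately weak (low-fidelity) model, linear in x.
F0 = np.column_stack([np.ones_like(x), x])
pred = F0 @ fit_stage(F0, y)
errors = [np.sqrt(np.mean((pred - y) ** 2))]

# Later stages: each new model sees x AND the previous prediction,
# plus a few nonlinear features of both, mimicking the growing
# expressivity of the stacked chain. Because the previous prediction
# is itself a feature, the fit can never do worse than the last stage.
for _ in range(3):
    F = np.column_stack([np.ones_like(x), x, pred,
                         pred * x, np.sin(4.0 * x) * pred, x ** 2])
    pred = F @ fit_stage(F, y)
    errors.append(np.sqrt(np.mean((pred - y) ** 2)))

print(errors)  # RMSE per stage; should be non-increasing
```

In the actual framework each stage is a trained physics-informed network and the loss enforces the governing equations rather than fitting data, but the progressive-correction structure is the same.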
Presenter Biography:
Amanda Howard is a mathematician at Pacific Northwest National Laboratory, located in Seattle. She completed her PhD at Brown University in 2018, with research in computational fluid dynamics. Her current research focuses on scientific machine learning, particularly physics-informed operator learning, multifidelity machine learning, and machine learning methods for accelerating scientific simulations.