July 12, 2017
from 1:00 PM to 3:00 PM
Bahen Centre for Information Technology (BA 1220)
40 St George St
Toronto, ON M5S 2E4
This presentation will describe recent work on the optimization of high-order provably stable implicit Runge-Kutta methods for the solution of stiff ordinary differential equations. Implicit Runge-Kutta methods are attractive for this class of problem since provably stable methods can be derived for arbitrarily high orders of accuracy. Often, in the construction of high-order Runge-Kutta methods, several coefficients remain undetermined after the desired order conditions are satisfied. The values of these coefficients can have a significant impact on the efficiency and robustness of the resulting scheme; however, they are often chosen heuristically. The use of numerical optimization enables objective selection of the undetermined coefficients relative to a merit function, subject to constraints for linear and nonlinear stability. The Runge-Kutta methods generated using constrained numerical optimization are demonstrably more efficient than many industry standards. Improving the efficiency of implicit Runge-Kutta methods can help reduce the cost of current time-dependent simulations and expand the range of feasible applications. The future goal of this work is to apply numerical optimization to generalized classes of time-marching methods, such as general linear methods.
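The idea of optimizing undetermined coefficients against a merit function can be sketched in a toy setting. The example below is an illustrative assumption, not the speaker's actual method: it uses a 3-stage explicit Runge-Kutta family (the talk concerns implicit methods) with the order-2 conditions imposed as equality constraints and the residual of the order-3 conditions as a hypothetical merit function, minimized with SciPy's SLSQP solver.

```python
import numpy as np
from scipy.optimize import minimize

# Free coefficients of a 3-stage explicit Runge-Kutta family
# (with c1 = 0): x = [b1, b2, b3, c2, c3, a32].
def merit(x):
    b1, b2, b3, c2, c3, a32 = x
    # Hypothetical merit: residual of the order-3 conditions
    #   sum(b * c^2) = 1/3  and  b3 * a32 * c2 = 1/6
    return (b2 * c2**2 + b3 * c3**2 - 1 / 3) ** 2 + (b3 * a32 * c2 - 1 / 6) ** 2

def order2_conditions(x):
    b1, b2, b3, c2, c3, a32 = x
    # Order-2 conditions kept as equality constraints:
    #   sum(b) = 1  and  sum(b * c) = 1/2
    return [b1 + b2 + b3 - 1.0, b2 * c2 + b3 * c3 - 0.5]

x0 = np.array([0.2, 0.3, 0.5, 0.5, 0.8, 0.5])  # arbitrary starting guess
res = minimize(merit, x0, method="SLSQP",
               constraints=[{"type": "eq", "fun": order2_conditions}])
```

Because driving this particular merit to zero satisfies the order-3 conditions, the optimizer recovers a third-order scheme; in the setting of the talk, the merit function and the stability constraints would instead encode efficiency and provable-stability requirements.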
Inverse optimization is a model-fitting problem in which data representing decisions are used to infer characteristics of a latent optimization problem that generated the data. In practice, there are many specialized methods for formulating and solving inverse optimization problems that depend on the specific application. In this talk, we discuss a generalized inverse optimization (GIO) methodology, which can, by choice of parameters, be easily specialized to various application-specific problems. Although GIO is a non-convex optimization problem, we show that, in many cases, it can be reformulated into a finite number of simpler problems and solved efficiently. Finally, we introduce a Python library that allows for easy implementation of inverse optimization problems.
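A minimal sketch of the reformulation idea, under assumed simplifications (a linear program with feasible region {x : Ax >= b} and an absolute-duality-gap error; the function name `gio_abs` and the example data are hypothetical, not from the library mentioned in the abstract): instead of solving the non-convex problem directly, one candidate cost vector is evaluated per constraint, reducing the search to a finite number of simple computations.

```python
import numpy as np

def gio_abs(A, b, x_hat):
    """Toy inverse optimization: given an observed decision x_hat and a
    feasible region {x : A x >= b}, scan one normalized candidate cost
    vector per constraint row and return the one for which x_hat has the
    smallest optimality-gap error."""
    best = None
    for a_i, b_i in zip(A, b):
        scale = np.abs(a_i).sum()
        c = a_i / scale                      # candidate cost, ||c||_1 = 1
        eps = c @ x_hat - b_i / scale        # gap error for this candidate
        if best is None or eps < best[0]:
            best = (eps, c)
    return best  # (error, inferred cost vector)

# Example: a triangular feasible region; x_hat sits near the facet x1 >= 0,
# so the inferred cost points along that facet's normal.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -2.0])
err, c = gio_abs(A, b, np.array([0.1, 0.9]))
```

The returned error measures how far the observed decision is from being optimal for the inferred cost; an observation lying exactly on a facet would yield a zero error for that facet's candidate.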