Plenary Speakers

(alphabetical order)

Claire Adjiman

Professor of Chemical Engineering at Imperial College London

Jacek Gondzio

EUROPT Fellow 2019

Professor of Optimization at the University of Edinburgh

Title: Interior Point Methods and Beyond

Abstract: In this talk I will discuss the impact made by interior point methods (IPMs) on optimization. IPMs deliver efficient and reliable solution techniques for linear, quadratic, nonlinear, second-order cone and semidefinite programming problems, and they excel when problem dimensions are large. They also provide inspiration for the design of more general schemes for solving other classes of optimization problems, using an inexact Newton method embedded into a continuation scheme.
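
The continuation-plus-Newton idea mentioned in the abstract can be illustrated in a few lines. The following is a minimal sketch (the function name, toy problem and all parameters are illustrative assumptions, not the speaker's code): damped Newton steps are applied to a log-barrier reformulation of a small bound-constrained quadratic program, and the barrier parameter mu is driven to zero.

```python
import numpy as np

def barrier_solve(Q, c, x0, mu0=1.0, shrink=0.2, tol=1e-8, newton_iters=20):
    """Toy log-barrier method for: minimize 0.5*x'Qx + c'x subject to x >= 0.

    At each mu we (inexactly) minimize 0.5*x'Qx + c'x - mu*sum(log x)
    by damped Newton steps, then shrink mu -- a continuation scheme.
    """
    x, mu = x0.astype(float).copy(), mu0
    while mu > tol:
        for _ in range(newton_iters):
            grad = Q @ x + c - mu / x          # gradient of the barrier objective
            if np.linalg.norm(grad) < mu:      # inexact: solve only to O(mu) accuracy
                break
            hess = Q + np.diag(mu / x**2)      # Hessian of the barrier objective
            dx = np.linalg.solve(hess, -grad)  # Newton direction
            t = 1.0
            while np.any(x + t * dx <= 0):     # damping: stay strictly feasible
                t *= 0.5
            x = x + t * dx
        mu *= shrink                           # continuation: tighten the barrier
    return x

# Toy instance: minimize 0.5*(x1^2 + x2^2) - x1 + x2 over x >= 0.
# The minimizer is x = (1, 0), with the bound active in the second coordinate.
Q = np.eye(2)
c = np.array([-1.0, 1.0])
x_star = barrier_solve(Q, c, x0=np.ones(2))
```

The strictly feasible line search and the mu-dependent stopping rule are the two ingredients the abstract alludes to: Newton systems are solved only as accurately as the current barrier parameter warrants.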


Daniel Kuhn

Professor of Operations Research at the College of Management of Technology at EPFL

Title: Wasserstein Distributionally Robust Optimization: Theory and Applications in Machine Learning

Abstract: Many decision problems in science, engineering and economics are affected by uncertain parameters whose distribution is only indirectly observable through samples. The goal of data-driven decision-making is to learn a decision from finitely many training samples that will perform well on unseen test samples. This learning task is difficult even if all training and test samples are drawn from the same distribution, especially if the dimension of the uncertainty is large relative to the training sample size. Wasserstein distributionally robust optimization seeks data-driven decisions that perform well under the most adverse distribution within a certain Wasserstein distance of a nominal distribution constructed from the training samples. In this talk we will argue that this approach has many conceptual and computational benefits. Most prominently, the optimal decisions can often be computed by solving tractable convex optimization problems, and they enjoy rigorous out-of-sample and asymptotic consistency guarantees. We will also show that Wasserstein distributionally robust optimization has interesting ramifications for statistical learning and motivates new approaches to fundamental learning tasks such as classification, regression, maximum likelihood estimation and minimum mean square error estimation, among others.
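
One concrete instance of the tractability claim can be sketched numerically (a hedged illustration under stated assumptions, not material from the talk): for the 1-Lipschitz loss l(xi) = |xi| on the real line, the worst-case expected loss over a type-1 Wasserstein ball of radius eps around the empirical distribution reduces to the empirical loss plus eps, and the worst case is attained by pushing every sample away from zero.

```python
import numpy as np

# Assumed setup: scalar uncertainty, loss l(xi) = |xi| (Lipschitz constant 1),
# and a 1-Wasserstein ball of radius eps around the empirical distribution.
# Moving the i-th sample by d_i keeps the shifted empirical distribution inside
# the ball whenever mean(|d_i|) <= eps.

rng = np.random.default_rng(0)
xi = rng.normal(size=200)                # training samples
eps = 0.3                                # Wasserstein radius

loss = lambda z: np.abs(z)

# Closed-form worst case: empirical loss + eps * (Lipschitz constant).
dro_value = loss(xi).mean() + eps * 1.0

# Explicit worst-case distribution: shift every sample by eps away from zero,
# which uses exactly eps units of transport budget on average.
worst = loss(xi + eps * np.sign(xi)).mean()
```

The two quantities coincide: by Kantorovich-Rubinstein duality no distribution in the ball can increase the expected loss by more than eps, and the explicit shift achieves that bound. Tractable convex reformulations of this kind underlie the regularization interpretations mentioned in the abstract.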

Margaret Wright

Silver Professor of Computer Science at the Courant Institute of Mathematical Sciences, New York University

Title: Teaching Numerical Optimization: How to Move from Theory to Code?

Abstract: Those who teach numerical optimization almost always start with theory. But it is also important for students to learn about computational and software issues that arise in writing code for solving real-world problems. This talk will discuss a variety of ideas for achieving this often-overlooked transition.