VADOR Events Calendar

Our team is constantly involved in research projects, often in collaboration with international scientists and institutions. Research is carried out in a number of languages; however, we present mostly in English.

We frequently host one-off lectures on topics relating to variational analysis, dynamics, and operations research. In term time, we host different speakers at our weekly AKOR seminar. Seminars take place most Thursdays at 3pm in Sem. R. DB gelb 04. Once a month, the AKOR seminar is replaced by the Vienna Seminar on Optimization, a joint venture with Radu Bot and Yurii Malitskyi of the University of Vienna.

We organise the Viennese Conference on Optimal Control and Dynamic Games, typically every three years. The next iteration, VC2025, will take place in July 2025. For further details on this conference and its forerunners, please visit the VC2025 website.

Topics and speakers for all forthcoming events will be posted below.

15 May 2024, 16:30–18:30

SWM Colloquium: Optimizing my distributions and proving convergence, or how to look into the mirror


Pierre-Cyril Aubin, TU Wien

Many problems in machine learning and applied statistics can be formulated as optimizing a functional over the space of probability measures, e.g. the Kullback–Leibler (KL) divergence. But can we guarantee that we have a converging algorithm? Starting from Expectation-Maximization (EM), I will show that it can always be written as a mirror descent, and present two cases: 1) the joint distribution is an exponential family, and 2) we have a non-parametric distribution, but only over the latent space. In these cases, EM only involves convex functions, and we have a (sub)linear convergence rate. Moving to variational inference in disguise, namely entropic optimal transport, I will then focus on the convergence of Sinkhorn's algorithm, a.k.a. IPFP or RAS, outlining the similarities with EM.
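For readers unfamiliar with it, Sinkhorn's algorithm alternately rescales the rows and columns of a Gibbs kernel until both marginals match. Below is a minimal NumPy sketch of these alternating-scaling iterations; all names and parameter values are illustrative and not taken from the speaker's work:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=500):
    """Entropic optimal transport via Sinkhorn's alternating scaling (IPFP/RAS).

    a, b : marginal histograms (non-negative, summing to 1)
    C    : cost matrix of shape (len(a), len(b))
    eps  : entropic regularisation strength (illustrative default)
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)                  # rescale rows toward marginal a
        v = b / (K.T @ u)                # rescale columns toward marginal b
    return u[:, None] * K * v[None, :]   # transport plan diag(u) K diag(v)

# Toy usage on random histograms (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
a = rng.random(5); a /= a.sum()
b = rng.random(7); b /= b.sum()
C = rng.random((5, 7))
P = sinkhorn(a, b, C)
print(np.abs(P.sum(axis=1) - a).max())  # row-marginal error shrinks with iterations
print(np.abs(P.sum(axis=0) - b).max())  # column marginals match by construction
```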

Finally, I will show that both of these algorithms fall within a general majorize-minimize framework for which we prove novel rates of convergence based on a five-point property introduced by Csiszár and Tusnády (1984).
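The majorize-minimize template behind this result is simple to state: at each step, minimize a surrogate that upper-bounds the objective and touches it at the current iterate, so the objective decreases monotonically. Here is a minimal sketch using the standard quadratic majorizer of an L-smooth function; this choice of surrogate and the example objective are ours, for illustration, and the talk's framework is more general:

```python
import numpy as np

def mm_minimize(grad, L, x0, n_iters=2000):
    """Majorize-minimize with the quadratic surrogate of an L-smooth f:
    g(x; xk) = f(xk) + <grad f(xk), x - xk> + (L/2) ||x - xk||^2
    upper-bounds f and touches it at xk; its exact minimizer is a gradient
    step of size 1/L, so each iteration decreases f.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - grad(x) / L              # argmin of the quadratic majorizer
    return x

# Toy usage: f(x) = ||Ax - b||^2 is L-smooth with L = 2 ||A||_2^2
# (hypothetical data, for illustration only).
rng = np.random.default_rng(1)
A = rng.random((4, 3)); b = rng.random(4)
L = 2 * np.linalg.norm(A, 2) ** 2
grad = lambda x: 2 * A.T @ (A @ x - b)
x_star = mm_minimize(grad, L, np.zeros(3))
print(np.linalg.norm(A.T @ (A @ x_star - b)))  # near 0 at a least-squares solution
```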

Event location: Sem R gelb 04, 1030 Wien
Organiser: VADOR
Public: No
Entrance fee: No
Registration required: No
