The colloquium takes place in the Freihaus building of TU Wien (address: Wiedner Hauptstraße 8-10, 1040 Vienna).
Our talks are announced via e-mail. To receive these announcements, send an e-mail to sympa@list.tuwien.ac.at with the subject "sub swm-kolloquium" to be added to the mailing list.
(You can unsubscribe from this list at any time by sending an e-mail to sympa@list.tuwien.ac.at with the subject "unsub swm-kolloquium".)
Upcoming Seminars
t.b.a.
Past Seminars
June 12th 2024
Prof. Malgorzata Bogdan, Lund University and University of Wroclaw
Title: Asymptotic distribution of low-dimensional patterns by regularizers with convex non-differentiable penalties
Abstract: In this talk we will discuss the asymptotic distribution of the patterns generated by regularizers with non-differentiable penalties. These patterns depend on the penalty through its subdifferential and can take various forms, such as the sign vector of regression coefficients for LASSO, or the more refined SLOPE pattern, which also identifies clusters of coefficients with the same absolute values.
We focus on the classical asymptotics, where the sample size approaches infinity while the number of regressors remains fixed. We derive the asymptotic distribution of the $\sqrt{n}$ scaled estimation error and its pattern for a broad class of regularizers.
Our framework encompasses various regularizers, including the Generalized LASSO, SLOPE, and the Elastic Net. Importantly, it extends beyond ordinary least squares to the robust Huber and quantile loss functions, and to general penalized log-likelihood methods such as the graphical LASSO or the graphical SLOPE. Additionally, sampling from the asymptotic error distribution facilitates comparisons between different regularizers. We provide a short simulation study showcasing an illustrative comparison between the asymptotic properties of LASSO, fused LASSO and SLOPE.
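For readers unfamiliar with these pattern notions, a small sketch may help. Assuming the standard definitions (the LASSO pattern is the sign vector of the coefficient estimates, while the SLOPE pattern additionally encodes signed ranks of clusters of equal absolute values), an illustration in Python could look like this; the helper names are ours:

```python
import numpy as np

def lasso_pattern(beta, tol=1e-8):
    """LASSO pattern: the sign vector of the coefficient estimates."""
    return np.sign(np.where(np.abs(beta) > tol, beta, 0.0)).astype(int)

def slope_pattern(beta, tol=1e-8):
    """SLOPE pattern: sign times the rank of |beta_j| among the distinct
    nonzero magnitudes, so coefficients sharing an absolute value
    share a cluster label."""
    abs_b = np.abs(beta)
    abs_b[abs_b <= tol] = 0.0
    nonzero = sorted(set(abs_b[abs_b > 0]))
    rank = {0.0: 0, **{v: i + 1 for i, v in enumerate(nonzero)}}
    return np.array([int(np.sign(b)) * rank[a] for b, a in zip(beta, abs_b)])

beta = np.array([1.5, -1.5, 0.0, 0.7])
print(lasso_pattern(beta))  # [ 1 -1  0  1]  -- signs only
print(slope_pattern(beta))  # [ 2 -2  0  1]  -- equal magnitudes clustered
```

Note how the SLOPE pattern is strictly more refined: the first two coefficients share the cluster label 2 (up to sign) because they have the same absolute value, information the plain sign vector discards.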
June 5th 2024
Prof. Panagiotis Tsiamyrtzis, Politecnico di Milano
Title: Efficient Quality Monitoring in Medical Laboratories: the use of Bayesian methods in anomaly detection
Abstract: Medical decisions regarding a patient quite often rely on blood test results received from a medical laboratory. In the daily routine of a medical lab, process automation allows hundreds to thousands of patient samples to be analyzed, and issues regarding the quality of the reported results are of utmost importance. The quality control procedure involves the periodic use of a small number of control samples (1-3 per day), whose results constitute the quality characteristic of the specific process. Anomalies in this quality characteristic, expressed as transient or persistent shifts, need to raise an alarm as soon as they occur. In this seminar we will present the Bayesian alternative, which provides an efficient quality monitoring scheme that has been successfully introduced into real practice.
The Bayesian approach to Statistical Process Control/Monitoring (BSPC/M) is known to provide control charts capable of efficiently monitoring the process parameters in an online fashion while being self-starting (unsupervised). Furthermore, these charts provide a foundational framework that utilizes available prior information, along with possible historical data (via power priors), leading to more powerful tools than the frequentist-based self-starting analogs.
Two big families of such univariate BSPC/M control charts are the Predictive Control Chart (PCC) [1] and the Predictive Ratio Cusum (PRC) [2, 3]. PCCs specialize in identifying transient parameter shifts (a.k.a. outliers) of moderate/large size, while PRCs focus on detecting persistent parameter shifts of even small size. Both PCC and PRC are general, closed-form mechanisms capable of handling data from any discrete or continuous distribution, as long as it belongs to the regular exponential family (e.g., Normal, Binomial, Poisson).
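To give a flavour of the predictive-monitoring idea, here is a deliberately simplified conjugate-Normal sketch with known variance. This is not the actual PCC or PRC methodology of [1-3]; the function name and parameters are our own, and it only illustrates the core mechanism of alarming on observations outside the posterior predictive interval:

```python
from statistics import NormalDist

def predictive_monitor(data, mu0, tau0, sigma, alpha=0.01):
    """Flag observations falling outside the central (1 - alpha) posterior
    predictive interval, then update the posterior for the process mean.
    Model: x_i ~ N(theta, sigma^2), conjugate prior theta ~ N(mu0, tau0^2)."""
    mu, tau2 = mu0, tau0 ** 2
    alarms = []
    for i, x in enumerate(data):
        # the posterior predictive for the next observation is Normal
        pred = NormalDist(mu, (tau2 + sigma ** 2) ** 0.5)
        if not pred.inv_cdf(alpha / 2) <= x <= pred.inv_cdf(1 - alpha / 2):
            alarms.append(i)
        # standard conjugate update (in practice an alarmed point
        # might be excluded from the update instead)
        prec = 1.0 / tau2 + 1.0 / sigma ** 2
        mu = (mu / tau2 + x / sigma ** 2) / prec
        tau2 = 1.0 / prec
    return alarms

# in-control control-sample results with one transient shift at index 3
print(predictive_monitor([0.1, -0.2, 0.3, 8.0, 0.0],
                         mu0=0.0, tau0=1.0, sigma=1.0))  # [3]
```

The scheme is self-starting in the sense described above: no separate calibration phase is needed, since the prior carries the monitoring through the first few observations and the posterior sharpens as data accrue.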
Open Access References
[1] https://www.tandfonline.com/doi/full/10.1080/00224065.2021.1916413
[2] https://www.tandfonline.com/doi/full/10.1080/00224065.2022.2161434
[3] https://www.tandfonline.com/doi/full/10.1080/00224065.2022.2161435
May 15th 2024
Dr. Pierre-Cyril Aubin, TU Wien
Title: Optimizing my distributions and proving convergence, or how to look into the mirror
Abstract: Many problems in machine learning and applied statistics can be formulated as optimizing a functional, e.g. the Kullback–Leibler (KL) divergence, over the space of probability measures. But can we guarantee that we have a converging algorithm? Starting from Expectation-Maximization (EM), I will show that it can always be written as a mirror descent and present two cases: 1) the joint distribution is an exponential family, and 2) the distribution is non-parametric, but only over the latent space. In these cases, EM involves only convex functions and we obtain a (sub)linear convergence rate. Moving to variational inference in disguise, namely entropic optimal transport, I will then focus on the convergence of Sinkhorn's algorithm, a.k.a. IPFP or RAS, outlining the similarities with EM.
Finally, I will show that both of these algorithms fall within a general majorize-minimize framework for which we prove novel convergence rates based on a five-point property introduced by Csiszár and Tusnády (1984).
The talk is based on joint works with Anna Korba (ENSAE, France) and Flavien Léger (INRIA Paris); see https://arxiv.org/abs/2305.04917 (Sections 1, 4.7 and 4.8) for an overview.
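Sinkhorn's algorithm mentioned in the abstract is short enough to sketch directly. Assuming discrete marginals a and b, a cost matrix C, and entropic regularization eps (a toy setup of our own choosing, not the talk's notation), a minimal NumPy version is:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropic optimal transport via Sinkhorn / IPFP: alternately rescale
    the rows and columns of K = exp(-C/eps) until the coupling's marginals
    match the target distributions a and b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)  # fit the column marginal b
        u = a / (K @ v)    # fit the row marginal a
    return u[:, None] * K * v[None, :]  # the transport plan

# toy problem: uniform marginals on a grid, squared-distance cost
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2
a = b = np.full(5, 0.2)
P = sinkhorn(a, b, C)
print(np.allclose(P.sum(axis=1), a))  # True: row marginals match exactly
```

Each half-iteration is a KL projection onto one marginal constraint, which is exactly the alternating-minimization structure the talk relates to EM and to the Csiszár–Tusnády five-point property.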
January 18th 2024
The Faculty of Business, Economics and Statistics (University of Vienna) and the Institute of Statistics and Mathematical Methods in Economics (TU Wien) cordially invite you to the public lecture of Prof. Don Rubin.
Everyone has caught themselves playing the mind game of what would happen in the future if we changed our behaviour, or what would have happened in the past if we had done something differently. Such reasoning is known as causal inference and can be studied scientifically and mathematically. With his model, the Rubin Causal Model, Donald Rubin has made one of the most significant contributions to this area of statistical research. On 18 January 2024 at 15:30, he will give a lecture on "Essential concepts of causal inference: a remarkable history and an intriguing future" in the Sky Lounge of the Faculty of Business and Economics (Oskar-Morgenstern-Platz 1, 1090 Vienna).

Influenced in his scientific thinking by Sir Ronald Fisher and Jerzy Neyman, Don Rubin compares their findings with Heisenberg's uncertainty principle: at a single point in time, both the position and the momentum of a particle exist, and we can measure either one, but we cannot measure both at the same point in time. Causal inference is similar: both potential outcomes exist for a unit, and we can observe one of them at a given time, but never both together. This is the fundamental dilemma of causal inference.

With well over 250,000 citations according to Google Scholar, Donald Rubin is one of the most highly cited scientific authors in the world. As of the end of 2019, each of his ten single-author publications has over a thousand citations. His contributions to statistics have been recognised by his election as a member of the US National Academy of Sciences.
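The fundamental dilemma described above can be made concrete with a toy simulation (purely illustrative; the effect size, distributions, and variable names are our own choices):

```python
import random

random.seed(0)

# Each unit carries two potential outcomes: Y(1) if treated, Y(0) if not.
units = [(random.gauss(1.0, 1.0), random.gauss(0.0, 1.0))
         for _ in range(10_000)]

# With access to BOTH potential outcomes we could compute the true
# average treatment effect directly ...
true_ate = sum(y1 - y0 for y1, y0 in units) / len(units)

# ... but in reality each unit reveals only the outcome of the arm it was
# assigned to.  Under random assignment, the treated-vs-control difference
# in means still estimates the same quantity.
treated, control = [], []
for y1, y0 in units:
    if random.random() < 0.5:
        treated.append(y1)   # Y(0) stays unobserved for this unit
    else:
        control.append(y0)   # Y(1) stays unobserved for this unit

estimate = sum(treated) / len(treated) - sum(control) / len(control)
```

The point of the sketch is that `true_ate` is computable only because the simulation, unlike reality, stores both potential outcomes per unit; the observable `estimate` recovers it on average thanks to randomization.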