December 18th, 2017
Prof. Edward Kaplan, Yale University.
Approximating the FCFS Stochastic Matching Model With Ohm's Law.
Abstract: The FCFS stochastic matching model, where each server in an infinite sequence is matched to the first eligible customer from a second infinite sequence, developed from queueing problems addressed by Kaplan (1984) in the context of public housing assignments. The goal of this model is to determine the matching rates between eligible customer- and server-types, that is, the fraction of all matches that occur between type-i customers and type-j servers. This model was solved in a beautiful paper by Adan and Weiss (2012), but the resulting equation for the matching rates is quite complicated, involving the sum of permutation-specific terms over all permutations of the server-types.
Here we develop an approximation for the matching rates based on Ohm's Law; in some cases it reduces to the exact results, and analytical, numerical, and simulation examples show it to be highly accurate. As our approximation only requires solving a system of linear equations, it provides an accurate and tractable alternative to the exact solution. (With Mohammad Fazel-Zarandi)
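Although the talk's Ohm's Law approximation is analytical, the model as described is straightforward to simulate, which gives a benchmark for the matching rates. Below is a minimal Monte Carlo sketch; the 2x2 eligibility structure and the type probabilities are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2x2 instance (assumed, not from the talk):
# eligible[i, j] = True if a type-j server may serve a type-i customer.
alpha = [0.6, 0.4]                       # customer-type probabilities
beta = [0.5, 0.5]                        # server-type probabilities
eligible = np.array([[True, True],
                     [True, False]])

N = 100_000
customers = rng.choice(2, size=N, p=alpha)
servers = rng.choice(2, size=N, p=beta)

matched = np.zeros(N, dtype=bool)
counts = np.zeros((2, 2))                # matches of (customer type i, server type j)
first_unmatched = 0

for s in servers:
    # FCFS: each server takes the earliest unmatched eligible customer
    for c in range(first_unmatched, N):
        if not matched[c] and eligible[customers[c], s]:
            matched[c] = True
            counts[customers[c], s] += 1
            break
    while first_unmatched < N and matched[first_unmatched]:
        first_unmatched += 1

print(counts / counts.sum())             # empirical matching rates r_ij
```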
December 13th, 2017
Martin Kerndler, TU Wien.
Contracting Frictions and Inefficient Layoffs of Older Workers.
Abstract: In continental Europe, losing a job after age 55 often leads to permanent withdrawal from the labor market, either in the form of long-term unemployment or through a formal early retirement scheme. Inefficiently high layoff rates of older workers may therefore generate sizeable welfare losses and increase public spending on social security and unemployment insurance. In response, economists frequently recommend eliminating governmental policies that distort separation and retirement decisions. Yet, job destruction may remain inefficiently high if a market failure exists at the same time. I demonstrate this in an age-structured model of the labor market where risk-averse workers apply to vacancies that specify long-run wage contracts. Wages can be contingent on age and tenure, but not on productivity. This restriction on wage contracts could, for instance, arise from asymmetric information. The model reveals that the contracting friction particularly reduces the employment rate of the elderly, while the employment rate of prime-age workers remains close to its efficient level. Moreover, I find that a pension reform that reduces the generosity of early retirement arrangements is likely to generate inefficiently high unemployment among the elderly. Such a reform should therefore be complemented by labor market policies that increase firms' willingness to keep older workers employed.
December 6th, 2017
Dr. Lukas Steinberger, University of Freiburg.
Conditionally Valid Prediction Intervals for High-Dimensional Stable Algorithms.
Abstract: In this talk we present an intuitive and generically applicable procedure for constructing prediction intervals in general regression problems based on leave-one-out residuals. We show that the conditional coverage probability of the proposed interval, given the observations in the training sample, is close to the nominal level, provided that the underlying algorithm used for computing point predictions is sufficiently stable under the omission of single feature-response pairs. Our results are based on a finite sample analysis of the empirical distribution function of the leave-one-out residuals and hold in a non-parametric setting with only minimal assumptions on the unknown error distribution. To illustrate our results, we also apply them in the high-dimensional linear model, where we obtain asymptotic conditional validity as both sample size and dimension tend to infinity at the same rate.
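The construction lends itself to a short implementation. The following sketch applies the leave-one-out recipe from the abstract with ridge regression as the (stable) point predictor; the choice of predictor, the 90% nominal level, and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def loo_prediction_interval(X, y, x_new, alpha=0.1):
    """Prediction interval at x_new from leave-one-out residuals:
    refit the algorithm n times, each time omitting one (x_i, y_i),
    collect the residuals y_i - yhat_{-i}(x_i), and place their
    empirical quantiles around the full-sample point prediction."""
    n = len(y)
    resid = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        fit_i = Ridge(alpha=1.0).fit(X[keep], y[keep])
        resid[i] = y[i] - fit_i.predict(X[i:i + 1])[0]
    point = Ridge(alpha=1.0).fit(X, y).predict(np.atleast_2d(x_new))[0]
    lo, hi = np.quantile(resid, [alpha / 2, 1 - alpha / 2])
    return point + lo, point + hi

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))           # n and p of the same order
y = X[:, 0] + 0.5 * rng.normal(size=200)
print(loo_prediction_interval(X, y, rng.normal(size=50)))
```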
November 22nd, 2017
Prof. Benedikt Pötscher, University of Vienna.
Controlling the Size of Autocorrelation Robust Tests.
Abstract: Autocorrelation robust tests are notorious for suffering from size distortions and power problems. We investigate under which conditions the size of autocorrelation robust tests can be controlled by an appropriate choice of critical value.
October 12th, 2017
Prof. Hannu Oja, University of Turku.
Independent Component Analysis Using Third and Fourth Cumulants.
Abstract: In independent component analysis it is assumed that the observed random variables are linear combinations of latent, mutually independent random variables called the independent components. It is often thought that only the non-Gaussian independent components are of interest, while the Gaussian components simply represent noise. The idea is then to make inference on the unknown number of non-Gaussian components and to estimate the transformations back to the non-Gaussian components. In this talk we show how the classical skewness and kurtosis measures, namely third and fourth cumulants, can be used in the estimation. First, univariate cumulants are used as projection indices in the search for independent components (projection pursuit, fastICA). Second, multivariate fourth cumulant matrices are used jointly to solve the problem (FOBI, JADE). The properties of the estimates are studied via the corresponding optimization problems, estimating equations, algorithms and asymptotic theory. The theory is illustrated with several examples.
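As a concrete illustration of the second route (joint use of fourth-moment matrices), here is a minimal numpy sketch of FOBI; the toy sources and the mixing matrix are illustrative assumptions.

```python
import numpy as np

def fobi(X):
    """FOBI: whiten with the covariance matrix, then diagonalize the
    fourth-moment scatter of the whitened data. Eigenvalues reflect
    kurtosis; Gaussian components cluster at the common value p + 2."""
    X = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T        # symmetric whitening
    Z = X @ W
    B = (Z * (Z ** 2).sum(axis=1, keepdims=True)).T @ Z / len(Z)
    kurt, U = np.linalg.eigh(B)                      # fourth-moment eigenvalues
    return Z @ U, kurt

# toy example: one sub-Gaussian (uniform) and one Gaussian source, mixed
rng = np.random.default_rng(1)
S = np.column_stack([rng.uniform(-1, 1, 5000), rng.normal(size=5000)])
components, kurt = fobi(S @ np.array([[1.0, 0.5], [0.3, 1.0]]))
print(kurt)  # one value near p + 2 = 4 flags the Gaussian component
```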
October 11th, 2017
Prof. Dr. Sabrina Mulinacci, University of Bologna.
C-Convolution Based Stochastic Processes in Discrete Time and Financial Prices Dynamics.
Abstract: Copula functions are the most general tool for describing the dependence structure among random variables. While almost all applications of copula functions in finance have been devoted to modeling the dependence across markets, financial products and risk factors (cross-dependence), only more recently have copula functions been used to model dynamic dependence, that is, the dependence across time in a stochastic process. With the modeling of price dynamics and financial applications in mind, relevant restrictions must be imposed on the classes of stochastic processes considered: the most typical ones are the Markov property and the martingale requirement.
The seminal result of Darsow et al. (1992) is used to implement a specific technique for building stochastic processes by modeling the increments and the dependence structure between levels and increments, which disentangles processes with independent increments from those with dependent increments. This technique turns out to be well suited to providing a discrete-time representation of the dynamics of innovations to financial prices under the restrictions imposed by the Efficient Market Hypothesis and the martingale condition.
Finally, β-mixing and moment properties of the resulting class of Markov processes are analyzed; a small simulation sketch of the underlying copula-Markov construction is given after the references below.
References:
[1] Cherubini, U., Mulinacci, S., Romagnoli, S. (2011): "A Copula-based Model of Speculative Price Dynamics in Discrete Time", Journal of Multivariate Analysis, 102, 1047-1063.
[2] Cherubini, U., Gobbi, F., Mulinacci, S., Romagnoli, S. (2012): "Dynamic Copula Methods in Finance", Wiley.
[3] Cherubini, U., Gobbi, F., Mulinacci, S. (2016): "Convolution Copula Econometrics", SpringerBriefs in Statistics.
[4] Darsow, W.F., Nguyen, B., Olsen, E.T. (1992): "Copulas and Markov Processes", Illinois Journal of Mathematics, 36, 600-642.
[5] Gobbi, F., Mulinacci, S. (2017): "β-mixing and moments properties of a non-stationary copula-based Markov process" (arXiv: http://arxiv.org/abs/1704.01458).
[6] Gobbi, F., Mulinacci, S. (2017): "Gaussian autoregressive process with dependent innovations. Some asymptotic results" (arXiv: http://arxiv.org/abs/1704.03262).
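For readers unfamiliar with the copula-Markov construction of Darsow et al. (1992) [4] that the talk builds on, the following is a minimal sketch: a stationary Markov chain whose one-step dependence is a Gaussian copula. The copula family and its parameter are illustrative assumptions, and the C-convolution coupling of levels and increments developed in the talk is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_markov(n, rho, rng):
    """Simulate a stationary Markov chain on (0, 1) whose copula between
    consecutive states is Gaussian with parameter rho; by Darsow et al.
    (1992), such a copula plus a marginal pins down a Markov process."""
    u = np.empty(n)
    u[0] = rng.uniform()
    for t in range(1, n):
        # conditional Gaussian copula: draw U_t given U_{t-1}
        z = rng.standard_normal()
        u[t] = norm.cdf(rho * norm.ppf(u[t - 1]) + np.sqrt(1 - rho ** 2) * z)
    return u  # uniform marginals; compose with any quantile function

rng = np.random.default_rng(2)
path = gaussian_copula_markov(1000, rho=0.7, rng=rng)
```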
July 6th, 2017
Prof. Bing Li, Pennsylvania State University.
Nonlinear Sufficient Dimension Reduction for Functional Data.
Abstract: We propose a general theory and estimation procedures for nonlinear sufficient dimension reduction where both the predictor and the response may be random functions. The relation between the response and the predictor can be arbitrary, and the sets of observed time points can vary from subject to subject. The functional and nonlinear nature of the problem leads to the construction of two functional spaces: the first, representing the functional data, is assumed to be a Hilbert space; the second, characterizing the nonlinearity, is assumed to be a reproducing kernel Hilbert space. A particularly attractive feature of our construction is that the two spaces are nested, in the sense that the kernel for the second space is determined by the inner product of the first.
We propose two estimators for this general dimension reduction problem and establish the consistency and convergence rate of one of them. These asymptotic results are flexible enough to accommodate both fully and partially observed functional data. We investigate the performance of our estimators by simulation and apply them to data sets on speech recognition and handwritten symbols. (With Jun Song)
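The nested construction can be made concrete in a few lines: the kernel of the second space depends on the data only through the first space's inner product. The sketch below uses the L2 inner product on a common observation grid and a Gaussian-type kernel, both illustrative assumptions.

```python
import numpy as np

def l2_inner(f, g, t):
    """Inner product in the first space (L2, approximated with
    quadrature weights derived from the observation grid t)."""
    return np.sum(f * g * np.gradient(t))

def nested_kernel(f, g, t, gamma=1.0):
    """Kernel of the second (reproducing kernel Hilbert) space,
    determined entirely by the first space's inner product; the
    Gaussian form and gamma are illustrative."""
    diff = f - g
    return np.exp(-gamma * l2_inner(diff, diff, t))

t = np.linspace(0.0, 1.0, 101)       # common grid; the paper also
f = np.sin(2 * np.pi * t)            # allows subject-specific grids
g = np.cos(2 * np.pi * t)
print(nested_kernel(f, g, t))
```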
June 28th, 2017
Dr. Klaus Nordhausen, TU Wien.
Tests for Subspace Dimension.
Abstract: Most linear dimension reduction methods proposed in the literature can be formulated using an appropriate pair of scatter matrices. In this talk, the eigenvalues of one scatter matrix with respect to another are used to determine the dimensions of the signal and noise subspaces. Three popular dimension reduction methods, namely principal component analysis (PCA), fourth order blind identification (FOBI) and sliced inverse regression (SIR), are considered in detail, and the first two moments of subsets of the eigenvalues are used to test for the dimension of the signal space. The limiting null distributions of the test statistics are discussed and novel bootstrap strategies are suggested for the small-sample cases. In all three cases, consistent test-based estimates of the signal subspace dimension are introduced as well. The asymptotic and bootstrap tests are compared in simulations and in real data examples. (With Hannu Oja, David E. Tyler and Joni Virta.)
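To make the opening idea concrete, here is a sketch of the eigenvalues of one scatter matrix with respect to another, using the covariance matrix paired with the FOBI fourth-moment scatter (one of the three cases in the talk). The toy data are an illustrative assumption; the formal test statistics, their limiting distributions, and the bootstrap schemes are in the paper.

```python
import numpy as np

def relative_eigenvalues(X):
    """Eigenvalues of the fourth-moment (FOBI) scatter with respect to
    the covariance matrix: whiten with the covariance, then take the
    eigenvalues of the fourth-moment scatter of the whitened data.
    Gaussian noise directions concentrate near the value p + 2."""
    X = X - X.mean(axis=0)
    p = X.shape[1]
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = X @ W
    B = (Z * (Z ** 2).sum(axis=1, keepdims=True)).T @ Z / len(Z)
    return np.sort(np.linalg.eigvalsh(B)), p

# toy data: 2 non-Gaussian signal directions inside 5 Gaussian noise ones
rng = np.random.default_rng(3)
S = np.column_stack([rng.exponential(size=(10_000, 2)),
                     rng.normal(size=(10_000, 5))])
eig, p = relative_eigenvalues(S @ rng.normal(size=(7, 7)))
print(eig)   # five eigenvalues near p + 2 = 9 point to a 5-dim noise space
```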
June 6th, 2017
Prof. Marcel Nutz, Columbia University.
A Mean-Field Competition.
Abstract: We introduce a mean field game with rank-based reward: competing agents optimize their effort to achieve a goal, are ranked according to their completion time, and are paid a reward based on their relative rank. On the one hand, we propose a tractable Poissonian model in which we can characterize the optimal efforts for a given reward scheme. On the other hand, we study the principal-agent problem of designing an optimal reward scheme. A surprising, explicit solution is found that minimizes the time until a given fraction of the population has reached the goal. (Work in progress with Yuchong Zhang)
May 9th, 2017
Prof. Bernt Øksendal, University of Oslo.
Optimal Insider Control of Stochastic Partial Differential Equations, with Applications to Optimal Harvesting and Optimal Insider Portfolio under Noisy Observations.
Abstract: We study the problem of optimal control with inside information of an SPDE (a stochastic evolution equation) driven by a Brownian motion and a Poisson random measure. Our optimal control problem is new in two ways:
(i) The controller has access to inside information, i.e. access to information about a future state of the system,
(ii) The integro-differential operator of the SPDE might depend on the control.
In the first part of the paper, we formulate a sufficient and a necessary maximum principle for this type of control problem, in the following two cases:
(a) The control is allowed to depend both on time t and on the space variable x.
(b) The control is not allowed to depend on x.
In the second part of the paper, we apply the results above to the problem of optimal control of an SDE system when the inside controller has only noisy observations of the state of the system. Using results from nonlinear filtering, we transform this noisy-observation SDE insider control problem into a full-observation SPDE insider control problem. The results are illustrated by explicit examples. The presentation is based on joint work with Olfa Draouil, University of Tunis El Manar, Tunisia.
May 3rd, 2017
Prof. Efstathia Bura, TU Wien.
Sufficient Dimension Reduction: Forecasting Macro-economic Series.
Abstract: A main objective of statistical inference is the reduction of data: variables are replaced by relatively few quantities (reductions) that adequately represent the relevant information contained in the original data. Sufficient Dimension Reduction (SDR) is a collection of novel tools for reducing the dimension of multivariate data in regression problems without losing inferential information on the distribution of a target variable Y. SDR focuses on finding reductions of a large set of explanatory variables that are sufficient, in the statistical sense, for modeling a target. The reduction and the targeting are carried out simultaneously, as SDR identifies a sufficient function of the regressors, R(X), that preserves the information in the conditional distribution of Y|X.
An overview of sufficient dimension reduction methodology, linear and nonlinear, and its progress over the last two and a half decades will be presented and juxtaposed with other data-reduction methods. The second part of the talk will focus on SDR in forecasting, where SDR will be compared with popular estimation methods such as ordinary least squares (OLS), dynamic factor models (DFM), partial least squares (PLS) and ridge regression. The connection and the fundamental differences between the DFM and SDR frameworks will be presented. In an application, SDR is shown to significantly reduce the dimension of widely used macroeconomic series data, with one or two sufficient reductions delivering forecasting performance similar to that of competing methods in macro-forecasting.
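As one concrete instance of a sufficient reduction R(X) in a forecasting-style setting, the sketch below implements sliced inverse regression, a classical linear SDR estimator; the slicing scheme, dimensions, and toy data are illustrative assumptions, and the talk covers a much broader linear and nonlinear toolbox.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, d=1):
    """Sliced inverse regression: slice the target, average the
    whitened predictors within each slice, and take the leading
    eigenvectors of the between-slice covariance as the estimated
    sufficient reduction directions."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T     # whitening
    Z = Xc @ W
    order = np.argsort(y)
    M = np.zeros((p, p))
    for s in np.array_split(order, n_slices):
        m = Z[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)        # between-slice covariance
    _, U = np.linalg.eigh(M)
    return W @ U[:, -d:]                          # directions in original scale

# toy example: the target depends on one linear index of the predictors
rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 20))
y = np.tanh(X @ rng.normal(size=20) / 4) + 0.1 * rng.normal(size=2000)
B = sir_directions(X, y, d=1)   # estimated sufficient reduction R(x) = B'x
```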