Events

17 January 2025, 15:00 to 16:00
From projection matrices to transformers: A review of key building blocks in deep neural network architectures.
Presenter: Aggelos Pikrakis, Eng., Ph.D., Assistant Professor, University of Piraeus, Greece

Abstract:
This tutorial aims to conceptually connect different neural network architectures to provide a deeper understanding of advancements in the field. To achieve this, we will use standard mathematical tools as our guide. We will begin with standard feed-forward neural networks, examining them through the lens of projection matrices, and then move on to Recurrent Neural Networks (RNNs). Next, we will explore the Attention Mechanism and conclude with the Transformer architecture. Throughout the discussion, we will demonstrate how understanding the key building blocks of these architectures can help decipher their functionality and provide insights into their respective training and implementation challenges. This talk is designed for a broad audience with some prior exposure to machine learning concepts.
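As a small illustration of the building blocks the tutorial covers, the sketch below (not material from the talk itself; all names and dimensions are illustrative assumptions) shows a feed-forward layer viewed as a projection of the input by a weight matrix, and the scaled dot-product attention operation at the core of the Transformer architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def feed_forward_layer(x, W, b):
    """One dense layer: project x with weight matrix W, shift by b, apply ReLU."""
    return np.maximum(0.0, x @ W + b)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query-key similarity
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 4 input features projected to 3 hidden units.
x = rng.normal(size=(1, 4))
W = rng.normal(size=(4, 3))
b = np.zeros(3)
h = feed_forward_layer(x, W, b)

# Toy self-attention over a sequence of 5 tokens with model dimension 3.
seq = rng.normal(size=(5, 3))
out, attn = scaled_dot_product_attention(seq, seq, seq)
print(h.shape, out.shape)  # each attention row sums to 1
```

The point of the sketch is the connection the abstract draws: the dense layer is nothing more than a matrix projection followed by a nonlinearity, and attention is itself built from projections and a softmax-weighted combination.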

Short CV of the presenter: Aggelos Pikrakis is currently an Assistant Professor (tenured) in the Department of Informatics at the University of Piraeus, Greece, where he teaches courses on machine learning and audio processing. He is the co-inventor of several AI/ML patents, the co-author of two international textbooks in English, and the author of more than 50 refereed papers in international peer-reviewed journals and conference proceedings. His research interests focus on audio analysis, particularly machine learning methods such as deep neural networks, hidden Markov models, Bayesian architectures, and sequence alignment algorithms. His work has earned recognition including the 2019 EURASIP Meritorious Service Award.