On Wednesday, 18 March 2026, at 10 am, Andra Malina will give a talk at the Institute Seminar on
How Transformers Work in AI: Understanding ‘Attention Is All You Need’
Abstract: This presentation introduces the key concepts behind modern Large Language Models, tracing the evolution from basic NLP and word embeddings to the Transformer architecture introduced by Vaswani et al. in 2017. The central focus is the self-attention mechanism, which allows models to capture relationships between all words in a sequence simultaneously, enabling efficient, scalable language understanding and generation.
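For attendees curious about the mechanism before the talk: the scaled dot-product self-attention of Vaswani et al. (2017), in which every token attends to every other token in the sequence at once, can be sketched in a few lines of NumPy. This is an illustrative toy implementation (the matrix names and sizes are chosen for the example, not taken from the talk):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of token embeddings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise similarity of all token pairs
    weights = softmax(scores, axis=-1)         # each row is a distribution over tokens
    return weights @ V                         # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                    # toy sequence: 4 tokens, embedding dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                               # one updated vector per token: (4, 8)
```

Because the attention weights are computed for all token pairs in one matrix product, no recurrence is needed; this is the parallelism the abstract refers to.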
Zoom: The talk will be held online via Zoom using this access link or the details below:
Meeting ID: 831 7483 1334
Passcode: 026716

