Week 1 – Neural Networks (02456 DTU)

Mandatory reading for the week: Chapter 2, which recaps linear models (covered in 02450) and establishes the notation used throughout the book. Optionally, also read Chapter 1.

During the exercise session, we will work on the hands-on notebooks and problems from Prince: Understanding Deep Learning.
Course Structure
• Weeks 1–8: lectures (1 h) + exercises (3 h) – hands-on learning with notebooks and problems from Prince: Understanding Deep Learning.
• Weeks 9–15: project-based (groups of 3–4).
• Evaluation:
  • Written exam (25%) – multiple choice, no electronic aids.
  • Project report (75%) – max 4 pages (+ references & code).
Learning Objectives
• Understand core deep learning concepts, terminology, and limitations.
• Apply and analyze models in exercises and projects.
• Plan and execute a deep learning project.
• Use PyTorch for GPU programming.
• Summarize and communicate results.
⸻
Part 1: The Deep Learning Revolution
• AI hype today: generative AI (transformers, diffusion models).
• AI as umbrella: AI ⊃ ML ⊃ Neural Networks ⊃ Deep Learning.
• History:
  • 1956: Dartmouth workshop – defined the AI vision.
  • 1940s–90s: neural nets theorized.
  • Breakthrough ~2012: Krizhevsky, Sutskever & Hinton win ImageNet (AlexNet).
  • 2015–16: deep reinforcement learning (Atari, AlphaGo).
  • 2017–20: transformers & diffusion models → modern generative AI.
• Why deep learning took off:
  • Faster computers (GPUs/TPUs).
  • Bigger datasets (ImageNet etc.).
  • Better software (PyTorch, TensorFlow).
• Key idea: representation learning.
  • Traditional ML: manual feature engineering (PCA, SIFT, etc.).
  • Deep learning: end-to-end feature learning from raw data.
  • "Deep" = multiple nonlinear transformations → hierarchical representations.
⸻
Part 2: Neural Network Basics

Supervised Learning
• Data D = \{(x_i, y_i)\}_{i=1}^N; we want a model y = f_\phi(x).
• Learn the parameters \phi by minimizing a loss L(\phi).
• Example: linear regression with the MSE loss, L(\phi) = \frac{1}{N} \sum_i (f_\phi(x_i) - y_i)^2.
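The supervised-learning recipe above can be sketched end to end for the linear-regression example. This is a minimal illustration, not the course's own code: the data, true parameters, learning rate, and iteration count are all made up for the demo, and plain NumPy gradient descent stands in for the PyTorch tooling used later in the course.

```python
import numpy as np

# Synthetic data for y = 2.0*x + 0.5 plus a little noise (illustrative values).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 0.5 + 0.05 * rng.normal(size=100)

# Model f_phi(x) = w*x + b with parameters phi = (w, b),
# fit by gradient descent on the MSE loss L(phi) = mean((f_phi(x_i) - y_i)^2).
w, b = 0.0, 0.0
lr = 0.5
for _ in range(200):
    err = (w * x + b) - y          # residuals f_phi(x_i) - y_i
    grad_w = 2.0 * np.mean(err * x)  # dL/dw
    grad_b = 2.0 * np.mean(err)      # dL/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should end up near the true values 2.0 and 0.5
```

For this convex quadratic loss a closed-form solution also exists; gradient descent is used here because it is the same update rule that scales to deep networks, where no closed form is available.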