deep-learning / foundations
Neural networks
Feedforward networks, activation functions, and the universal approximation theorem — the foundation of deep learning.
Depth levels
L0 — Intro (~2h)
Understands a neuron as a weighted sum + activation; knows that networks learn by adjusting weights.
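The L0 idea above — a neuron as a weighted sum plus an activation — can be sketched in a few lines of NumPy (the function and variable names here are illustrative, not from any library):

```python
import numpy as np

# A single neuron: weighted sum of inputs plus a bias, passed through
# an activation function (ReLU here). Learning means adjusting w and b.
def neuron(x, w, b):
    z = np.dot(w, x) + b   # weighted sum of inputs
    return max(0.0, z)     # ReLU activation

x = np.array([1.0, 2.0])     # inputs
w = np.array([0.5, -0.25])   # learned weights
b = 0.1                      # learned bias
y = neuron(x, w, b)          # 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1
```

Training adjusts `w` and `b` so that `y` moves toward a target value; everything beyond this is composition of many such units.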
L1 — Basics (~15h)
Implements a 2-layer MLP from scratch in NumPy; understands sigmoid, ReLU, softmax activations.
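A minimal sketch of what the L1 milestone asks for — the forward pass of a 2-layer MLP in plain NumPy with ReLU hidden units and a softmax output (shapes and names are illustrative assumptions, not a fixed API):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def softmax(z):
    # Subtract the row-wise max for numerical stability before exponentiating.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mlp_forward(x, W1, b1, W2, b2):
    h = relu(x @ W1 + b1)          # hidden layer
    return softmax(h @ W2 + b2)    # class probabilities

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))               # batch of 4, input dim 3
W1 = rng.normal(size=(3, 5)); b1 = np.zeros(5)
W2 = rng.normal(size=(5, 2)); b2 = np.zeros(2)
probs = mlp_forward(x, W1, b1, W2, b2)    # shape (4, 2), rows sum to 1
```

The backward pass (gradients of a cross-entropy loss with respect to `W1`, `b1`, `W2`, `b2`) is the other half of the from-scratch exercise and belongs to the Backpropagation topic below.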
L2 — Working (~25h)
Builds and trains networks with PyTorch/Keras; applies batch norm, dropout, weight init; debugs gradient issues.
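In PyTorch/Keras these layers come ready-made, but it helps to know what they compute. A NumPy sketch of the two regularizers named above — batch norm and (inverted) dropout — at training time; the function names are illustrative:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch, then scale and shift
    # with learnable gamma and beta.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def dropout_forward(x, p, rng, train=True):
    # Inverted dropout: zero each unit with probability p and rescale
    # the survivors by 1/(1-p), so inference needs no extra scaling.
    if not train:
        return x
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8))
x_bn = batchnorm_forward(x, gamma=1.0, beta=0.0)   # ~zero mean, ~unit variance
x_do = dropout_forward(x, p=0.5, rng=rng)          # ~half the units zeroed
```

At inference, batch norm switches to running statistics accumulated during training and dropout becomes the identity — mixing up train/eval modes is a classic source of the gradient and accuracy bugs this level is about debugging.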
L3 — Advanced (~40h)
Designs custom layers, loss functions, training loops; applies advanced init schemes (Kaiming/Xavier); understands neural tangent kernel.
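The two init schemes named above differ only in which fan they normalize by. A NumPy sketch (Gaussian variants; both schemes also have uniform forms):

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng):
    # Glorot/Xavier: Var(W) = 2 / (fan_in + fan_out).
    # Balances forward and backward signal variance; suits tanh/sigmoid.
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def kaiming_init(fan_in, fan_out, rng):
    # He/Kaiming: Var(W) = 2 / fan_in.
    # The extra factor of 2 compensates for ReLU zeroing half the activations.
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))
```

With the right init, pre-activation variance stays roughly constant across layers, so very deep networks neither saturate nor explode at the start of training — the same forward-variance analysis is the starting point for neural tangent kernel theory.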
L4 — Research (~80h)
Contributes to neural architecture search, mechanistic interpretability, or theoretical deep learning.
Resources
L1 — Basics
L2 — Working
Leads to
- Extends: Backpropagation
- Extends: Regularization
- Extends: Convolutional neural networks
- Extends: Recurrent neural networks
- Related: PyTorch
- Extends: Activation functions
- Extends: Normalization layers
- Related: Embeddings and representation learning
- Extends: Variational autoencoders
- Extends: Graph neural networks
- Extends: Generative adversarial networks