nlp · transformers · llm

Pretraining

Next-token prediction, masked language modelling, and contrastive objectives — how foundation models are built.

Depth levels

L0 · Intro · ~1h

Knows that pretraining produces a general-purpose base model from large corpora before task-specific fine-tuning.

L1 · Basics · ~8h

Understands CLM, MLM, and contrastive objectives; knows that the pretraining data mixture matters; familiar with the GPT/BERT/T5 paradigms.
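
The difference between the two token-level objectives is easiest to see in the loss computation. Below is a minimal PyTorch sketch, assuming a generic `model(input_ids)` that returns logits of shape (batch, seq, vocab); the vocabulary size, mask token id, and the stand-in random model are illustrative, not a specific recipe. Contrastive objectives differ in kind: they score paired examples against in-batch negatives rather than predicting tokens.

```python
import torch
import torch.nn.functional as F

VOCAB, MASK_ID, IGNORE = 1000, 999, -100  # illustrative constants

def clm_loss(model, input_ids):
    # Causal LM (GPT-style): each position predicts the next token.
    logits = model(input_ids)                      # (batch, seq, vocab)
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, VOCAB),         # predictions at positions 0..T-2
        input_ids[:, 1:].reshape(-1),              # targets are tokens 1..T-1
    )

def mlm_loss(model, input_ids, mask_rate=0.15):
    # Masked LM (BERT-style): corrupt ~15% of tokens, score only those.
    mask = torch.rand(input_ids.shape) < mask_rate
    corrupted = input_ids.masked_fill(mask, MASK_ID)
    labels = input_ids.masked_fill(~mask, IGNORE)  # loss ignores unmasked slots
    logits = model(corrupted)
    return F.cross_entropy(logits.reshape(-1, VOCAB), labels.reshape(-1),
                           ignore_index=IGNORE)

# Stand-in "model": random logits, just to make the sketch runnable.
model = lambda ids: torch.randn(*ids.shape, VOCAB)
ids = torch.randint(0, MASK_ID, (2, 32))
print(clm_loss(model, ids).item(), mlm_loss(model, ids).item())
```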

L2 · Working · ~20h

Can run a small pretraining experiment; understands data deduplication, tokeniser coverage, and curriculum learning basics.
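
Deduplication at this level usually starts with exact matching. Below is a minimal sketch of hash-based exact deduplication over normalised text; the normalisation (lowercasing, whitespace collapsing) is an illustrative choice, and real pipelines typically add near-duplicate detection such as MinHash on top.

```python
import hashlib

def dedup(docs):
    # Keep the first occurrence of each document, keyed by a hash
    # of its normalised text (lowercased, whitespace collapsed).
    seen, kept = set(), []
    for doc in docs:
        key = hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept

corpus = ["Hello world", "hello   world", "Something else"]
print(dedup(corpus))  # the second doc is a near-verbatim duplicate and is dropped
```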

L3 · Advanced · ~40h

Understands scaling laws; designs data mixtures; analyses emergent abilities and training dynamics.
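
Scaling laws make the parameter/data trade-off concrete. The sketch below applies the Chinchilla rule of thumb (compute-optimal training tokens ≈ 20 × parameters, total training compute ≈ 6ND FLOPs); the constants are the published approximations, not exact fits for any particular model family.

```python
def chinchilla_optimal(n_params):
    # Rule of thumb from Hoffmann et al. (2022): train on roughly
    # 20 tokens per parameter; compute is about 6 * N * D FLOPs.
    d_tokens = 20 * n_params
    flops = 6 * n_params * d_tokens
    return d_tokens, flops

for n in (1e9, 7e9, 70e9):
    d, c = chinchilla_optimal(n)
    print(f"{n:.0e} params -> {d:.1e} tokens, ~{c:.1e} FLOPs")
```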

L4 · Research · ~100h

Trains foundation models; contributes to data selection, training efficiency, or continual pretraining research.

Resources