MountainAI

LoRA and parameter-efficient finetuning

LoRA, QLoRA, prefix tuning, adapters — finetuning LLMs without touching all the weights.

Depth levels

L0 · Intro · ~1 h

Reads the "low-rank matrices injected into attention" one-liner.
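
A minimal numerical sketch of that one-liner (illustrative shapes only; this is not the peft implementation): the frozen weight W stays untouched, and a low-rank update scaled by alpha/r is added on top, so only A and B are trained.

```python
import torch

d, k, r, alpha = 4096, 4096, 8, 16          # hypothetical layer and LoRA sizes
W = torch.randn(d, k)                        # pretrained weight, frozen
A = torch.randn(r, k) * 0.01                 # trainable, small random init
B = torch.zeros(d, r)                        # trainable, zero init so the update starts at 0

def lora_forward(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, k) -> output: (batch, d)
    base = x @ W.T                           # frozen path
    update = (alpha / r) * (x @ A.T @ B.T)   # low-rank path, the only trained part
    return base + update
```

Only r * (d + k) numbers are trained instead of d * k, which is where the parameter efficiency comes from.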

L1 · Basics · ~10 h

Uses the peft library to LoRA-tune a 7B model on a laptop-grade GPU.
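
A minimal sketch of what this looks like with the Hugging Face peft library; the checkpoint name, target modules, and hyperparameters are illustrative assumptions, not recommendations.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",           # any 7B causal LM checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

config = LoraConfig(
    r=8,                                   # rank of the update matrices
    lora_alpha=16,                         # scaling factor alpha
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()         # typically well under 1% of all weights
# `model` now trains like any transformers model (Trainer or a manual loop).
```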

L2 · Working · ~15 h

Picks target modules, rank, α; combines with 4-bit QLoRA; merges adapters.
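
A sketch of that workflow (4-bit QLoRA base plus LoRA adapters, then merging), under the same assumptions about model name and hyperparameters; exact merge behavior on a quantized base depends on the peft version.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, PeftModel, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization from the QLoRA paper
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb, device_map="auto"
)
base = prepare_model_for_kbit_training(base)

config = LoraConfig(
    r=16,
    lora_alpha=32,                                             # a common heuristic is alpha = 2 * r
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # which modules get adapters
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)

# ... train, then save only the adapter weights (tens of MB, not the full model) ...
model.save_pretrained("my-lora-adapter")

# Merging folds B @ A back into the base weights so inference needs no extra modules.
# Merging directly into a 4-bit base can be lossy or unsupported depending on the peft
# version, so a common pattern is to reload the base in bf16 before merging.
full = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(full, "my-lora-adapter").merge_and_unload()
merged.save_pretrained("merged-model")
```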

L3 · Advanced · ~25 h

DoRA, LoRA+ / rsLoRA; analysis of intrinsic rank of finetuning updates.
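
Assuming a recent peft release, DoRA and rank-stabilized LoRA are exposed as flags on the same LoraConfig; LoRA+ is a training-recipe change (a larger learning rate on the B matrix than on A) rather than a config option. A sketch of the two flags:

```python
from peft import LoraConfig

dora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    use_dora=True,        # DoRA: decompose the update into magnitude and direction parts
    task_type="CAUSAL_LM",
)

rslora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    use_rslora=True,      # rsLoRA: scale by alpha / sqrt(r) instead of alpha / r
    task_type="CAUSAL_LM",
)
```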

L4 · Research · ~50 h

Contributes new PEFT methods or theoretical rank analyses.
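
One way to probe the intrinsic-rank claim empirically, as a rough sketch rather than any particular paper's method: compute the singular values of a finetuning update ΔW and check how few of them carry most of its energy.

```python
import torch

def effective_rank(delta_w: torch.Tensor, energy: float = 0.9) -> int:
    """Smallest k such that the top-k singular values carry `energy` of the squared norm."""
    s = torch.linalg.svdvals(delta_w.float())
    cum = torch.cumsum(s**2, dim=0) / torch.sum(s**2)
    return int(torch.searchsorted(cum, torch.tensor(energy)).item()) + 1

# Stand-in for W_finetuned - W_base: a low-rank update plus a little noise.
d, k, true_r = 1024, 1024, 4
delta = torch.randn(d, true_r) @ torch.randn(true_r, k) + 0.01 * torch.randn(d, k)
print(effective_rank(delta))   # close to true_r, which is what motivates small LoRA ranks
```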

Resources