01_main-chapter-code contains the main chapter code.
02_bonus_bytepair-encoder contains optional (bonus) code to benchmark different byte pair encoder implementations.
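The folder benchmarks real BPE implementations; as a hedged, stdlib-only illustration of the benchmarking pattern itself, the sketch below times two toy tokenizers with `timeit` (the functions `tokenize_split` and `tokenize_chars` are made-up stand-ins, not the actual encoders being compared):

```python
import timeit

# Sample text to tokenize repeatedly (a stand-in for real benchmark data).
text = "Hello, world! " * 1_000

def tokenize_split(s):
    # Naive whitespace tokenizer, standing in for one implementation.
    return s.split()

def tokenize_chars(s):
    # Character-level tokenizer, standing in for another implementation.
    return list(s)

# Time each tokenizer over the same input for a fixed number of runs.
for name, fn in [("split", tokenize_split), ("chars", tokenize_chars)]:
    seconds = timeit.timeit(lambda: fn(text), number=100)
    print(f"{name}: {seconds:.4f} s for 100 runs")
```

The same pattern (fixed input, fixed repetition count, one timing loop per implementation) applies when swapping in real byte pair encoders.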
03_bonus_embedding-vs-matmul contains optional (bonus) code to explain that embedding layers and fully connected layers applied to one-hot encoded vectors are equivalent.
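The equivalence can be seen even without a deep learning framework: an embedding lookup is just row selection, which is exactly what multiplying a one-hot vector by the weight matrix computes. A minimal pure-Python sketch (the toy matrix `W` and helper names are illustrative, not the notebook's code):

```python
# Toy 4x3 embedding weight matrix: vocabulary size 4, embedding dimension 3.
W = [
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
    [1.0, 1.1, 1.2],
]

def embedding_lookup(weights, token_id):
    # An embedding layer simply returns the weight row for the token id.
    return weights[token_id]

def onehot_matmul(weights, token_id):
    # Build a one-hot row vector for token_id ...
    onehot = [1.0 if i == token_id else 0.0 for i in range(len(weights))]
    # ... and multiply it by the weight matrix: (1 x vocab) @ (vocab x dim).
    dim = len(weights[0])
    return [
        sum(onehot[i] * weights[i][j] for i in range(len(weights)))
        for j in range(dim)
    ]

# Both paths select the same row of W.
assert embedding_lookup(W, 2) == onehot_matmul(W, 2)
```

The embedding layer is therefore just an efficient shortcut: it skips building the one-hot vector and multiplying by all the zero entries.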
04_bonus_dataloader-intuition contains optional (bonus) code to explain the data loader more intuitively with simple numbers rather than text.
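The core idea is that the data loader slides a window over the sequence and pairs each input chunk with the same chunk shifted one position to the right, so the target at each position is the next token. A minimal pure-Python sketch with plain integers (the names `make_pairs`, `context_length`, and `stride` are illustrative, not the book's API):

```python
# Stand-in for a sequence of token ids.
data = list(range(10))

def make_pairs(seq, context_length, stride):
    # Slide a window of size context_length over the sequence;
    # the target is the input window shifted right by one token.
    pairs = []
    for i in range(0, len(seq) - context_length, stride):
        x = seq[i : i + context_length]            # input chunk
        y = seq[i + 1 : i + context_length + 1]    # next-token targets
        pairs.append((x, y))
    return pairs

for x, y in make_pairs(data, context_length=4, stride=1):
    print(x, "->", y)
```

With `stride=1` the windows overlap heavily; a larger stride (e.g. equal to `context_length`) produces non-overlapping chunks.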
05_bpe-from-scratch contains (bonus) code that implements and trains a GPT-2 BPE tokenizer from scratch.
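At its core, BPE training repeatedly merges the most frequent adjacent pair of symbols. The sketch below is a deliberately simplified character-level version of that loop, not the repository's byte-level GPT-2 implementation:

```python
from collections import Counter

def get_pair_counts(tokens):
    # Count how often each adjacent symbol pair occurs.
    return Counter(zip(tokens, tokens[1:]))

def merge_pair(tokens, pair, new_symbol):
    # Replace every left-to-right occurrence of `pair` with `new_symbol`.
    out, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_symbol)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    # Start from individual characters (real GPT-2 BPE starts from bytes).
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        counts = get_pair_counts(tokens)
        if not counts:
            break
        pair = max(counts, key=counts.get)   # most frequent pair
        merges.append(pair)
        tokens = merge_pair(tokens, pair, "".join(pair))
    return tokens, merges

tokens, merges = train_bpe("aaabdaaabac", num_merges=2)
```

Since every merge only concatenates adjacent symbols, joining the final tokens always recovers the original text; the learned `merges` list is what a tokenizer later replays to encode new text.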
In the video below, I provide a code-along session that covers some of the chapter contents as supplementary material.