Chapter 5: Pretraining on Unlabeled Data

 

Main Chapter Code

  • 01_main-chapter-code contains the main chapter code

Bonus Materials

  • 02_alternative_weight_loading contains code to load the GPT model weights from alternative sources in case the model weights become unavailable from OpenAI
  • 03_bonus_pretraining_on_gutenberg contains code to pretrain the LLM longer on the whole corpus of books from Project Gutenberg
  • 04_learning_rate_schedulers contains code implementing a more sophisticated training function, including learning rate schedulers and gradient clipping (a minimal sketch of these ingredients follows after this list)
  • 05_bonus_hparam_tuning contains an optional hyperparameter tuning script
  • 06_user_interface implements an interactive user interface for the pretrained LLM
  • 07_gpt_to_llama contains a step-by-step guide for converting a GPT architecture implementation to Llama 3.2 and loading the pretrained weights from Meta AI
  • 08_memory_efficient_weight_loading contains a bonus notebook showing how to load model weights via PyTorch's load_state_dict method more efficiently (a minimal sketch of the pattern follows after this list)
  • 09_extending-tokenizers contains a from-scratch implementation of the GPT-2 BPE tokenizer (a toy sketch of the core merge step follows after this list)
  • 10_llm-training-speed shows PyTorch performance tips to improve the LLM training speed (a minimal sketch follows after this list)
  • 11_qwen3 contains a from-scratch implementation of Qwen3 0.6B and Qwen3 30B-A3B (Mixture-of-Experts), including code to load the pretrained weights of the base, reasoning, and coding model variants
  • 12_gemma3 contains a from-scratch implementation of Gemma 3
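
The training function in 04_learning_rate_schedulers combines a learning rate schedule with gradient clipping. The sketch below illustrates the general pattern (linear warmup, cosine decay, and global gradient-norm clipping); the model, hyperparameters, and dummy data are placeholders rather than the notebook's actual settings.

```python
import math
import torch

def get_lr(step, warmup_steps=20, total_steps=1000, peak_lr=5e-4, min_lr=1e-5):
    # Linear warmup followed by cosine decay (illustrative values)
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + (peak_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * progress))

model = torch.nn.Linear(8, 8)  # stand-in for the GPT model
optimizer = torch.optim.AdamW(model.parameters(), lr=0.0)

for step in range(100):
    lr = get_lr(step)
    for group in optimizer.param_groups:
        group["lr"] = lr

    x = torch.randn(4, 8)            # dummy batch
    loss = model(x).pow(2).mean()    # dummy loss

    optimizer.zero_grad()
    loss.backward()
    # Clip the global gradient norm before the optimizer step
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```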
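
The notebook in 08_memory_efficient_weight_loading benchmarks different ways of loading a checkpoint. One trick it revolves around, available in recent PyTorch versions, is combining a memory-mapped torch.load with load_state_dict(..., assign=True) so the weights are not materialized twice. Below is a minimal sketch of that pattern with a stand-in model and a placeholder file name.

```python
import torch

# Stand-in checkpoint; "model.pth" is a placeholder file name
torch.save(torch.nn.Linear(8, 8).state_dict(), "model.pth")

# Instantiate the model skeleton on the meta device so no memory is spent
# on initial weights that would be overwritten anyway
with torch.device("meta"):
    model = torch.nn.Linear(8, 8)

# mmap=True memory-maps the checkpoint instead of reading it all into RAM;
# assign=True reuses the loaded tensors rather than copying them into the model
# (both options require a recent PyTorch version)
state_dict = torch.load("model.pth", map_location="cpu", mmap=True, weights_only=True)
model.load_state_dict(state_dict, assign=True)
```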
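
09_extending-tokenizers builds the GPT-2 byte pair encoding (BPE) tokenizer from scratch. As a rough illustration of the core idea only (the real GPT-2 tokenizer adds regex-based pre-tokenization and a fixed merge table), the toy sketch below repeatedly merges the most frequent adjacent byte pair:

```python
from collections import Counter

def most_frequent_pair(token_ids):
    # Count adjacent pairs and return the most common one
    return Counter(zip(token_ids, token_ids[1:])).most_common(1)[0][0]

def merge_pair(token_ids, pair, new_id):
    # Replace every occurrence of the pair with a single new token id
    merged, i = [], 0
    while i < len(token_ids):
        if i < len(token_ids) - 1 and (token_ids[i], token_ids[i + 1]) == pair:
            merged.append(new_id)
            i += 2
        else:
            merged.append(token_ids[i])
            i += 1
    return merged

# Toy training loop: start from raw bytes and add a few merge tokens
text = "the cat in the hat"
ids = list(text.encode("utf-8"))
vocab_size = 260  # 256 byte tokens + 4 merges (illustrative)
for new_id in range(256, vocab_size):
    pair = most_frequent_pair(ids)
    ids = merge_pair(ids, pair, new_id)
```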
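
The material in 10_llm-training-speed covers several PyTorch performance levers. The sketch below shows a few commonly used ones (TF32 matmuls, torch.compile, a fused optimizer, and bfloat16 autocast) on a stand-in model; which of them help, or are supported at all, depends on your hardware, and the notebook's exact settings may differ.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for the GPT model
model = torch.nn.Sequential(
    torch.nn.Linear(256, 256), torch.nn.GELU(), torch.nn.Linear(256, 256)
).to(device)

# Allow TensorFloat-32 matmuls on Ampere or newer GPUs (faster, slightly less precise)
torch.set_float32_matmul_precision("high")

# Compile the model to fuse kernels and reduce Python overhead (PyTorch 2.x)
model = torch.compile(model)

# Fused AdamW kernels are available on CUDA
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, fused=(device == "cuda"))

x = torch.randn(32, 256, device=device)

# Run forward/backward in bfloat16 via autocast where the hardware supports it
with torch.autocast(device_type=device, dtype=torch.bfloat16, enabled=(device == "cuda")):
    loss = model(x).pow(2).mean()  # dummy loss

loss.backward()
optimizer.step()
```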



Link to the video