
Qwen3 Coder Flash & MoE from Scratch (#760)

* Qwen3 Coder Flash & MoE from Scratch

* update

* refinements

* updates

* update

* update

* update
Sebastian Raschka 3 months ago
parent
commit
f92b40e4ab

+ 1 - 1
.github/workflows/basic-tests-windows-uv-pip.yml

@@ -35,7 +35,7 @@ jobs:
         shell: bash
         run: |
           export PATH="$HOME/.local/bin:$PATH"
-          pip install --upgrade pip
+          python -m pip install --upgrade pip
           pip install uv
           uv venv --python=python3.11
           source .venv/Scripts/activate

+ 2 - 2
.github/workflows/check-links.yml

@@ -24,12 +24,12 @@ jobs:
       run: |
         curl -LsSf https://astral.sh/uv/install.sh | sh
         uv sync --dev
-        uv add pytest-ruff pytest-check-links
+        uv add pytest-check-links
 
     - name: Check links
       run: |
         source .venv/bin/activate
-        pytest --ruff --check-links ./ \
+        pytest --check-links ./ \
           --check-links-ignore "https://platform.openai.com/*" \
           --check-links-ignore "https://openai.com/*" \
           --check-links-ignore "https://arena.lmsys.org" \

+ 1 - 1
README.md

@@ -158,7 +158,7 @@ Several folders contain optional materials as a bonus for interested readers:
   - [Building a User Interface to Interact With the Pretrained LLM](ch05/06_user_interface)
   - [Converting GPT to Llama](ch05/07_gpt_to_llama)
   - [Llama 3.2 From Scratch](ch05/07_gpt_to_llama/standalone-llama32.ipynb)
-  - [Qwen3 From Scratch](ch05/11_qwen3/standalone-qwen3.ipynb)
+  - [Qwen3 Dense and Mixture-of-Experts (MoE) From Scratch](ch05/11_qwen3/)
   - [Memory-efficient Model Weight Loading](ch05/08_memory_efficient_weight_loading/memory-efficient-state-dict.ipynb)
   - [Extending the Tiktoken BPE Tokenizer with New Tokens](ch05/09_extending-tokenizers/extend-tiktoken.ipynb)
   - [PyTorch Performance Tips for Faster LLM Training](ch05/10_llm-training-speed)

+ 98 - 51
ch05/11_qwen3/README.md

@@ -1,12 +1,18 @@
 # Qwen3 From Scratch
 
-This [standalone-qwen3.ipynb](standalone-qwen3.ipynb) Jupyter notebook in this folder contains a from-scratch implementation of Qwen3 0.6B, 1.7B, 4B, 8B, and 32 B.
+This [standalone-qwen3.ipynb](standalone-qwen3.ipynb) Jupyter notebook in this folder contains a from-scratch implementation of Qwen3 0.6B, 1.7B, 4B, 8B, and 32B.
 
 <img src="https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/qwen/qwen-overview.webp">
 
 
+The [standalone-qwen3-moe.ipynb](standalone-qwen3-moe.ipynb) and [standalone-qwen3-moe-plus-kvcache.ipynb](standalone-qwen3-moe-plus-kvcache.ipynb) Jupyter notebooks in this folder contain a from-scratch implementation of the 30B-A3B Mixture-of-Experts (MoE) model, including the Thinking, Instruct, and Coder variants.
+
+<img src="https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/qwen/qwen3-coder-flash-overview.webp?123" width="430px">
+
+
+
 &nbsp;
-### Using Qwen3 via the `llms-from-scratch` package
+# Using Qwen3 via the `llms-from-scratch` package
 
 For an easy way to use the Qwen3 from-scratch implementation, you can also use the `llms-from-scratch` PyPI package based on the source code in this repository at [pkg/llms_from_scratch](../../pkg/llms_from_scratch).
 
@@ -23,11 +29,16 @@ pip install llms_from_scratch tokenizers
 Specify which model to use:
 
 ```python
-USE_REASONING_MODEL = True   # The "thinking" model
 USE_REASONING_MODEL = False  # The base model
+USE_REASONING_MODEL = True   # The "thinking" model
+
+
+# Note: use
+# USE_REASONING_MODEL = True
+# for the Qwen3 Coder Flash model as well
 ```
 
-Basic text generation settings that can be defined by the user. With 150 tokens, the model requires approximately 1.5 GB memory.
+Basic text generation settings that can be defined by the user. With 150 tokens, the 0.6B model requires approximately 1.5 GB memory.
 
 ```python
 MAX_NEW_TOKENS = 150
@@ -104,6 +115,8 @@ elif USE_MODEL == "14B":
     from llms_from_scratch.qwen3 import QWEN3_CONFIG_14B as QWEN3_CONFIG
 elif USE_MODEL == "32B":
     from llms_from_scratch.qwen3 import QWEN3_CONFIG_32B as QWEN3_CONFIG
+elif USE_MODEL == "30B-A3B":
+    from llms_from_scratch.qwen3 import QWEN3_CONFIG_30B_A3B as QWEN3_CONFIG
 else:
     raise ValueError("Invalid USE_MODEL name.")
     
@@ -124,22 +137,22 @@ from llms_from_scratch.qwen3 import (
     load_weights_into_qwen
 )
 
-model = Qwen3Model(QWEN3_CONFIG)
+device = (
+    torch.device("cuda") if torch.cuda.is_available() else
+    torch.device("mps") if torch.backends.mps.is_available() else
+    torch.device("cpu")
+)
+
+with device:
+    model = Qwen3Model(QWEN3_CONFIG)
 
 weights_dict = download_from_huggingface_from_snapshots(
     repo_id=repo_id,
     local_dir=local_dir
 )
 load_weights_into_qwen(model, QWEN3_CONFIG, weights_dict)
+model.to(device)  # only required for the MoE models
 del weights_dict  # delete weight dictionary to free up memory
-
-device = (
-    torch.device("cuda") if torch.cuda.is_available() else
-    torch.device("mps") if torch.backends.mps.is_available() else
-    torch.device("cpu")
-)
-
-model.to(device);
 ```
 
 
@@ -228,7 +241,39 @@ Give me a short introduction to large language models.<|im_end|>
 Large language models (LLMs) are advanced artificial intelligence systems designed to generate human-like text. They are trained on vast amounts of text data, allowing them to understand and generate coherent, contextually relevant responses. LLMs are used in a variety of applications, including chatbots, virtual assistants, content generation, and more. They are powered by deep learning algorithms and can be fine-tuned for specific tasks, making them versatile tools for a wide range of industries.<|endoftext|>Human resources department of a company is planning to hire 100 new employees. The company has a budget of $100,000 for the recruitment process. The company has a minimum wage of $10 per hour. The company has a total of...
 ```
 
+
+
+For the larger models, you may prefer the streaming variant, which prints each token as soon as it's generated:
+
+```python
+from llms_from_scratch.generate import generate_text_simple_stream
+
+input_token_ids_tensor = torch.tensor(input_token_ids, device=device).unsqueeze(0)
+
+for token in generate_text_simple_stream(
+    model=model,
+    token_ids=input_token_ids_tensor,
+    max_new_tokens=150,
+    eos_token_id=tokenizer.eos_token_id
+):
+    token_id = token.squeeze(0).tolist()
+    print(
+        tokenizer.decode(token_id),
+        end="",
+        flush=True
+    )
+```
+
+```
+ <|im_start|>user
+Give me a short introduction to large language models.<|im_end|>
+Large language models (LLMs) are advanced artificial intelligence systems designed to generate human-like text. They are trained on vast amounts of text data, allowing them to understand and generate coherent, contextually relevant responses. LLMs are used in a variety of applications, including chatbots, virtual assistants, content generation, and more. They are powered by deep learning algorithms and can be fine-tuned for specific tasks, making them versatile tools for a wide range of industries.<|endoftext|>Human resources department of a company is planning to hire 100 new employees. The company has a budget of $100,000 for the recruitment process. The company has a minimum wage of $10 per hour. The company has a total of...
+```
+
+
+
 &nbsp;
+
 #### Pro tip 1: speed up inference with compilation
 
 
@@ -241,18 +286,19 @@ model.to(device)
 with
 
 ```python
-model = torch.compile(model)
 model.to(device)
+model = torch.compile(model)
 ```
 
 Note: There is a significant multi-minute upfront cost when compiling, and the speed-up takes effect after the first `generate` call. 
 
 The following table shows a performance comparison on an A100 for subsequent `generate` calls:
 
-|                     | Tokens/sec | Memory  |
-| ------------------- | ---------- | ------- |
-| Qwen3Model          | 25         | 1.49 GB |
-| Qwen3Model compiled | 107        | 1.99 GB |
+|                          | Hardware        | Tokens/sec | Memory   |
+| ------------------------ | --------------- | ---------- | -------- |
+| Qwen3Model 0.6B          | Nvidia A100 GPU | 25         | 1.49 GB  |
+| Qwen3Model 0.6B compiled | Nvidia A100 GPU | 107        | 1.99 GB  |
+
 
 &nbsp;
 #### Pro tip 2: speed up inference with KV cache
@@ -275,25 +321,27 @@ token_ids = generate_text_simple(
 
 Note that the peak memory usage is only listed for Nvidia CUDA devices, as it is easier to measure there. However, memory usage on other devices is likely similar since they use a comparable precision format, and the KV-cache storage results in even lower memory usage here for the generated 150-token text (different devices may implement matrix multiplication differently and may therefore show different peak memory requirements; also, KV-cache memory may grow prohibitively for longer context lengths, as the rough estimate after the table below illustrates).
 
-| Model      | Mode              | Hardware        | Tokens/sec | GPU Memory (VRAM) |
-| ---------- | ----------------- | --------------- | ---------- | ----------------- |
-| Qwen3Model | Regular           | Mac Mini M4 CPU | 1          | -                 |
-| Qwen3Model | Regular compiled  | Mac Mini M4 CPU | 1          | -                 |
-| Qwen3Model | KV cache          | Mac Mini M4 CPU | 80         | -                 |
-| Qwen3Model | KV cache compiled | Mac Mini M4 CPU | 137        | -                 |
-|            |                   |                 |            |                   |
-| Qwen3Model | Regular           | Mac Mini M4 GPU | 21         | -                 |
-| Qwen3Model | Regular compiled  | Mac Mini M4 GPU | Error      | -                 |
-| Qwen3Model | KV cache          | Mac Mini M4 GPU | 28         | -                 |
-| Qwen3Model | KV cache compiled | Mac Mini M4 GPU | Error      | -                 |
-|            |                   |                 |            |                   |
-| Qwen3Model | Regular           | Nvidia A100 GPU | 26         | 1.49 GB           |
-| Qwen3Model | Regular compiled  | Nvidia A100 GPU | 107        | 1.99 GB           |
-| Qwen3Model | KV cache          | Nvidia A100 GPU | 25         | 1.47 GB           |
-| Qwen3Model | KV cache compiled | Nvidia A100 GPU | 90         | 1.48 GB           |
+| Model           | Mode              | Hardware        | Tokens/sec | GPU Memory (VRAM) |
+| --------------- | ----------------- | --------------- | ---------- | ----------------- |
+| Qwen3Model 0.6B | Regular           | Mac Mini M4 CPU | 1          | -                 |
+| Qwen3Model 0.6B | Regular compiled  | Mac Mini M4 CPU | 1          | -                 |
+| Qwen3Model 0.6B | KV cache          | Mac Mini M4 CPU | 80         | -                 |
+| Qwen3Model 0.6B | KV cache compiled | Mac Mini M4 CPU | 137        | -                 |
+|                 |                   |                 |            |                   |
+| Qwen3Model 0.6B | Regular           | Mac Mini M4 GPU | 21         | -                 |
+| Qwen3Model 0.6B | Regular compiled  | Mac Mini M4 GPU | Error      | -                 |
+| Qwen3Model 0.6B | KV cache          | Mac Mini M4 GPU | 28         | -                 |
+| Qwen3Model 0.6B | KV cache compiled | Mac Mini M4 GPU | Error      | -                 |
+|                 |                   |                 |            |                   |
+| Qwen3Model 0.6B | Regular           | Nvidia A100 GPU | 26         | 1.49 GB           |
+| Qwen3Model 0.6B | Regular compiled  | Nvidia A100 GPU | 107        | 1.99 GB           |
+| Qwen3Model 0.6B | KV cache          | Nvidia A100 GPU | 25         | 1.47 GB           |
+| Qwen3Model 0.6B | KV cache compiled | Nvidia A100 GPU | 90         | 1.48 GB           |
 
 Note that all settings above have been tested to produce the same text outputs.
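
To get a rough sense of why KV-cache memory can grow prohibitively with longer contexts, here is a small back-of-the-envelope sketch. It is purely illustrative and not part of the `llms-from-scratch` package; the plugged-in values (28 layers, 8 KV heads, head dimension 128, bfloat16 storage) are assumptions meant to approximate the Qwen3 0.6B configuration used above.

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, batch_size=1, bytes_per_elem=2):
    # Keys and values are each stored as (batch_size, n_kv_heads, context_len, head_dim) per layer
    return 2 * n_layers * n_kv_heads * head_dim * context_len * batch_size * bytes_per_elem

# Illustrative, assumed values approximating Qwen3 0.6B with bfloat16 (2 bytes per element)
for ctx in (1_024, 8_192, 40_960):
    gb = kv_cache_bytes(n_layers=28, n_kv_heads=8, head_dim=128, context_len=ctx) / 1024**3
    print(f"context length {ctx:>6}: ~{gb:.2f} GB of KV cache")
```

For the short 150-token generations benchmarked here, the cache is negligible, which is why the KV-cache rows above show lower rather than higher memory usage.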
 
+
+
 &nbsp;
 
 #### Pro tip 3: batched inference
@@ -343,21 +391,20 @@ from llms_from_scratch.kv_cache_batched.qwen3 import Qwen3Model
 
 The experiments below are run with a batch size of 8.
 
-| Model      | Mode              | Hardware        | Batch size | Tokens/sec | GPU Memory (VRAM) |
-| ---------- | ----------------- | --------------- | ---------- | ---------- | ----------------- |
-| Qwen3Model | Regular           | Mac Mini M4 CPU | 8          | 2          | -                 |
-| Qwen3Model | Regular compiled  | Mac Mini M4 CPU | 8          | -          | -                 |
-| Qwen3Model | KV cache          | Mac Mini M4 CPU | 8          | 92         | -                 |
-| Qwen3Model | KV cache compiled | Mac Mini M4 CPU | 8          | 128        | -                 |
-|            |                   |                 |            |            |                   |
-| Qwen3Model | Regular           | Mac Mini M4 GPU | 8          | 36         | -                 |
-| Qwen3Model | Regular compiled  | Mac Mini M4 GPU | 8          | -          | -                 |
-| Qwen3Model | KV cache          | Mac Mini M4 GPU | 8          | 61         | -                 |
-| Qwen3Model | KV cache compiled | Mac Mini M4 GPU | 8          | -          | -                 |
-|            |                   |                 |            |            |                   |
-| Qwen3Model | Regular           | Nvidia A100 GPU | 8          | 184        | 2.19 GB           |
-| Qwen3Model | Regular compiled  | Nvidia A100 GPU | 8          | 351        | 2.19 GB           |
-| Qwen3Model | KV cache          | Nvidia A100 GPU | 8          | 140        | 3.13 GB           |
-| Qwen3Model | KV cache compiled | Nvidia A100 GPU | 8          | 280        | 1.75 GB           |
-
+| Model            | Mode              | Hardware        | Batch size | Tokens/sec | GPU Memory (VRAM) |
+| ---------------- | ----------------- | --------------- | ---------- | ---------- | ----------------- |
+| Qwen3Model 0.6B  | Regular           | Mac Mini M4 CPU | 8          | 2          | -                 |
+| Qwen3Model 0.6B  | Regular compiled  | Mac Mini M4 CPU | 8          | -          | -                 |
+| Qwen3Model 0.6B  | KV cache          | Mac Mini M4 CPU | 8          | 92         | -                 |
+| Qwen3Model 0.6B  | KV cache compiled | Mac Mini M4 CPU | 8          | 128        | -                 |
+|                  |                   |                 |            |            |                   |
+| Qwen3Model 0.6B  | Regular           | Mac Mini M4 GPU | 8          | 36         | -                 |
+| Qwen3Model 0.6B  | Regular compiled  | Mac Mini M4 GPU | 8          | -          | -                 |
+| Qwen3Model 0.6B  | KV cache          | Mac Mini M4 GPU | 8          | 61         | -                 |
+| Qwen3Model 0.6B  | KV cache compiled | Mac Mini M4 GPU | 8          | -          | -                 |
+|                  |                   |                 |            |            |                   |
+| Qwen3Model 0.6B  | Regular           | Nvidia A100 GPU | 8          | 184        | 2.19 GB           |
+| Qwen3Model 0.6B  | Regular compiled  | Nvidia A100 GPU | 8          | 351        | 2.19 GB           |
+| Qwen3Model 0.6B  | KV cache          | Nvidia A100 GPU | 8          | 140        | 3.13 GB           |
+| Qwen3Model 0.6B  | KV cache compiled | Nvidia A100 GPU | 8          | 280        | 1.75 GB           |
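
If you want to try batched inference with your own prompts, they first need to be brought to a common length so they can be stacked into one tensor. The snippet below is a minimal, package-agnostic sketch of that preparation step using plain PyTorch; it assumes the `tokenizer` and `device` objects from the examples above and that the tokenizer exposes a `pad_token_id` (as the standalone notebook implementation does). The exact `generate` call of the batched package variant is not shown here.

```python
import torch

prompts = [
    "Give me a short introduction to large language models.",
    "Implement a binary search function in Python.",
]

# Encode each prompt and left-pad with the pad token so that all rows have the same length
encoded = [tokenizer.encode(p) for p in prompts]
max_len = max(len(ids) for ids in encoded)
padded = [[tokenizer.pad_token_id] * (max_len - len(ids)) + ids for ids in encoded]

batch = torch.tensor(padded, device=device)  # shape: (batch_size, max_len)
print(batch.shape)
```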
 

+ 1240 - 0
ch05/11_qwen3/standalone-qwen3-moe-plus-kvcache.ipynb

@@ -0,0 +1,1240 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "e1b280ab-b61f-4d1a-bf7e-44e5f9ed3a5c",
+   "metadata": {
+    "id": "e1b280ab-b61f-4d1a-bf7e-44e5f9ed3a5c"
+   },
+   "source": [
+    "<table style=\"width:100%\">\n",
+    "<tr>\n",
+    "<td style=\"vertical-align:middle; text-align:left;\">\n",
+    "<font size=\"2\">\n",
+    "Supplementary code for the <a href=\"http://mng.bz/orYv\">Build a Large Language Model From Scratch</a> book by <a href=\"https://sebastianraschka.com\">Sebastian Raschka</a><br>\n",
+    "<br>Code repository: <a href=\"https://github.com/rasbt/LLMs-from-scratch\">https://github.com/rasbt/LLMs-from-scratch</a>\n",
+    "</font>\n",
+    "</td>\n",
+    "<td style=\"vertical-align:middle; text-align:left;\">\n",
+    "<a href=\"http://mng.bz/orYv\"><img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/cover-small.webp\" width=\"100px\"></a>\n",
+    "</td>\n",
+    "</tr>\n",
+    "</table>"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "efde77f2-6af3-4781-8597-89ecd3f41a52",
+   "metadata": {
+    "id": "efde77f2-6af3-4781-8597-89ecd3f41a52"
+   },
+   "source": [
+    "# Qwen3 Mixture-of-Experts From Scratch (A Standalone Notebook)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "55cdef4d-de59-4a65-89f9-fa2a8ef3471d",
+   "metadata": {
+    "id": "55cdef4d-de59-4a65-89f9-fa2a8ef3471d"
+   },
+   "source": [
+    "- This notebook is purposefully minimal and focuses on the code to implement Qwen3-30B-A3B model (with support for **Coder**, **Instruct** and **Thinking** variants); for more information about this model, please see the original blog post, technical report, and model hub pages:\n",
+    "  - [Qwen3: Think Deeper, Act Faster](https://qwenlm.github.io/blog/qwen3/)\n",
+    "  - [Qwen3 Technical Report](https://arxiv.org/abs/2505.09388)\n",
+    "  - https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct (Qwen3 Coder Flash)\n",
+    "  - https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507 (new thinking model)\n",
+    "  - https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507 (new instruct model)\n",
+    "  - https://huggingface.co/Qwen/Qwen3-30B-A3B (original Instruct/Thinking hybrid model)\n",
+    "- Many architectural components in Qwen3 are similar to Llama 3; for a step-by-step guide that explains the individual components and the relationship between GPT and the components used here, you may like the GPT-to-Llama conversion notebooks:\n",
+    "  - [Converting a From-Scratch GPT Architecture to Llama 2](../07_gpt_to_llama/converting-gpt-to-llama2.ipynb)\n",
+    "  - [Converting Llama 2 to Llama 3.2 From Scratch](../07_gpt_to_llama/converting-llama2-to-llama3.ipynb)\n",
+    "  \n",
+    "\n",
+    "**By default, this notebook runs Qwen3-Coder-30B-A3B-Instruct (aka Qwen3 Coder Flash) and requires 80 GB of VRAM (e.g., a single A100 or H100)**\n",
+    "\n",
+    "<br>\n",
+    "\n",
+    "<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/qwen/qwen3-coder-flash-overview.webp?123\" width=\"600px\">\n",
+    "\n",
+    "<br>\n",
+    "  \n",
+    "- About the code:\n",
+    "  - all code is my own code, mapping the Qwen3 architecture onto the model code implemented in my [Build A Large Language Model (From Scratch)](http://mng.bz/orYv) book; the code is released under a permissive open-source Apache 2.0 license (see [LICENSE.txt](https://github.com/rasbt/LLMs-from-scratch/blob/main/LICENSE.txt))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "7c201adb-747e-437b-9a62-442802941e01",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# pip install -r https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/refs/heads/main/ch05/07_gpt_to_llama/requirements-extra.txt"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "id": "dd1b65a8-4301-444a-bd7c-a6f2bd1df9df",
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "dd1b65a8-4301-444a-bd7c-a6f2bd1df9df",
+    "outputId": "4f762354-e0a3-4cc2-e5d4-e61a227a202c"
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "huggingface_hub version: 0.34.3\n",
+      "tokenizers version: 0.21.4\n",
+      "torch version: 2.7.1+cu128\n"
+     ]
+    }
+   ],
+   "source": [
+    "from importlib.metadata import version\n",
+    "\n",
+    "pkgs = [\n",
+    "    \"huggingface_hub\",  # to download pretrained weights\n",
+    "    \"tokenizers\",       # to implement the tokenizer\n",
+    "    \"torch\",            # to implement the model\n",
+    "]\n",
+    "for p in pkgs:\n",
+    "    print(f\"{p} version: {version(p)}\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "653410a6-dd2b-4eb2-a722-23d9782e726d",
+   "metadata": {
+    "id": "653410a6-dd2b-4eb2-a722-23d9782e726d"
+   },
+   "source": [
+    "&nbsp;\n",
+    "# 1. Architecture code"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "id": "82076c21-9331-4dcd-b017-42b046cf1a60",
+   "metadata": {
+    "id": "82076c21-9331-4dcd-b017-42b046cf1a60"
+   },
+   "outputs": [],
+   "source": [
+    "import torch\n",
+    "import torch.nn as nn\n",
+    "\n",
+    "\n",
+    "class FeedForward(nn.Module):\n",
+    "    def __init__(self, cfg):\n",
+    "        super().__init__()\n",
+    "        self.fc1 = nn.Linear(cfg[\"emb_dim\"], cfg[\"hidden_dim\"], dtype=cfg[\"dtype\"], bias=False)\n",
+    "        self.fc2 = nn.Linear(cfg[\"emb_dim\"], cfg[\"hidden_dim\"], dtype=cfg[\"dtype\"], bias=False)\n",
+    "        self.fc3 = nn.Linear(cfg[\"hidden_dim\"], cfg[\"emb_dim\"], dtype=cfg[\"dtype\"], bias=False)\n",
+    "\n",
+    "    def forward(self, x):\n",
+    "        x_fc1 = self.fc1(x)\n",
+    "        x_fc2 = self.fc2(x)\n",
+    "        x = nn.functional.silu(x_fc1) * x_fc2\n",
+    "        return self.fc3(x)\n",
+    "\n",
+    "\n",
+    "class MoEFeedForward(nn.Module):\n",
+    "    def __init__(self, cfg):\n",
+    "        super().__init__()\n",
+    "        self.num_experts_per_tok = cfg[\"num_experts_per_tok\"]\n",
+    "        self.num_experts = cfg[\"num_experts\"]\n",
+    "        self.gate = nn.Linear(cfg[\"emb_dim\"], cfg[\"num_experts\"], bias=False, dtype=cfg[\"dtype\"])\n",
+    "\n",
+    "        meta_device = torch.device(\"meta\")  # to reduce memory pressure and only load them when used (trades compute for memory)\n",
+    "        self.fc1 = nn.ModuleList([nn.Linear(cfg[\"emb_dim\"], cfg[\"moe_intermediate_size\"], bias=False, dtype=cfg[\"dtype\"], device=meta_device)\n",
+    "                                  for _ in range(cfg[\"num_experts\"])])\n",
+    "        self.fc2 = nn.ModuleList([nn.Linear(cfg[\"emb_dim\"], cfg[\"moe_intermediate_size\"], bias=False, dtype=cfg[\"dtype\"], device=meta_device)\n",
+    "                                  for _ in range(cfg[\"num_experts\"])])\n",
+    "        self.fc3 = nn.ModuleList([nn.Linear(cfg[\"moe_intermediate_size\"], cfg[\"emb_dim\"], bias=False, dtype=cfg[\"dtype\"], device=meta_device)\n",
+    "                                  for _ in range(cfg[\"num_experts\"])])\n",
+    "\n",
+    "    def forward(self, x):\n",
+    "        b, seq_len, embed_dim = x.shape\n",
+    "        scores = self.gate(x)  # (b, seq_len, num_experts)\n",
+    "        topk_scores, topk_indices = torch.topk(scores, self.num_experts_per_tok, dim=-1)\n",
+    "        topk_probs = torch.softmax(topk_scores, dim=-1)\n",
+    "        \n",
+    "        expert_outputs = []\n",
+    "        for e in range(self.num_experts):\n",
+    "            hidden = torch.nn.functional.silu(self.fc1[e](x)) * self.fc2[e](x)\n",
+    "            out = self.fc3[e](hidden)\n",
+    "            expert_outputs.append(out.unsqueeze(-2))\n",
+    "        expert_outputs = torch.cat(expert_outputs, dim=-2)  # (b, t, num_experts, emb_dim)\n",
+    "\n",
+    "        gating_probs = torch.zeros_like(scores)\n",
+    "\n",
+    "        for i in range(self.num_experts_per_tok):\n",
+    "            indices = topk_indices[..., i:i+1]\n",
+    "            prob = topk_probs[..., i:i+1]\n",
+    "            gating_probs.scatter_(dim=-1, index=indices, src=prob)\n",
+    "        gating_probs = gating_probs.unsqueeze(-1)  # (b, t, num_experts, 1)\n",
+    "        \n",
+    "        # Weighted sum over experts\n",
+    "        y = (gating_probs * expert_outputs).sum(dim=-2)\n",
+    "        return y\n",
+    "\n",
+    "\n",
+    "        # For some reason, the version below is slower than the naive version\n",
+    "        # above that computes all experts, even the unused ones\n",
+    "\n",
+    "        # def forward(self, x):\n",
+    "        #     scores = self.gate(x)  # (b, seq_len, num_experts)\n",
+    "        #     topk_scores, topk_indices = torch.topk(scores, self.num_experts_per_tok, dim=-1)\n",
+    "        #     topk_probs = torch.softmax(topk_scores, dim=-1)\n",
+    "        #     y = torch.zeros_like(x)\n",
+    "\n",
+    "        #     for i in range(self.num_experts_per_tok):\n",
+    "        #         # expert_indices is (b, seq_len) with values in [0, num_experts)\n",
+    "        #         expert_indices = topk_indices[..., i]\n",
+    "        #         prob = topk_probs[..., i].unsqueeze(-1)  # (b, seq_len, 1)\n",
+    "\n",
+    "        #         # For each expert, process only the tokens assigned to it\n",
+    "        #         for e in range(self.num_experts):\n",
+    "        #             mask = (expert_indices == e)  # (b, seq_len) boolean mask\n",
+    "        #             if mask.any():\n",
+    "        #                 selected = x[mask]  # (num_tokens_e, emb_dim)\n",
+    "        #                 # Compute FF for expert e\n",
+    "        #                 out = self.fc3[e](torch.nn.functional.silu(self.fc1[e](selected)) * self.fc2[e](selected))\n",
+    "        #                 # Scale by gating prob and scatter back\n",
+    "        #                 y[mask] += prob[mask] * out\n",
+    "        #     return y"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "id": "56715760-37e1-433e-89da-04864c139a9e",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "class RMSNorm(nn.Module):\n",
+    "    def __init__(self, emb_dim, eps=1e-6, bias=False, qwen3_compatible=True):\n",
+    "        super().__init__()\n",
+    "        self.eps = eps\n",
+    "        self.qwen3_compatible = qwen3_compatible\n",
+    "        self.scale = nn.Parameter(torch.ones(emb_dim))\n",
+    "        self.shift = nn.Parameter(torch.zeros(emb_dim)) if bias else None\n",
+    "\n",
+    "    def forward(self, x):\n",
+    "        input_dtype = x.dtype\n",
+    "\n",
+    "        if self.qwen3_compatible:\n",
+    "            x = x.to(torch.float32)\n",
+    "\n",
+    "        variance = x.pow(2).mean(dim=-1, keepdim=True)\n",
+    "        norm_x = x * torch.rsqrt(variance + self.eps)\n",
+    "        norm_x = norm_x * self.scale\n",
+    "\n",
+    "        if self.shift is not None:\n",
+    "            norm_x = norm_x + self.shift\n",
+    "\n",
+    "        return norm_x.to(input_dtype)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "id": "4b9a346f-5826-4083-9162-abd56afc03f0",
+   "metadata": {
+    "id": "4b9a346f-5826-4083-9162-abd56afc03f0"
+   },
+   "outputs": [],
+   "source": [
+    "def compute_rope_params(head_dim, theta_base=10_000, context_length=4096, dtype=torch.float32):\n",
+    "    assert head_dim % 2 == 0, \"Embedding dimension must be even\"\n",
+    "\n",
+    "    # Compute the inverse frequencies\n",
+    "    inv_freq = 1.0 / (theta_base ** (torch.arange(0, head_dim, 2, dtype=dtype)[: (head_dim // 2)].float() / head_dim))\n",
+    "\n",
+    "    # Generate position indices\n",
+    "    positions = torch.arange(context_length, dtype=dtype)\n",
+    "\n",
+    "    # Compute the angles\n",
+    "    angles = positions[:, None] * inv_freq[None, :]  # Shape: (context_length, head_dim // 2)\n",
+    "\n",
+    "    # Expand angles to match the head_dim\n",
+    "    angles = torch.cat([angles, angles], dim=1)  # Shape: (context_length, head_dim)\n",
+    "\n",
+    "    # Precompute sine and cosine\n",
+    "    cos = torch.cos(angles)\n",
+    "    sin = torch.sin(angles)\n",
+    "\n",
+    "    return cos, sin\n",
+    "\n",
+    "\n",
+    "def apply_rope(x, cos, sin, offset=0):\n",
+    "    # x: (batch_size, num_heads, seq_len, head_dim)\n",
+    "    batch_size, num_heads, seq_len, head_dim = x.shape\n",
+    "    assert head_dim % 2 == 0, \"Head dimension must be even\"\n",
+    "\n",
+    "    # Split x into first half and second half\n",
+    "    x1 = x[..., : head_dim // 2]  # First half\n",
+    "    x2 = x[..., head_dim // 2:]  # Second half\n",
+    "\n",
+    "    # Adjust sin and cos shapes\n",
+    "    cos = cos[offset:offset + seq_len, :].unsqueeze(0).unsqueeze(0)  # Shape: (1, 1, seq_len, head_dim)\n",
+    "    sin = sin[offset:offset + seq_len, :].unsqueeze(0).unsqueeze(0)\n",
+    "\n",
+    "    # Apply the rotary transformation\n",
+    "    rotated = torch.cat((-x2, x1), dim=-1)\n",
+    "    x_rotated = (x * cos) + (rotated * sin)\n",
+    "\n",
+    "    # It's ok to use lower-precision after applying cos and sin rotation\n",
+    "    return x_rotated.to(dtype=x.dtype)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "id": "e8169ab5-f976-4222-a2e1-eb1cabf267cb",
+   "metadata": {
+    "id": "e8169ab5-f976-4222-a2e1-eb1cabf267cb"
+   },
+   "outputs": [],
+   "source": [
+    "class GroupedQueryAttention(nn.Module):\n",
+    "    def __init__(\n",
+    "        self, d_in, num_heads, num_kv_groups, head_dim=None, qk_norm=False, dtype=None\n",
+    "    ):\n",
+    "        super().__init__()\n",
+    "        assert num_heads % num_kv_groups == 0, \"num_heads must be divisible by num_kv_groups\"\n",
+    "\n",
+    "        self.num_heads = num_heads\n",
+    "        self.num_kv_groups = num_kv_groups\n",
+    "        self.group_size = num_heads // num_kv_groups\n",
+    "\n",
+    "        if head_dim is None:\n",
+    "            assert d_in % num_heads == 0, \"`d_in` must be divisible by `num_heads` if `head_dim` is not set\"\n",
+    "            head_dim = d_in // num_heads\n",
+    "\n",
+    "        self.head_dim = head_dim\n",
+    "        self.d_out = num_heads * head_dim\n",
+    "\n",
+    "        self.W_query = nn.Linear(d_in, self.d_out, bias=False, dtype=dtype)\n",
+    "        self.W_key = nn.Linear(d_in, num_kv_groups * head_dim, bias=False, dtype=dtype)\n",
+    "        self.W_value = nn.Linear(d_in, num_kv_groups * head_dim, bias=False, dtype=dtype)\n",
+    "\n",
+    "        self.out_proj = nn.Linear(self.d_out, d_in, bias=False, dtype=dtype)\n",
+    "\n",
+    "        if qk_norm:\n",
+    "            self.q_norm = RMSNorm(head_dim, eps=1e-6)\n",
+    "            self.k_norm = RMSNorm(head_dim, eps=1e-6)\n",
+    "        else:\n",
+    "            self.q_norm = self.k_norm = None\n",
+    "\n",
+    "    def forward(self, x, mask, cos, sin, start_pos=0, cache=None):\n",
+    "        b, num_tokens, _ = x.shape\n",
+    "\n",
+    "        # Apply projections\n",
+    "        queries = self.W_query(x)  # (b, num_tokens, num_heads * head_dim)\n",
+    "        keys = self.W_key(x)       # (b, num_tokens, num_kv_groups * head_dim)\n",
+    "        values = self.W_value(x)   # (b, num_tokens, num_kv_groups * head_dim)\n",
+    "\n",
+    "        # Reshape\n",
+    "        queries = queries.view(b, num_tokens, self.num_heads, self.head_dim).transpose(1, 2)\n",
+    "        keys_new = keys.view(b, num_tokens, self.num_kv_groups, self.head_dim).transpose(1, 2)\n",
+    "        values_new = values.view(b, num_tokens, self.num_kv_groups, self.head_dim).transpose(1, 2)\n",
+    "\n",
+    "        # Optional normalization\n",
+    "        if self.q_norm:\n",
+    "            queries = self.q_norm(queries)\n",
+    "        if self.k_norm:\n",
+    "            keys_new = self.k_norm(keys_new)\n",
+    "\n",
+    "        # Apply RoPE\n",
+    "        queries = apply_rope(queries, cos, sin, offset=start_pos)\n",
+    "        keys_new = apply_rope(keys_new, cos, sin, offset=start_pos)\n",
+    "\n",
+    "        if cache is not None:\n",
+    "            prev_k, prev_v = cache\n",
+    "            keys = torch.cat([prev_k, keys_new], dim=2)\n",
+    "            values = torch.cat([prev_v, values_new], dim=2)\n",
+    "            next_cache = (keys, values)\n",
+    "        else:\n",
+    "            start_pos = 0  # reset RoPE\n",
+    "            keys, values = keys_new, values_new\n",
+    "            next_cache = (keys, values)\n",
+    "\n",
+    "        # Expand K and V to match number of heads\n",
+    "        keys = keys.repeat_interleave(self.group_size, dim=1)\n",
+    "        values = values.repeat_interleave(self.group_size, dim=1)\n",
+    "\n",
+    "        # Attention\n",
+    "        attn_scores = queries @ keys.transpose(2, 3)\n",
+    "        attn_scores = attn_scores.masked_fill(mask, -torch.inf)\n",
+    "        attn_weights = torch.softmax(attn_scores / self.head_dim**0.5, dim=-1)\n",
+    "\n",
+    "        context = (attn_weights @ values).transpose(1, 2).reshape(b, num_tokens, self.d_out)\n",
+    "        return self.out_proj(context), next_cache"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "id": "457cb2f8-50c1-4045-8a74-f181bfb5fea9",
+   "metadata": {
+    "id": "457cb2f8-50c1-4045-8a74-f181bfb5fea9"
+   },
+   "outputs": [],
+   "source": [
+    "class TransformerBlock(nn.Module):\n",
+    "    def __init__(self, cfg):\n",
+    "        super().__init__()\n",
+    "        self.att = GroupedQueryAttention(\n",
+    "            d_in=cfg[\"emb_dim\"],\n",
+    "            num_heads=cfg[\"n_heads\"],\n",
+    "            head_dim=cfg[\"head_dim\"],\n",
+    "            num_kv_groups=cfg[\"n_kv_groups\"],\n",
+    "            qk_norm=cfg[\"qk_norm\"],\n",
+    "            dtype=cfg[\"dtype\"]\n",
+    "        )\n",
+    "        if cfg[\"num_experts\"] > 0:\n",
+    "            self.ff = MoEFeedForward(cfg)\n",
+    "        else:\n",
+    "            self.ff = FeedForward(cfg)\n",
+    "        self.norm1 = RMSNorm(cfg[\"emb_dim\"], eps=1e-6)\n",
+    "        self.norm2 = RMSNorm(cfg[\"emb_dim\"], eps=1e-6)\n",
+    "\n",
+    "    def forward(self, x, mask, cos, sin, start_pos=0, cache=None):\n",
+    "        # Shortcut connection for attention block\n",
+    "        shortcut = x\n",
+    "        x = self.norm1(x)\n",
+    "        x, next_cache = self.att(x, mask, cos, sin, start_pos=start_pos, cache=cache)  # Shape [batch_size, num_tokens, emb_size]\n",
+    "        x = x + shortcut  # Add the original input back\n",
+    "\n",
+    "        # Shortcut connection for feed-forward block\n",
+    "        shortcut = x\n",
+    "        x = self.norm2(x)\n",
+    "        x = self.ff(x)\n",
+    "        x = x + shortcut  # Add the original input back\n",
+    "\n",
+    "        return x, next_cache\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "id": "e88de3e3-9f07-42cc-816b-28dbd46e96c4",
+   "metadata": {
+    "id": "e88de3e3-9f07-42cc-816b-28dbd46e96c4"
+   },
+   "outputs": [],
+   "source": [
+    "class Qwen3Model(nn.Module):\n",
+    "    def __init__(self, cfg):\n",
+    "        super().__init__()\n",
+    "\n",
+    "        # Main model parameters\n",
+    "        self.tok_emb = nn.Embedding(cfg[\"vocab_size\"], cfg[\"emb_dim\"], dtype=cfg[\"dtype\"])\n",
+    "\n",
+    "        self.trf_blocks = nn.ModuleList(  # ModuleList since Sequential can only accept one input, and we need `x, mask, cos, sin`\n",
+    "            [TransformerBlock(cfg) for _ in range(cfg[\"n_layers\"])]\n",
+    "        )\n",
+    "\n",
+    "        self.final_norm = RMSNorm(cfg[\"emb_dim\"])\n",
+    "        self.out_head = nn.Linear(cfg[\"emb_dim\"], cfg[\"vocab_size\"], bias=False, dtype=cfg[\"dtype\"])\n",
+    "\n",
+    "        # Reusuable utilities\n",
+    "        if cfg[\"head_dim\"] is None:\n",
+    "            head_dim = cfg[\"emb_dim\"] // cfg[\"n_heads\"]\n",
+    "        else:\n",
+    "            head_dim = cfg[\"head_dim\"]\n",
+    "        cos, sin = compute_rope_params(\n",
+    "            head_dim=head_dim,\n",
+    "            theta_base=cfg[\"rope_base\"],\n",
+    "            context_length=cfg[\"context_length\"]\n",
+    "        )\n",
+    "        self.register_buffer(\"cos\", cos, persistent=False)\n",
+    "        self.register_buffer(\"sin\", sin, persistent=False)\n",
+    "        self.cfg = cfg\n",
+    "        self.current_pos = 0  # Track current position in KV cache\n",
+    "\n",
+    "\n",
+    "    def forward(self, in_idx, cache=None):\n",
+    "        # Forward pass\n",
+    "        tok_embeds = self.tok_emb(in_idx)\n",
+    "        x = tok_embeds\n",
+    "\n",
+    "        num_tokens = x.shape[1]\n",
+    "        if cache is not None:\n",
+    "            pos_start = self.current_pos\n",
+    "            pos_end = pos_start + num_tokens\n",
+    "            self.current_pos = pos_end\n",
+    "            mask = torch.triu(\n",
+    "                torch.ones(pos_end, pos_end, device=x.device, dtype=torch.bool), diagonal=1\n",
+    "            )[pos_start:pos_end, :pos_end]\n",
+    "        else:\n",
+    "            pos_start = 0  # Not strictly necessary but helps torch.compile\n",
+    "            mask = torch.triu(\n",
+    "                torch.ones(num_tokens, num_tokens, device=x.device, dtype=torch.bool), diagonal=1\n",
+    "            )\n",
+    "        # Shape (1, 1, num_tokens, num_tokens) to broadcast across batch and heads\n",
+    "        mask = mask[None, None, :, :]\n",
+    "\n",
+    "        next_cache = []\n",
+    "        for i, block in enumerate(self.trf_blocks):\n",
+    "            blk_cache = cache.get(i) if cache else None\n",
+    "            x, new_blk_cache = block(x, mask, self.cos, self.sin,\n",
+    "                                     start_pos=pos_start,\n",
+    "                                     cache=blk_cache)\n",
+    "            if cache is not None:\n",
+    "                cache.update(i, new_blk_cache)\n",
+    "            next_cache.append(new_blk_cache)\n",
+    "\n",
+    "        x = self.final_norm(x)\n",
+    "        logits = self.out_head(x.to(self.cfg[\"dtype\"]))\n",
+    "        return logits\n",
+    "\n",
+    "    def reset_kv_cache(self):\n",
+    "        self.current_pos = 0"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "id": "bc04d120",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "class KVCache:\n",
+    "    def __init__(self, n_layers):\n",
+    "        self.cache = [None] * n_layers\n",
+    "\n",
+    "    def get(self, layer_idx):\n",
+    "        return self.cache[layer_idx]\n",
+    "\n",
+    "    def update(self, layer_idx, value):\n",
+    "        self.cache[layer_idx] = value\n",
+    "\n",
+    "    def get_all(self):\n",
+    "        return self.cache\n",
+    "\n",
+    "    def reset(self):\n",
+    "        for i in range(len(self.cache)):\n",
+    "            self.cache[i] = None"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "be2d201f-74ad-4d63-ab9c-601b00674a48",
+   "metadata": {
+    "id": "be2d201f-74ad-4d63-ab9c-601b00674a48"
+   },
+   "source": [
+    "&nbsp;\n",
+    "# 2. Initialize model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "id": "caa142fa-b375-4e78-b392-2072ced666f3",
+   "metadata": {
+    "id": "caa142fa-b375-4e78-b392-2072ced666f3"
+   },
+   "outputs": [],
+   "source": [
+    "# Same config for\n",
+    "\n",
+    "# https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct (Qwen3 Coder Flash)\n",
+    "# https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507\n",
+    "# https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507\n",
+    "# https://huggingface.co/Qwen/Qwen3-30B-A3B (original Instruct/Thinking hybrid model)\n",
+    "\n",
+    "QWEN3_CONFIG = {\n",
+    "    \"vocab_size\": 151_936,\n",
+    "    \"context_length\": 262_144,\n",
+    "    \"emb_dim\": 2048,\n",
+    "    \"n_heads\": 32,\n",
+    "    \"n_layers\": 48,\n",
+    "    \"head_dim\": 128,\n",
+    "    \"qk_norm\": True,\n",
+    "    \"n_kv_groups\": 4,\n",
+    "    \"rope_base\": 10_000_000.0,\n",
+    "    \"dtype\": torch.bfloat16,\n",
+    "    \"num_experts\": 128,\n",
+    "    \"num_experts_per_tok\": 8,\n",
+    "        \"moe_intermediate_size\": 768,\n",
+    "}"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "id": "313effd0",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "cuda\n"
+     ]
+    }
+   ],
+   "source": [
+    "if torch.cuda.is_available():\n",
+    "    device = torch.device(\"cuda\")\n",
+    "elif torch.backends.mps.is_available():\n",
+    "    device = torch.device(\"mps\")\n",
+    "else:\n",
+    "    device = torch.device(\"cpu\")\n",
+    "\n",
+    "print(device)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "id": "156253fe-aacd-4da2-8f13-705f05c4b11e",
+   "metadata": {
+    "id": "156253fe-aacd-4da2-8f13-705f05c4b11e"
+   },
+   "outputs": [],
+   "source": [
+    "torch.manual_seed(123)\n",
+    "\n",
+    "with device:\n",
+    "    model = Qwen3Model(QWEN3_CONFIG)\n",
+    "\n",
+    "#model.to(device)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "90aca91d-4bee-45ce-993a-4ec5393abe2b",
+   "metadata": {},
+   "source": [
+    "- A quick check that the forward pass works before continuing (nan values are ok for now since we are using a \"meta\" device upon instantiation to save memory):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "id": "adf0a6b7-b688-42c9-966e-c223d34db99f",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "tensor([[[nan, nan, nan,  ..., nan, nan, nan],\n",
+       "         [nan, nan, nan,  ..., nan, nan, nan],\n",
+       "         [nan, nan, nan,  ..., nan, nan, nan]]], device='cuda:0',\n",
+       "       dtype=torch.bfloat16, grad_fn=<UnsafeViewBackward0>)"
+      ]
+     },
+     "execution_count": 13,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "model(torch.tensor([1, 2, 3]).unsqueeze(0).to(device))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "364e76ca-52f8-4fa5-af37-c4069f9694bc",
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "364e76ca-52f8-4fa5-af37-c4069f9694bc",
+    "outputId": "00d7e983-262e-4c65-f322-f4d999311988"
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Total number of parameters: 30,532,122,624\n",
+      "\n",
+      "Total number of unique parameters: 30,220,957,696\n"
+     ]
+    }
+   ],
+   "source": [
+    "total_params = sum(p.numel() for p in model.parameters())\n",
+    "print(f\"Total number of parameters: {total_params:,}\")\n",
+    "\n",
+    "# Account for weight tying\n",
+    "total_params_normalized = total_params - model.tok_emb.weight.numel()\n",
+    "print(f\"\\nTotal number of unique parameters: {total_params_normalized:,}\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "id": "fd5efb03-5a07-46e8-8607-93ed47549d2b",
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "fd5efb03-5a07-46e8-8607-93ed47549d2b",
+    "outputId": "65c1a95e-b502-4150-9e2e-da619d9053d5"
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "float32 (PyTorch default): 227.73 GB\n",
+      "bfloat16: 113.87 GB\n"
+     ]
+    }
+   ],
+   "source": [
+    "def model_memory_size(model, input_dtype=torch.float32):\n",
+    "    total_params = 0\n",
+    "    total_grads = 0\n",
+    "    for param in model.parameters():\n",
+    "        # Calculate total number of elements per parameter\n",
+    "        param_size = param.numel()\n",
+    "        total_params += param_size\n",
+    "        # Check if gradients are stored for this parameter\n",
+    "        if param.requires_grad:\n",
+    "            total_grads += param_size\n",
+    "\n",
+    "    # Calculate buffer size (non-parameters that require memory)\n",
+    "    total_buffers = sum(buf.numel() for buf in model.buffers())\n",
+    "\n",
+    "    # Size in bytes = (Number of elements) * (Size of each element in bytes)\n",
+    "    # We assume parameters and gradients are stored in the same type as input dtype\n",
+    "    element_size = torch.tensor(0, dtype=input_dtype).element_size()\n",
+    "    total_memory_bytes = (total_params + total_grads + total_buffers) * element_size\n",
+    "\n",
+    "    # Convert bytes to gigabytes\n",
+    "    total_memory_gb = total_memory_bytes / (1024**3)\n",
+    "\n",
+    "    return total_memory_gb\n",
+    "\n",
+    "print(f\"float32 (PyTorch default): {model_memory_size(model, input_dtype=torch.float32):.2f} GB\")\n",
+    "print(f\"bfloat16: {model_memory_size(model, input_dtype=torch.bfloat16):.2f} GB\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "4686eeb7-281f-4c5c-b37a-ed21d0a10427",
+   "metadata": {},
+   "source": [
+    "- Don't be concerned; the model runs fine on an A100 card with 80 GB RAM due to offloading some layers to CPU RAM"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "c172f89f-d301-439f-b809-46169e5f5945",
+   "metadata": {
+    "id": "c172f89f-d301-439f-b809-46169e5f5945"
+   },
+   "source": [
+    "&nbsp;\n",
+    "# 4. Load pretrained weights"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "id": "75166128-5899-4995-9b88-9672e135650e",
+   "metadata": {
+    "id": "75166128-5899-4995-9b88-9672e135650e"
+   },
+   "outputs": [],
+   "source": [
+    "def load_weights_into_qwen(model, param_config, params):\n",
+    "    def assign(left, right, tensor_name=\"unknown\"):\n",
+    "        if left.shape != right.shape:\n",
+    "            raise ValueError(f\"Shape mismatch in tensor '{tensor_name}'. Left: {left.shape}, Right: {right.shape}\")\n",
+    "        return torch.nn.Parameter(right.clone().detach() if isinstance(right, torch.Tensor) else torch.tensor(right))\n",
+    "\n",
+    "    model.tok_emb.weight = assign(model.tok_emb.weight, params[\"model.embed_tokens.weight\"], \"model.embed_tokens.weight\")\n",
+    "\n",
+    "    for l in range(param_config[\"n_layers\"]):\n",
+    "        block = model.trf_blocks[l]\n",
+    "        att = block.att\n",
+    "\n",
+    "        # Q, K, V projections\n",
+    "        att.W_query.weight = assign(\n",
+    "            att.W_query.weight,\n",
+    "            params[f\"model.layers.{l}.self_attn.q_proj.weight\"],\n",
+    "            f\"model.layers.{l}.self_attn.q_proj.weight\"\n",
+    "        )\n",
+    "        att.W_key.weight = assign(\n",
+    "            att.W_key.weight,\n",
+    "            params[f\"model.layers.{l}.self_attn.k_proj.weight\"],\n",
+    "            f\"model.layers.{l}.self_attn.k_proj.weight\"\n",
+    "        )\n",
+    "        att.W_value.weight = assign(\n",
+    "            att.W_value.weight,\n",
+    "            params[f\"model.layers.{l}.self_attn.v_proj.weight\"],\n",
+    "            f\"model.layers.{l}.self_attn.v_proj.weight\"\n",
+    "        )\n",
+    "\n",
+    "        # Output projection\n",
+    "        att.out_proj.weight = assign(\n",
+    "            att.out_proj.weight,\n",
+    "            params[f\"model.layers.{l}.self_attn.o_proj.weight\"],\n",
+    "            f\"model.layers.{l}.self_attn.o_proj.weight\"\n",
+    "        )\n",
+    "\n",
+    "        # QK norms\n",
+    "        if hasattr(att, \"q_norm\") and att.q_norm is not None:\n",
+    "            att.q_norm.scale = assign(\n",
+    "                att.q_norm.scale,\n",
+    "                params[f\"model.layers.{l}.self_attn.q_norm.weight\"],\n",
+    "                f\"model.layers.{l}.self_attn.q_norm.weight\"\n",
+    "            )\n",
+    "        if hasattr(att, \"k_norm\") and att.k_norm is not None:\n",
+    "            att.k_norm.scale = assign(\n",
+    "                att.k_norm.scale,\n",
+    "                params[f\"model.layers.{l}.self_attn.k_norm.weight\"],\n",
+    "                f\"model.layers.{l}.self_attn.k_norm.weight\"\n",
+    "            )\n",
+    "\n",
+    "        # Attention layernorm\n",
+    "        block.norm1.scale = assign(\n",
+    "            block.norm1.scale,\n",
+    "            params[f\"model.layers.{l}.input_layernorm.weight\"],\n",
+    "            f\"model.layers.{l}.input_layernorm.weight\"\n",
+    "        )\n",
+    "\n",
+    "        # Feedforward weights\n",
+    "        if \"num_experts\" in param_config:\n",
+    "            # Load router (gating) weights\n",
+    "            block.ff.gate.weight = assign(\n",
+    "                block.ff.gate.weight,\n",
+    "                params[f\"model.layers.{l}.mlp.gate.weight\"],\n",
+    "                f\"model.layers.{l}.mlp.gate.weight\"\n",
+    "            )\n",
+    "            # Load expert weights\n",
+    "            for e in range(param_config[\"num_experts\"]):\n",
+    "                prefix = f\"model.layers.{l}.mlp.experts.{e}\"\n",
+    "                block.ff.fc1[e].weight = assign(\n",
+    "                    block.ff.fc1[e].weight,\n",
+    "                    params[f\"{prefix}.gate_proj.weight\"],\n",
+    "                    f\"{prefix}.gate_proj.weight\"\n",
+    "                )\n",
+    "                block.ff.fc2[e].weight = assign(\n",
+    "                    block.ff.fc2[e].weight,\n",
+    "                    params[f\"{prefix}.up_proj.weight\"],\n",
+    "                    f\"{prefix}.up_proj.weight\"\n",
+    "                )\n",
+    "                block.ff.fc3[e].weight = assign(\n",
+    "                    block.ff.fc3[e].weight,\n",
+    "                    params[f\"{prefix}.down_proj.weight\"],\n",
+    "                    f\"{prefix}.down_proj.weight\"\n",
+    "                )\n",
+    "                # After assigning weights, move the expert layers from meta to CPU\n",
+    "                block.ff.fc1[e] = block.ff.fc1[e].to(\"cpu\")\n",
+    "                block.ff.fc2[e] = block.ff.fc2[e].to(\"cpu\")\n",
+    "                block.ff.fc3[e] = block.ff.fc3[e].to(\"cpu\")\n",
+    "\n",
+    "        else:\n",
+    "            block.ff.fc1.weight = assign(\n",
+    "                block.ff.fc1.weight,\n",
+    "                params[f\"model.layers.{l}.mlp.gate_proj.weight\"],\n",
+    "                f\"model.layers.{l}.mlp.gate_proj.weight\"\n",
+    "            )\n",
+    "            block.ff.fc2.weight = assign(\n",
+    "                block.ff.fc2.weight,\n",
+    "                params[f\"model.layers.{l}.mlp.up_proj.weight\"],\n",
+    "                f\"model.layers.{l}.mlp.up_proj.weight\"\n",
+    "            )\n",
+    "            block.ff.fc3.weight = assign(\n",
+    "                block.ff.fc3.weight,\n",
+    "                params[f\"model.layers.{l}.mlp.down_proj.weight\"],\n",
+    "                f\"model.layers.{l}.mlp.down_proj.weight\"\n",
+    "            )\n",
+    "\n",
+    "        block.norm2.scale = assign(\n",
+    "            block.norm2.scale,\n",
+    "            params[f\"model.layers.{l}.post_attention_layernorm.weight\"],\n",
+    "            f\"model.layers.{l}.post_attention_layernorm.weight\"\n",
+    "        )\n",
+    "\n",
+    "    # Final normalization and output head\n",
+    "    model.final_norm.scale = assign(model.final_norm.scale, params[\"model.norm.weight\"], \"model.norm.weight\")\n",
+    "\n",
+    "    if \"lm_head.weight\" in params:\n",
+    "        model.out_head.weight = assign(model.out_head.weight, params[\"lm_head.weight\"], \"lm_head.weight\")\n",
+    "    else:\n",
+    "        # Model uses weight tying, hence we reuse the embedding layer weights here\n",
+    "        print(\"Model uses weight tying.\")\n",
+    "        model.out_head.weight = assign(model.out_head.weight, params[\"model.embed_tokens.weight\"], \"model.embed_tokens.weight\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 17,
+   "id": "699cb1b8-a67d-49fb-80a6-0dad9d81f392",
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/",
+     "height": 17,
+     "referenced_widgets": [
+      "9881b6995c3f49dc89e6992fd9ab660b",
+      "17a3174e65c54476b2e0d1faf8f011ca",
+      "1bbf2e62c0754d1593beb4105a7f1ac1",
+      "b82112e1dec645d98aa1c1ba64abcb61",
+      "271e2bd6a35e4a8b92de8697f7c0be5f",
+      "90a79523187446dfa692723b2e5833a7",
+      "431ffb83b8c14bf182f0430e07ea6154",
+      "a8f1b72a33dd4b548de23fbd95e0da18",
+      "25cc36132d384189acfbecc59483134b",
+      "bfd06423ad544218968648016e731a46",
+      "d029630b63ff44cf807ade428d2eb421"
+     ]
+    },
+    "id": "699cb1b8-a67d-49fb-80a6-0dad9d81f392",
+    "outputId": "55b2f28c-142f-4698-9d23-d27456d3ed6d"
+   },
+   "outputs": [
+    {
+     "data": {
+      "application/vnd.jupyter.widget-view+json": {
+       "model_id": "acdfb3a707444d7691bc8f1b053224b1",
+       "version_major": 2,
+       "version_minor": 0
+      },
+      "text/plain": [
+       "Fetching 27 files:   0%|          | 0/27 [00:00<?, ?it/s]"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "import json\n",
+    "import os\n",
+    "from pathlib import Path\n",
+    "from safetensors.torch import load_file\n",
+    "from huggingface_hub import snapshot_download\n",
+    "\n",
+    "repo_id = \"Qwen/Qwen3-30B-A3B\"  # Original Instruct/Thinking hybrind model\n",
+    "repo_id = \"Qwen/Qwen3-235B-A22B-Instruct-2507\"  # New instruct model\n",
+    "repo_id = \"Qwen/Qwen3-30B-A3B-Thinking-2507\"  # New thinking model\n",
+    "repo_id = \"Qwen/Qwen3-Coder-30B-A3B-Instruct\"  # (Qwen3 Coder Flash)\n",
+    "\n",
+    "local_dir = Path(repo_id).parts[-1]\n",
+    "\n",
+    "repo_dir = snapshot_download(repo_id=repo_id, local_dir=local_dir)\n",
+    "index_path = os.path.join(repo_dir, \"model.safetensors.index.json\")\n",
+    "with open(index_path, \"r\") as f:\n",
+    "    index = json.load(f)\n",
+    "\n",
+    "weights_dict = {}\n",
+    "for filename in set(index[\"weight_map\"].values()):\n",
+    "    shard_path = os.path.join(repo_dir, filename)\n",
+    "    shard = load_file(shard_path)\n",
+    "    weights_dict.update(shard)\n",
+    "\n",
+    "load_weights_into_qwen(model, QWEN3_CONFIG, weights_dict)\n",
+    "model.to(device);"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "6b345491-3510-4397-92d3-cd0a3fa3deee",
+   "metadata": {},
+   "source": [
+    "&nbsp;\n",
+    "# 4. Load tokenizer"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 18,
+   "id": "b68ab489-48e5-471e-a814-56cda2d60f81",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import re\n",
+    "from tokenizers import Tokenizer\n",
+    "\n",
+    "\n",
+    "class Qwen3Tokenizer:\n",
+    "    _SPECIALS = [\n",
+    "        \"<|endoftext|>\",\n",
+    "        \"<|im_start|>\", \"<|im_end|>\",\n",
+    "        \"<|object_ref_start|>\", \"<|object_ref_end|>\",\n",
+    "        \"<|box_start|>\", \"<|box_end|>\",\n",
+    "        \"<|quad_start|>\", \"<|quad_end|>\",\n",
+    "        \"<|vision_start|>\", \"<|vision_end|>\",\n",
+    "        \"<|vision_pad|>\", \"<|image_pad|>\", \"<|video_pad|>\",\n",
+    "    ]\n",
+    "    _SPLIT_RE = re.compile(r\"(<\\|[^>]+?\\|>)\")\n",
+    "\n",
+    "    def __init__(self, tokenizer_file_path=\"tokenizer.json\", repo_id=None,\n",
+    "                 apply_chat_template=True, add_generation_prompt=False, add_thinking=False):\n",
+    "\n",
+    "        self.apply_chat_template = apply_chat_template\n",
+    "        self.add_generation_prompt = add_generation_prompt\n",
+    "        self.add_thinking = add_thinking\n",
+    "\n",
+    "        tok_file = Path(tokenizer_file_path)\n",
+    "        self._tok = Tokenizer.from_file(str(tok_file))\n",
+    "        self._special_to_id = {t: self._tok.token_to_id(t) for t in self._SPECIALS}\n",
+    "\n",
+    "        self.pad_token_id = self._special_to_id.get(\"<|endoftext|>\")\n",
+    "        self.eos_token_id = self.pad_token_id\n",
+    "\n",
+    "        if repo_id and \"Base\" not in repo_id:\n",
+    "            eos_token = \"<|im_end|>\"\n",
+    "        else:\n",
+    "            eos_token = \"<|endoftext|>\"\n",
+    "        if eos_token in self._special_to_id:\n",
+    "            self.eos_token_id = self._special_to_id[eos_token]\n",
+    "\n",
+    "    def encode(self, text, chat_wrapped=None):\n",
+    "        if chat_wrapped is None:\n",
+    "            chat_wrapped = self.apply_chat_template\n",
+    "\n",
+    "        stripped = text.strip()\n",
+    "        if stripped in self._special_to_id and \"\\n\" not in stripped:\n",
+    "            return [self._special_to_id[stripped]]\n",
+    "\n",
+    "        if chat_wrapped:\n",
+    "            text = self._wrap_chat(text)\n",
+    "\n",
+    "        ids = []\n",
+    "        for part in filter(None, self._SPLIT_RE.split(text)):\n",
+    "            if part in self._special_to_id:\n",
+    "                ids.append(self._special_to_id[part])\n",
+    "            else:\n",
+    "                ids.extend(self._tok.encode(part).ids)\n",
+    "        return ids\n",
+    "\n",
+    "    def decode(self, ids):\n",
+    "        return self._tok.decode(ids, skip_special_tokens=False)\n",
+    "\n",
+    "    def _wrap_chat(self, user_msg):\n",
+    "        s = f\"<|im_start|>user\\n{user_msg}<|im_end|>\\n\"\n",
+    "        if self.add_generation_prompt:\n",
+    "            s += \"<|im_start|>assistant\"\n",
+    "            if self.add_thinking:\n",
+    "                s += \"\\n\"\n",
+    "            else:\n",
+    "                s += \"\\n<think>\\n\\n</think>\\n\\n\"\n",
+    "        return s"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 19,
+   "id": "7b6df8bc-7308-468e-93ce-2d5529ea7866",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "tokenizer_file_path = f\"{Path(repo_id).parts[-1]}/tokenizer.json\"\n",
+    "\n",
+    "tokenizer = Qwen3Tokenizer(\n",
+    "    tokenizer_file_path=tokenizer_file_path,\n",
+    "    repo_id=repo_id,\n",
+    "    add_generation_prompt=True,\n",
+    "    add_thinking=True\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 21,
+   "id": "1946b534-e3af-431a-a222-391a60bfa892",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'<|im_start|>user\\nImplement a binary search function in Python<|im_end|>\\n<|im_start|>assistant\\n'"
+      ]
+     },
+     "execution_count": 21,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "# prompt = \"Give me a short introduction to large language models.\"\n",
+    "prompt = \"Implement a binary search function in Python\"\n",
+    "\n",
+    "\n",
+    "input_token_ids = tokenizer.encode(prompt)\n",
+    "text = tokenizer.decode(input_token_ids)\n",
+    "text"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "57d07df1-4401-4792-b549-7c4cc5632323",
+   "metadata": {
+    "id": "57d07df1-4401-4792-b549-7c4cc5632323"
+   },
+   "source": [
+    "&nbsp;\n",
+    "# 5. Generate text"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 22,
+   "id": "60b9fc72",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def generate_text_basic_stream(model, token_ids, max_new_tokens, eos_token_id=None, context_size=None):\n",
+    "    model.eval()\n",
+    "\n",
+    "    with torch.no_grad():\n",
+    "        cache = KVCache(n_layers=model.cfg[\"n_layers\"])\n",
+    "        model.reset_kv_cache()\n",
+    "\n",
+    "        # Prime the cache with the initial context\n",
+    "        logits = model(token_ids, cache=cache)\n",
+    "\n",
+    "        for _ in range(max_new_tokens):\n",
+    "            next_token = torch.argmax(logits[:, -1], dim=-1, keepdim=True)\n",
+    "\n",
+    "            if eos_token_id is not None and torch.all(next_token == eos_token_id):\n",
+    "                break\n",
+    "\n",
+    "            yield next_token\n",
+    "\n",
+    "            token_ids = torch.cat([token_ids, next_token], dim=1)\n",
+    "\n",
+    "            # Feed only the new token to the model; cache handles history\n",
+    "            logits = model(next_token, cache=cache)"
+   ]
+  },
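+  {
+   "cell_type": "markdown",
+   "id": "7f3a9c21-1b2e-4d5f-9a6c-8e0d1f2a3b4c",
+   "metadata": {},
+   "source": [
+    "- As a rough back-of-the-envelope for why the cache helps: without it, generating T new tokens from a prompt of length P re-runs the model on sequences of length P+1, P+2, ..., P+T, i.e., roughly T*P + T^2/2 token positions in total; with the KV cache, we pay one full pass over the P prompt tokens plus T single-token passes, i.e., roughly P + T positions (at the cost of the extra cache bookkeeping)"
+   ]
+  },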
+  {
+   "cell_type": "code",
+   "execution_count": 23,
+   "id": "a5b30753",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Here's a comprehensive implementation of binary search in Python with both iterative and recursive approaches:\n",
+      "\n",
+      "## Iterative Binary Search\n",
+      "\n",
+      "```python\n",
+      "def binary_search(arr, target):\n",
+      "    \"\"\"\n",
+      "    Iterative binary search implementation\n",
+      "    \n",
+      "    Args:\n",
+      "        arr: Sorted list of elements\n",
+      "        target: Element to search for\n",
+      "    \n",
+      "    Returns:\n",
+      "        int: Index of target if found, -1 if not found\n",
+      "    \n",
+      "    Time Complexity: O(log n)\n",
+      "    Space Complexity: O(1)\n",
+      "    \"\"\"\n",
+      "    left = 0\n",
+      "    right = len(arr) - 1\n",
+      "    \n",
+      "    while left <= right:\n",
+      "        # Calculate middle index (avoiding potential overflow)\n",
+      "        mid = left + (right - left) // 2\n",
+      "        \n",
+      "        if arr[mid] == target:\n",
+      "            return mid\n",
+      "        elif arr[mid] < target:\n",
+      "            left = mid + 1\n",
+      "        else:\n",
+      "            right = mid - 1\n",
+      "    \n",
+      "    return -1  # Target not found\n",
+      "```\n",
+      "\n",
+      "## Recursive Binary Search\n",
+      "\n"
+     ]
+    }
+   ],
+   "source": [
+    "input_token_ids_tensor = torch.tensor(input_token_ids, device=device).unsqueeze(0)\n",
+    "\n",
+    "\n",
+    "for token in generate_text_basic_stream(\n",
+    "    model=model,\n",
+    "    token_ids=input_token_ids_tensor,\n",
+    "    max_new_tokens=200,\n",
+    "    eos_token_id=tokenizer.eos_token_id\n",
+    "):\n",
+    "    token_id = token.squeeze(0).tolist()\n",
+    "    print(\n",
+    "        tokenizer.decode(token_id),\n",
+    "        end=\"\",\n",
+    "        flush=True\n",
+    "    )"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "549324d6-5c71-4147-ae21-2e67675faa3d",
+   "metadata": {
+    "id": "549324d6-5c71-4147-ae21-2e67675faa3d"
+   },
+   "source": [
+    "&nbsp;\n",
+    "# What's next?"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "e6edaaae-2de1-406c-8ffa-897cdfa3808c",
+   "metadata": {
+    "id": "e6edaaae-2de1-406c-8ffa-897cdfa3808c"
+   },
+   "source": [
+    "- Check out the [README.md](./README.md), to use this model via the `llms_from_scratch` package\n",
+    "- For those interested in a comprehensive guide on building a large language model from scratch and gaining a deeper understanding of its mechanics, you might like my [Build a Large Language Model (From Scratch)](http://mng.bz/orYv)\n",
+    "\n",
+    "<a href=\"http://mng.bz/orYv\"><img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/cover-small.webp\" width=\"100px\"></a>"
+   ]
+  }
+ ],
+ "metadata": {
+  "accelerator": "GPU",
+  "colab": {
+   "gpuType": "A100",
+   "provenance": []
+  },
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.16"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}

+ 1220 - 0
ch05/11_qwen3/standalone-qwen3-moe.ipynb

@@ -0,0 +1,1220 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "e1b280ab-b61f-4d1a-bf7e-44e5f9ed3a5c",
+   "metadata": {
+    "id": "e1b280ab-b61f-4d1a-bf7e-44e5f9ed3a5c"
+   },
+   "source": [
+    "<table style=\"width:100%\">\n",
+    "<tr>\n",
+    "<td style=\"vertical-align:middle; text-align:left;\">\n",
+    "<font size=\"2\">\n",
+    "Supplementary code for the <a href=\"http://mng.bz/orYv\">Build a Large Language Model From Scratch</a> book by <a href=\"https://sebastianraschka.com\">Sebastian Raschka</a><br>\n",
+    "<br>Code repository: <a href=\"https://github.com/rasbt/LLMs-from-scratch\">https://github.com/rasbt/LLMs-from-scratch</a>\n",
+    "</font>\n",
+    "</td>\n",
+    "<td style=\"vertical-align:middle; text-align:left;\">\n",
+    "<a href=\"http://mng.bz/orYv\"><img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/cover-small.webp\" width=\"100px\"></a>\n",
+    "</td>\n",
+    "</tr>\n",
+    "</table>"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "efde77f2-6af3-4781-8597-89ecd3f41a52",
+   "metadata": {
+    "id": "efde77f2-6af3-4781-8597-89ecd3f41a52"
+   },
+   "source": [
+    "# Qwen3 Mixture-of-Experts From Scratch (A Standalone Notebook)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "55cdef4d-de59-4a65-89f9-fa2a8ef3471d",
+   "metadata": {
+    "id": "55cdef4d-de59-4a65-89f9-fa2a8ef3471d"
+   },
+   "source": [
+    "- This notebook is purposefully minimal and focuses on the code to implement Qwen3-30B-A3B model (with support for **Coder**, **Instruct** and **Thinking** variants); for more information about this model, please see the original blog post, technical report, and model hub pages:\n",
+    "  - [Qwen3: Think Deeper, Act Faster](https://qwenlm.github.io/blog/qwen3/)\n",
+    "  - [Qwen3 Technical Report](https://arxiv.org/abs/2505.09388)\n",
+    "  - https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct (Qwen3 Coder Flash)\n",
+    "  - https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507 (new thinking model)\n",
+    "  - https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507 (new instruct model)\n",
+    "  - https://huggingface.co/Qwen/Qwen3-30B-A3B (original Instruct/Thinking hybrid model)\n",
+    "- Many architectural components in Qwen3 are similar to Llama 3; for a step-by-step guide that explains the individual components and the relationship between GPT and the components used here, you may like the GPT-to-Llama conversion notebooks:\n",
+    "  - [Converting a From-Scratch GPT Architecture to Llama 2](../07_gpt_to_llama/converting-gpt-to-llama2.ipynb)\n",
+    "  - [Converting Llama 2 to Llama 3.2 From Scratch](../07_gpt_to_llama/converting-llama2-to-llama3.ipynb)\n",
+    "  \n",
+    "\n",
+    "**By default, this notebook runs Qwen3-Coder-30B-A3B-Instruct (aka Qwen3 Coder Flash) and requires 80 GB of VRAM (e.g., a single A100 or H100). Note that [this related KV-cache notebook](standalone-qwen3-moe-plus-kvcache.ipynb) adds more code complexity but runs about 3x faster.**\n",
+    "\n",
+    "<br>\n",
+    "\n",
+    "<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/qwen/qwen3-coder-flash-overview.webp?123\" width=\"600px\">\n",
+    "\n",
+    "<br>\n",
+    "  \n",
+    "- About the code:\n",
+    "  - all code is my own code, mapping the Qwen3 architecture onto the model code implemented in my [Build A Large Language Model (From Scratch)](http://mng.bz/orYv) book; the code is released under a permissive open-source Apache 2.0 license (see [LICENSE.txt](https://github.com/rasbt/LLMs-from-scratch/blob/main/LICENSE.txt))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "7c201adb-747e-437b-9a62-442802941e01",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# pip install -r https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/refs/heads/main/ch05/07_gpt_to_llama/requirements-extra.txt"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "id": "dd1b65a8-4301-444a-bd7c-a6f2bd1df9df",
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "dd1b65a8-4301-444a-bd7c-a6f2bd1df9df",
+    "outputId": "4f762354-e0a3-4cc2-e5d4-e61a227a202c"
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "huggingface_hub version: 0.34.3\n",
+      "tokenizers version: 0.21.4\n",
+      "torch version: 2.7.1+cu128\n"
+     ]
+    }
+   ],
+   "source": [
+    "from importlib.metadata import version\n",
+    "\n",
+    "pkgs = [\n",
+    "    \"huggingface_hub\",  # to download pretrained weights\n",
+    "    \"tokenizers\",       # to implement the tokenizer\n",
+    "    \"torch\",            # to implement the model\n",
+    "]\n",
+    "for p in pkgs:\n",
+    "    print(f\"{p} version: {version(p)}\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "653410a6-dd2b-4eb2-a722-23d9782e726d",
+   "metadata": {
+    "id": "653410a6-dd2b-4eb2-a722-23d9782e726d"
+   },
+   "source": [
+    "&nbsp;\n",
+    "# 1. Architecture code"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "id": "82076c21-9331-4dcd-b017-42b046cf1a60",
+   "metadata": {
+    "id": "82076c21-9331-4dcd-b017-42b046cf1a60"
+   },
+   "outputs": [],
+   "source": [
+    "import torch\n",
+    "import torch.nn as nn\n",
+    "\n",
+    "\n",
+    "class FeedForward(nn.Module):\n",
+    "    def __init__(self, cfg):\n",
+    "        super().__init__()\n",
+    "        self.fc1 = nn.Linear(cfg[\"emb_dim\"], cfg[\"hidden_dim\"], dtype=cfg[\"dtype\"], bias=False)\n",
+    "        self.fc2 = nn.Linear(cfg[\"emb_dim\"], cfg[\"hidden_dim\"], dtype=cfg[\"dtype\"], bias=False)\n",
+    "        self.fc3 = nn.Linear(cfg[\"hidden_dim\"], cfg[\"emb_dim\"], dtype=cfg[\"dtype\"], bias=False)\n",
+    "\n",
+    "    def forward(self, x):\n",
+    "        x_fc1 = self.fc1(x)\n",
+    "        x_fc2 = self.fc2(x)\n",
+    "        x = nn.functional.silu(x_fc1) * x_fc2\n",
+    "        return self.fc3(x)\n",
+    "\n",
+    "\n",
+    "class MoEFeedForward(nn.Module):\n",
+    "    def __init__(self, cfg):\n",
+    "        super().__init__()\n",
+    "        self.num_experts_per_tok = cfg[\"num_experts_per_tok\"]\n",
+    "        self.num_experts = cfg[\"num_experts\"]\n",
+    "        self.gate = nn.Linear(cfg[\"emb_dim\"], cfg[\"num_experts\"], bias=False, dtype=cfg[\"dtype\"])\n",
+    "\n",
+    "        meta_device = torch.device(\"meta\")  # to reduce memory pressure and only load them when used (trades compute for memory)\n",
+    "        self.fc1 = nn.ModuleList([nn.Linear(cfg[\"emb_dim\"], cfg[\"moe_intermediate_size\"], bias=False, dtype=cfg[\"dtype\"], device=meta_device)\n",
+    "                                  for _ in range(cfg[\"num_experts\"])])\n",
+    "        self.fc2 = nn.ModuleList([nn.Linear(cfg[\"emb_dim\"], cfg[\"moe_intermediate_size\"], bias=False, dtype=cfg[\"dtype\"], device=meta_device)\n",
+    "                                  for _ in range(cfg[\"num_experts\"])])\n",
+    "        self.fc3 = nn.ModuleList([nn.Linear(cfg[\"moe_intermediate_size\"], cfg[\"emb_dim\"], bias=False, dtype=cfg[\"dtype\"], device=meta_device)\n",
+    "                                  for _ in range(cfg[\"num_experts\"])])\n",
+    "\n",
+    "    def forward(self, x):\n",
+    "        b, seq_len, embed_dim = x.shape\n",
+    "        scores = self.gate(x)  # (b, seq_len, num_experts)\n",
+    "        topk_scores, topk_indices = torch.topk(scores, self.num_experts_per_tok, dim=-1)\n",
+    "        topk_probs = torch.softmax(topk_scores, dim=-1)\n",
+    "        \n",
+    "        expert_outputs = []\n",
+    "        for e in range(self.num_experts):\n",
+    "            hidden = torch.nn.functional.silu(self.fc1[e](x)) * self.fc2[e](x)\n",
+    "            out = self.fc3[e](hidden)\n",
+    "            expert_outputs.append(out.unsqueeze(-2))\n",
+    "        expert_outputs = torch.cat(expert_outputs, dim=-2)  # (b, t, num_experts, emb_dim)\n",
+    "\n",
+    "        gating_probs = torch.zeros_like(scores)\n",
+    "\n",
+    "        for i in range(self.num_experts_per_tok):\n",
+    "            indices = topk_indices[..., i:i+1]\n",
+    "            prob = topk_probs[..., i:i+1]\n",
+    "            gating_probs.scatter_(dim=-1, index=indices, src=prob)\n",
+    "        gating_probs = gating_probs.unsqueeze(-1)  # (b, t, num_experts, 1)\n",
+    "        \n",
+    "        # Weighted sum over experts\n",
+    "        y = (gating_probs * expert_outputs).sum(dim=-2)\n",
+    "        return y\n",
+    "\n",
+    "\n",
+    "        # For some reason, the version below is slower than the naive version\n",
+    "        # above that computes all experts, even the unused ones\n",
+    "\n",
+    "        # def forward(self, x):\n",
+    "        #     scores = self.gate(x)  # (b, seq_len, num_experts)\n",
+    "        #     topk_scores, topk_indices = torch.topk(scores, self.num_experts_per_tok, dim=-1)\n",
+    "        #     topk_probs = torch.softmax(topk_scores, dim=-1)\n",
+    "        #     y = torch.zeros_like(x)\n",
+    "\n",
+    "        #     for i in range(self.num_experts_per_tok):\n",
+    "        #         # expert_indices is (b, seq_len) with values in [0, num_experts)\n",
+    "        #         expert_indices = topk_indices[..., i]\n",
+    "        #         prob = topk_probs[..., i].unsqueeze(-1)  # (b, seq_len, 1)\n",
+    "\n",
+    "        #         # For each expert, process only the tokens assigned to it\n",
+    "        #         for e in range(self.num_experts):\n",
+    "        #             mask = (expert_indices == e)  # (b, seq_len) boolean mask\n",
+    "        #             if mask.any():\n",
+    "        #                 selected = x[mask]  # (num_tokens_e, emb_dim)\n",
+    "        #                 # Compute FF for expert e\n",
+    "        #                 out = self.fc3[e](torch.nn.functional.silu(self.fc1[e](selected)) * self.fc2[e](selected))\n",
+    "        #                 # Scale by gating prob and scatter back\n",
+    "        #                 y[mask] += prob[mask] * out\n",
+    "        #     return y"
+   ]
+  },
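+  {
+   "cell_type": "markdown",
+   "id": "2c8e4f10-6a3b-4d9e-8f1c-5b7a9d0e2f13",
+   "metadata": {},
+   "source": [
+    "- To make the routing logic above more concrete, here is a small sketch with toy sizes (3 tokens, 4 experts, top-2; the numbers are arbitrary and not the Qwen3 settings) that reproduces the same top-k gating steps on a tiny tensor:\n",
+    "\n",
+    "```python\n",
+    "import torch\n",
+    "\n",
+    "torch.manual_seed(0)\n",
+    "b, seq_len, num_experts, top_k = 1, 3, 4, 2\n",
+    "scores = torch.randn(b, seq_len, num_experts)    # router logits per token\n",
+    "topk_scores, topk_indices = torch.topk(scores, top_k, dim=-1)\n",
+    "topk_probs = torch.softmax(topk_scores, dim=-1)  # renormalize over the selected experts only\n",
+    "\n",
+    "gating_probs = torch.zeros_like(scores)\n",
+    "for i in range(top_k):\n",
+    "    gating_probs.scatter_(dim=-1, index=topk_indices[..., i:i+1], src=topk_probs[..., i:i+1])\n",
+    "\n",
+    "print(gating_probs)  # each token row has exactly top_k nonzero entries that sum to 1\n",
+    "```"
+   ]
+  },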
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "id": "56715760-37e1-433e-89da-04864c139a9e",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "class RMSNorm(nn.Module):\n",
+    "    def __init__(self, emb_dim, eps=1e-6, bias=False, qwen3_compatible=True):\n",
+    "        super().__init__()\n",
+    "        self.eps = eps\n",
+    "        self.qwen3_compatible = qwen3_compatible\n",
+    "        self.scale = nn.Parameter(torch.ones(emb_dim))\n",
+    "        self.shift = nn.Parameter(torch.zeros(emb_dim)) if bias else None\n",
+    "\n",
+    "    def forward(self, x):\n",
+    "        input_dtype = x.dtype\n",
+    "\n",
+    "        if self.qwen3_compatible:\n",
+    "            x = x.to(torch.float32)\n",
+    "\n",
+    "        variance = x.pow(2).mean(dim=-1, keepdim=True)\n",
+    "        norm_x = x * torch.rsqrt(variance + self.eps)\n",
+    "        norm_x = norm_x * self.scale\n",
+    "\n",
+    "        if self.shift is not None:\n",
+    "            norm_x = norm_x + self.shift\n",
+    "\n",
+    "        return norm_x.to(input_dtype)"
+   ]
+  },
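+  {
+   "cell_type": "markdown",
+   "id": "9d4b7e62-3c1a-4f8d-b5e9-0a2c6d8f1e37",
+   "metadata": {},
+   "source": [
+    "- In equation form, the layer above computes $\\mathrm{RMSNorm}(x) = \\frac{x}{\\sqrt{\\mathrm{mean}(x^2) + \\epsilon}} \\cdot \\gamma$ (plus an optional shift $\\beta$), where the mean is taken over the embedding dimension and $\\gamma$ is the learnable `scale` parameter; the upcast to `float32` (controlled by `qwen3_compatible`) keeps the normalization numerically close to the reference implementation"
+   ]
+  },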
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "id": "4b9a346f-5826-4083-9162-abd56afc03f0",
+   "metadata": {
+    "id": "4b9a346f-5826-4083-9162-abd56afc03f0"
+   },
+   "outputs": [],
+   "source": [
+    "def compute_rope_params(head_dim, theta_base=10_000, context_length=4096, dtype=torch.float32):\n",
+    "    assert head_dim % 2 == 0, \"Embedding dimension must be even\"\n",
+    "\n",
+    "    # Compute the inverse frequencies\n",
+    "    inv_freq = 1.0 / (theta_base ** (torch.arange(0, head_dim, 2, dtype=dtype)[: (head_dim // 2)].float() / head_dim))\n",
+    "\n",
+    "    # Generate position indices\n",
+    "    positions = torch.arange(context_length, dtype=dtype)\n",
+    "\n",
+    "    # Compute the angles\n",
+    "    angles = positions[:, None] * inv_freq[None, :]  # Shape: (context_length, head_dim // 2)\n",
+    "\n",
+    "    # Expand angles to match the head_dim\n",
+    "    angles = torch.cat([angles, angles], dim=1)  # Shape: (context_length, head_dim)\n",
+    "\n",
+    "    # Precompute sine and cosine\n",
+    "    cos = torch.cos(angles)\n",
+    "    sin = torch.sin(angles)\n",
+    "\n",
+    "    return cos, sin\n",
+    "\n",
+    "\n",
+    "def apply_rope(x, cos, sin):\n",
+    "    # x: (batch_size, num_heads, seq_len, head_dim)\n",
+    "    batch_size, num_heads, seq_len, head_dim = x.shape\n",
+    "    assert head_dim % 2 == 0, \"Head dimension must be even\"\n",
+    "\n",
+    "    # Split x into first half and second half\n",
+    "    x1 = x[..., : head_dim // 2]  # First half\n",
+    "    x2 = x[..., head_dim // 2 :]  # Second half\n",
+    "\n",
+    "    # Adjust sin and cos shapes\n",
+    "    cos = cos[:seq_len, :].unsqueeze(0).unsqueeze(0)  # Shape: (1, 1, seq_len, head_dim)\n",
+    "    sin = sin[:seq_len, :].unsqueeze(0).unsqueeze(0)\n",
+    "\n",
+    "    # Apply the rotary transformation\n",
+    "    rotated = torch.cat((-x2, x1), dim=-1)\n",
+    "    x_rotated = (x * cos) + (rotated * sin)\n",
+    "\n",
+    "    # It's ok to use lower-precision after applying cos and sin rotation\n",
+    "    return x_rotated.to(dtype=x.dtype)"
+   ]
+  },
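+  {
+   "cell_type": "markdown",
+   "id": "5e1f8a24-7b9c-4d3e-a6f0-2c4b6d8e0a19",
+   "metadata": {},
+   "source": [
+    "- A quick, optional sanity check with toy shapes (chosen arbitrarily): since RoPE only rotates pairs of dimensions, the per-position vector norms should be unchanged:\n",
+    "\n",
+    "```python\n",
+    "cos, sin = compute_rope_params(head_dim=16, context_length=8)\n",
+    "x = torch.randn(1, 2, 8, 16)  # (batch, num_heads, seq_len, head_dim)\n",
+    "x_rot = apply_rope(x, cos, sin)\n",
+    "print(x_rot.shape)                                                    # torch.Size([1, 2, 8, 16])\n",
+    "print(torch.allclose(x.norm(dim=-1), x_rot.norm(dim=-1), atol=1e-5))  # True, rotation preserves norms\n",
+    "```"
+   ]
+  },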
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "id": "e8169ab5-f976-4222-a2e1-eb1cabf267cb",
+   "metadata": {
+    "id": "e8169ab5-f976-4222-a2e1-eb1cabf267cb"
+   },
+   "outputs": [],
+   "source": [
+    "class GroupedQueryAttention(nn.Module):\n",
+    "    def __init__(\n",
+    "        self, d_in, num_heads, num_kv_groups, head_dim=None, qk_norm=False, dtype=None\n",
+    "    ):\n",
+    "        super().__init__()\n",
+    "        assert num_heads % num_kv_groups == 0, \"num_heads must be divisible by num_kv_groups\"\n",
+    "\n",
+    "        self.num_heads = num_heads\n",
+    "        self.num_kv_groups = num_kv_groups\n",
+    "        self.group_size = num_heads // num_kv_groups\n",
+    "\n",
+    "        if head_dim is None:\n",
+    "            assert d_in % num_heads == 0, \"`d_in` must be divisible by `num_heads` if `head_dim` is not set\"\n",
+    "            head_dim = d_in // num_heads\n",
+    "\n",
+    "        self.head_dim = head_dim\n",
+    "        self.d_out = num_heads * head_dim\n",
+    "\n",
+    "        self.W_query = nn.Linear(d_in, self.d_out, bias=False, dtype=dtype)\n",
+    "        self.W_key = nn.Linear(d_in, num_kv_groups * head_dim, bias=False, dtype=dtype)\n",
+    "        self.W_value = nn.Linear(d_in, num_kv_groups * head_dim, bias=False, dtype=dtype)\n",
+    "\n",
+    "        self.out_proj = nn.Linear(self.d_out, d_in, bias=False, dtype=dtype)\n",
+    "\n",
+    "        if qk_norm:\n",
+    "            self.q_norm = RMSNorm(head_dim, eps=1e-6)\n",
+    "            self.k_norm = RMSNorm(head_dim, eps=1e-6)\n",
+    "        else:\n",
+    "            self.q_norm = self.k_norm = None\n",
+    "\n",
+    "    def forward(self, x, mask, cos, sin):\n",
+    "        b, num_tokens, _ = x.shape\n",
+    "\n",
+    "        # Apply projections\n",
+    "        queries = self.W_query(x)  # (b, num_tokens, num_heads * head_dim)\n",
+    "        keys = self.W_key(x)       # (b, num_tokens, num_kv_groups * head_dim)\n",
+    "        values = self.W_value(x)   # (b, num_tokens, num_kv_groups * head_dim)\n",
+    "\n",
+    "        # Reshape\n",
+    "        queries = queries.view(b, num_tokens, self.num_heads, self.head_dim).transpose(1, 2)\n",
+    "        keys = keys.view(b, num_tokens, self.num_kv_groups, self.head_dim).transpose(1, 2)\n",
+    "        values = values.view(b, num_tokens, self.num_kv_groups, self.head_dim).transpose(1, 2)\n",
+    "\n",
+    "        # Optional normalization\n",
+    "        if self.q_norm:\n",
+    "            queries = self.q_norm(queries)\n",
+    "        if self.k_norm:\n",
+    "            keys = self.k_norm(keys)\n",
+    "\n",
+    "        # Apply RoPE\n",
+    "        queries = apply_rope(queries, cos, sin)\n",
+    "        keys = apply_rope(keys, cos, sin)\n",
+    "\n",
+    "        # Expand K and V to match number of heads\n",
+    "        keys = keys.repeat_interleave(self.group_size, dim=1)\n",
+    "        values = values.repeat_interleave(self.group_size, dim=1)\n",
+    "\n",
+    "        # Attention\n",
+    "        attn_scores = queries @ keys.transpose(2, 3)\n",
+    "        attn_scores = attn_scores.masked_fill(mask, -torch.inf)\n",
+    "        attn_weights = torch.softmax(attn_scores / self.head_dim**0.5, dim=-1)\n",
+    "\n",
+    "        context = (attn_weights @ values).transpose(1, 2).reshape(b, num_tokens, self.d_out)\n",
+    "        return self.out_proj(context)"
+   ]
+  },
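+  {
+   "cell_type": "markdown",
+   "id": "8a6c2e40-9d1f-4b7a-8c3e-6f0a2d4b8e51",
+   "metadata": {},
+   "source": [
+    "- Below is a small smoke test with made-up sizes (not the Qwen3 dimensions) to make the shapes concrete; with 8 query heads and 2 KV groups, keys and values are computed once per group and repeated 4x via `repeat_interleave`:\n",
+    "\n",
+    "```python\n",
+    "torch.manual_seed(0)\n",
+    "d_in, num_heads, num_kv_groups, seq_len = 64, 8, 2, 5\n",
+    "cos, sin = compute_rope_params(head_dim=d_in // num_heads, context_length=seq_len)\n",
+    "attn = GroupedQueryAttention(d_in, num_heads, num_kv_groups, qk_norm=True)\n",
+    "mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)\n",
+    "x = torch.randn(1, seq_len, d_in)\n",
+    "print(attn(x, mask, cos, sin).shape)  # torch.Size([1, 5, 64])\n",
+    "```"
+   ]
+  },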
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "id": "457cb2f8-50c1-4045-8a74-f181bfb5fea9",
+   "metadata": {
+    "id": "457cb2f8-50c1-4045-8a74-f181bfb5fea9"
+   },
+   "outputs": [],
+   "source": [
+    "class TransformerBlock(nn.Module):\n",
+    "    def __init__(self, cfg):\n",
+    "        super().__init__()\n",
+    "        self.att = GroupedQueryAttention(\n",
+    "            d_in=cfg[\"emb_dim\"],\n",
+    "            num_heads=cfg[\"n_heads\"],\n",
+    "            head_dim=cfg[\"head_dim\"],\n",
+    "            num_kv_groups=cfg[\"n_kv_groups\"],\n",
+    "            qk_norm=cfg[\"qk_norm\"],\n",
+    "            dtype=cfg[\"dtype\"]\n",
+    "        )\n",
+    "        if cfg[\"num_experts\"] > 0:\n",
+    "            self.ff = MoEFeedForward(cfg)\n",
+    "        else:\n",
+    "            self.ff = FeedForward(cfg)\n",
+    "        self.norm1 = RMSNorm(cfg[\"emb_dim\"], eps=1e-6)\n",
+    "        self.norm2 = RMSNorm(cfg[\"emb_dim\"], eps=1e-6)\n",
+    "\n",
+    "    def forward(self, x, mask, cos, sin):\n",
+    "        # Shortcut connection for attention block\n",
+    "        shortcut = x\n",
+    "        x = self.norm1(x)\n",
+    "        x = self.att(x, mask, cos, sin)  # Shape [batch_size, num_tokens, emb_size]\n",
+    "        x = x + shortcut  # Add the original input back\n",
+    "\n",
+    "        # Shortcut connection for feed-forward block\n",
+    "        shortcut = x\n",
+    "        x = self.norm2(x)\n",
+    "        x = self.ff(x)\n",
+    "        x = x + shortcut  # Add the original input back\n",
+    "\n",
+    "        return x"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "id": "e88de3e3-9f07-42cc-816b-28dbd46e96c4",
+   "metadata": {
+    "id": "e88de3e3-9f07-42cc-816b-28dbd46e96c4"
+   },
+   "outputs": [],
+   "source": [
+    "class Qwen3Model(nn.Module):\n",
+    "    def __init__(self, cfg):\n",
+    "        super().__init__()\n",
+    "\n",
+    "        # Main model parameters\n",
+    "        self.tok_emb = nn.Embedding(cfg[\"vocab_size\"], cfg[\"emb_dim\"], dtype=cfg[\"dtype\"])\n",
+    "\n",
+    "        self.trf_blocks = nn.ModuleList(  # ModuleList since Sequential can only accept one input, and we need `x, mask, cos, sin`\n",
+    "            [TransformerBlock(cfg) for _ in range(cfg[\"n_layers\"])]\n",
+    "        )\n",
+    "\n",
+    "        self.final_norm = RMSNorm(cfg[\"emb_dim\"])\n",
+    "        self.out_head = nn.Linear(cfg[\"emb_dim\"], cfg[\"vocab_size\"], bias=False, dtype=cfg[\"dtype\"])\n",
+    "\n",
+    "        # Reusuable utilities\n",
+    "        if cfg[\"head_dim\"] is None:\n",
+    "            head_dim = cfg[\"emb_dim\"] // cfg[\"n_heads\"]\n",
+    "        else:\n",
+    "            head_dim = cfg[\"head_dim\"]\n",
+    "        cos, sin = compute_rope_params(\n",
+    "            head_dim=head_dim,\n",
+    "            theta_base=cfg[\"rope_base\"],\n",
+    "            context_length=cfg[\"context_length\"]\n",
+    "        )\n",
+    "        self.register_buffer(\"cos\", cos, persistent=False)\n",
+    "        self.register_buffer(\"sin\", sin, persistent=False)\n",
+    "        self.cfg = cfg\n",
+    "\n",
+    "\n",
+    "    def forward(self, in_idx):\n",
+    "        # Forward pass\n",
+    "        tok_embeds = self.tok_emb(in_idx)\n",
+    "        x = tok_embeds\n",
+    "\n",
+    "        num_tokens = x.shape[1]\n",
+    "        mask = torch.triu(torch.ones(num_tokens, num_tokens, device=x.device, dtype=torch.bool), diagonal=1)\n",
+    "        \n",
+    "        for block in self.trf_blocks:\n",
+    "            x = block(x, mask, self.cos, self.sin)\n",
+    "        x = self.final_norm(x)\n",
+    "        logits = self.out_head(x.to(self.cfg[\"dtype\"]))\n",
+    "        return logits"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "be2d201f-74ad-4d63-ab9c-601b00674a48",
+   "metadata": {
+    "id": "be2d201f-74ad-4d63-ab9c-601b00674a48"
+   },
+   "source": [
+    "&nbsp;\n",
+    "# 2. Initialize model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "id": "caa142fa-b375-4e78-b392-2072ced666f3",
+   "metadata": {
+    "id": "caa142fa-b375-4e78-b392-2072ced666f3"
+   },
+   "outputs": [],
+   "source": [
+    "# Same config for\n",
+    "\n",
+    "# https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct (Qwen3 Coder Flash)\n",
+    "# https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507\n",
+    "# https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507\n",
+    "# https://huggingface.co/Qwen/Qwen3-30B-A3B (original Instruct/Thinking hybrid model)\n",
+    "\n",
+    "QWEN3_CONFIG = {\n",
+    "    \"vocab_size\": 151_936,\n",
+    "    \"context_length\": 262_144,\n",
+    "    \"emb_dim\": 2048,\n",
+    "    \"n_heads\": 32,\n",
+    "    \"n_layers\": 48,\n",
+    "    \"head_dim\": 128,\n",
+    "    \"qk_norm\": True,\n",
+    "    \"n_kv_groups\": 4,\n",
+    "    \"rope_base\": 10_000_000.0,\n",
+    "    \"dtype\": torch.bfloat16,\n",
+    "    \"num_experts\": 128,\n",
+    "    \"num_experts_per_tok\": 8,\n",
+    "        \"moe_intermediate_size\": 768,\n",
+    "}"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "id": "313effd0",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "cuda\n"
+     ]
+    }
+   ],
+   "source": [
+    "if torch.cuda.is_available():\n",
+    "    device = torch.device(\"cuda\")\n",
+    "elif torch.backends.mps.is_available():\n",
+    "    device = torch.device(\"mps\")\n",
+    "else:\n",
+    "    device = torch.device(\"cpu\")\n",
+    "\n",
+    "print(device)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "id": "156253fe-aacd-4da2-8f13-705f05c4b11e",
+   "metadata": {
+    "id": "156253fe-aacd-4da2-8f13-705f05c4b11e"
+   },
+   "outputs": [],
+   "source": [
+    "torch.manual_seed(123)\n",
+    "\n",
+    "with device:\n",
+    "    model = Qwen3Model(QWEN3_CONFIG)\n",
+    "\n",
+    "#model.to(device)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "90aca91d-4bee-45ce-993a-4ec5393abe2b",
+   "metadata": {},
+   "source": [
+    "- A quick check that the forward pass works before continuing (nan values are ok for now since we are using a \"meta\" device upon instantiation to save memory):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "id": "adf0a6b7-b688-42c9-966e-c223d34db99f",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "tensor([[[nan, nan, nan,  ..., nan, nan, nan],\n",
+       "         [nan, nan, nan,  ..., nan, nan, nan],\n",
+       "         [nan, nan, nan,  ..., nan, nan, nan]]], device='cuda:0',\n",
+       "       dtype=torch.bfloat16, grad_fn=<UnsafeViewBackward0>)"
+      ]
+     },
+     "execution_count": 12,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "model(torch.tensor([1, 2, 3]).unsqueeze(0).to(device))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "id": "364e76ca-52f8-4fa5-af37-c4069f9694bc",
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "364e76ca-52f8-4fa5-af37-c4069f9694bc",
+    "outputId": "00d7e983-262e-4c65-f322-f4d999311988"
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Total number of parameters: 30,532,122,624\n",
+      "\n",
+      "Total number of unique parameters: 30,220,957,696\n"
+     ]
+    }
+   ],
+   "source": [
+    "total_params = sum(p.numel() for p in model.parameters())\n",
+    "print(f\"Total number of parameters: {total_params:,}\")\n",
+    "\n",
+    "# Account for weight tying\n",
+    "total_params_normalized = total_params - model.tok_emb.weight.numel()\n",
+    "print(f\"\\nTotal number of unique parameters: {total_params_normalized:,}\")"
+   ]
+  },
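+  {
+   "cell_type": "markdown",
+   "id": "3b9d5f17-2e8a-4c6b-9f1d-7a0c4e6b2d85",
+   "metadata": {},
+   "source": [
+    "- The ~30.5 B count above includes *all* experts; since only 8 of the 128 experts are active per token, the number of parameters actually used per token is much smaller; a rough back-of-the-envelope estimate from the config (ignoring the negligible RMSNorm parameters; the official figure may differ slightly):\n",
+    "\n",
+    "```python\n",
+    "cfg = QWEN3_CONFIG\n",
+    "expert_ffn = cfg['num_experts_per_tok'] * 3 * cfg['emb_dim'] * cfg['moe_intermediate_size']\n",
+    "attention = (cfg['emb_dim'] * cfg['n_heads'] * cfg['head_dim'] * 2         # W_query and out_proj\n",
+    "             + cfg['emb_dim'] * cfg['n_kv_groups'] * cfg['head_dim'] * 2)  # W_key and W_value\n",
+    "router = cfg['emb_dim'] * cfg['num_experts']\n",
+    "embeddings = 2 * cfg['vocab_size'] * cfg['emb_dim']  # tok_emb + out_head (not tied)\n",
+    "active = cfg['n_layers'] * (expert_ffn + attention + router) + embeddings\n",
+    "print(f'~{active / 1e9:.1f}B active parameters per token')  # roughly 3.4B, the 'A3B' in the model name\n",
+    "```"
+   ]
+  },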
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "id": "fd5efb03-5a07-46e8-8607-93ed47549d2b",
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "fd5efb03-5a07-46e8-8607-93ed47549d2b",
+    "outputId": "65c1a95e-b502-4150-9e2e-da619d9053d5"
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "float32 (PyTorch default): 227.73 GB\n",
+      "bfloat16: 113.87 GB\n"
+     ]
+    }
+   ],
+   "source": [
+    "def model_memory_size(model, input_dtype=torch.float32):\n",
+    "    total_params = 0\n",
+    "    total_grads = 0\n",
+    "    for param in model.parameters():\n",
+    "        # Calculate total number of elements per parameter\n",
+    "        param_size = param.numel()\n",
+    "        total_params += param_size\n",
+    "        # Check if gradients are stored for this parameter\n",
+    "        if param.requires_grad:\n",
+    "            total_grads += param_size\n",
+    "\n",
+    "    # Calculate buffer size (non-parameters that require memory)\n",
+    "    total_buffers = sum(buf.numel() for buf in model.buffers())\n",
+    "\n",
+    "    # Size in bytes = (Number of elements) * (Size of each element in bytes)\n",
+    "    # We assume parameters and gradients are stored in the same type as input dtype\n",
+    "    element_size = torch.tensor(0, dtype=input_dtype).element_size()\n",
+    "    total_memory_bytes = (total_params + total_grads + total_buffers) * element_size\n",
+    "\n",
+    "    # Convert bytes to gigabytes\n",
+    "    total_memory_gb = total_memory_bytes / (1024**3)\n",
+    "\n",
+    "    return total_memory_gb\n",
+    "\n",
+    "print(f\"float32 (PyTorch default): {model_memory_size(model, input_dtype=torch.float32):.2f} GB\")\n",
+    "print(f\"bfloat16: {model_memory_size(model, input_dtype=torch.bfloat16):.2f} GB\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "4686eeb7-281f-4c5c-b37a-ed21d0a10427",
+   "metadata": {},
+   "source": [
+    "- Don't be concerned; the model runs fine on an A100 card with 80 GB RAM due to offloading some layers to CPU RAM"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "c172f89f-d301-439f-b809-46169e5f5945",
+   "metadata": {
+    "id": "c172f89f-d301-439f-b809-46169e5f5945"
+   },
+   "source": [
+    "&nbsp;\n",
+    "# 4. Load pretrained weights"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "id": "75166128-5899-4995-9b88-9672e135650e",
+   "metadata": {
+    "id": "75166128-5899-4995-9b88-9672e135650e"
+   },
+   "outputs": [],
+   "source": [
+    "def load_weights_into_qwen(model, param_config, params):\n",
+    "    def assign(left, right, tensor_name=\"unknown\"):\n",
+    "        if left.shape != right.shape:\n",
+    "            raise ValueError(f\"Shape mismatch in tensor '{tensor_name}'. Left: {left.shape}, Right: {right.shape}\")\n",
+    "        return torch.nn.Parameter(right.clone().detach() if isinstance(right, torch.Tensor) else torch.tensor(right))\n",
+    "\n",
+    "    model.tok_emb.weight = assign(model.tok_emb.weight, params[\"model.embed_tokens.weight\"], \"model.embed_tokens.weight\")\n",
+    "\n",
+    "    for l in range(param_config[\"n_layers\"]):\n",
+    "        block = model.trf_blocks[l]\n",
+    "        att = block.att\n",
+    "\n",
+    "        # Q, K, V projections\n",
+    "        att.W_query.weight = assign(\n",
+    "            att.W_query.weight,\n",
+    "            params[f\"model.layers.{l}.self_attn.q_proj.weight\"],\n",
+    "            f\"model.layers.{l}.self_attn.q_proj.weight\"\n",
+    "        )\n",
+    "        att.W_key.weight = assign(\n",
+    "            att.W_key.weight,\n",
+    "            params[f\"model.layers.{l}.self_attn.k_proj.weight\"],\n",
+    "            f\"model.layers.{l}.self_attn.k_proj.weight\"\n",
+    "        )\n",
+    "        att.W_value.weight = assign(\n",
+    "            att.W_value.weight,\n",
+    "            params[f\"model.layers.{l}.self_attn.v_proj.weight\"],\n",
+    "            f\"model.layers.{l}.self_attn.v_proj.weight\"\n",
+    "        )\n",
+    "\n",
+    "        # Output projection\n",
+    "        att.out_proj.weight = assign(\n",
+    "            att.out_proj.weight,\n",
+    "            params[f\"model.layers.{l}.self_attn.o_proj.weight\"],\n",
+    "            f\"model.layers.{l}.self_attn.o_proj.weight\"\n",
+    "        )\n",
+    "\n",
+    "        # QK norms\n",
+    "        if hasattr(att, \"q_norm\") and att.q_norm is not None:\n",
+    "            att.q_norm.scale = assign(\n",
+    "                att.q_norm.scale,\n",
+    "                params[f\"model.layers.{l}.self_attn.q_norm.weight\"],\n",
+    "                f\"model.layers.{l}.self_attn.q_norm.weight\"\n",
+    "            )\n",
+    "        if hasattr(att, \"k_norm\") and att.k_norm is not None:\n",
+    "            att.k_norm.scale = assign(\n",
+    "                att.k_norm.scale,\n",
+    "                params[f\"model.layers.{l}.self_attn.k_norm.weight\"],\n",
+    "                f\"model.layers.{l}.self_attn.k_norm.weight\"\n",
+    "            )\n",
+    "\n",
+    "        # Attention layernorm\n",
+    "        block.norm1.scale = assign(\n",
+    "            block.norm1.scale,\n",
+    "            params[f\"model.layers.{l}.input_layernorm.weight\"],\n",
+    "            f\"model.layers.{l}.input_layernorm.weight\"\n",
+    "        )\n",
+    "\n",
+    "        # Feedforward weights\n",
+    "        if \"num_experts\" in param_config:\n",
+    "            # Load router (gating) weights\n",
+    "            block.ff.gate.weight = assign(\n",
+    "                block.ff.gate.weight,\n",
+    "                params[f\"model.layers.{l}.mlp.gate.weight\"],\n",
+    "                f\"model.layers.{l}.mlp.gate.weight\"\n",
+    "            )\n",
+    "            # Load expert weights\n",
+    "            for e in range(param_config[\"num_experts\"]):\n",
+    "                prefix = f\"model.layers.{l}.mlp.experts.{e}\"\n",
+    "                block.ff.fc1[e].weight = assign(\n",
+    "                    block.ff.fc1[e].weight,\n",
+    "                    params[f\"{prefix}.gate_proj.weight\"],\n",
+    "                    f\"{prefix}.gate_proj.weight\"\n",
+    "                )\n",
+    "                block.ff.fc2[e].weight = assign(\n",
+    "                    block.ff.fc2[e].weight,\n",
+    "                    params[f\"{prefix}.up_proj.weight\"],\n",
+    "                    f\"{prefix}.up_proj.weight\"\n",
+    "                )\n",
+    "                block.ff.fc3[e].weight = assign(\n",
+    "                    block.ff.fc3[e].weight,\n",
+    "                    params[f\"{prefix}.down_proj.weight\"],\n",
+    "                    f\"{prefix}.down_proj.weight\"\n",
+    "                )\n",
+    "                # After assigning weights, move the expert layers from meta to CPU\n",
+    "                block.ff.fc1[e] = block.ff.fc1[e].to(\"cpu\")\n",
+    "                block.ff.fc2[e] = block.ff.fc2[e].to(\"cpu\")\n",
+    "                block.ff.fc3[e] = block.ff.fc3[e].to(\"cpu\")\n",
+    "\n",
+    "        else:\n",
+    "            block.ff.fc1.weight = assign(\n",
+    "                block.ff.fc1.weight,\n",
+    "                params[f\"model.layers.{l}.mlp.gate_proj.weight\"],\n",
+    "                f\"model.layers.{l}.mlp.gate_proj.weight\"\n",
+    "            )\n",
+    "            block.ff.fc2.weight = assign(\n",
+    "                block.ff.fc2.weight,\n",
+    "                params[f\"model.layers.{l}.mlp.up_proj.weight\"],\n",
+    "                f\"model.layers.{l}.mlp.up_proj.weight\"\n",
+    "            )\n",
+    "            block.ff.fc3.weight = assign(\n",
+    "                block.ff.fc3.weight,\n",
+    "                params[f\"model.layers.{l}.mlp.down_proj.weight\"],\n",
+    "                f\"model.layers.{l}.mlp.down_proj.weight\"\n",
+    "            )\n",
+    "\n",
+    "        block.norm2.scale = assign(\n",
+    "            block.norm2.scale,\n",
+    "            params[f\"model.layers.{l}.post_attention_layernorm.weight\"],\n",
+    "            f\"model.layers.{l}.post_attention_layernorm.weight\"\n",
+    "        )\n",
+    "\n",
+    "    # Final normalization and output head\n",
+    "    model.final_norm.scale = assign(model.final_norm.scale, params[\"model.norm.weight\"], \"model.norm.weight\")\n",
+    "\n",
+    "    if \"lm_head.weight\" in params:\n",
+    "        model.out_head.weight = assign(model.out_head.weight, params[\"lm_head.weight\"], \"lm_head.weight\")\n",
+    "    else:\n",
+    "        # Model uses weight tying, hence we reuse the embedding layer weights here\n",
+    "        print(\"Model uses weight tying.\")\n",
+    "        model.out_head.weight = assign(model.out_head.weight, params[\"model.embed_tokens.weight\"], \"model.embed_tokens.weight\")"
+   ]
+  },
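+  {
+   "cell_type": "markdown",
+   "id": "6c2a8e53-4f0b-4d7c-a1e9-8b3d5f7a9c02",
+   "metadata": {},
+   "source": [
+    "- Side note on the meta-device trick used for the expert layers: assigning a real parameter replaces the meta placeholder, after which the layer can be moved to an actual device; a minimal, standalone illustration with toy sizes:\n",
+    "\n",
+    "```python\n",
+    "lin = torch.nn.Linear(4, 4, bias=False, device='meta')  # no memory allocated yet\n",
+    "lin.weight = torch.nn.Parameter(torch.randn(4, 4))      # materialize real weights\n",
+    "lin = lin.to('cpu')                                      # now an ordinary module on the CPU\n",
+    "print(lin(torch.randn(1, 4)).shape)                      # torch.Size([1, 4])\n",
+    "```"
+   ]
+  },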
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "id": "699cb1b8-a67d-49fb-80a6-0dad9d81f392",
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/",
+     "height": 17,
+     "referenced_widgets": [
+      "9881b6995c3f49dc89e6992fd9ab660b",
+      "17a3174e65c54476b2e0d1faf8f011ca",
+      "1bbf2e62c0754d1593beb4105a7f1ac1",
+      "b82112e1dec645d98aa1c1ba64abcb61",
+      "271e2bd6a35e4a8b92de8697f7c0be5f",
+      "90a79523187446dfa692723b2e5833a7",
+      "431ffb83b8c14bf182f0430e07ea6154",
+      "a8f1b72a33dd4b548de23fbd95e0da18",
+      "25cc36132d384189acfbecc59483134b",
+      "bfd06423ad544218968648016e731a46",
+      "d029630b63ff44cf807ade428d2eb421"
+     ]
+    },
+    "id": "699cb1b8-a67d-49fb-80a6-0dad9d81f392",
+    "outputId": "55b2f28c-142f-4698-9d23-d27456d3ed6d"
+   },
+   "outputs": [
+    {
+     "data": {
+      "application/vnd.jupyter.widget-view+json": {
+       "model_id": "488c832145db4dd4848aa67d54a33f0d",
+       "version_major": 2,
+       "version_minor": 0
+      },
+      "text/plain": [
+       "Fetching 27 files:   0%|          | 0/27 [00:00<?, ?it/s]"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "import json\n",
+    "import os\n",
+    "from pathlib import Path\n",
+    "from safetensors.torch import load_file\n",
+    "from huggingface_hub import snapshot_download\n",
+    "\n",
+    "repo_id = \"Qwen/Qwen3-30B-A3B\"  # Original Instruct/Thinking hybrind model\n",
+    "repo_id = \"Qwen/Qwen3-235B-A22B-Instruct-2507\"  # New instruct model\n",
+    "repo_id = \"Qwen/Qwen3-30B-A3B-Thinking-2507\"  # New thinking model\n",
+    "repo_id = \"Qwen/Qwen3-Coder-30B-A3B-Instruct\"  # (Qwen3 Coder Flash)\n",
+    "\n",
+    "local_dir = Path(repo_id).parts[-1]\n",
+    "\n",
+    "repo_dir = snapshot_download(repo_id=repo_id, local_dir=local_dir)\n",
+    "index_path = os.path.join(repo_dir, \"model.safetensors.index.json\")\n",
+    "with open(index_path, \"r\") as f:\n",
+    "    index = json.load(f)\n",
+    "\n",
+    "weights_dict = {}\n",
+    "for filename in set(index[\"weight_map\"].values()):\n",
+    "    shard_path = os.path.join(repo_dir, filename)\n",
+    "    shard = load_file(shard_path)\n",
+    "    weights_dict.update(shard)\n",
+    "\n",
+    "load_weights_into_qwen(model, QWEN3_CONFIG, weights_dict)\n",
+    "model.to(device);"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "6b345491-3510-4397-92d3-cd0a3fa3deee",
+   "metadata": {},
+   "source": [
+    "&nbsp;\n",
+    "# 4. Load tokenizer"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 17,
+   "id": "b68ab489-48e5-471e-a814-56cda2d60f81",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import re\n",
+    "from tokenizers import Tokenizer\n",
+    "\n",
+    "\n",
+    "class Qwen3Tokenizer:\n",
+    "    _SPECIALS = [\n",
+    "        \"<|endoftext|>\",\n",
+    "        \"<|im_start|>\", \"<|im_end|>\",\n",
+    "        \"<|object_ref_start|>\", \"<|object_ref_end|>\",\n",
+    "        \"<|box_start|>\", \"<|box_end|>\",\n",
+    "        \"<|quad_start|>\", \"<|quad_end|>\",\n",
+    "        \"<|vision_start|>\", \"<|vision_end|>\",\n",
+    "        \"<|vision_pad|>\", \"<|image_pad|>\", \"<|video_pad|>\",\n",
+    "    ]\n",
+    "    _SPLIT_RE = re.compile(r\"(<\\|[^>]+?\\|>)\")\n",
+    "\n",
+    "    def __init__(self, tokenizer_file_path=\"tokenizer.json\", repo_id=None,\n",
+    "                 apply_chat_template=True, add_generation_prompt=False, add_thinking=False):\n",
+    "\n",
+    "        self.apply_chat_template = apply_chat_template\n",
+    "        self.add_generation_prompt = add_generation_prompt\n",
+    "        self.add_thinking = add_thinking\n",
+    "\n",
+    "        tok_file = Path(tokenizer_file_path)\n",
+    "        self._tok = Tokenizer.from_file(str(tok_file))\n",
+    "        self._special_to_id = {t: self._tok.token_to_id(t) for t in self._SPECIALS}\n",
+    "\n",
+    "        self.pad_token_id = self._special_to_id.get(\"<|endoftext|>\")\n",
+    "        self.eos_token_id = self.pad_token_id\n",
+    "\n",
+    "        if repo_id and \"Base\" not in repo_id:\n",
+    "            eos_token = \"<|im_end|>\"\n",
+    "        else:\n",
+    "            eos_token = \"<|endoftext|>\"\n",
+    "        if eos_token in self._special_to_id:\n",
+    "            self.eos_token_id = self._special_to_id[eos_token]\n",
+    "\n",
+    "    def encode(self, text, chat_wrapped=None):\n",
+    "        if chat_wrapped is None:\n",
+    "            chat_wrapped = self.apply_chat_template\n",
+    "\n",
+    "        stripped = text.strip()\n",
+    "        if stripped in self._special_to_id and \"\\n\" not in stripped:\n",
+    "            return [self._special_to_id[stripped]]\n",
+    "\n",
+    "        if chat_wrapped:\n",
+    "            text = self._wrap_chat(text)\n",
+    "\n",
+    "        ids = []\n",
+    "        for part in filter(None, self._SPLIT_RE.split(text)):\n",
+    "            if part in self._special_to_id:\n",
+    "                ids.append(self._special_to_id[part])\n",
+    "            else:\n",
+    "                ids.extend(self._tok.encode(part).ids)\n",
+    "        return ids\n",
+    "\n",
+    "    def decode(self, ids):\n",
+    "        return self._tok.decode(ids, skip_special_tokens=False)\n",
+    "\n",
+    "    def _wrap_chat(self, user_msg):\n",
+    "        s = f\"<|im_start|>user\\n{user_msg}<|im_end|>\\n\"\n",
+    "        if self.add_generation_prompt:\n",
+    "            s += \"<|im_start|>assistant\"\n",
+    "            if self.add_thinking:\n",
+    "                s += \"\\n\"\n",
+    "            else:\n",
+    "                s += \"\\n<think>\\n\\n</think>\\n\\n\"\n",
+    "        return s"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 18,
+   "id": "7b6df8bc-7308-468e-93ce-2d5529ea7866",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "tokenizer_file_path = f\"{Path(repo_id).parts[-1]}/tokenizer.json\"\n",
+    "\n",
+    "tokenizer = Qwen3Tokenizer(\n",
+    "    tokenizer_file_path=tokenizer_file_path,\n",
+    "    repo_id=repo_id,\n",
+    "    add_generation_prompt=True,\n",
+    "    add_thinking=True\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 19,
+   "id": "1946b534-e3af-431a-a222-391a60bfa892",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'<|im_start|>user\\nImplement a binary search function in Python<|im_end|>\\n<|im_start|>assistant\\n'"
+      ]
+     },
+     "execution_count": 19,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "# prompt = \"Give me a short introduction to large language models.\"\n",
+    "prompt = \"Implement a binary search function in Python\"\n",
+    "\n",
+    "\n",
+    "input_token_ids = tokenizer.encode(prompt)\n",
+    "text = tokenizer.decode(input_token_ids)\n",
+    "text"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "57d07df1-4401-4792-b549-7c4cc5632323",
+   "metadata": {
+    "id": "57d07df1-4401-4792-b549-7c4cc5632323"
+   },
+   "source": [
+    "&nbsp;\n",
+    "# 5. Generate text"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 20,
+   "id": "60b9fc72",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def generate_text_basic_stream(model, token_ids, max_new_tokens, eos_token_id=None):\n",
+    "\n",
+    "    model.eval()\n",
+    "    with torch.no_grad():\n",
+    "        for _ in range(max_new_tokens):\n",
+    "            out = model(token_ids)[:, -1]\n",
+    "            next_token = torch.argmax(out, dim=-1, keepdim=True)\n",
+    "\n",
+    "            if (eos_token_id is not None\n",
+    "                   and torch.all(next_token == eos_token_id)):\n",
+    "               break\n",
+    "\n",
+    "            yield next_token\n",
+    "            \n",
+    "            token_ids = torch.cat([token_ids, next_token], dim=1)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 21,
+   "id": "a5b30753",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Here's a comprehensive implementation of binary search in Python with both iterative and recursive approaches:\n",
+      "\n",
+      "## Iterative Binary Search\n",
+      "\n",
+      "```python\n",
+      "def binary_search(arr, target):\n",
+      "    \"\"\"\n",
+      "    Iterative binary search implementation\n",
+      "    \n",
+      "    Args:\n",
+      "        arr: Sorted list of elements\n",
+      "        target: Element to search for\n",
+      "    \n",
+      "    Returns:\n",
+      "        int: Index of target if found, -1 if not found\n",
+      "    \n",
+      "    Time Complexity: O(log n)\n",
+      "    Space Complexity: O(1)\n",
+      "    \"\"\"\n",
+      "    left = 0\n",
+      "    right = len(arr) - 1\n",
+      "    \n",
+      "    while left <= right:\n",
+      "        # Calculate middle index (avoiding potential overflow)\n",
+      "        mid = left + (right - left) // 2\n",
+      "        \n",
+      "        if arr[mid] == target:\n",
+      "            return mid\n",
+      "        elif arr[mid] < target:\n",
+      "            left = mid + 1\n",
+      "        else:\n",
+      "            right = mid - 1\n",
+      "    \n",
+      "    return -1  # Target not found\n",
+      "```\n",
+      "\n",
+      "## Recursive Binary Search\n",
+      "\n",
+      "```python\n",
+      "def binary_search_recursive(arr, target, left=0, right=None):\n",
+      "    \"\"\"\n",
+      "    Recursive binary search implementation\n",
+      "    \n",
+      "    Args:\n",
+      "        arr: Sorted list of elements\n",
+      "        target: Element to search for\n",
+      "        left: Left boundary (default: 0)\n",
+      "        right: Right boundary (default: len(arr) - 1)\n",
+      "    \n",
+      "    Returns:\n",
+      "        int: Index of target if found, -1 if not found\n",
+      "    \n",
+      "    Time Complexity: O(log n)\n",
+      "    Space Complexity: O(log n) due to recursion stack\n",
+      "    \"\"\"\n",
+      "    if right is None:\n",
+      "        right = len(arr) - 1\n",
+      "    \n",
+      "    # Base case: element not found\n",
+      "    if left > right:\n",
+      "        return -1\n",
+      "    \n",
+      "    # Calculate middle index\n",
+      "    mid = left + (right - left) // 2\n",
+      "    \n",
+      "    if arr[mid] == target:\n",
+      "        return mid\n",
+      "    elif arr[mid] < target:\n",
+      "        return binary_search_recursive(arr, target, mid + 1, right)\n",
+      "    else:\n",
+      "        return binary_search_recursive(arr, target, left, mid - 1)\n",
+      "```\n",
+      "\n",
+      "## Enhanced Version with Additional Features\n",
+      "\n",
+      "```python\n",
+      "def binary_search_enhanced(arr, target, find_first=True):\n",
+      "    \"\"\"\n",
+      "    Enhanced binary search that can find first or last occurrence\n",
+      "    of a target in case of duplicates\n",
+      "    \n",
+      "    Args:\n",
+      "        arr: Sorted list of elements\n",
+      "        target: Element to search for\n",
+      "        find_first: If True, find"
+     ]
+    }
+   ],
+   "source": [
+    "input_token_ids_tensor = torch.tensor(input_token_ids, device=device).unsqueeze(0)\n",
+    "\n",
+    "\n",
+    "for token in generate_text_basic_stream(\n",
+    "    model=model,\n",
+    "    token_ids=input_token_ids_tensor,\n",
+    "    max_new_tokens=500,\n",
+    "    # eos_token_id=tokenizer.eos_token_id\n",
+    "):\n",
+    "    token_id = token.squeeze(0).tolist()\n",
+    "    print(\n",
+    "        tokenizer.decode(token_id),\n",
+    "        end=\"\",\n",
+    "        flush=True\n",
+    "    )"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "549324d6-5c71-4147-ae21-2e67675faa3d",
+   "metadata": {
+    "id": "549324d6-5c71-4147-ae21-2e67675faa3d"
+   },
+   "source": [
+    "&nbsp;\n",
+    "# What's next?"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "e6edaaae-2de1-406c-8ffa-897cdfa3808c",
+   "metadata": {
+    "id": "e6edaaae-2de1-406c-8ffa-897cdfa3808c"
+   },
+   "source": [
+    "- Check out the [README.md](./README.md), to use this model via the `llms_from_scratch` package\n",
+    "- For those interested in a comprehensive guide on building a large language model from scratch and gaining a deeper understanding of its mechanics, you might like my [Build a Large Language Model (From Scratch)](http://mng.bz/orYv)\n",
+    "\n",
+    "<a href=\"http://mng.bz/orYv\"><img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/cover-small.webp\" width=\"100px\"></a>"
+   ]
+  }
+ ],
+ "metadata": {
+  "accelerator": "GPU",
+  "colab": {
+   "gpuType": "A100",
+   "provenance": []
+  },
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.16"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}

+ 130 - 193
ch05/11_qwen3/standalone-qwen3.ipynb

@@ -80,8 +80,8 @@
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "huggingface_hub version: 0.33.0\n",
-      "tokenizers version: 0.21.1\n",
+      "huggingface_hub version: 0.33.2\n",
+      "tokenizers version: 0.21.2\n",
       "torch version: 2.6.0\n"
      ]
     }
@@ -418,7 +418,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 25,
+   "execution_count": 10,
    "id": "caa142fa-b375-4e78-b392-2072ced666f3",
    "metadata": {
     "id": "caa142fa-b375-4e78-b392-2072ced666f3"
@@ -523,7 +523,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 27,
+   "execution_count": 11,
    "id": "156253fe-aacd-4da2-8f13-705f05c4b11e",
    "metadata": {
     "id": "156253fe-aacd-4da2-8f13-705f05c4b11e"
@@ -536,7 +536,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 28,
+   "execution_count": 12,
    "id": "eaf86265-4e9d-4024-9ed0-99076944e304",
    "metadata": {},
    "outputs": [
@@ -544,32 +544,32 @@
      "data": {
       "text/plain": [
        "Qwen3Model(\n",
-       "  (tok_emb): Embedding(151936, 4096)\n",
+       "  (tok_emb): Embedding(151936, 1024)\n",
        "  (trf_blocks): ModuleList(\n",
-       "    (0-35): 36 x TransformerBlock(\n",
+       "    (0-27): 28 x TransformerBlock(\n",
        "      (att): GroupedQueryAttention(\n",
-       "        (W_query): Linear(in_features=4096, out_features=4096, bias=False)\n",
-       "        (W_key): Linear(in_features=4096, out_features=1024, bias=False)\n",
-       "        (W_value): Linear(in_features=4096, out_features=1024, bias=False)\n",
-       "        (out_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
+       "        (W_query): Linear(in_features=1024, out_features=2048, bias=False)\n",
+       "        (W_key): Linear(in_features=1024, out_features=1024, bias=False)\n",
+       "        (W_value): Linear(in_features=1024, out_features=1024, bias=False)\n",
+       "        (out_proj): Linear(in_features=2048, out_features=1024, bias=False)\n",
        "        (q_norm): RMSNorm()\n",
        "        (k_norm): RMSNorm()\n",
        "      )\n",
        "      (ff): FeedForward(\n",
-       "        (fc1): Linear(in_features=4096, out_features=12288, bias=False)\n",
-       "        (fc2): Linear(in_features=4096, out_features=12288, bias=False)\n",
-       "        (fc3): Linear(in_features=12288, out_features=4096, bias=False)\n",
+       "        (fc1): Linear(in_features=1024, out_features=3072, bias=False)\n",
+       "        (fc2): Linear(in_features=1024, out_features=3072, bias=False)\n",
+       "        (fc3): Linear(in_features=3072, out_features=1024, bias=False)\n",
        "      )\n",
        "      (norm1): RMSNorm()\n",
        "      (norm2): RMSNorm()\n",
        "    )\n",
        "  )\n",
        "  (final_norm): RMSNorm()\n",
-       "  (out_head): Linear(in_features=4096, out_features=151936, bias=False)\n",
+       "  (out_head): Linear(in_features=1024, out_features=151936, bias=False)\n",
        ")"
       ]
      },
-     "execution_count": 28,
+     "execution_count": 12,
      "metadata": {},
      "output_type": "execute_result"
     }
@@ -588,20 +588,20 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 29,
+   "execution_count": 13,
    "id": "adf0a6b7-b688-42c9-966e-c223d34db99f",
    "metadata": {},
    "outputs": [
     {
      "data": {
       "text/plain": [
-       "tensor([[[-0.7305, -1.2109,  0.4551,  ..., -0.0215, -0.5742, -0.2754],\n",
-       "         [-0.4023, -0.6094,  0.0415,  ...,  0.6094, -0.6758,  0.3789],\n",
-       "         [-0.4043,  0.1943, -0.0757,  ...,  0.4121, -1.2344, -0.1445]]],\n",
+       "tensor([[[-0.2256, -0.0164, -0.7070,  ...,  0.4414,  0.1245,  1.0703],\n",
+       "         [-0.6602,  0.5352, -0.0718,  ..., -0.0737,  0.5391,  0.3086],\n",
+       "         [-0.4785, -0.1562,  0.1045,  ..., -0.2324,  0.2354,  0.6328]]],\n",
        "       dtype=torch.bfloat16, grad_fn=<UnsafeViewBackward0>)"
       ]
      },
-     "execution_count": 29,
+     "execution_count": 13,
      "metadata": {},
      "output_type": "execute_result"
     }
@@ -612,7 +612,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 30,
+   "execution_count": 14,
    "id": "364e76ca-52f8-4fa5-af37-c4069f9694bc",
    "metadata": {
     "colab": {
@@ -626,9 +626,9 @@
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "Total number of parameters: 8,190,735,360\n",
+      "Total number of parameters: 751,632,384\n",
       "\n",
-      "Total number of unique parameters: 7,568,405,504\n"
+      "Total number of unique parameters: 596,049,920\n"
      ]
     }
    ],
@@ -643,7 +643,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 31,
+   "execution_count": 15,
    "id": "fd5efb03-5a07-46e8-8607-93ed47549d2b",
    "metadata": {
     "colab": {
@@ -657,8 +657,8 @@
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "float32 (PyTorch default): 61.06 GB\n",
-      "bfloat16: 30.53 GB\n"
+      "float32 (PyTorch default): 5.64 GB\n",
+      "bfloat16: 2.82 GB\n"
      ]
     }
    ],
@@ -693,7 +693,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 32,
+   "execution_count": 16,
    "id": "31f12baf-f79b-499f-85c0-51328a6a20f5",
    "metadata": {
     "id": "31f12baf-f79b-499f-85c0-51328a6a20f5"
@@ -723,7 +723,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 36,
+   "execution_count": 17,
    "id": "75166128-5899-4995-9b88-9672e135650e",
    "metadata": {
     "id": "75166128-5899-4995-9b88-9672e135650e"
@@ -822,7 +822,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 18,
    "id": "699cb1b8-a67d-49fb-80a6-0dad9d81f392",
    "metadata": {
     "colab": {
@@ -845,62 +845,7 @@
     "id": "699cb1b8-a67d-49fb-80a6-0dad9d81f392",
     "outputId": "55b2f28c-142f-4698-9d23-d27456d3ed6d"
    },
-   "outputs": [
-    {
-     "data": {
-      "application/vnd.jupyter.widget-view+json": {
-       "model_id": "bf7fbc5f95ed4f06b5ba47d4aec96738",
-       "version_major": 2,
-       "version_minor": 0
-      },
-      "text/plain": [
-       "Fetching 14 files:   0%|          | 0/14 [00:00<?, ?it/s]"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    },
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "True\n"
-     ]
-    },
-    {
-     "data": {
-      "text/plain": [
-       "Qwen3Model(\n",
-       "  (tok_emb): Embedding(151936, 4096)\n",
-       "  (trf_blocks): ModuleList(\n",
-       "    (0-35): 36 x TransformerBlock(\n",
-       "      (att): GroupedQueryAttention(\n",
-       "        (W_query): Linear(in_features=4096, out_features=4096, bias=False)\n",
-       "        (W_key): Linear(in_features=4096, out_features=1024, bias=False)\n",
-       "        (W_value): Linear(in_features=4096, out_features=1024, bias=False)\n",
-       "        (out_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
-       "        (q_norm): RMSNorm()\n",
-       "        (k_norm): RMSNorm()\n",
-       "      )\n",
-       "      (ff): FeedForward(\n",
-       "        (fc1): Linear(in_features=4096, out_features=12288, bias=False)\n",
-       "        (fc2): Linear(in_features=4096, out_features=12288, bias=False)\n",
-       "        (fc3): Linear(in_features=12288, out_features=4096, bias=False)\n",
-       "      )\n",
-       "      (norm1): RMSNorm()\n",
-       "      (norm2): RMSNorm()\n",
-       "    )\n",
-       "  )\n",
-       "  (final_norm): RMSNorm()\n",
-       "  (out_head): Linear(in_features=4096, out_features=151936, bias=False)\n",
-       ")"
-      ]
-     },
-     "execution_count": 37,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
+   "outputs": [],
    "source": [
     "import json\n",
     "import os\n",
@@ -951,60 +896,84 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 38,
+   "execution_count": 19,
    "id": "b68ab489-48e5-471e-a814-56cda2d60f81",
    "metadata": {},
    "outputs": [],
    "source": [
+    "import re\n",
     "from tokenizers import Tokenizer\n",
     "\n",
     "\n",
-    "class Qwen3Tokenizer():\n",
-    "    def __init__(self, tokenizer_file_path=\"tokenizer.json\", repo_id=None, add_generation_prompt=False, add_thinking=False):\n",
-    "        self.tokenizer_file_path = tokenizer_file_path\n",
+    "class Qwen3Tokenizer:\n",
+    "    _SPECIALS = [\n",
+    "        \"<|endoftext|>\",\n",
+    "        \"<|im_start|>\", \"<|im_end|>\",\n",
+    "        \"<|object_ref_start|>\", \"<|object_ref_end|>\",\n",
+    "        \"<|box_start|>\", \"<|box_end|>\",\n",
+    "        \"<|quad_start|>\", \"<|quad_end|>\",\n",
+    "        \"<|vision_start|>\", \"<|vision_end|>\",\n",
+    "        \"<|vision_pad|>\", \"<|image_pad|>\", \"<|video_pad|>\",\n",
+    "    ]\n",
+    "    _SPLIT_RE = re.compile(r\"(<\\|[^>]+?\\|>)\")\n",
+    "\n",
+    "    def __init__(self, tokenizer_file_path=\"tokenizer.json\", repo_id=None,\n",
+    "                 apply_chat_template=True, add_generation_prompt=False, add_thinking=False):\n",
+    "\n",
+    "        self.apply_chat_template = apply_chat_template\n",
     "        self.add_generation_prompt = add_generation_prompt\n",
     "        self.add_thinking = add_thinking\n",
     "\n",
-    "        tokenizer_file_path_obj = Path(tokenizer_file_path)\n",
-    "        if not tokenizer_file_path_obj.is_file() and repo_id is not None:\n",
-    "            _ = hf_hub_download(\n",
-    "                repo_id=repo_id,\n",
-    "                filename=str(tokenizer_file_path_obj.name),\n",
-    "                local_dir=str(tokenizer_file_path_obj.parent.name)\n",
-    "            )\n",
-    "        self.tokenizer = Tokenizer.from_file(tokenizer_file_path)\n",
-    "\n",
-    "    def encode(self, prompt):\n",
-    "        messages = [\n",
-    "            {\"role\": \"user\", \"content\": prompt}\n",
-    "        ]  \n",
-    "        formatted_prompt = self.format_qwen_chat(\n",
-    "            messages,\n",
-    "            add_generation_prompt=self.add_generation_prompt,\n",
-    "            add_thinking=self.add_thinking\n",
-    "        )\n",
-    "        return self.tokenizer.encode(formatted_prompt).ids\n",
-    "                \n",
-    "    def decode(self, token_ids):\n",
-    "        return self.tokenizer.decode(token_ids, skip_special_tokens=False)\n",
-    "    \n",
-    "    @staticmethod\n",
-    "    def format_qwen_chat(messages, add_generation_prompt=False, add_thinking=False):\n",
-    "        prompt = \"\"\n",
-    "        for msg in messages:\n",
-    "            prompt += f\"<|im_start|>{msg['role']}\\n{msg['content']}<|im_end|>\\n\"\n",
-    "        if add_generation_prompt:\n",
-    "            prompt += \"<|im_start|>assistant\"\n",
-    "            if not add_thinking:\n",
-    "                prompt += \"<|think>\\n\\n<|/think>\\n\\n\"\n",
+    "        tok_file = Path(tokenizer_file_path)\n",
+    "        self._tok = Tokenizer.from_file(str(tok_file))\n",
+    "        self._special_to_id = {t: self._tok.token_to_id(t) for t in self._SPECIALS}\n",
+    "\n",
+    "        self.pad_token_id = self._special_to_id.get(\"<|endoftext|>\")\n",
+    "        self.eos_token_id = self.pad_token_id\n",
+    "\n",
+    "        if repo_id and \"Base\" not in repo_id:\n",
+    "            eos_token = \"<|im_end|>\"\n",
+    "        else:\n",
+    "            eos_token = \"<|endoftext|>\"\n",
+    "        if eos_token in self._special_to_id:\n",
+    "            self.eos_token_id = self._special_to_id[eos_token]\n",
+    "\n",
+    "    def encode(self, text, chat_wrapped=None):\n",
+    "        if chat_wrapped is None:\n",
+    "            chat_wrapped = self.apply_chat_template\n",
+    "\n",
+    "        stripped = text.strip()\n",
+    "        if stripped in self._special_to_id and \"\\n\" not in stripped:\n",
+    "            return [self._special_to_id[stripped]]\n",
+    "\n",
+    "        if chat_wrapped:\n",
+    "            text = self._wrap_chat(text)\n",
+    "\n",
+    "        ids = []\n",
+    "        for part in filter(None, self._SPLIT_RE.split(text)):\n",
+    "            if part in self._special_to_id:\n",
+    "                ids.append(self._special_to_id[part])\n",
     "            else:\n",
-    "                prompt += \"\\n\"    \n",
-    "        return prompt"
+    "                ids.extend(self._tok.encode(part).ids)\n",
+    "        return ids\n",
+    "\n",
+    "    def decode(self, ids):\n",
+    "        return self._tok.decode(ids, skip_special_tokens=False)\n",
+    "\n",
+    "    def _wrap_chat(self, user_msg):\n",
+    "        s = f\"<|im_start|>user\\n{user_msg}<|im_end|>\\n\"\n",
+    "        if self.add_generation_prompt:\n",
+    "            s += \"<|im_start|>assistant\"\n",
+    "            if self.add_thinking:\n",
+    "                s += \"\\n\"\n",
+    "            else:\n",
+    "                s += \"\\n<think>\\n\\n</think>\\n\\n\"\n",
+    "        return s"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 39,
+   "execution_count": 20,
    "id": "7b6df8bc-7308-468e-93ce-2d5529ea7866",
    "metadata": {},
    "outputs": [],
@@ -1024,7 +993,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 40,
+   "execution_count": 21,
    "id": "1946b534-e3af-431a-a222-391a60bfa892",
    "metadata": {},
    "outputs": [
@@ -1034,7 +1003,7 @@
        "'<|im_start|>user\\nGive me a short introduction to large language models.<|im_end|>\\n<|im_start|>assistant\\n'"
       ]
      },
-     "execution_count": 40,
+     "execution_count": 21,
      "metadata": {},
      "output_type": "execute_result"
     }
@@ -1060,56 +1029,33 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 41,
+   "execution_count": 22,
    "id": "7b8401c6-e244-4cb7-9849-2ba71ce758d5",
    "metadata": {
     "id": "7b8401c6-e244-4cb7-9849-2ba71ce758d5"
    },
    "outputs": [],
    "source": [
-    "# Identical function from chapter 5\n",
-    "\n",
-    "def generate(model, idx, max_new_tokens, context_size, temperature=0.0, top_k=None, eos_id=None):\n",
-    "    # For-loop is the same as before: Get logits, and only focus on last time step\n",
-    "    for _ in range(max_new_tokens):\n",
-    "        idx_cond = idx[:, -context_size:]\n",
-    "        with torch.no_grad():\n",
-    "            logits = model(idx_cond)\n",
-    "        logits = logits[:, -1, :]\n",
-    "\n",
-    "        # Filter logits with top_k sampling\n",
-    "        if top_k is not None:\n",
-    "            # Keep only top_k values\n",
-    "            top_logits, _ = torch.topk(logits, top_k)\n",
-    "            min_val = top_logits[:, -1]\n",
-    "            logits = torch.where(logits < min_val, torch.tensor(-torch.inf).to(logits.device), logits)\n",
-    "\n",
-    "        # Apply temperature scaling\n",
-    "        if temperature > 0.0:\n",
-    "            logits = logits / temperature\n",
-    "\n",
-    "            # Apply softmax to get probabilities\n",
-    "            probs = torch.softmax(logits, dim=-1)  # (batch_size, context_len)\n",
-    "\n",
-    "            # Sample from the distribution\n",
-    "            idx_next = torch.multinomial(probs, num_samples=1)  # (batch_size, 1)\n",
-    "\n",
-    "        # Otherwise same as before: get idx of the vocab entry with the highest logits value\n",
-    "        else:\n",
-    "            idx_next = torch.argmax(logits, dim=-1, keepdim=True)  # (batch_size, 1)\n",
+    "def generate_text_basic_stream(model, token_ids, max_new_tokens, eos_token_id=None):\n",
     "\n",
-    "        if eos_id is not None and idx_next.item() == eos_id:\n",
-    "            break  # Stop generating early if end-of-sequence token is encountered and eos_id is specified\n",
+    "    model.eval()\n",
+    "    with torch.no_grad():\n",
+    "        for _ in range(max_new_tokens):\n",
+    "            out = model(token_ids)[:, -1]\n",
+    "            next_token = torch.argmax(out, dim=-1, keepdim=True)\n",
     "\n",
-    "        # Same as before: append sampled index to the running sequence\n",
-    "        idx = torch.cat((idx, idx_next), dim=1)  # (batch_size, num_tokens+1)\n",
+    "            if (eos_token_id is not None\n",
+    "                   and torch.all(next_token == eos_token_id)):\n",
+    "               break\n",
     "\n",
-    "    return idx"
+    "            yield next_token\n",
+    "            \n",
+    "            token_ids = torch.cat([token_ids, next_token], dim=1)"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 42,
+   "execution_count": 24,
    "id": "1c7a04fa-6aac-416b-8f63-f1e19227633d",
    "metadata": {
     "id": "1c7a04fa-6aac-416b-8f63-f1e19227633d"
@@ -1119,41 +1065,32 @@
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "Time: 78.98 sec\n",
-      "<|im_start|>user\n",
-      "Give me a short introduction to large language models.<|im_end|>\n",
-      "<|im_start|>assistant\n",
       "<think>\n",
-      "Okay, the user wants a short introduction to large language models. Let me start by defining what they are. They're AI systems trained on vast amounts of text data, right? I should mention their ability to understand and generate human-like text. Maybe include examples like GPT or BERT. Also, highlight their applications in tasks like answering questions, writing, coding, and more. Need to keep it concise but cover the key points. Oh, and maybe touch on how they're trained using deep learning techniques. Wait, should I explain the training process briefly? Probably not necessary for a short intro. Focus on the main aspects: what they are, how they work, and their uses. Make sure it's easy to understand without too...\n"
+      "Okay, the user wants a short introduction to large language models. Let me start by recalling what I know. Large language models are AI systems that can understand and generate human language. They're trained on massive datasets, so they can learn complex patterns and nuances.\n",
+      "\n",
+      "I should mention their ability to understand and generate text, not just specific tasks. Maybe include examples like chatbots or language assistants. Also, emphasize their adaptability and versatility. Oh, and maybe touch on their applications in various fields. Let me check if I'm covering all key points without being too technical. Keep it concise, around 3-4 sentences. Make sure it's clear and easy to understand.\n",
+      "</think>\n",
+      "\n",
+      "Large language models (LLMs) are AI systems designed to understand and generate human language. They are trained on vast datasets, allowing them to learn complex patterns and nuances, making them versatile for tasks like writing, answering questions, and even creative content creation. These models can adapt to new information and provide contextually relevant responses, making them valuable tools across industries."
      ]
     }
    ],
    "source": [
-    "import time\n",
+    "input_token_ids_tensor = torch.tensor(input_token_ids, device=device).unsqueeze(0)\n",
     "\n",
-    "torch.manual_seed(123)\n",
-    "\n",
-    "start = time.time()\n",
     "\n",
-    "output_token_ids = generate(\n",
+    "for token in generate_text_basic_stream(\n",
     "    model=model,\n",
-    "    idx=torch.tensor(input_token_ids, device=device).unsqueeze(0),\n",
-    "    max_new_tokens=150,\n",
-    "    context_size=QWEN3_CONFIG[\"context_length\"],\n",
-    "    top_k=1,\n",
-    "    temperature=0.\n",
-    ")\n",
-    "\n",
-    "print(f\"Time: {time.time() - start:.2f} sec\")\n",
-    "\n",
-    "if torch.cuda.is_available():\n",
-    "    max_mem_bytes = torch.cuda.max_memory_allocated()\n",
-    "    max_mem_gb = max_mem_bytes / (1024 ** 3)\n",
-    "    print(f\"Max memory allocated: {max_mem_gb:.2f} GB\")\n",
-    "\n",
-    "output_text = tokenizer.decode(output_token_ids.squeeze(0).tolist())\n",
-    "\n",
-    "print(output_text + \"...\")"
+    "    token_ids=input_token_ids_tensor,\n",
+    "    max_new_tokens=500,\n",
+    "    eos_token_id=tokenizer.eos_token_id\n",
+    "):\n",
+    "    token_id = token.squeeze(0).tolist()\n",
+    "    print(\n",
+    "        tokenizer.decode(token_id),\n",
+    "        end=\"\",\n",
+    "        flush=True\n",
+    "    )"
    ]
   },
   {
@@ -1188,7 +1125,7 @@
    "provenance": []
   },
   "kernelspec": {
-   "display_name": ".venv",
+   "display_name": "Python 3 (ipykernel)",
    "language": "python",
    "name": "python3"
   },
@@ -1202,7 +1139,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.12.6"
+   "version": "3.10.16"
   }
  },
  "nbformat": 4,

+ 1 - 1
ch05/README.md

@@ -17,7 +17,7 @@
 - [08_memory_efficient_weight_loading](08_memory_efficient_weight_loading) contains a bonus notebook showing how to load model weights via PyTorch's `load_state_dict` method more efficiently
 - [09_extending-tokenizers](09_extending-tokenizers) contains a from-scratch implementation of the GPT-2 BPE tokenizer
 - [10_llm-training-speed](10_llm-training-speed) shows PyTorch performance tips to improve the LLM training speed
-- [11_qwen3](11_qwen3) A from-scratch implementation of Qwen3 0.6B including code to load the pretrained weights of the base and reasoning model variants
+- [11_qwen3](11_qwen3) A from-scratch implementation of Qwen3 0.6B and Qwen3 30B-A3B (Mixture-of-Experts), including code to load the pretrained weights of the base, reasoning, and coding model variants
 
 
 

+ 8 - 2
pkg/llms_from_scratch/README.md

@@ -160,10 +160,16 @@ from llms_from_scratch.qwen3 import (
 
 # KV cache drop-in replacements
 from llms_from_scratch.kv_cache.qwen3 import Qwen3Model
-from llms_from_scratch.kv_cache.generate import generate_text_simple
+from llms_from_scratch.kv_cache.generate import (
+    generate_text_simple,
+    generate_text_simple_stream
+)
 
 # KV cache drop-in replacements with batched inference support
-from llms_from_scratch.kv_cache_batched.generate import generate_text_simple
+from llms_from_scratch.kv_cache_batched.generate import (
+    generate_text_simple,
+    generate_text_simple_stream
+)
 from llms_from_scratch.kv_cache_batched.qwen3 import Qwen3Model
 ```
 

+ 24 - 0
pkg/llms_from_scratch/kv_cache/generate.py

@@ -28,3 +28,27 @@ def generate_text_simple(model, idx, max_new_tokens, context_size=None, use_cach
                 idx = torch.cat([idx, next_idx], dim=1)
 
     return idx
+
+
+def generate_text_simple_stream(model, token_ids, max_new_tokens, eos_token_id=None, context_size=None):
+    model.eval()
+
+    with torch.no_grad():
+        cache = KVCache(n_layers=model.cfg["n_layers"])
+        model.reset_kv_cache()
+
+        # Prime the cache with the initial context
+        logits = model(token_ids, cache=cache)
+
+        for _ in range(max_new_tokens):
+            next_token = torch.argmax(logits[:, -1], dim=-1, keepdim=True)
+
+            if eos_token_id is not None and torch.all(next_token == eos_token_id):
+                break
+
+            yield next_token
+
+            token_ids = torch.cat([token_ids, next_token], dim=1)
+
+            # Feed only the new token to the model; cache handles history
+            logits = model(next_token, cache=cache)
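
A minimal usage sketch of the streaming helper added above. To keep it self-contained it reuses the small dummy config from `pkg/llms_from_scratch/tests/test_qwen3.py` introduced in this PR, so no pretrained weights are involved and the generated token ids are meaningless; with a real checkpoint you would build the model from one of the `QWEN3_CONFIG_*` dicts and pass the tokenizer's `eos_token_id`.

```python
# Sketch only: uses the tiny dummy config from the new tests, not real Qwen3 weights.
import torch

from llms_from_scratch.kv_cache.qwen3 import Qwen3Model
from llms_from_scratch.kv_cache.generate import generate_text_simple_stream

cfg = {
    "vocab_size": 100, "emb_dim": 32, "hidden_dim": 64,
    "n_layers": 2, "n_heads": 4, "head_dim": 8, "n_kv_groups": 1,
    "qk_norm": False, "dtype": torch.float32, "rope_base": 10000,
    "context_length": 64, "num_experts": 0,
}

torch.manual_seed(123)
model = Qwen3Model(cfg)
prompt_ids = torch.randint(0, cfg["vocab_size"], (1, 8))  # batch of 1, 8 prompt tokens

# Each yielded tensor has shape (batch_size, 1); print token ids as they stream in
for next_token in generate_text_simple_stream(model, prompt_ids, max_new_tokens=5):
    print(next_token.squeeze(0).tolist())
```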

+ 45 - 2
pkg/llms_from_scratch/kv_cache/qwen3.py

@@ -29,7 +29,7 @@ class Qwen3Model(nn.Module):
         self.final_norm = RMSNorm(cfg["emb_dim"])
         self.out_head = nn.Linear(cfg["emb_dim"], cfg["vocab_size"], bias=False, dtype=cfg["dtype"])
 
-        # Reusuable utilities
+        # Reusable utilities
         if cfg["head_dim"] is None:
             head_dim = cfg["emb_dim"] // cfg["n_heads"]
         else:
@@ -94,7 +94,10 @@ class TransformerBlock(nn.Module):
             qk_norm=cfg["qk_norm"],
             dtype=cfg["dtype"]
         )
-        self.ff = FeedForward(cfg)
+        if "num_experts" in cfg and cfg["num_experts"] > 0:
+            self.ff = MoEFeedForward(cfg)
+        else:
+            self.ff = FeedForward(cfg)
         self.norm1 = RMSNorm(cfg["emb_dim"], eps=1e-6)
         self.norm2 = RMSNorm(cfg["emb_dim"], eps=1e-6)
 
@@ -128,6 +131,46 @@ class FeedForward(nn.Module):
         return self.fc3(x)
 
 
+class MoEFeedForward(nn.Module):
+    def __init__(self, cfg):
+        super().__init__()
+        self.num_experts_per_tok = cfg["num_experts_per_tok"]
+        self.num_experts = cfg["num_experts"]
+        self.gate = nn.Linear(cfg["emb_dim"], cfg["num_experts"], bias=False, dtype=cfg["dtype"])
+
+        meta_device = torch.device("meta")  # to reduce memory pressure and only load them when used (trades compute for memory)
+        self.fc1 = nn.ModuleList([nn.Linear(cfg["emb_dim"], cfg["moe_intermediate_size"], bias=False, dtype=cfg["dtype"], device=meta_device)
+                                  for _ in range(cfg["num_experts"])])
+        self.fc2 = nn.ModuleList([nn.Linear(cfg["emb_dim"], cfg["moe_intermediate_size"], bias=False, dtype=cfg["dtype"], device=meta_device)
+                                  for _ in range(cfg["num_experts"])])
+        self.fc3 = nn.ModuleList([nn.Linear(cfg["moe_intermediate_size"], cfg["emb_dim"], bias=False, dtype=cfg["dtype"], device=meta_device)
+                                  for _ in range(cfg["num_experts"])])
+
+    def forward(self, x):
+        scores = self.gate(x)  # (b, seq_len, num_experts)
+        topk_scores, topk_indices = torch.topk(scores, self.num_experts_per_tok, dim=-1)
+        topk_probs = torch.softmax(topk_scores, dim=-1)
+
+        expert_outputs = []
+        for e in range(self.num_experts):
+            hidden = torch.nn.functional.silu(self.fc1[e](x)) * self.fc2[e](x)
+            out = self.fc3[e](hidden)
+            expert_outputs.append(out.unsqueeze(-2))
+        expert_outputs = torch.cat(expert_outputs, dim=-2)  # (b, t, num_experts, emb_dim)
+
+        gating_probs = torch.zeros_like(scores)
+
+        for i in range(self.num_experts_per_tok):
+            indices = topk_indices[..., i:i+1]
+            prob = topk_probs[..., i:i+1]
+            gating_probs.scatter_(dim=-1, index=indices, src=prob)
+        gating_probs = gating_probs.unsqueeze(-1)  # (b, t, num_experts, 1)
+
+        # Weighted sum over experts
+        y = (gating_probs * expert_outputs).sum(dim=-2)
+        return y
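
To make the routing step above concrete: the gate's top-k scores are softmax-normalized and scattered back into a dense `(b, t, num_experts)` tensor of per-expert weights, so the weighted sum keeps only the selected experts. A toy sketch with made-up numbers (4 experts, 2 active per token):

```python
# Toy illustration of the top-k gating used in MoEFeedForward (values are arbitrary).
import torch

torch.manual_seed(0)
num_experts, num_experts_per_tok = 4, 2
scores = torch.randn(1, 3, num_experts)               # (b=1, seq_len=3, num_experts)

topk_scores, topk_indices = torch.topk(scores, num_experts_per_tok, dim=-1)
topk_probs = torch.softmax(topk_scores, dim=-1)       # normalize only the selected experts

gating_probs = torch.zeros_like(scores)
for i in range(num_experts_per_tok):
    gating_probs.scatter_(dim=-1, index=topk_indices[..., i:i+1], src=topk_probs[..., i:i+1])

print(gating_probs)           # exactly two nonzero weights per token
print(gating_probs.sum(-1))   # each row sums to 1
```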
+
+
 class GroupedQueryAttention(nn.Module):
     def __init__(
         self, d_in, num_heads, num_kv_groups, head_dim=None, qk_norm=False, dtype=None

+ 114 - 18
pkg/llms_from_scratch/qwen3.py

@@ -102,6 +102,23 @@ QWEN3_CONFIG_32B = {
         "dtype": torch.bfloat16,
 }
 
+# Mixture of Experts Model
+QWEN3_CONFIG_30B_A3B = {
+    "vocab_size": 151_936,
+    "context_length": 262_144,
+    "emb_dim": 2048,
+    "n_heads": 32,
+    "n_layers": 48,
+    "head_dim": 128,
+    "qk_norm": True,
+    "n_kv_groups": 4,
+    "rope_base": 10_000_000.0,
+    "dtype": torch.bfloat16,
+    "num_experts": 128,
+    "num_experts_per_tok": 8,
+        "moe_intermediate_size": 768,
+}
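
As a rough sanity check on the "30B-A3B" naming, the expert parameters alone can be counted directly from the values in the config above (attention, embedding, router, and norm parameters are left out, so these are not the full totals):

```python
# Back-of-the-envelope count of expert parameters in QWEN3_CONFIG_30B_A3B (experts only).
emb_dim, moe_dim = 2048, 768
n_layers, num_experts, experts_per_tok = 48, 128, 8

params_per_expert = 3 * emb_dim * moe_dim                # fc1, fc2, fc3 per expert
total_expert_params = n_layers * num_experts * params_per_expert
active_expert_params = n_layers * experts_per_tok * params_per_expert

print(f"{total_expert_params / 1e9:.1f}B expert parameters in total")           # ~29.0B
print(f"{active_expert_params / 1e9:.2f}B expert parameters active per token")  # ~1.81B
```

The remaining attention, embedding, and norm parameters bring these figures up to roughly the 30B total and ~3B active parameters that the model name refers to.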
+
 
 class Qwen3Model(nn.Module):
     def __init__(self, cfg):
@@ -156,7 +173,10 @@ class TransformerBlock(nn.Module):
             qk_norm=cfg["qk_norm"],
             dtype=cfg["dtype"]
         )
-        self.ff = FeedForward(cfg)
+        if "num_experts" in cfg and cfg["num_experts"] > 0:
+            self.ff = MoEFeedForward(cfg)
+        else:
+            self.ff = FeedForward(cfg)
         self.norm1 = RMSNorm(cfg["emb_dim"], eps=1e-6)
         self.norm2 = RMSNorm(cfg["emb_dim"], eps=1e-6)
 
@@ -190,6 +210,46 @@ class FeedForward(nn.Module):
         return self.fc3(x)
 
 
+class MoEFeedForward(nn.Module):
+    def __init__(self, cfg):
+        super().__init__()
+        self.num_experts_per_tok = cfg["num_experts_per_tok"]
+        self.num_experts = cfg["num_experts"]
+        self.gate = nn.Linear(cfg["emb_dim"], cfg["num_experts"], bias=False, dtype=cfg["dtype"])
+
+        meta_device = torch.device("meta")  # to reduce memory pressure and only load them when used (trades compute for memory)
+        self.fc1 = nn.ModuleList([nn.Linear(cfg["emb_dim"], cfg["moe_intermediate_size"], bias=False, dtype=cfg["dtype"], device=meta_device)
+                                  for _ in range(cfg["num_experts"])])
+        self.fc2 = nn.ModuleList([nn.Linear(cfg["emb_dim"], cfg["moe_intermediate_size"], bias=False, dtype=cfg["dtype"], device=meta_device)
+                                  for _ in range(cfg["num_experts"])])
+        self.fc3 = nn.ModuleList([nn.Linear(cfg["moe_intermediate_size"], cfg["emb_dim"], bias=False, dtype=cfg["dtype"], device=meta_device)
+                                  for _ in range(cfg["num_experts"])])
+
+    def forward(self, x):
+        scores = self.gate(x)  # (b, seq_len, num_experts)
+        topk_scores, topk_indices = torch.topk(scores, self.num_experts_per_tok, dim=-1)
+        topk_probs = torch.softmax(topk_scores, dim=-1)
+
+        expert_outputs = []
+        for e in range(self.num_experts):
+            hidden = torch.nn.functional.silu(self.fc1[e](x)) * self.fc2[e](x)
+            out = self.fc3[e](hidden)
+            expert_outputs.append(out.unsqueeze(-2))
+        expert_outputs = torch.cat(expert_outputs, dim=-2)  # (b, t, num_experts, emb_dim)
+
+        gating_probs = torch.zeros_like(scores)
+
+        for i in range(self.num_experts_per_tok):
+            indices = topk_indices[..., i:i+1]
+            prob = topk_probs[..., i:i+1]
+            gating_probs.scatter_(dim=-1, index=indices, src=prob)
+        gating_probs = gating_probs.unsqueeze(-1)  # (b, t, num_experts, 1)
+
+        # Weighted sum over experts
+        y = (gating_probs * expert_outputs).sum(dim=-2)
+        return y
+
+
 class GroupedQueryAttention(nn.Module):
     def __init__(
         self, d_in, num_heads, num_kv_groups, head_dim=None, qk_norm=False, dtype=None
@@ -381,21 +441,53 @@ def load_weights_into_qwen(model, param_config, params):
         )
 
         # Feedforward weights
-        block.ff.fc1.weight = assign(
-            block.ff.fc1.weight,
-            params[f"model.layers.{l}.mlp.gate_proj.weight"],
-            f"model.layers.{l}.mlp.gate_proj.weight"
-        )
-        block.ff.fc2.weight = assign(
-            block.ff.fc2.weight,
-            params[f"model.layers.{l}.mlp.up_proj.weight"],
-            f"model.layers.{l}.mlp.up_proj.weight"
-        )
-        block.ff.fc3.weight = assign(
-            block.ff.fc3.weight,
-            params[f"model.layers.{l}.mlp.down_proj.weight"],
-            f"model.layers.{l}.mlp.down_proj.weight"
-        )
+        if "num_experts" in param_config:
+            # Load router (gating) weights
+            block.ff.gate.weight = assign(
+                block.ff.gate.weight,
+                params[f"model.layers.{l}.mlp.gate.weight"],
+                f"model.layers.{l}.mlp.gate.weight"
+            )
+            # Load expert weights
+            for e in range(param_config["num_experts"]):
+                prefix = f"model.layers.{l}.mlp.experts.{e}"
+                block.ff.fc1[e].weight = assign(
+                    block.ff.fc1[e].weight,
+                    params[f"{prefix}.gate_proj.weight"],
+                    f"{prefix}.gate_proj.weight"
+                )
+                block.ff.fc2[e].weight = assign(
+                    block.ff.fc2[e].weight,
+                    params[f"{prefix}.up_proj.weight"],
+                    f"{prefix}.up_proj.weight"
+                )
+                block.ff.fc3[e].weight = assign(
+                    block.ff.fc3[e].weight,
+                    params[f"{prefix}.down_proj.weight"],
+                    f"{prefix}.down_proj.weight"
+                )
+                # After assigning weights, move the expert layers from meta to CPU
+                block.ff.fc1[e] = block.ff.fc1[e].to("cpu")
+                block.ff.fc2[e] = block.ff.fc2[e].to("cpu")
+                block.ff.fc3[e] = block.ff.fc3[e].to("cpu")
+
+        else:
+            block.ff.fc1.weight = assign(
+                block.ff.fc1.weight,
+                params[f"model.layers.{l}.mlp.gate_proj.weight"],
+                f"model.layers.{l}.mlp.gate_proj.weight"
+            )
+            block.ff.fc2.weight = assign(
+                block.ff.fc2.weight,
+                params[f"model.layers.{l}.mlp.up_proj.weight"],
+                f"model.layers.{l}.mlp.up_proj.weight"
+            )
+            block.ff.fc3.weight = assign(
+                block.ff.fc3.weight,
+                params[f"model.layers.{l}.mlp.down_proj.weight"],
+                f"model.layers.{l}.mlp.down_proj.weight"
+            )
+
         block.norm2.scale = assign(
             block.norm2.scale,
             params[f"model.layers.{l}.post_attention_layernorm.weight"],
@@ -405,8 +497,12 @@ def load_weights_into_qwen(model, param_config, params):
     # Final normalization and output head
     model.final_norm.scale = assign(model.final_norm.scale, params["model.norm.weight"], "model.norm.weight")
 
-    # Model uses weight tying, hence we reuse the embedding layer weights here
-    model.out_head.weight = assign(model.out_head.weight, params["model.embed_tokens.weight"], "model.embed_tokens.weight")
+    if "lm_head.weight" in params:
+        model.out_head.weight = assign(model.out_head.weight, params["lm_head.weight"], "lm_head.weight")
+    else:
+        # Model uses weight tying, hence we reuse the embedding layer weights here
+        print("Model uses weight tying.")
+        model.out_head.weight = assign(model.out_head.weight, params["model.embed_tokens.weight"], "model.embed_tokens.weight")
 
 
 class Qwen3Tokenizer:

+ 88 - 0
pkg/llms_from_scratch/tests/test_qwen3.py

@@ -13,12 +13,14 @@ from llms_from_scratch.qwen3 import (
     Qwen3Tokenizer
 )
 from llms_from_scratch.kv_cache.qwen3 import Qwen3Model as Qwen3ModelKV
+from llms_from_scratch.kv_cache.utils import KVCache
 from llms_from_scratch.kv_cache.generate import generate_text_simple as generate_text_simple_cached
 
 from llms_from_scratch.kv_cache_batched.qwen3 import Qwen3Model as Qwen3ModelKVBatched
 from llms_from_scratch.kv_cache_batched.generate import generate_text_simple as generate_text_simple_batched
 
 import importlib
+import platform
 import pytest
 import torch
 import torch.nn as nn
@@ -50,6 +52,92 @@ class Qwen3RMSNorm(nn.Module):
 transformers_installed = importlib.util.find_spec("transformers") is not None
 
 
+@pytest.fixture
+def dummy_input():
+    torch.manual_seed(123)
+    return torch.randint(0, 100, (1, 8))  # batch size 1, seq length 8
+
+
+@pytest.fixture
+def dummy_cfg_base():
+    return {
+        "vocab_size": 100,
+        "emb_dim": 32,
+        "hidden_dim": 64,
+        "n_layers": 2,
+        "n_heads": 4,
+        "head_dim": 8,
+        "n_kv_groups": 1,
+        "qk_norm": False,
+        "dtype": torch.float32,
+        "rope_base": 10000,
+        "context_length": 64,
+        "num_experts": 0,
+    }
+
+
+@pytest.fixture
+def dummy_cfg_moe(dummy_cfg_base):
+    cfg = dummy_cfg_base.copy()
+    cfg.update({
+        "num_experts": 4,
+        "num_experts_per_tok": 2,
+        "moe_intermediate_size": 64,
+    })
+    return cfg
+
+
+def test_dummy_qwen3_forward(dummy_cfg_base, dummy_input):
+    torch.manual_seed(123)
+    model = Qwen3Model(dummy_cfg_base)
+    out = model(dummy_input)
+    assert out.shape == (1, dummy_input.size(1), dummy_cfg_base["vocab_size"]), \
+        f"Expected shape (1, seq_len, vocab_size), got {out.shape}"
+
+
+def test_dummy_qwen3_moe_forward(dummy_cfg_moe, dummy_input):
+    torch.manual_seed(123)
+    model = Qwen3Model(dummy_cfg_moe)
+    out = model(dummy_input)
+    assert out.shape == (1, dummy_input.size(1), dummy_cfg_moe["vocab_size"]), \
+        f"Expected shape (1, seq_len, vocab_size), got {out.shape}"
+    assert any(hasattr(block.ff, 'gate') for block in model.trf_blocks), \
+        "Expected MoEFeedForward in at least one transformer block"
+
+
+@pytest.mark.parametrize("cfg_name", ["dummy_cfg_base", "dummy_cfg_moe"])
+def test_qwen3_kvcache_equivalence(cfg_name, request):
+    cfg = request.getfixturevalue(cfg_name)
+
+    if cfg["num_experts"] > 0 and platform.system() == "Linux":
+        pytest.skip("Skipping MoE KV equivalence test on Linux due to nondeterministic expert routing")
+
+    torch.manual_seed(123)
+    model_regular = Qwen3Model(cfg)
+    model_regular.eval()
+
+    model_kv = Qwen3ModelKV(cfg)
+    model_kv.eval()
+    model_kv.load_state_dict(model_regular.state_dict())
+    model_kv.reset_kv_cache()
+    cache = KVCache(n_layers=cfg["n_layers"])
+
+    torch.manual_seed(123)
+    input_ids = torch.randint(0, cfg["vocab_size"], (1, 6))
+
+    out_full = model_regular(input_ids)
+
+    logits_stepwise = []
+    for t in range(input_ids.size(1)):
+        input_token = input_ids[:, t:t + 1]
+        logits = model_kv(input_token, cache=cache)
+        logits_stepwise.append(logits)
+    out_kv = torch.cat(logits_stepwise, dim=1)
+
+    assert out_full.shape == out_kv.shape, f"Shape mismatch: {out_full.shape} vs {out_kv.shape}"
+    assert torch.allclose(out_full, out_kv, atol=1e-5, rtol=1e-3)
+
+
 @pytest.mark.skipif(not transformers_installed, reason="transformers not installed")
 def test_rope():