# Leaderboard

On-device LLM performance rankings, powered by Glicko-2.
## Pixel 8 Pro

| Metric | Value |
|---|---|
| AndroidRank | #135 |
| Rating | 1,507 (±14 RD) |
| Win rate | 50.7% |
| Conservative rating | 1,479 |
| TG (token generation) rating | 1,495 |
| PP (prompt processing) rating | 1,630 |
| Matches | 1,401 |
| Record | 710 W – 691 L |
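The derived figures above follow from the raw ones. A minimal sketch, assuming the common Glicko-2 convention that the conservative rating is the rating minus twice the rating deviation (the page does not state its exact formula, but the numbers agree):

```python
# Sketch of how the derived stats above could be computed.
# Assumes conservative rating = rating - 2 * RD (a common Glicko-2
# convention, not confirmed by the leaderboard itself).

rating, rd = 1507, 14
wins, losses = 710, 691

conservative = rating - 2 * rd   # 1507 - 28 = 1479, matching the table above
matches = wins + losses          # 710 + 691 = 1401
win_rate = wins / matches        # 710 / 1401 ≈ 0.507 -> 50.7%

print(conservative, matches, f"{win_rate:.1%}")
```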
## Models Tested

TG = token generation and PP = prompt processing, both in tokens per second; medians and bests are taken across each model's runs.
| Model | TG Median (tok/s) | PP Median (tok/s) | TG Best | PP Best | Runs |
|---|---|---|---|---|---|
| Llama-3.2-1B-Instruct.IQ4_XS | 31.82 | 48.61 | 47.97 | 52.55 | 2 |
| Thinker-SmolLM2-135M-Instruct-Reasoning.i1-Q4_K_M | 25.40 | 218.10 | 25.40 | 218.10 | 1 |
| gemma-3-270m-it-IQ4_XS | 20.08 | 291.80 | 20.08 | 291.80 | 1 |
| SmolLM2-360M-Instruct-Q8_0 | 14.79 | 113.49 | 14.79 | 113.49 | 1 |
| gemma-3-1B-it-QAT-Q4_0 | 12.48 | 90.53 | 12.48 | 90.53 | 1 |
| Qwen3-0.6B-IQ4_XS | 12.15 | 55.39 | 12.15 | 55.39 | 1 |
| Llama-3.2-1B-Instruct.Q6_K | 11.93 | 40.55 | 13.14 | 49.58 | 3 |
| Llama-3.2-1B-Instruct-Uncensored.Q4_K_M | 10.67 | 51.07 | 10.67 | 51.07 | 1 |
| Qwen2-VL-2B-Instruct-Q4_0 | 8.74 | 56.75 | 8.74 | 56.75 | 1 |
| gemma-3-1b-it-Q2_K_L | 8.46 | 59.11 | 8.46 | 59.11 | 1 |
| SmolLM2-1.7B-Instruct-Q8_0 | 8.13 | 48.82 | 9.66 | 64.32 | 2 |
| DeepSeek-R1-Distill-Qwen-1.5B-Q8_0 | 8.06 | 47.41 | 8.06 | 47.41 | 1 |
| llama-3.2-1b-instruct-q8_0 | 7.71 | 52.73 | 8.78 | 56.01 | 2 |
| qwen2.5-1.5b-instruct-q8_0 | 7.38 | 53.82 | 8.02 | 57.27 | 3 |
| Ministral-3-3B-Instruct-2512-Q4_K_M | 6.60 | 26.45 | 7.56 | 29.93 | 2 |
| DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_M | 6.59 | 22.42 | 6.59 | 22.42 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-Q2_K_L | 6.56 | 23.32 | 6.56 | 23.32 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M | 6.35 | 28.71 | 6.41 | 30.94 | 2 |
| DeepHermes-3-Llama-3-3B-Preview-q4 | 5.68 | 25.99 | 5.68 | 25.99 | 1 |
| HuggingFaceTB_SmolLM3-3B-Q4_K_M | 5.67 | 16.42 | 5.67 | 16.42 | 1 |
| Qwen3.5-2B-Q4_K_M | 5.61 | 45.51 | 5.61 | 45.51 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-f16 | 5.31 | 13.10 | 5.31 | 13.10 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-Q8_0 | 5.23 | 36.56 | 5.23 | 36.56 | 1 |
| qwen2.5-3b-instruct-q5_k_m | 5.17 | 24.94 | 6.07 | 26.52 | 3 |
| Qwen3.5-2B.Q6_K | 5.04 | 43.85 | 5.04 | 43.85 | 1 |
| huihui-ai_gemma-3-1b-it-abliterated-Q8_0 | 5.02 | 74.19 | 5.02 | 74.19 | 1 |
| Llama-3.2-3B-Instruct-uncensored-Q4_K_M | 4.90 | 18.76 | 4.90 | 18.76 | 1 |
| Gemmasutra-Mini-2B-v1-Q6_K | 4.88 | 16.84 | 6.53 | 27.90 | 3 |
| gemma-2-2b-it-Q6_K | 4.74 | 17.85 | 6.73 | 35.67 | 9 |
| nsfw-flash-q4_k_m | 4.57 | 26.42 | 4.57 | 26.42 | 1 |
| gemma-3-4b-it-Q2_K | 4.53 | 11.55 | 4.53 | 11.55 | 1 |
| gemma-2-2b-it-Q4_K_M | 4.49 | 21.18 | 4.49 | 21.18 | 1 |
| DeepHermes-3-Llama-3-3B-Preview-q6 | 4.42 | 10.83 | 4.42 | 10.83 | 1 |
| gemma-3-4b-it-Q2_K_L | 4.39 | 11.15 | 4.39 | 11.15 | 1 |
| gemma-3-4b-it-Q4_K_M | 4.35 | 14.56 | 4.35 | 14.56 | 1 |
| Qwen3-1.7B.Q3_K_M | 4.24 | 20.69 | 4.24 | 20.69 | 1 |
| Qwen3.5-0.8B-BF16 | 4.21 | 18.47 | 4.21 | 18.47 | 1 |
| Phi-3.5-mini-instruct.Q4_K_M | 4.00 | 14.02 | 5.18 | 16.92 | 2 |
| gemma-3-4b-it.Q4_K_M | 3.55 | 16.08 | 3.55 | 16.08 | 1 |
| DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-Q4_K_M-imat | 3.46 | 10.82 | 3.46 | 10.82 | 1 |
| gemma-3-4b-it-qat-q4_0-unquantized-q4_k_m | 3.45 | 11.42 | 3.45 | 11.42 | 1 |
| gemma-3-4b-it-qat-Q8_0 | 3.33 | 15.65 | 3.33 | 15.65 | 1 |
| Llama-3.2-3B-Instruct-uncensored-Q6_K | 3.19 | 11.74 | 3.19 | 11.74 | 1 |
| Llama-3.2-3B-Instruct-Q6_K | 3.17 | 8.95 | 4.90 | 12.89 | 3 |
| Phi-4-mini-instruct.Q5_K_M | 2.91 | 8.54 | 2.91 | 8.54 | 1 |
| gemma-3-4b-it.Q6_K | 2.79 | 10.12 | 2.79 | 10.12 | 1 |
| DeepHermes-3-Llama-3-8B-q4 | 2.73 | 11.24 | 2.73 | 11.24 | 1 |
| DeepSeek-R1-Distill-Qwen-7B-IQ2_M | 2.33 | 3.90 | 2.33 | 3.90 | 1 |
| DeepSeek-R1-Distill-Qwen-7B-Q8_0 | 1.53 | 9.97 | 1.53 | 9.97 | 1 |
| Llama-3.2-4X3B-MOE-Hell-California-10B-D_AU-Q4_k_m | 1.40 | 5.87 | 1.40 | 5.87 | 1 |
Showing the first 50 of 53 models (page 1 of 2 in the original view); the remaining rows are not reproduced here.
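Each row's median, best, and run count can be reproduced from per-run measurements. A minimal sketch (the data structure and sample values are illustrative, not the leaderboard's actual schema):

```python
# Sketch: aggregate per-run throughput into the table's columns.
# `runs` maps a model name to per-run (tg_tok_s, pp_tok_s) samples;
# the values here are hypothetical.
from statistics import median

runs = {
    "example-model-Q4_K_M": [(11.9, 40.6), (13.1, 49.6), (11.2, 38.0)],
}

for model, samples in runs.items():
    tg = [t for t, _ in samples]
    pp = [p for _, p in samples]
    # TG median, PP median, TG best, PP best, run count
    print(model, median(tg), median(pp), max(tg), max(pp), len(samples))
```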
## Head-to-Head Record

333 pairwise matchup rows (paginated across 7 pages in the original view); the table is not reproduced here.
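Each head-to-head result feeds the Glicko-2 ratings shown at the top. A minimal sketch of the Glicko-2 expected-score step, using the constants and formulas from Glickman's Glicko-2 paper (the full update, including volatility, is longer):

```python
# Sketch: Glicko-2 expected score of entry A against entry B.
# Ratings are converted to the internal Glicko-2 scale (mu, phi),
# then E gives the probability that A beats B.
import math

SCALE = 173.7178  # Glicko-2 scale factor

def to_internal(rating, rd):
    return (rating - 1500) / SCALE, rd / SCALE

def g(phi):
    # Dampens the opponent's influence by their rating uncertainty.
    return 1 / math.sqrt(1 + 3 * phi**2 / math.pi**2)

def expected_score(rating_a, rd_a, rating_b, rd_b):
    mu_a, _ = to_internal(rating_a, rd_a)
    mu_b, phi_b = to_internal(rating_b, rd_b)
    return 1 / (1 + math.exp(-g(phi_b) * (mu_a - mu_b)))

# E.g. this entry (1507, RD 14) against a hypothetical 1600-rated opponent:
print(expected_score(1507, 14, 1600, 50))  # ~0.37: expected to lose more often
```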
## Performance by App Version

Chart not reproduced here; the original view labels each app version as Improved or Regressed.
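A minimal sketch of how such a classification could work, assuming it compares median TG throughput across consecutive app versions (the page does not state its actual criterion, and the data below is hypothetical):

```python
# Sketch: label an app version Improved/Regressed relative to the
# previous one by comparing median TG throughput. Data is illustrative.
from statistics import median

runs_by_version = {  # version -> TG tok/s samples (hypothetical)
    "1.4.0": [4.6, 4.8, 4.7],
    "1.5.0": [5.1, 5.3, 5.0],
}

versions = sorted(runs_by_version)
for prev, cur in zip(versions, versions[1:]):
    delta = median(runs_by_version[cur]) - median(runs_by_version[prev])
    label = "Improved" if delta > 0 else "Regressed"
    print(f"{cur}: {label} ({delta:+.2f} tok/s vs {prev})")
```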