# Leaderboard

On-device LLM performance rankings powered by Glicko-2.
## iPad Pro 11-inch (5th Gen)

| Metric | Value |
|---|---|
| iOS Rank | #3 |
| Rating | 1,999 (±15 RD) |
| Win Rate | 98.1% |
| Conservative Rating | 1,969 |
| TG Rating | 1,997 |
| PP Rating | 1,999 |
| Matches | 1,233 |
| Record | 1210 W – 23 L |
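The summary figures above are internally consistent, assuming the common Glicko-2 convention that the conservative rating is the rating minus twice the rating deviation (a rough 95% lower bound). A minimal sketch, using the numbers from the panel:

```python
# Sketch of how the panel's derived stats can be computed.
# Assumption (not stated by the leaderboard): conservative = rating - 2 * RD,
# the usual Glicko-2 lower-bound convention.

rating = 1999
rd = 15                      # rating deviation
wins, losses = 1210, 23

conservative = rating - 2 * rd
win_rate = wins / (wins + losses)

print(conservative)          # -> 1969, matching "Conservative Rating"
print(f"{win_rate:.1%}")     # -> 98.1%, matching "Win Rate"
```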
## Models Tested

Throughput is in tokens per second: TG = token generation, PP = prompt processing. Median and Best are taken across the number of runs listed per model.
| Model | TG Median (tok/s) | PP Median (tok/s) | TG Best | PP Best | Runs |
|---|---|---|---|---|---|
| SmolLM2-135M-Instruct-Q8_0 | 206.51 | 5714.42 | 286.71 | 7139.20 | 2 |
| DeepSeek-R1-0528-Qwen3-8B-Q4_K_M | 96.31 | 1034.08 | 96.31 | 1034.08 | 1 |
| SmolLM2-135M-Instruct-f16 | 94.03 | 4845.97 | 94.03 | 4845.97 | 1 |
| gemma-3-270m-it-F16 | 86.34 | 4855.20 | 86.34 | 4855.20 | 1 |
| llama-3.2-1b-instruct-q8_0 | 68.85 | 1302.39 | 70.83 | 1341.92 | 5 |
| granite-3.1-1b-a400m-instruct-Q8_0 | 66.82 | 42.07 | 67.33 | 42.08 | 2 |
| qwen2.5-1.5b-instruct-q8_0 | 48.83 | 110.37 | 48.83 | 110.37 | 1 |
| SmolLM2-1.7B-Instruct-Q8_0 | 48.65 | 794.73 | 48.70 | 796.92 | 2 |
| Mistral-Nemo-Instruct-2407-IQ2_M | 43.15 | 372.14 | 78.76 | 672.79 | 2 |
| granite-3.1-3b-a800m-instruct-Q8_0 | 38.16 | 19.98 | 38.16 | 19.98 | 1 |
| Phi-4-mini-instruct.Q4_K_M | 34.93 | 363.97 | 34.93 | 363.97 | 1 |
| gemma-3-4b-it.Q4_K_M | 31.45 | 360.50 | 31.45 | 360.50 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-Q8_0 | 29.79 | 648.74 | 29.79 | 648.74 | 1 |
| LFM2.5-1.2B-Instruct-BF16 | 29.24 | 950.94 | 42.35 | 1392.93 | 2 |
| Phi-4-mini-instruct.Q6_K | 28.81 | 355.50 | 28.81 | 355.50 | 1 |
| gemma-3n-E4B-it-absolute-heresy-iQ4_NL | 26.60 | 241.58 | 26.60 | 241.58 | 1 |
| gemma-2-2b-it-Q6_K | 25.80 | 47.55 | 41.05 | 584.10 | 9 |
| Qwen2.5-3B-Instruct-Q4_K_S | 25.52 | 278.36 | 25.52 | 278.36 | 1 |
| gemma-3n-E2B-it-UD-Q4_K_XL | 24.13 | 284.98 | 24.13 | 284.98 | 1 |
| Phi-3.5-mini-instruct.Q4_K_M | 24.05 | 31.80 | 36.43 | 328.74 | 8 |
| Qwen3-1.7B-Q8_0 | 23.31 | 64.69 | 23.31 | 64.69 | 1 |
| DeepSeek-R1-Distill-Qwen-7B-IQ3_M | 22.69 | 198.89 | 22.69 | 198.89 | 1 |
| DeepSeek-R1-Distill-Qwen-7B-IQ4_NL | 22.09 | 203.15 | 22.09 | 203.15 | 1 |
| DeepSeek-R1-Distill-Qwen-7B-Q4_0 | 21.92 | 203.81 | 22.13 | 205.75 | 2 |
| gemma-3n-E2B-it-Q4_K_M | 21.76 | 157.35 | 25.58 | 283.19 | 2 |
| Llama-3.2-3B-Instruct-Q6_K | 21.20 | 276.45 | 33.64 | 413.51 | 3 |
| gemma-3-4b-it-UD-Q4_K_XL | 19.71 | 240.75 | 19.71 | 240.75 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-f16 | 18.53 | 657.67 | 18.53 | 657.67 | 1 |
| Llama-3.2-3B-Instruct-Q8_0 | 18.49 | 174.92 | 21.39 | 302.82 | 2 |
| Grok-3-reasoning-gemma3-4B-distilled-HF.Q8_0 | 17.36 | 45.07 | 17.36 | 45.07 | 1 |
| qwen2.5-3b-instruct-q5_k_m | 17.12 | 27.26 | 23.93 | 260.03 | 4 |
| DeepSeek-R1-Distill-Qwen-7B-Q4_K_S | 17.01 | 158.22 | 21.48 | 192.33 | 4 |
| gemma-3n-E4B-it-UD-Q4_K_XL | 15.69 | 149.06 | 15.69 | 149.06 | 1 |
| Phi-3-mini-4k-instruct-q4 | 14.36 | 233.41 | 14.36 | 233.41 | 1 |
| Qwen2.5-7B-Instruct-IQ3_XS | 13.70 | 115.55 | 13.71 | 115.76 | 2 |
| gemma-3n-E4B-it-Q4_K_M | 13.62 | 85.22 | 16.15 | 153.15 | 2 |
| Qwen2-7B-Instruct.IQ2_XS | 13.60 | 117.68 | 13.60 | 117.68 | 1 |
| Qwen3-MOE-4x1.7B-6.8B-Lucys-Song-Uncensored.Q4_K_M | 13.25 | 27.08 | 13.25 | 27.08 | 1 |
| Qwen3-4B.Q4_K_M | 12.78 | 18.29 | 12.78 | 18.29 | 1 |
| dungeon-escape-game-7B-GRPO-Model.i1-Q4_K_M | 12.57 | 122.03 | 12.57 | 122.03 | 1 |
| Qwen3.5-4B-Uncensored-HauhauCS-Aggressive-Q6_K | 8.91 | 147.00 | 8.91 | 147.00 | 1 |
| DeepSeek-R1-Distill-Qwen-7B-Q4_K_M | 7.54 | 72.48 | 21.44 | 194.87 | 3 |
| Qwen3-4B.Q5_K_M | 7.36 | 17.25 | 7.36 | 17.25 | 1 |
| DeepSeek-R1-Distill-Qwen-7B-IQ2_M | 7.03 | 9.29 | 7.03 | 9.29 | 1 |
| DeepSeek-R1-Distill-Llama-8B-Abliterated.IQ4_XS | 6.53 | 48.07 | 6.53 | 48.07 | 1 |
| DeepSeek-R1-0528-Qwen3-8B-IQ4_NL | 4.09 | 7.50 | 4.49 | 8.15 | 2 |
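For models with more than one run, the Median and Best columns are presumably aggregates over the per-run throughput samples. A minimal sketch of that aggregation, with hypothetical run data (not taken from the leaderboard's raw logs):

```python
from statistics import median

# Hypothetical tok/s samples from five benchmark runs of one model.
runs_tg = [68.85, 70.83, 68.10, 69.20, 67.50]

tg_median = median(runs_tg)  # middle value of the sorted samples
tg_best = max(runs_tg)       # fastest single run

print(tg_median, tg_best)    # -> 68.85 70.83
```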
## Head-to-Head Record

(316 head-to-head match rows, paginated 50 per page; table data not captured here.)
## Performance by App Version

(Chart; legend: Improved / Regressed. Chart data not captured here.)