# Leaderboard

On-device LLM performance rankings, powered by Glicko-2.
## Galaxy S23 Ultra

| Metric | Value |
|---|---|
| Android rank | #81 |
| Rating | 1,697 (±14 RD) |
| Win rate | 69.0% |
| Conservative rating | 1,669 |
| TG (token generation) rating | 1,705 |
| PP (prompt processing) rating | 1,609 |
| Matches | 1,366 |
| Record | 943W – 423L |
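The summary numbers above are internally consistent: matches equal wins plus losses, the win rate follows from the record, and the conservative rating matches the common convention of subtracting twice the rating deviation from the rating. A minimal sketch (the `rating - 2 * rd` definition is an assumption based on that convention, not taken from this page):

```python
# Re-derive the summary stats from the raw values shown above.
rating, rd = 1697, 14
wins, losses = 943, 423

matches = wins + losses            # 1366
win_rate = wins / matches          # 0.6903... -> 69.0%
# Common convention: conservative rating = rating - 2 * RD
conservative = rating - 2 * rd     # 1669

print(matches, f"{win_rate:.1%}", conservative)
```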
## Models Tested
| Model | TG median (tok/s) | PP median (tok/s) | TG best (tok/s) | PP best (tok/s) | Runs |
|---|---|---|---|---|---|
| code-smolLM2-135m-text-to-sql.IQ4_NL | 56.52 | 206.02 | 56.52 | 206.02 | 1 |
| autotrain-smollm2-135m-finetune-guanaco.Q4_K_M | 48.45 | 178.02 | 48.45 | 178.02 | 1 |
| baidu.ERNIE-4.5-0.3B-PT.Q4_K_M | 31.05 | 152.84 | 31.05 | 152.84 | 1 |
| qwen1_5-0_5b-chat-q2_k | 22.45 | 87.87 | 22.45 | 87.87 | 1 |
| LFM2.5-Audio-1.5B-Q4_0 | 22.31 | 400.52 | 22.31 | 400.52 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-Fully-Uncensored.i1-Q4_K_M | 20.89 | 46.34 | 20.89 | 46.34 | 1 |
| SmolLM2-1.7-Persona.i1-Q4_K_M | 19.69 | 32.14 | 19.69 | 32.14 | 1 |
| Llama-3.2-1B-Instruct-Q8_0 | 18.53 | 73.77 | 18.53 | 73.77 | 1 |
| gemma-3-270m-it-F16 | 17.24 | 108.61 | 17.24 | 108.61 | 1 |
| llama-3.2-1b-instruct-q8_0 | 16.57 | 67.60 | 21.32 | 77.72 | 6 |
| gemma-3-1b-it-Q3_K_M | 13.40 | 62.04 | 13.40 | 62.04 | 1 |
| gemma-3-1b-it.Q2_K | 13.31 | 68.90 | 13.31 | 68.90 | 1 |
| SmolLM2-1.7B-Instruct-Q8_0 | 12.52 | 128.86 | 13.54 | 222.56 | 2 |
| agentica-org_DeepScaleR-1.5B-Preview-IQ4_NL | 12.14 | 47.10 | 12.14 | 47.10 | 1 |
| qwen2.5-1.5b-instruct-q8_0 | 12.12 | 44.83 | 14.53 | 55.34 | 7 |
| Qwen2.5-1.5B-Instruct.IQ1_M | 11.32 | 18.38 | 11.32 | 18.38 | 1 |
| huihui-internvl3-2b-abliterated-q4_k_m | 11.21 | 38.02 | 11.21 | 38.02 | 1 |
| Llama-3.2-1B-Instruct.IQ1_M | 10.68 | 18.55 | 10.68 | 18.55 | 1 |
| Llama-3.2-3B-Instruct-abliterated.Q2_K | 8.28 | 14.47 | 8.28 | 14.47 | 1 |
| Gemmasutra-Mini-2B-v1-Q6_K | 7.91 | 18.76 | 8.71 | 23.32 | 4 |
| Llama-3.2-3B-Instruct-uncensored-Q8_0 | 7.60 | 21.33 | 7.60 | 21.33 | 1 |
| Josiefied-Qwen2.5-3B-Instruct-abliterated-v1.Q8_0 | 7.27 | 18.94 | 7.27 | 18.94 | 1 |
| Phi-3.5-mini-instruct.Q4_K_M | 6.98 | 13.42 | 10.35 | 16.25 | 5 |
| Llama-3.2-3B-Instruct-uncensored-Q3_K_XL | 6.97 | 13.35 | 6.97 | 13.35 | 1 |
| qwen2.5-3b-instruct-q5_k_m | 6.96 | 13.24 | 9.14 | 16.68 | 11 |
| Llama-3.2-3B-Instruct.Q5_K_M | 6.92 | 10.04 | 6.92 | 10.04 | 1 |
| gemma-2-2b-it-Q6_K | 6.76 | 20.01 | 8.96 | 34.54 | 6 |
| DeepSeek-R1-Distill-Llama-3B-Q5_K_M | 6.61 | 12.28 | 6.61 | 12.28 | 1 |
| medgemma-4b-it-Q4_K_M | 6.22 | 14.70 | 6.22 | 14.70 | 1 |
| Llama-3.2-3B-Instruct-Q6_K | 5.89 | 12.35 | 7.49 | 16.41 | 7 |
| mistral-7b-instruct-v0.2.Q6_K | 4.09 | 5.84 | 4.09 | 5.84 | 1 |
| Dirty-Muse-Writer-v01-Uncensored-Erotica-NSFW.i1-IQ4_XS | 3.85 | 5.58 | 3.85 | 5.58 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-f16 | 3.68 | 6.56 | 3.68 | 6.56 | 1 |
| Qwen3.5-0.8B-UD-Q8_K_XL | 2.96 | 11.56 | 2.96 | 11.56 | 1 |
| Qwen3.5-0.8B-Q8_0 | 2.60 | 28.08 | 2.60 | 28.08 | 1 |
| Qwen3.5-2B-Q8_0 | 2.48 | 33.70 | 2.48 | 33.70 | 1 |
| DeepSeek-R1-Distill-Qwen-7B-IQ4_NL | 2.42 | 11.83 | 2.42 | 11.83 | 1 |
| Qwen3.5-2B-Q4_0 | 2.16 | 24.15 | 2.16 | 24.15 | 1 |
| Qwen3.5-4B-Q8_0 | 1.54 | 18.55 | 1.54 | 18.55 | 1 |
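Each row collapses one or more benchmark runs into median and best throughput, which is why single-run models show identical median and best values. A sketch of that aggregation (field names are illustrative assumptions, not the leaderboard's actual schema):

```python
from statistics import median

def aggregate(runs: list[dict]) -> dict:
    """Collapse raw benchmark runs into the table's columns.

    Each run is {'tg': token-generation tok/s, 'pp': prompt-processing tok/s}.
    """
    return {
        "tg_median": median(r["tg"] for r in runs),
        "pp_median": median(r["pp"] for r in runs),
        "tg_best": max(r["tg"] for r in runs),
        "pp_best": max(r["pp"] for r in runs),
        "runs": len(runs),
    }

# A single-run model: median == best, as in most rows above.
print(aggregate([{"tg": 56.52, "pp": 206.02}]))
```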
## Head-to-Head Record

333 head-to-head matchup rows, paginated 50 per page across 7 pages.
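Ratings are updated from these pairwise matchups. Full Glicko-2 also tracks rating deviation and volatility; as a simplified sketch, the expected score between two rated opponents follows the same logistic curve Elo uses (this is the Elo form, not the complete Glicko-2 update):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Logistic expected score on the 400-point scale (Elo form).

    Glicko-2 additionally discounts this by the opponent's RD.
    """
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# A 1697-rated entrant vs. a hypothetical 1600-rated opponent:
print(round(expected_score(1697, 1600), 3))  # 0.636
```

Complementary scores always sum to 1, so the favored side's expected score above 0.5 mirrors the underdog's below it.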
## Performance by App Version

(Chart: per-app-version performance; legend: Improved / Regressed.)