Model Statistics

[Chart: Weekly Token Usage by Model]

Below is a list of the AI models available on CodingFleet, along with usage statistics and benchmark scores for the selected time period. The statistics are based on actual usage by CodingFleet users.

Legacy models still appear in the list but are grayed out and disabled; they are no longer available for use.

A model's cost is the number of credits required to use it in a single request. For Unlimited and Elite users, 1-cost models can be used without limit; higher-cost models are subject to the limits shown on the pricing page.
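
As a rough sketch of how these rules combine, the check below models them in Python. The `Model` class, the plan names as strings, and the `request_allowed` helper are illustrative assumptions, not CodingFleet's actual API.

```python
from dataclasses import dataclass

# Illustrative sketch of the credit rules described above; the Model
# class and helper functions are assumptions, not CodingFleet's API.
@dataclass
class Model:
    name: str
    cost: int             # credits consumed by a single request
    legacy: bool = False  # legacy models are disabled entirely

def is_premium(model: Model) -> bool:
    # Models with a cost of 2 or higher are considered premium.
    return model.cost >= 2

def request_allowed(model: Model, plan: str, credits_left: int) -> bool:
    if model.legacy:
        return False  # legacy models are no longer available
    if plan in ("Unlimited", "Elite") and model.cost == 1:
        return True   # 1-cost models are unlimited on these plans
    return credits_left >= model.cost  # otherwise, spend credits per request

print(request_allowed(Model("GPT-5 Mini", cost=1), "Elite", credits_left=0))   # True
print(request_allowed(Model("GPT-5 Pro", cost=70), "Elite", credits_left=50))  # False
print(is_premium(Model("Claude Opus 4.5 Thinking", cost=17)))                  # True
```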

Column definitions:

- Avg Speed (chars/s): the model's average output speed in characters per second, across all users utilizing the model.
- N° Tokens: the total number of tokens, prompt and completion combined, across all users utilizing the model.
- Cost: the credit cost of using the model in a single request. Models with a cost of 2 or higher are considered premium models.
- Vote Score: the average vote score given to the model by CodingFleet users, where -1 is the lowest score and 1 is the highest. It is displayed only once a model has 10 or more votes (see the sketch after this list).
- LiveBench Coding: the model's average LiveBench coding score (livebench.ai).
- LiveBench Avg: the model's average LiveBench overall score (livebench.ai).
- WebDev Arena: the model's score on WebDev Arena, an open-source benchmark evaluating AI capabilities in web development (https://web.lmarena.ai/leaderboard).
- –: an additional numeric column that is unlabeled in the source data.
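
For concreteness, here is a minimal sketch of the Vote Score display rule. The `vote_score` helper, the ±1 vote encoding, and the rendering as "-" are assumptions; CodingFleet's real implementation is not public.

```python
# Minimal sketch of the Vote Score rule: votes are assumed to be +1 or -1,
# and the mean is shown only once a model has 10 or more votes.
MIN_VOTES = 10

def vote_score(votes: list[int]) -> float | None:
    if len(votes) < MIN_VOTES:
        return None  # rendered as "-" in the table below
    return sum(votes) / len(votes)

print(vote_score([1, 1, -1, 1, 1, 1, -1, 1, 1, 1]))  # 0.6
print(vote_score([1, -1, 1]))                         # None (fewer than 10 votes)
```
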
| Model | Avg Speed (chars/s) | N° Tokens | – | Cost | Vote Score | LiveBench Coding | LiveBench Avg | WebDev Arena |
|---|---|---|---|---|---|---|---|---|
| Claude Sonnet 4.5 Thinking | 152.4 | 3.0B | 0 | 7 | 0.5 | 80.4 | 78.3 | 1397.0 |
| GPT-5 Mini | 146.7 | 1.7B | 0 | 1 | 0.5 | - | - | - |
| GPT-5.2 Thinking High | 102.0 | 1.4B | 0 | 6 | 0.7 | 76.1 | 73.6 | 1485.0 |
| Claude Opus 4.5 Thinking | 170.2 | 1.4B | 0 | 17 | - | 79.7 | 79.8 | 1518.0 |
| Claude 4 Sonnet Thinking | 149.0 | 1.2B | 0 | 7 | 0.7 | 73.6 | 72.1 | 1381.8 |
| GPT-5 Thinking High | 71.2 | 872.3M | 0 | 5 | 1.0 | 75.3 | 78.6 | 1480.5 |
| Claude Sonnet 4.5 | 188.3 | 778.8M | 0 | 6 | 0.2 | - | - | - |
| Claude Haiku 4.5 | 383.9 | 694.1M | 1 | 1 | - | - | - | - |
| DeepSeek V3.2 | 82.7 | 600.9M | 2 | 1 | 0.3 | 68.5 | 62.6 | - |
| Claude 4 Sonnet | 193.2 | 575.6M | 2 | 6 | 0.8 | 77.5 | 69.7 | 1381.8 |
| GPT-4.1 | 234.4 | 535.3M | 2 | 6 | 0.8 | 73.2 | 63.0 | 1256.5 |
| GPT-5 Thinking | 91.1 | 517.5M | 2 | 5 | - | 73.3 | 76.5 | - |
| Claude 3.5 Haiku | 205.5 | 510.2M | 5 | 1 | -0.1 | 53.2 | 45.0 | 1133.8 |
| GPT-4.1 mini | 215.9 | 409.1M | 3 | 1 | 0.1 | 72.1 | 59.1 | 1194.4 |
| GPT-5.1 Thinking High | 141.7 | 361.1M | 3 | 5 | - | 72.5 | 72.5 | 1395.0 |
| Gemini 2.5 Pro | 180.5 | 354.7M | 3 | 4 | 0.7 | 72.9 | 79.0 | 1409.1 |
| Gemini 2.5 Flash | 357.7 | 340.7M | 3 | 1 | 0.3 | 60.3 | 69.9 | 1304.9 |
| Claude Haiku 4.5 Thinking | 296.1 | 335.5M | 3 | 1 | - | 72.8 | 71.4 | - |
| Claude 3.7 Sonnet | 221.8 | 331.5M | 3 | 6 | 0.5 | 74.3 | 58.5 | 1356.7 |
| GPT-5.2 | 194.6 | 310.0M | 4 | 4 | - | - | - | - |
| Gemini 3 Pro | 172.0 | 309.8M | 4 | 4 | - | 74.6 | 79.7 | 1481.0 |
| GPT-5 Mini High | 78.4 | 284.2M | 4 | 2 | 1.0 | 66.4 | 72.2 | - |
| GPT-5.2 Thinking xHigh | 61.5 | 268.9M | 4 | 8 | - | - | - | - |
| Gemini 2.0 Flash | 524.3 | 267.8M | 1 | 1 | 0.2 | - | - | - |
| GPT-5.1 Thinking | 168.1 | 243.4M | 3 | 5 | 1.0 | - | - | 1395.0 |
| Claude 3.7 Sonnet Thinking | 161.7 | 229.4M | 4 | 7 | - | 73.2 | 67.4 | 1356.7 |
| GPT-5 | 201.4 | 186.5M | 5 | 3 | 1.0 | 72.5 | 75.3 | - |
| Gemini 3 Flash | 283.6 | 151.1M | 6 | 1 | - | 73.9 | 73.6 | 1465.0 |
| OpenAI o4-mini | 185.7 | 137.8M | 6 | 2 | -0.1 | - | - | 1095.1 |
| GPT-5.2 Thinking | 144.5 | 133.4M | 6 | 5 | - | - | - | - |
| GPT-4o | 256.5 | 125.6M | 15 | 5 | 0.0 | 69.3 | 54.0 | 964.0 |
| Claude 4.1 Opus Thinking | 112.5 | 122.4M | 5 | 60 | - | 74.0 | 73.5 | 1476.5 |
| GPT-4o Mini | 246.4 | 110.8M | 18 | 1 | -0.1 | 43.2 | 41.3 | - |
| OpenAI o3-high | 103.4 | 106.6M | 6 | 5 | 1.0 | 76.7 | 74.6 | 1188.3 |
| OpenAI o3 | 145.9 | 101.4M | 6 | 4 | 1.0 | 77.9 | 72.0 | 1188.1 |
| OpenAI o3-mini-high | 194.0 | 83.4M | 6 | 2 | - | 65.5 | 71.4 | 1136.2 |
| GPT-5.1 Codex Max High | 183.8 | 72.6M | 7 | 5 | - | 81.4 | 75.2 | - |
| Grok-4 Fast | 369.3 | 68.8M | 8 | 1 | - | - | - | - |
| Mistral Large 3 | 128.2 | 67.7M | 6 | 1 | 1.0 | 62.9 | 50.3 | - |
| OpenAI o4-mini-high | 123.0 | 67.0M | 7 | 2 | 0.5 | - | - | - |
| Claude 3.5 Sonnet | 207.1 | 64.3M | 31 | 6 | 0.3 | 32.3 | 50.8 | 1239.3 |
| OpenAI o3-mini | 269.2 | 59.2M | 7 | 2 | - | 58.4 | 67.2 | 1091.7 |
| DeepSeek V3.2 Thinking | 38.9 | 56.1M | 5 | 1 | 0.0 | 64.6 | 66.6 | - |
| GPT-5.2 Codex High | 90.6 | 55.6M | 7 | 6 | - | 83.6 | 74.3 | - |
| Gemini 2.0 Pro (Exp) | 343.4 | 42.3M | 7 | 2 | - | 35.3 | 61.6 | 1088.6 |
| GPT-5.1 | 257.0 | 41.9M | 8 | 3 | - | - | - | - |
| Claude 4 Opus Thinking | 125.9 | 35.2M | 10 | 60 | - | 73.3 | 72.9 | 1405.5 |
| Llama 3.3 70B | 552.2 | 35.1M | 8 | 1 | 0.5 | 24.1 | 45.7 | - |
| Llama 4 Maverick | 315.5 | 29.6M | 9 | 1 | - | 54.2 | 55.2 | 998.5 |
| Claude Opus 4.5 | 177.0 | 29.0M | 9 | 15 | - | 77.5 | 76.0 | 1484.0 |
| Grok-4.1 Fast | 149.9 | 24.5M | 9 | 1 | - | - | - | - |
| GPT-5.1 Codex Max | 225.4 | 22.9M | 9 | 4 | - | 81.4 | 75.2 | - |
| Qwen3 235B A22B | 166.8 | 21.6M | 9 | 1 | - | 66.4 | 64.9 | - |
| GPT-5.2 Codex xHigh | 108.6 | 17.8M | 12 | 8 | - | 83.6 | 74.3 | - |
| Llama 3.1 405B | 90.5 | 17.2M | 2 | 2 | - | 42.7 | 52.4 | 813.7 |
| Claude 4.1 Opus | 131.2 | 15.9M | 12 | 50 | - | - | - | - |
| Grok-4 | 132.8 | 15.2M | 12 | 5 | - | 71.3 | 72.1 | - |
| Claude 4 Opus | 140.3 | 13.6M | 13 | 50 | - | 72.9 | 71.5 | 1405.5 |
| GPT-5.2 Codex | 200.3 | 13.4M | 13 | 8 | - | - | - | - |
| Codestral (2508) | 816.2 | 13.4M | 5 | 1 | - | - | - | - |
| GLM 4.6 | 158.5 | 13.0M | 12 | 1 | - | - | - | - |
| GPT-5 Pro | 35.0 | 12.9M | 12 | 70 | - | 72.1 | 78.7 | - |
| GPT-5.1 Codex Mini High | 203.5 | 9.4M | 12 | 1 | - | 71.8 | 69.8 | 1248.0 |
| OpenAI o1 | 281.0 | 8.3M | 6 | 18 | - | - | - | 1045.2 |
| Grok-3 Mini (Beta) | 276.5 | 8.0M | 11 | 1 | - | 54.5 | 70.3 | - |
| DeepSeek R1 | 30.5 | 6.6M | 12 | 2 | - | - | - | - |
| Gemini 1.5 Pro | 141.8 | 5.0M | 28 | 2 | - | - | - | - |
| OpenAI o1 Mini | 259.8 | 4.9M | 25 | 2 | - | 48.1 | 57.8 | 1053.7 |
| Grok-3 (Beta) | 256.6 | 4.4M | 11 | 3 | - | 73.6 | 63.2 | - |
| GLM 4.7 | 94.7 | 3.5M | 11 | 1 | - | - | - | - |
| Gemini 2.0 Flash Thinking | 416.5 | 3.2M | 11 | 1 | - | 35.7 | 62.1 | 1030.1 |
| Qwen3 Coder | 280.7 | 3.0M | 12 | 2 | - | 73.2 | 60.5 | - |
| Kimi K2 Thinking | 143.6 | 2.9M | 12 | 1 | - | - | - | - |
| Mistral Medium 3 | 279.4 | 2.6M | 13 | 1 | - | 61.5 | 56.6 | 1160.1 |
| GPT-5.1 Codex Mini | 243.4 | 2.2M | 13 | 1 | - | 71.8 | 69.8 | 1248.0 |
| Qwen2.5-Coder 32B | 293.2 | 1.7M | 3 | 1 | - | 56.9 | 46.2 | 904.1 |
| Kimi K2 (0905) | 197.4 | 1.6M | 12 | 1 | - | 71.8 | 62.7 | - |
| Grok-2 | 226.9 | 1.3M | 8 | 2 | - | 26.1 | 48.1 | - |
| Llama 3.2 90B | 183.2 | 436.0K | 15 | 1 | - | - | - | - |
| Gemini 1.5 Flash | 463.9 | 163.0K | 47 | 1 | - | - | - | - |
| DeepSeek V3 | 68.0 | 90.4K | 2 | 1 | - | - | - | - |
| GPT-4 Turbo | 126.5 | 54.1K | 32 | 10 | - | - | - | - |
| Claude 3 Haiku | 448.5 | 52.9K | 54 | 1 | - | - | - | - |
| GPT-3.5 Turbo | 85.0 | 58 | 53 | 1 | - | - | - | - |
| OpenAI o1 Preview | 0.0 | 0 | 18 | 3 | - | - | - | - |
| Llama 3.1 70B | 0.0 | 0 | 23 | 1 | - | 33.5 | 44.9 | - |
| GPT-4 | 0.0 | 0 | 49 | 3 | - | - | - | - |
| Claude 3 Sonnet | 0.0 | 0 | 33 | 2 | - | - | - | - |
| Claude 3 Opus | 0.0 | 0 | 12 | 3 | - | - | - | - |