Model Statistics

[Chart: Weekly Token Usage by Model]
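
The chart aggregates each model's tokens into calendar-week buckets. As a minimal sketch of that kind of aggregation (not CodingFleet's actual pipeline; the log format and function name here are hypothetical, and Python 3.9+ is assumed for `isocalendar()` attribute access):

```python
from collections import defaultdict
from datetime import date

# Hypothetical per-request usage log: (model, ISO date, total tokens).
REQUESTS = [
    ("GPT-5.1", "2025-11-17", 4_200),
    ("GPT-5.1", "2025-11-21", 1_900),
    ("Claude Opus 4.5", "2025-11-25", 7_300),
]

def weekly_token_totals(requests):
    """Sum tokens per (model, ISO year-week) bucket."""
    totals = defaultdict(int)
    for model, day, tokens in requests:
        iso = date.fromisoformat(day).isocalendar()
        totals[(model, f"{iso.year}-W{iso.week:02d}")] += tokens
    return dict(totals)

for (model, week), tokens in sorted(weekly_token_totals(REQUESTS).items()):
    print(f"{week}  {model}: {tokens:,} tokens")
```

Here the two GPT-5.1 requests fall in the same ISO week (2025-W47) and merge into a single 6,100-token bucket, which is exactly the grouping a weekly usage chart plots.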

Below is a list of the AI models available on CodingFleet, along with statistics and benchmarks for the selected time period. The statistics are based on actual usage by CodingFleet users.

Legacy models are shown in the list but are grayed out and disabled. These models are no longer available.

The cost of a model is the number of credits required to use it in a single request. For Unlimited and Elite users, usage of 1-cost models is unlimited, while other models are limited as shown on the pricing page.
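
To make the credit rule concrete, here is a minimal sketch assuming an Unlimited or Elite plan; the per-model costs come from the table below, while the function and request list are purely illustrative:

```python
# Credit cost per request, as listed in the table below.
MODEL_COST = {
    "GPT-4o Mini": 1,        # 1-cost: unlimited for Unlimited/Elite users
    "Claude Sonnet 4.5": 6,  # premium (cost >= 2)
    "GPT-5 Pro": 70,         # premium (cost >= 2)
}

def credits_charged(model: str, unlimited_plan: bool = True) -> int:
    """Credits deducted for one request under the rule described above."""
    cost = MODEL_COST[model]
    if unlimited_plan and cost == 1:
        return 0  # 1-cost models draw no credits on Unlimited/Elite plans
    return cost

requests = ["GPT-4o Mini", "Claude Sonnet 4.5", "GPT-5 Pro"]
print(sum(credits_charged(m) for m in requests))  # 76: only the premium requests count
```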

How to read the table:

- Avg Speed (chars/s): the average speed of the model in characters per second, across all users utilizing this model.
- N° Tokens: the total number of tokens in the prompt and the completion combined, across all users utilizing this model.
- Cost: the credit cost of utilizing this model in a request. Models with a cost of 2 or higher are considered premium models.
- Vote Score: the average vote score given to this model by CodingFleet users, where -1 is the lowest score and 1 is the highest. Displayed only if there are 10 or more votes for the model.
- LiveBench Coding: the average LiveBench coding score (livebench.ai).
- LiveBench Avg: the average LiveBench overall score (livebench.ai).
- WebDev Arena: the score on WebDev Arena, an open-source benchmark evaluating AI capabilities in web development (https://web.lmarena.ai/leaderboard).

| Model | Avg Speed (chars/s) | N° Tokens | | Cost | Vote Score | LiveBench Coding | LiveBench Avg | WebDev Arena |
|---|---|---|---|---|---|---|---|---|
| Claude 4 Opus | 140.3 | 13.6M | 11 | 45 | - | 72.9 | 71.5 | 1405.5 |
| Gemini 2.0 Flash Thinking | 416.5 | 3.2M | 10 | 1 | - | 35.7 | 62.1 | 1030.1 |
| Grok-3 (Beta) | 256.6 | 4.4M | 11 | 3 | - | 73.6 | 63.2 | - |
| Claude Opus 4.5 | 188.2 | 10.0M | 9 | 13 | - | 77.5 | 76.0 | 1479.0 |
| Grok-4.1 Fast | 104.7 | 5.3M | 10 | 1 | - | - | - | - |
| OpenAI o1 | 264.7 | 14.3M | 11 | 18 | - | - | - | 1045.2 |
| Kimi K2 (0905) | 185.8 | 1.5M | 8 | 1 | - | 71.8 | 62.7 | - |
| Claude 3 Haiku | 436.7 | 1.2M | 48 | 1 | - | - | - | - |
| GPT-5 Pro | 35.0 | 12.9M | 11 | 70 | - | 72.1 | 78.7 | - |
| GLM 4.6 | 156.5 | 12.6M | 11 | 1 | - | - | - | - |
| OpenAI o1 Preview | 211.5 | 2.8M | 9 | 3 | - | - | - | - |
| Claude 4.1 Opus | 131.0 | 15.9M | 11 | 50 | - | - | - | - |
| Mistral Medium 3 | 239.0 | 2.2M | 8 | 1 | - | 61.5 | 56.6 | 1160.1 |
| GPT-5.1 Codex Max Medium | 281.6 | 1.7M | 8 | 5 | - | 81.4 | 75.2 | - |
| Gemini 1.5 Flash | 409.0 | 997.4K | 44 | 1 | - | - | - | - |
| GPT-4 Turbo | 118.7 | 279.0K | 32 | 10 | - | - | - | - |
| Qwen3 235B A22B | 170.9 | 6.2M | 10 | 1 | - | 66.4 | 64.9 | - |
| DeepSeek V3 | 79.8 | 3.1M | 10 | 1 | - | - | - | - |
| Claude 3 Sonnet | 0.0 | 0 | 32 | 2 | - | - | - | - |
| GPT-4 | 0.0 | 0 | 47 | 3 | - | - | - | - |
| Claude 3 Opus | 0.0 | 0 | 11 | 3 | - | - | - | - |
| Kimi K2 Thinking | 124.5 | 582.6K | 5 | 1 | - | - | - | - |
| GPT-3.5 Turbo | 85.0 | 58 | 50 | 1 | - | - | - | - |
| Gemini 1.5 Pro | 148.6 | 11.0M | 20 | 2 | - | - | - | - |
| Qwen3 Coder | 260.2 | 2.7M | 9 | 2 | - | 73.2 | 60.5 | - |
| Grok-3 Mini (Beta) | 276.5 | 8.0M | 10 | 1 | - | 54.5 | 70.3 | - |
| GPT-5.1 | 288.6 | 41.5M | 9 | 4 | - | - | - | - |
| Claude 4.1 Opus Thinking | 118.5 | 122.4M | 3 | 60 | - | 74.0 | 73.5 | 1476.5 |
| Mistral Large 3 | 167.7 | 7.6M | 8 | 1 | - | 62.9 | 50.3 | - |
| Gemini 2.0 Pro (Exp) | 343.4 | 42.3M | 6 | 2 | - | 35.3 | 61.6 | 1088.6 |
| Grok-2 | 231.5 | 2.8M | 9 | 2 | - | 26.1 | 48.1 | - |
| Gemini 3 Pro | 176.2 | 84.4M | 5 | 5 | - | 74.6 | 79.7 | 1473.0 |
| Llama 3.1 405B | 98.6 | 19.0M | 1 | 2 | - | 42.7 | 52.4 | 813.7 |
| Llama 3.2 90B | 196.3 | 2.5M | 13 | 1 | - | - | - | - |
| Claude Haiku 4.5 | 392.0 | 344.7M | 3 | 2 | - | - | - | - |
| Claude Haiku 4.5 Thinking | 272.8 | 64.4M | 6 | 2 | - | 72.8 | 71.4 | - |
| Codestral (2508) | 676.3 | 12.4M | 4 | 1 | - | - | - | - |
| Claude Opus 4.5 Thinking | 190.5 | 233.1M | 3 | 15 | - | 79.7 | 79.8 | 1493.0 |
| Grok-4 | 136.5 | 14.0M | 11 | 5 | - | 71.3 | 72.1 | - |
| GPT-5.1 Codex Max High | 186.9 | 39.6M | 8 | 7 | - | 81.4 | 75.2 | - |
| Grok-4 Fast | 385.4 | 49.4M | 6 | 1 | - | - | - | - |
| Claude 4 Opus Thinking | 125.9 | 35.2M | 8 | 50 | - | 73.3 | 72.9 | 1405.5 |
| GPT-5.1 Thinking High | 140.0 | 294.3M | 3 | 7 | - | 72.5 | 72.5 | 1395.0 |
| OpenAI o3-mini | 269.2 | 59.2M | 5 | 2 | - | 58.4 | 67.2 | 1091.7 |
| OpenAI o3-mini-high | 194.0 | 83.4M | 5 | 2 | - | 65.5 | 71.4 | 1136.2 |
| DeepSeek R1 | 30.5 | 6.6M | 10 | 2 | - | - | - | - |
| GPT-5 Thinking | 91.5 | 517.5M | 1 | 5 | - | 73.3 | 76.5 | - |
| Qwen2.5-Coder 32B | 308.3 | 3.5M | 9 | 1 | - | 56.9 | 46.2 | 904.1 |
| Claude 3.7 Sonnet Thinking | 161.6 | 229.4M | 3 | 6 | - | 73.2 | 67.4 | 1356.7 |
| Llama 3.1 70B | 651.2 | 727.3K | 20 | 1 | - | 33.5 | 44.9 | - |
| Llama 4 Maverick | 315.5 | 29.6M | 8 | 1 | - | 54.2 | 55.2 | 998.5 |
| GPT-4.1 | 237.8 | 469.9M | 2 | 4 | 0.8 | 73.2 | 63.0 | 1256.5 |
| OpenAI o4-mini-high | 122.9 | 67.0M | 6 | 2 | 0.5 | 80.0 | 71.5 | - |
| OpenAI o3 | 145.9 | 101.4M | 5 | 4 | 1.0 | 77.9 | 72.0 | 1188.1 |
| GPT-5.1 Thinking | 171.7 | 111.4M | 4 | 5 | 1.0 | - | - | 1395.0 |
| Llama 3.3 70B | 566.0 | 41.2M | 7 | 1 | 0.6 | 24.1 | 45.7 | - |
| DeepSeek V3.2 Thinking | 41.4 | 59.5M | 7 | 1 | 0.0 | 64.6 | 66.6 | - |
| OpenAI o3-high | 103.0 | 106.6M | 5 | 5 | 1.0 | 76.7 | 74.6 | 1188.3 |
| OpenAI o4-mini | 185.7 | 137.8M | 3 | 2 | -0.1 | 74.2 | 66.9 | 1095.1 |
| Claude 4 Sonnet | 193.3 | 575.6M | 1 | 6 | 0.8 | 77.5 | 69.7 | 1381.8 |
| Gemini 2.5 Flash | 356.7 | 319.0M | 3 | 1 | 0.3 | 60.3 | 69.9 | 1304.9 |
| Claude Sonnet 4.5 | 189.4 | 590.1M | 1 | 6 | 0.3 | - | - | - |
| GPT-5 Mini High | 77.9 | 273.8M | 3 | 2 | 1.0 | 66.4 | 72.2 | - |
| GPT-4.1 mini | 216.1 | 374.1M | 3 | 1 | 0.1 | 72.1 | 59.1 | 1194.4 |
| GPT-5 Mini | 146.1 | 784.9M | 0 | 1 | 0.6 | - | - | - |
| DeepSeek V3.2 | 80.8 | 551.2M | 1 | 1 | 0.3 | 68.5 | 62.6 | - |
| GPT-5 | 201.6 | 186.5M | 2 | 4 | 1.0 | 72.5 | 75.3 | - |
| Claude 3.7 Sonnet | 221.8 | 331.5M | 3 | 6 | 0.5 | 74.3 | 58.5 | 1356.7 |
| Claude 3.5 Sonnet | 223.5 | 208.5M | 17 | 6 | 0.0 | 32.3 | 50.8 | 1239.3 |
| Claude Sonnet 4.5 Thinking | 161.4 | 1.3B | 0 | 7 | 0.6 | 80.4 | 78.3 | 1397.0 |
| Gemini 2.0 Flash | 511.2 | 326.2M | 3 | 1 | 0.2 | - | - | - |
| OpenAI o1 Mini | 374.5 | 50.4M | 1 | 2 | 0.5 | 48.1 | 57.8 | 1053.7 |
| GPT-5 Thinking High | 71.1 | 871.7M | 0 | 6 | 1.0 | 75.3 | 78.6 | 1480.5 |
| Claude 4 Sonnet Thinking | 157.8 | 1.2B | 0 | 6 | 0.6 | 73.6 | 72.1 | 1381.8 |
| GPT-4o | 256.3 | 265.2M | 6 | 5 | -0.2 | 69.3 | 54.0 | 964.0 |
| GPT-4o Mini | 246.5 | 215.7M | 11 | 1 | -0.1 | 43.2 | 41.3 | - |
| Gemini 2.5 Pro | 180.4 | 335.2M | 3 | 4 | 0.7 | 72.9 | 79.0 | 1409.1 |
| Claude 3.5 Haiku | 205.4 | 626.3M | 0 | 1 | 0.1 | 53.2 | 45.0 | 1133.8 |
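
For concreteness, a minimal sketch of the Vote Score display rule from the legend above, assuming each vote is recorded as +1 or -1 (the page does not specify the underlying vote values):

```python
from typing import Optional

def vote_score(votes: list[int], min_votes: int = 10) -> Optional[float]:
    """Mean vote in [-1, 1]; hidden (None, shown as '-') under 10 votes."""
    if len(votes) < min_votes:
        return None
    return round(sum(votes) / len(votes), 1)

print(vote_score([1] * 9))          # None: fewer than 10 votes, table shows "-"
print(vote_score([1] * 11 + [-1]))  # 0.8: e.g. GPT-4.1's displayed score
```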