Model Statistics

[Chart: Weekly Token Usage by Model]

Below is a list of the AI models available on CodingFleet, along with statistics and benchmark scores for the selected time period. The statistics reflect actual usage by CodingFleet users.

Legacy models still appear in the list but are grayed out and disabled; they are no longer available.

A model's cost is the number of credits required to use it in a single request. For Unlimited and Elite users, usage of 1-cost models is unlimited, while usage of higher-cost models is limited as shown on the pricing page.
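As a rough illustration of that rule, here is a minimal sketch of per-request credit accounting. The plan names Unlimited and Elite come from the paragraph above; the function and any other plan names are hypothetical, not CodingFleet's actual implementation.

```python
def credits_charged(model_cost: int, plan: str) -> int:
    """Credits deducted for a single request (hypothetical sketch).

    Per the rule above: on Unlimited and Elite plans, 1-cost models
    can be used without limit; any other request deducts the model's
    credit cost.
    """
    if plan in ("Unlimited", "Elite") and model_cost == 1:
        return 0
    return model_cost

# GPT-5 (cost 4) deducts 4 credits per request on any plan;
# Gemini 2.5 Flash (cost 1) is unmetered for Unlimited/Elite users.
print(credits_charged(4, "Elite"))   # 4
print(credits_charged(1, "Elite"))   # 0
print(credits_charged(1, "Basic"))   # 1 ("Basic" is a hypothetical plan name)
```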

Each model in the table below is listed with the following columns:

- Avg Speed (chars/s): the model's average generation speed in characters per second, across all users of the model.
- N° Tokens: the total number of tokens in prompts and completions combined, across all users of the model.
- Cost: the credit cost of using the model in a single request. Models with a cost of 2 or higher are considered premium models.
- Vote Score: the average vote given to the model by CodingFleet users, from -1 (lowest) to 1 (highest), displayed only if the model has 10 or more votes (see the sketch after the table).
- LiveBench Coding: the model's average LiveBench coding score (livebench.ai).
- LiveBench Avg: the model's average LiveBench overall score (livebench.ai).
- WebDev Arena: the model's score on WebDev Arena, an open-source benchmark evaluating AI capabilities in web development (https://web.lmarena.ai/leaderboard).
- ?: an unlabeled numeric column whose header is missing from the source.

| Model | Avg Speed (chars/s) | N° Tokens | ? | Cost | Vote Score | LiveBench Coding | LiveBench Avg | WebDev Arena |
|---|---|---|---|---|---|---|---|---|
| Claude 3 Haiku | 435.4 | 8.6M | 31 | 1 | - | - | - | - |
| GPT-5 | 201.4 | 168.2M | 3 | 4 | 1.0 | 72.5 | 75.3 | - |
| GPT-5 Mini High | 76.4 | 210.9M | 2 | 2 | 1.0 | 66.4 | 72.2 | - |
| Claude Haiku 4.5 | 381.1 | 127.1M | 3 | 2 | - | - | - | - |
| Claude 4.1 Opus Thinking | 115.9 | 74.6M | 5 | 20 | - | 74.0 | 73.5 | 1476.5 |
| OpenAI o3-high | 103.0 | 106.6M | 5 | 5 | 1.0 | 76.7 | 74.6 | 1188.3 |
| Gemini 2.0 Pro (Exp) | 343.4 | 42.3M | 6 | 2 | - | 35.3 | 61.6 | 1088.6 |
| Claude 4 Opus Thinking | 126.3 | 35.2M | 8 | 20 | - | 73.3 | 72.9 | 1405.5 |
| Qwen2.5-Coder 32B | 293.5 | 4.5M | 5 | 1 | - | 56.9 | 46.2 | 904.1 |
| GPT-5 Thinking High | 68.3 | 677.3M | 0 | 6 | 1.0 | 75.3 | 78.6 | 1480.5 |
| OpenAI o3 | 145.9 | 101.4M | 5 | 4 | 1.0 | 77.9 | 72.0 | 1188.1 |
| Qwen3 Coder | 259.9 | 2.6M | 5 | 2 | - | 73.2 | 60.5 | - |
| Claude 4 Sonnet | 192.9 | 572.4M | 1 | 6 | 0.8 | 77.5 | 69.7 | 1381.8 |
| GPT-4.1 | 243.4 | 403.1M | 3 | 4 | 0.8 | 73.2 | 63.0 | 1256.5 |
| GPT-5 Mini | 147.6 | 468.1M | 3 | 1 | 0.7 | - | - | - |
| Gemini 2.5 Pro | 178.7 | 312.3M | 2 | 4 | 0.7 | 72.9 | 79.0 | 1409.1 |
| Llama 3.3 70B | 565.6 | 41.4M | 8 | 1 | 0.6 | 24.1 | 45.7 | - |
| Claude Sonnet 4.5 Thinking | 169.1 | 814.6M | 0 | 7 | 0.6 | 80.4 | 78.3 | 1386.1 |
| GPT-5 Thinking | 93.9 | 478.6M | 2 | 5 | - | 73.3 | 76.5 | - |
| Claude 4 Sonnet Thinking | 157.9 | 1.2B | 0 | 6 | 0.6 | 73.6 | 72.1 | 1381.8 |
| Claude 3.7 Sonnet Thinking | 161.6 | 229.4M | 1 | 6 | - | 73.2 | 67.4 | 1356.7 |
| Claude 3.7 Sonnet | 221.8 | 331.5M | 3 | 6 | 0.5 | 74.3 | 58.5 | 1356.7 |
| OpenAI o4-mini-high | 122.9 | 67.0M | 6 | 2 | 0.5 | 80.0 | 71.5 | - |
| OpenAI o1 Mini | 403.7 | 66.3M | 4 | 2 | 0.5 | 48.1 | 57.8 | 1053.7 |
| Llama 3.1 70B | 424.0 | 14.3M | 7 | 1 | 0.3 | 33.5 | 44.9 | - |
| Llama 3.1 405B | 103.4 | 26.2M | 2 | 2 | - | 42.7 | 52.4 | 813.7 |
| OpenAI o3-mini | 269.2 | 59.2M | 5 | 2 | - | 58.4 | 67.2 | 1091.7 |
| Llama 3.2 90B | 179.9 | 8.9M | 2 | 1 | - | - | - | - |
| Gemini 2.5 Flash | 354.2 | 242.6M | 0 | 1 | 0.3 | 60.3 | 69.9 | 1304.9 |
| DeepSeek V3 | 80.0 | 523.6M | 1 | 1 | 0.3 | - | - | - |
| DeepSeek V3.2 Exp | 80.0 | 520.5M | 1 | 1 | 0.3 | 68.5 | 62.6 | - |
| Llama 4 Maverick | 316.2 | 26.4M | 8 | 1 | - | 54.2 | 55.2 | 998.5 |
| Claude Sonnet 4.5 | 194.3 | 289.8M | 1 | 6 | 0.2 | - | - | - |
| Gemini 2.0 Flash | 511.3 | 326.2M | 3 | 1 | 0.2 | - | - | - |
| GPT-4.1 mini | 216.1 | 369.1M | 3 | 1 | 0.1 | 72.1 | 59.1 | 1194.4 |
| Claude 3.5 Sonnet | 216.4 | 252.8M | 13 | 6 | 0.1 | 32.3 | 50.8 | 1239.3 |
| DeepSeek V3.2 Exp Thinking | 40.5 | 57.6M | 5 | 2 | 0.0 | 70.3 | 70.8 | - |
| Mistral Medium 3 | 273.6 | 1.3M | 5 | 1 | - | 61.5 | 56.6 | 1160.1 |
| Kimi K2 (0905) | 172.2 | 1.5M | 5 | 1 | - | 71.8 | 62.7 | - |
| OpenAI o1 | 264.7 | 14.3M | 10 | 18 | - | - | - | 1045.2 |
| Mistral Large 2 | 154.9 | 7.5M | 4 | 3 | - | 62.9 | 50.3 | - |
| GPT-5 Pro | 23.4 | 111.0K | 4 | 40 | - | 72.1 | 78.7 | - |
| GPT-3.5 Turbo | 85.0 | 58 | 44 | 1 | - | - | - | - |
| GLM 4.6 | 149.0 | 11.6M | 7 | 1 | - | - | - | - |
| Kimi K2 Thinking | 136.6 | 168.5K | 4 | 1 | - | - | - | - |
| Claude 4 Opus | 140.0 | 13.6M | 9 | 18 | - | 72.9 | 71.5 | 1405.5 |
| Claude 3 Opus | 0.0 | 0 | 9 | 3 | - | - | - | - |
| GPT-4 | 0.0 | 0 | 39 | 3 | - | - | - | - |
| Qwen3 235B A22B | 169.9 | 4.6M | 5 | 1 | - | 66.4 | 64.9 | - |
| Gemini 1.5 Flash | 396.0 | 12.6M | 22 | 1 | - | - | - | - |
| OpenAI o1 Preview | 198.1 | 5.2M | 5 | 3 | - | - | - | - |
| Gemini 2.0 Flash Thinking | 416.5 | 3.2M | 5 | 1 | - | 35.7 | 62.1 | 1030.1 |
| Claude 4.1 Opus | 125.2 | 13.8M | 10 | 18 | - | - | - | - |
| GPT-4 Turbo | 131.4 | 492.2K | 24 | 10 | - | - | - | - |
| Grok-4 Fast | 404.7 | 11.4M | 7 | 1 | - | - | - | - |
| Grok-3 (Beta) | 256.3 | 4.4M | 5 | 3 | - | 73.6 | 63.2 | - |
| Claude Haiku 4.5 Thinking | 262.6 | 39.9M | 8 | 2 | - | 72.8 | 71.4 | - |
| Claude 3 Sonnet | 0.0 | 0 | 27 | 2 | - | - | - | - |
| GPT-4o Mini | 259.0 | 311.5M | 7 | 1 | 0.0 | 43.2 | 41.3 | - |
| Claude 3.5 Haiku | 204.9 | 715.0M | 0 | 1 | 0.0 | 53.2 | 45.0 | 1133.8 |
| DeepSeek R1 | 39.7 | 64.2M | 5 | 1 | -0.1 | 71.4 | 70.1 | 1408.8 |
| GPT-4o | 255.4 | 321.9M | 3 | 5 | -0.1 | 69.3 | 54.0 | 964.0 |
| OpenAI o4-mini | 185.7 | 137.8M | 3 | 2 | -0.1 | 74.2 | 66.9 | 1095.1 |
| Codestral (2508) | 592.1 | 13.7M | 7 | 1 | - | - | - | - |
| OpenAI o3-mini-high | 194.0 | 83.4M | 5 | 2 | - | 65.5 | 71.4 | 1136.2 |
| Grok-2 | 231.5 | 2.8M | 5 | 2 | - | 26.1 | 48.1 | - |
| Grok-4 | 141.3 | 11.2M | 7 | 5 | - | 71.3 | 72.1 | - |
| Gemini 1.5 Pro | 155.4 | 13.0M | 16 | 2 | - | - | - | - |
| Grok-3 Mini (Beta) | 276.5 | 8.0M | 6 | 1 | - | 54.5 | 70.3 | - |
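The Vote Score column follows the display rule from the legend: each vote is between -1 and 1, and the average is shown only once a model has collected at least 10 votes. Below is a minimal sketch of that rule; the function name and rounding are hypothetical, only the range and threshold come from the legend.

```python
from typing import Optional

MIN_VOTES_TO_DISPLAY = 10  # threshold stated in the column legend

def vote_score(votes: list[int]) -> Optional[float]:
    """Average of user votes, each in [-1, 1].

    Returns None below the 10-vote threshold, which the table
    renders as "-".
    """
    if len(votes) < MIN_VOTES_TO_DISPLAY:
        return None
    return round(sum(votes) / len(votes), 1)

print(vote_score([1] * 9 + [-1] * 3))  # 0.5 (12 votes, displayed)
print(vote_score([1] * 8))             # None (only 8 votes, hidden)
```

Under this rule, a "-" in the Vote Score column indicates fewer than 10 votes, not necessarily a low score.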