Model Statistics

[Chart: Weekly Token Usage by Model]

Below is a list of the AI models available on CodingFleet, along with statistics and benchmarks for the selected time period. The statistics are based on actual usage by CodingFleet users.

Legacy models are shown in the list but are grayed out and disabled. These models are no longer available.

A model's cost is the number of credits required to use it in a single request. For Unlimited and Elite users, usage of 1-cost models is unlimited; other models are limited as shown on the pricing page.
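
To make the credit arithmetic concrete, here is a minimal sketch of those rules in Python. The function names and plan-handling logic are illustrative assumptions, not CodingFleet's actual API; the per-model costs are read from the Cost column in the table below.

```python
# Illustrative sketch of the credit rules described above; not
# CodingFleet's actual implementation.

# Credits charged per request (values from the Cost column below).
MODEL_COST = {
    "DeepSeek R1": 1,     # 1-cost: unlimited on Unlimited/Elite plans
    "GPT-4.1": 3,
    "Gemini 2.5 Pro": 4,
    "Claude 4 Opus": 17,
}

PREMIUM_THRESHOLD = 2     # a cost of 2 or higher counts as premium

def is_premium(model: str) -> bool:
    return MODEL_COST[model] >= PREMIUM_THRESHOLD

def credits_for(model: str, n_requests: int) -> int:
    """Total credits consumed by n_requests single requests to `model`."""
    return MODEL_COST[model] * n_requests

def counts_against_quota(model: str, plan: str) -> bool:
    """1-cost models are unlimited for Unlimited and Elite users;
    everything else draws down the plan's credit allowance."""
    return not (plan in ("Unlimited", "Elite") and MODEL_COST[model] == 1)

# Example: 10 requests to Gemini 2.5 Pro consume 4 * 10 = 40 credits.
assert credits_for("Gemini 2.5 Pro", 10) == 40
assert not counts_against_quota("DeepSeek R1", "Elite")
assert is_premium("Claude 4 Opus")
```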

The columns in the table below are defined as follows:

- Avg Speed (chars/s): the average speed of the model in characters per second, across all users utilizing this model.
- N° Tokens: the total number of tokens in the prompt and the completion combined, across all users utilizing this model.
- Cost: the credit cost of utilizing this model in a request. Models with a cost of 2 or higher are considered premium models.
- Vote Score: the average vote score given to this model by CodingFleet users, where -1 is the lowest score and 1 is the highest. Displayed only if there are 10 or more votes for the model.
- LiveBench Coding: the average LiveBench coding score (livebench.ai).
- LiveBench Avg: the average overall LiveBench score (livebench.ai).
- WebDev Arena: the model's score on WebDev Arena, an open-source benchmark evaluating AI capabilities in web development (https://web.lmarena.ai/leaderboard).

| Model | Avg Speed (chars/s) | N° Tokens | ? | Cost | Vote Score | LiveBench Coding | LiveBench Avg | WebDev Arena |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Gemini 2.5 Pro | 178.28 | 50.25M | 3 | 4 | 1.00 | 73.58 | 72.09 | 1433.16 |
| DeepSeek R1 | 42.30 | 42.33M | 3 | 1 | 0.23 | 71.40 | 70.10 | 1408.84 |
| Claude 4 Opus | 145.42 | 7.53M | 3 | 17 | - | 72.87 | 71.52 | 1405.51 |
| Claude 4 Opus Thinking | 129.01 | 21.45M | 4 | 20 | - | 73.25 | 72.93 | 1405.51 |
| Claude 4 Sonnet Thinking | 150.84 | 153.78M | 3 | 6 | 0.75 | 73.58 | 72.08 | 1381.76 |
| Claude 4 Sonnet | 186.34 | 147.51M | 3 | 5 | - | 77.54 | 69.65 | 1381.76 |
| Claude 3.7 Sonnet Thinking | 161.25 | 162.90M | 3 | 6 | - | 73.19 | 67.43 | 1356.70 |
| Claude 3.7 Sonnet | 222.39 | 261.27M | 1 | 5 | 0.55 | 74.28 | 58.48 | 1356.70 |
| Gemini 2.5 Flash | 287.52 | 4.25M | 4 | 1 | - | 60.33 | 69.93 | 1304.86 |
| GPT-4.1 | 250.32 | 253.04M | 1 | 3 | 1.00 | 73.19 | 62.99 | 1256.52 |
| Claude 3.5 Sonnet | 225.11 | 503.02M | 1 | 2 | 0.04 | 32.29 | 50.81 | 1239.33 |
| DeepSeek V3 | 79.50 | 157.54M | 3 | 1 | 0.36 | 68.91 | 62.82 | 1206.74 |
| GPT-4.1 mini | 216.47 | 188.96M | 2 | 1 | 0.15 | 72.11 | 59.05 | 1194.40 |
| OpenAI o3-high | 101.04 | 61.60M | 2 | 5 | 1.00 | 76.71 | 74.61 | 1188.34 |
| OpenAI o3 | 155.71 | 60.06M | 2 | 3 | 1.00 | 77.86 | 71.98 | 1188.12 |
| Mistral Medium 3 | 241.88 | 347.62K | 2 | 1 | - | 61.48 | 56.59 | 1160.10 |
| OpenAI o3-mini-high | 193.72 | 83.36M | 2 | 2 | - | 65.48 | 71.37 | 1136.22 |
| Claude 3.5 Haiku | 203.78 | 404.88M | 0 | 1 | 0.02 | 53.17 | 44.98 | 1133.77 |
| OpenAI o4-mini | 189.72 | 110.39M | 3 | 2 | 0.08 | 74.22 | 66.87 | 1095.11 |
| OpenAI o3-mini | 269.28 | 59.17M | 2 | 2 | - | 58.43 | 67.16 | 1091.71 |
| Gemini 2.0 Pro (Exp) | 343.12 | 42.26M | 3 | 2 | - | 35.33 | 61.59 | 1088.60 |
| OpenAI o1 Mini | 395.93 | 73.35M | 2 | 2 | 0.46 | 48.05 | 57.76 | 1053.69 |
| OpenAI o1 | 264.68 | 14.28M | 3 | 5 | - | - | - | 1045.23 |
| Gemini 2.0 Flash | 509.71 | 302.61M | 0 | 1 | 0.18 | 64.74 | 60.05 | 1035.19 |
| Gemini 2.0 Flash Thinking | 416.52 | 3.18M | 4 | 1 | - | 35.71 | 62.05 | 1030.05 |
| Llama 4 Maverick | 318.23 | 22.05M | 4 | 1 | 0.33 | 54.19 | 55.19 | 998.51 |
| GPT-4o | 227.08 | 423.73M | 0 | 3 | -0.21 | 69.29 | 53.95 | 964.00 |
| Qwen2.5-Coder 32B | 292.62 | 4.37M | 4 | 1 | - | 56.85 | 46.23 | 904.14 |
| Llama 3.1 405B | 113.57 | 41.88M | 3 | 2 | 0.00 | 42.65 | 52.36 | 813.69 |
| Claude 3 Opus | 100.80 | 2.05M | 5 | 3 | -0.77 | - | - | - |
| Gemini 1.5 Pro | 161.00 | 103.61M | 3 | 2 | - | - | - | - |
| Qwen3 235B A22B | 124.69 | 815.97K | 2 | 1 | - | 66.41 | 64.93 | - |
| GPT-4 Turbo | 114.91 | 19.23M | 6 | 3 | 0.39 | - | - | - |
| Claude 3 Sonnet | 0.00 | 0 | 20 | 2 | - | - | - | - |
| GPT-4o Mini | 231.08 | 513.78M | 1 | 1 | -0.09 | 43.15 | 41.26 | - |
| Llama 3.2 90B | 174.52 | 19.02M | 3 | 1 | 0.33 | - | - | - |
| Gemini 1.5 Flash | 387.18 | 83.59M | 5 | 1 | -0.06 | - | - | - |
| Grok-3 (Beta) | 259.59 | 4.35M | 4 | 3 | - | 73.58 | 63.17 | - |
| OpenAI o1 Preview | 139.15 | 16.34M | 3 | 3 | - | - | - | - |
| Grok-4 | 146.70 | 411.45K | 2 | 5 | - | 71.34 | 72.11 | - |
| Claude 3 Haiku | 470.24 | 122.75M | 5 | 1 | 0.05 | - | - | - |
| Mistral Large 2 | 135.82 | 7.42M | 4 | 2 | - | 62.89 | 50.25 | - |
| Grok-3 Mini (Beta) | 268.45 | 5.30M | 4 | 1 | - | 54.52 | 70.25 | - |
| Codestral (2501) | 351.67 | 7.06M | 4 | 1 | -0.33 | - | - | - |
| GPT-3.5 Turbo | 305.33 | 1.13M | 36 | 1 | 0.60 | - | - | - |
| Llama 3.3 70B | 565.73 | 41.44M | 4 | 1 | 0.64 | 24.05 | 45.68 | - |
| Llama 3.1 70B | 485.97 | 20.32M | 4 | 1 | -0.20 | 33.49 | 44.89 | - |
| OpenAI o4-mini-high | 122.34 | 65.06M | 2 | 2 | 0.50 | 79.98 | 71.52 | - |
| Grok-2 | 231.53 | 2.79M | 4 | 2 | - | 26.14 | 48.11 | - |
| GPT-4 | 0.00 | 0 | 33 | 3 | - | - | - | - |
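
For reference, here is a rough sketch of how statistics like Avg Speed, N° Tokens, and Vote Score could be derived from raw request logs. The log schema and field names are hypothetical; only the formulas follow the column definitions above: speed is characters per second across all requests, token counts combine prompt and completion, and the vote score is a mean in [-1, 1] that is shown only once a model has 10 or more votes.

```python
# Hypothetical per-request log record and aggregation; the real
# CodingFleet pipeline is not public, so treat this as illustration.
from dataclasses import dataclass

@dataclass
class Request:
    model: str
    completion_chars: int    # characters streamed back to the user
    seconds: float           # wall-clock generation time
    prompt_tokens: int
    completion_tokens: int
    vote: int | None = None  # -1 (downvote), +1 (upvote), or no vote

def summarize(requests: list[Request], model: str) -> dict:
    rows = [r for r in requests if r.model == model]
    total_secs = sum(r.seconds for r in rows)
    total_chars = sum(r.completion_chars for r in rows)
    votes = [r.vote for r in rows if r.vote is not None]
    return {
        # Avg Speed: characters per second across all requests
        "avg_speed": total_chars / total_secs if total_secs else 0.0,
        # N° Tokens: prompt + completion combined
        "n_tokens": sum(r.prompt_tokens + r.completion_tokens for r in rows),
        # Vote Score: mean vote in [-1, 1], shown only with >= 10 votes
        "vote_score": sum(votes) / len(votes) if len(votes) >= 10 else None,
    }
```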