Models

Below is a list of the AI models available on CodingFleet, along with statistics and benchmark results for each model. The statistics are based on CodingFleet users' usage of each model.

Legacy models still appear in the list but are grayed out and disabled; they are no longer available.

A model's cost is the number of credits required to use it in a single request. For Unlimited and Elite users, usage of 1-cost models is unlimited, while higher-cost models are subject to the limits shown on the pricing page.
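
As an illustration only, here is a minimal sketch of that usage rule, assuming a simple plan name and a remaining premium-credit balance; the function, its parameters, and the plan strings are assumptions for the sketch, not CodingFleet's actual billing logic.

```python
def request_allowed(model_cost: int, plan: str, premium_credits_left: int) -> bool:
    # Hypothetical sketch of the rule described above, not CodingFleet's real code.
    # On Unlimited and Elite plans, 1-cost models can be used without restriction;
    # any other request draws on a credit balance whose size depends on the plan
    # (see the pricing page).
    if plan in ("Unlimited", "Elite") and model_cost == 1:
        return True
    return premium_credits_left >= model_cost
```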

The columns in the table below are defined as follows:

Avg Speed (chars/s): The average generation speed of the model in characters per second, across all users of the model.
N° Tokens: The total number of tokens, prompt and completion combined, across all users of the model.
Cost: The credit cost of using the model in a single request. Models with a cost of 2 or higher are considered premium models.
Vote Score: The average vote score given to the model by CodingFleet users, from -1 (lowest) to 1 (highest). Shown only when the model has 10 or more votes.
LiveBench Coding: The model's average LiveBench coding score (livebench.ai).
LiveBench Avg: The model's average LiveBench score (livebench.ai).
WebDev Arena: The model's score on WebDev Arena, an open-source benchmark evaluating AI capabilities in web development (https://web.lmarena.ai/leaderboard).

A dash (–) indicates that no score is available for that model.

| Provider | Model | Avg Speed (chars/s) | N° Tokens | Cost | Vote Score | LiveBench Coding | LiveBench Avg | WebDev Arena |
|----------|-------|---------------------|-----------|------|------------|------------------|---------------|--------------|
| Anthropic | Claude 3.5 Sonnet | 229.38 | 515.58M | 2 | 0.02 | 32.29 | 50.81 | 1239.33 |
| OpenAI | GPT-4o Mini | 231.15 | 508.30M | 1 | -0.09 | 43.15 | 41.26 | – |
| OpenAI | GPT-4o | 229.92 | 467.68M | 2 | -0.18 | 69.29 | 53.95 | 964.00 |
| Anthropic | Claude 3.5 Haiku | 206.21 | 345.80M | 1 | 0.08 | 53.17 | 44.98 | 1133.77 |
| Google | Gemini 2.0 Flash | 502.03 | 279.57M | 1 | 0.27 | 64.74 | 60.05 | 1035.19 |
| Anthropic | Claude 3.7 Sonnet | 223.95 | 239.53M | 5 | 0.55 | 74.28 | 64.62 | 1356.70 |
| Anthropic | Claude 3 Haiku | 432.16 | 231.38M | 1 | -0.12 | – | – | – |
| OpenAI | GPT-3.5 Turbo | 199.96 | 223.92M | 1 | -0.43 | – | – | – |
| Google | Gemini 1.5 Flash | 363.63 | 169.96M | 1 | 0.15 | – | – | – |
| Anthropic | Claude 3.7 Sonnet Thinking | 163.81 | 147.62M | 6 | – | 73.19 | 74.50 | 1356.70 |
| OpenAI | GPT-4.1 | 246.32 | 135.76M | 3 | – | 73.19 | 62.99 | 1283.42 |
| OpenAI | GPT-4 | 77.48 | 119.90M | 3 | -0.13 | – | – | – |
| Google | Gemini 1.5 Pro | 148.05 | 111.05M | 2 | -0.06 | – | – | – |
| OpenAI | o3-mini-high | 194.36 | 78.81M | 2 | – | 65.48 | 71.37 | 1136.22 |
| OpenAI | o1 Mini | 395.93 | 73.35M | 2 | 0.46 | 48.05 | 57.76 | 1053.69 |
| OpenAI | GPT-4.1 mini | 232.21 | 70.72M | 1 | 0.00 | 72.11 | 59.05 | 1194.40 |
| OpenAI | o3-mini | 267.26 | 56.89M | 2 | 0.36 | 58.43 | 67.16 | 1091.71 |
| OpenAI | GPT-4 Turbo | 101.93 | 56.52M | 3 | 0.02 | – | – | – |
| DeepSeek | DeepSeek V3 | 78.65 | 53.45M | 1 | 0.07 | 68.91 | 62.82 | 1207.01 |
| Google | Gemini 2.0 Pro (Exp) | 341.98 | 42.26M | 2 | – | 35.33 | 61.59 | 1088.60 |
| Anthropic | Claude 3 Sonnet | 178.25 | 41.80M | 2 | -0.05 | – | – | – |
| Meta | Llama 3.1 405B | 113.72 | 41.77M | 2 | 0.00 | 42.65 | 52.36 | 813.69 |
| Meta | Llama 3.3 70B | 565.59 | 41.39M | 1 | 0.64 | 24.05 | 45.68 | – |
| OpenAI | o4-mini | 181.59 | 38.92M | 2 | – | 74.22 | 74.40 | 1095.11 |
| OpenAI | o4-mini-high | 126.48 | 35.16M | 2 | – | 79.98 | 78.72 | – |
| OpenAI | o3 | 152.76 | 29.28M | 10 | 1.00 | 77.86 | 79.25 | 1189.75 |
| Google | Gemini 2.5 Flash | 315.43 | 28.49M | 1 | – | 60.33 | 69.93 | 1172.73 |
| DeepSeek | DeepSeek R1 | 44.42 | 25.61M | 1 | 0.29 | 74.98 | 72.49 | 1198.91 |
| Meta | Llama 3.1 70B | 485.97 | 20.32M | 1 | -0.20 | 33.49 | 44.89 | – |
| Meta | Llama 3.2 90B | 174.66 | 19.02M | 1 | 0.33 | – | – | – |
| OpenAI | o1 Preview | 139.15 | 16.34M | 3 | – | – | – | – |
| OpenAI | o1 | 264.68 | 14.28M | 5 | – | – | – | 1045.23 |
| Meta | Llama 4 Maverick | 307.74 | 8.02M | 1 | – | 54.19 | 55.19 | 998.51 |
| Anthropic | Claude 3 Opus | 89.45 | 7.57M | 3 | -0.42 | – | – | – |
| Mistral | Mistral Large 2 | 126.21 | 7.00M | 2 | – | 62.89 | 50.25 | – |
| Mistral | Codestral (2501) | 335.83 | 6.63M | 1 | -0.33 | – | – | – |
| Qwen | Qwen2.5-Coder 32B | 292.01 | 4.23M | 1 | – | 56.85 | 46.23 | 904.14 |
| xAI | Grok-3 Mini (Beta) | 239.08 | 4.03M | 1 | – | 54.52 | 70.25 | – |
| Google | Gemini 2.0 Flash Thinking | 425.15 | 3.18M | 1 | – | 35.71 | 62.05 | 1030.05 |
| Google | Gemini 2.5 Pro (Preview) | 192.79 | 2.82M | 3 | – | 72.87 | 78.99 | 1419.95 |
| xAI | Grok-2 | 231.82 | 2.79M | 2 | – | 26.14 | 48.11 | – |
| xAI | Grok-3 (Beta) | 216.37 | 2.15M | 3 | – | 73.58 | 63.17 | – |
| Qwen | Qwen3 235B A22B | 111.15 | 387.81K | 1 | – | 65.32 | 73.23 | – |
| Mistral | Mistral Medium 3 | 224.72 | 242.37K | 1 | – | 61.48 | 56.59 | – |
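
For readers who want to see how the per-model usage columns could be derived, here is a minimal sketch. It assumes hypothetical per-request records with a completion length, a duration, and an optional user vote, and one plausible aggregation (total characters over total time for Avg Speed, a simple mean of ±1 votes for Vote Score); the record fields and helper names are illustrative, not CodingFleet's actual schema or computation.

```python
from dataclasses import dataclass

@dataclass
class RequestRecord:
    # Hypothetical per-request log entry; fields are illustrative only.
    completion_chars: int      # characters generated by the model
    duration_seconds: float    # wall-clock time of the request
    vote: int | None = None    # assumed user vote of -1 or +1, None if no vote

def avg_speed_chars_per_second(records: list[RequestRecord]) -> float:
    # "Avg Speed (chars/s)": total characters generated divided by total generation time.
    total_chars = sum(r.completion_chars for r in records)
    total_seconds = sum(r.duration_seconds for r in records)
    return total_chars / total_seconds if total_seconds else 0.0

def vote_score(records: list[RequestRecord], min_votes: int = 10) -> float | None:
    # "Vote Score": mean vote in [-1, 1], reported only once the model has at
    # least 10 votes; None corresponds to the dash shown in the table.
    votes = [r.vote for r in records if r.vote is not None]
    if len(votes) < min_votes:
        return None
    return sum(votes) / len(votes)
```

Returning None below the 10-vote threshold mirrors the table convention of showing a dash instead of a Vote Score.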