Model Statistics

Each column in the table below is defined as follows:

- **Provider / Model:** The organization that publishes the model, and the model's name.
- **Avg Speed (chars/s):** The average speed of the model in characters per second, across all users utilizing this model.
- **N° Characters:** The total number of characters in the prompt and the completion combined, across all users utilizing this model.
- **Cost:** The credit cost of utilizing this model in a request. Models with a cost of 2 or higher are considered premium models.
- **Vote Score:** The average vote score given to this model by users, where -1 is the lowest score and 1 is the highest. This is displayed only if there are 10 or more votes for the model.
- **N° Votes:** The number of votes users have cast for the model.
| Provider  | Model             | Avg Speed (chars/s) | N° Characters | Cost | Vote Score | N° Votes |
|-----------|-------------------|--------------------:|--------------:|-----:|-----------:|---------:|
| Anthropic | Claude 3.5 Sonnet |              230.31 |       663.61M |    2 |       0.01 |      281 |
| OpenAI    | GPT-4o Mini       |              228.19 |       653.17M |    1 |      -0.08 |      207 |
| OpenAI    | GPT-4o            |              227.96 |       533.37M |    2 |      -0.16 |      327 |
| Anthropic | Claude 3.5 Haiku  |              203.34 |       285.82M |    1 |       0.17 |      166 |
| OpenAI    | OpenAI o1 Mini    |              402.53 |       194.43M |    2 |       0.60 |       57 |
| Google    | Gemini 1.5 Pro    |              159.40 |        97.24M |    2 |          - |        2 |
| Google    | Gemini 2.0 Flash  |              380.35 |        57.24M |    1 |       0.08 |       30 |
| Meta      | Llama 3.1 405B    |              154.51 |        48.08M |    2 |          - |        3 |
| Meta      | Llama 3.3 70B     |              634.04 |        18.23M |    1 |          - |        7 |
| Meta      | Llama 3.2 90B     |              173.32 |        15.93M |    1 |       1.00 |       66 |
| OpenAI    | OpenAI o1         |              241.26 |         9.37M |    5 |          - |        0 |
| DeepSeek  | DeepSeek R1       |               80.44 |         9.13M |    1 |       0.06 |       12 |
| Mistral   | Codestral (2501)  |              286.50 |         8.61M |    1 |      -0.50 |       10 |
| DeepSeek  | DeepSeek V3       |              122.28 |         7.15M |    1 |          - |        0 |
| Qwen      | Qwen2.5-Coder 32B |              294.93 |         5.76M |    1 |          - |        9 |
| xAI       | Grok-2            |              242.48 |         3.02M |    2 |          - |        2 |
| Mistral   | Mistral Large 2   |              114.97 |         2.80M |    2 |          - |        3 |
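
As a worked illustration of the two rules above (a cost of 2 or higher marks a premium model, and the vote score is hidden until a model has accumulated 10 or more votes), here is a minimal Python sketch. The `ModelStats` class and every name in it are hypothetical; the stats page does not publish an implementation.

```python
from dataclasses import dataclass
from typing import Optional

MIN_VOTES_FOR_SCORE = 10    # vote score is displayed only at 10+ votes
PREMIUM_COST_THRESHOLD = 2  # a cost of 2 or higher marks a premium model

@dataclass
class ModelStats:
    provider: str
    name: str
    avg_speed: float     # characters per second
    n_characters: float  # total prompt + completion characters
    cost: int            # credit cost per request
    votes: list[float]   # individual user votes, each in [-1, 1]

    @property
    def is_premium(self) -> bool:
        return self.cost >= PREMIUM_COST_THRESHOLD

    @property
    def vote_score(self) -> Optional[float]:
        # Shown as "-" in the table until the model has 10 or more votes.
        if len(self.votes) < MIN_VOTES_FOR_SCORE:
            return None
        # Average of the individual votes, rounded to two decimals
        # to match the table's display.
        return round(sum(self.votes) / len(self.votes), 2)

# Example: Claude 3.5 Sonnet has cost 2, so it is premium, and its
# 281 votes clear the 10-vote threshold, so its score is displayed.
# (The individual votes here are fabricated to average to 0.01.)
sonnet = ModelStats("Anthropic", "Claude 3.5 Sonnet",
                    230.31, 663.61e6, 2, [0.01] * 281)
print(sonnet.is_premium)  # True
print(sonnet.vote_score)  # 0.01
```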