# AI Models Comparison
Compare AI model pricing and benchmarks across providers including OpenAI, Anthropic, Google, AWS Bedrock, Azure, Mistral, and more. Filter by model capabilities such as vision, function calling, and reasoning support to find the most cost-effective model for your use case. Currently tracking 1,870 models across 102 providers.
Pricing data comes from LiteLLM, maintained by the open-source community; benchmark data comes from Artificial Analysis. Last updated March 21, 2026 at 12:00 AM UTC.
| Model | Input Price ($/1M tokens) | Output Price ($/1M tokens) | Context | Max Output | Intelligence (rank) | Coding (rank) |
|---|---|---|---|---|---|---|
| Kimi K2.5 | 0.500 | 2.80 | 256K | 256K | 46.8 (#11) | 39.5 (#18) |
| Qwen3.5 397B A17B | 0.600 | 3.60 | 262K | N/A | 45.0 (#13) | 41.3 (#16) |
| GLM 4.7 | 0.450 | 2.00 | 200K | 200K | 42.1 (#20) | 36.3 (#28) |
| Qwen3 235B A22B Thinking 2507 | 0.650 | 3.00 | 256K | N/A | 39.9 (#29) | 30.5 (#38) |
| GPT-oss-120b | 0.150 | 0.600 | 128K | N/A | 33.3 (#39) | 28.6 (#45) |
| Kimi K2 Instruct 0905 | 1.00 | 3.00 | 262K | N/A | 30.9 (#47) | 25.9 (#50) |
| GLM 4.6 | 0.600 | 2.20 | 200K | 200K | 30.2 (#52) | 30.2 (#40) |
| DeepSeek V3.1 | 0.600 | 1.70 | 128K | N/A | 28.1 (#59) | 28.4 (#46) |
| DeepSeek R1 | 3.00 | 7.00 | 128K | 20K | 27.1 (#60) | 24.0 (#57) |
| Qwen3 Next 80B A3B Thinking | 0.150 | 1.50 | 262K | N/A | 26.7 (#63) | 19.5 (#75) |
| Kimi K2 Instruct | 1.00 | 3.00 | N/A | N/A | 26.3 (#65) | 22.1 (#65) |
| Qwen3 235B A22B Instruct 2507 Tput | 0.200 | 6.00 | 262K | N/A | 25.0 (#70) | 22.1 (#65) |
| Qwen3 Coder 480B A35B Instruct FP8 | 2.00 | 2.00 | 256K | N/A | 24.8 (#71) | 24.6 (#56) |
| GPT-oss-20b | 0.050 | 0.200 | 128K | N/A | 24.5 (#72) | 18.5 (#80) |
| GLM 4.5 Air FP8 | 0.200 | 1.10 | 128K | N/A | 23.2 (#76) | 23.8 (#58) |
| Qwen3 Next 80B A3B Instruct | 0.150 | 1.50 | 262K | N/A | 20.1 (#89) | 15.3 (#97) |
| Meta Llama 3.1 405B Instruct Turbo | 3.50 | 3.50 | N/A | N/A | 17.4 (#107) | 14.5 (#100) |
| Qwen3 235B A22B FP8 Tput | 0.200 | 0.600 | 40K | N/A | 17.0 (#112) | 14.0 (#105) |
| DeepSeek V3 | 1.25 | 1.25 | 66K | 8K | 16.5 (#115) | 16.4 (#90) |
| Qwen2.5 72B Instruct Turbo | N/A | N/A | N/A | N/A | 15.6 (#124) | 11.9 (#119) |
| Llama 3.3 70B Instruct Turbo | 0.880 | 0.880 | N/A | N/A | 14.5 (#136) | 10.7 (#130) |
| Meta Llama 3.1 70B Instruct Turbo | 0.880 | 0.880 | N/A | N/A | 12.5 (#161) | 10.9 (#126) |
| Meta Llama 3.1 8B Instruct Turbo | 0.180 | 0.180 | N/A | N/A | 11.8 (#174) | 4.9 (#157) |
| Llama 3.2 3B Instruct Turbo | N/A | N/A | N/A | N/A | 9.7 (#195) | N/A |
| Together AI Up To 4B | 0.100 | 0.100 | N/A | N/A | N/A | N/A |
| Together AI 81.1B–110B | 1.80 | 1.80 | N/A | N/A | N/A | N/A |
| Together AI 8.1B–21B | 0.300 | 0.300 | 1K | N/A | N/A | N/A |
| Together AI 41.1B–80B | 0.900 | 0.900 | N/A | N/A | N/A | N/A |
| Together AI 4.1B–8B | 0.200 | 0.200 | N/A | N/A | N/A | N/A |
| Together AI 21.1B–41B | 0.800 | 0.800 | N/A | N/A | N/A | N/A |
| CodeLlama 34B Instruct | N/A | N/A | N/A | N/A | N/A | N/A |
| Qwen2.5 7B Instruct Turbo | N/A | N/A | N/A | N/A | N/A | N/A |
| Mixtral 8x7B Instruct V0.1 | 0.600 | 0.600 | N/A | N/A | N/A | N/A |
| Mistral Small 24B Instruct 2501 | N/A | N/A | N/A | N/A | N/A | N/A |
| Mistral 7B Instruct V0.1 | N/A | N/A | N/A | N/A | N/A | N/A |
| Llama 4 Scout 17B 16E Instruct | 0.180 | 0.590 | N/A | N/A | N/A | N/A |
| Llama 4 Maverick 17B 128E Instruct FP8 | 0.270 | 0.850 | N/A | N/A | N/A | N/A |
| DeepSeek R1 0528 Tput | 0.550 | 2.19 | 128K | N/A | N/A | N/A |
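As a quick sketch of how to use these prices, the snippet below estimates the cost of a single request from the table's per-1M-token rates for three rows. The token counts are illustrative assumptions, not part of the table, and the per-million-token billing convention is assumed.

```python
# Sketch: estimate per-request cost from per-1M-token prices.
# (input $/1M tokens, output $/1M tokens), copied from the table above.
PRICES = {
    "GPT-oss-120b": (0.150, 0.600),
    "Kimi K2.5": (0.500, 2.80),
    "DeepSeek R1": (3.00, 7.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request: tokens / 1e6 * price-per-million."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

# Illustrative workload: 10K input tokens, 2K output tokens per request.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.6f}")
```

Because output tokens often cost several times more than input tokens, the cheapest model for a summarization workload (long input, short output) can differ from the cheapest for a generation workload (short input, long output).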