AI Models Comparison
Compare AI model pricing and benchmarks across providers including OpenAI, Anthropic, Google, AWS Bedrock, Azure, Mistral, and more. Filter by model capabilities like vision, function calling, and reasoning support. Find the most cost-effective model for your use case. Currently tracking 1,771 models across 99 providers.
Pricing data is sourced from LiteLLM, maintained by the open-source community; the benchmark scores in the Intelligence and Coding columns come from Artificial Analysis. Last updated: February 27, 2026 at 12:00 AM UTC.
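Prices in the table below are quoted in dollars per million tokens, so a request's cost is simply `(input_tokens × input_price + output_tokens × output_price) / 1,000,000`. Here is a minimal sketch of that calculation in Python; the model entries and token counts are illustrative, with prices taken from the table:

```python
# Estimate a single request's cost from per-1M-token prices.
# The (input, output) price pairs below are copied from the table.
PRICES = {
    "gpt-5": (1.25, 10.00),
    "claude-sonnet-4.5": (3.00, 15.00),
    "gemini-3-pro-preview": (2.00, 12.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request for a model in PRICES."""
    input_price, output_price = PRICES[model]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Example: a 10K-token prompt with a 1K-token completion on GPT-5
print(f"${request_cost('gpt-5', 10_000, 1_000):.4f}")  # -> $0.0225
```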
| Model | Input Price, $/1M tokens | Output Price, $/1M tokens | Context | Max Output | Intelligence | Coding |
|---|---|---|---|---|---|---|
| Gemini 3 Pro Preview | 2.00 | 12.00 | 1.0M | 66K | 49.8 | 46.5 |
| GPT-5.2 | 1.75 | 14.00 | 272K | 128K | 49.4 | 48.7 |
| Kimi K2.5 | 0.600 | 3.00 | 262K | 262K | 46.8 | 39.5 |
| GPT-5-codex | 1.25 | 10.00 | 272K | 128K | 44.6 | 38.9 |
| GPT-5 | 1.25 | 10.00 | 272K | 128K | 44.6 | 36.0 |
| GLM 4.7 | 0.400 | 1.50 | 203K | 64K | 42.1 | 36.3 |
| MiniMax M2.5 | 0.300 | 1.10 | 197K | 66K | 41.9 | 37.4 |
| GPT-5 mini | 0.250 | 2.00 | 272K | 128K | 41.2 | 35.3 |
| Grok 4 | 3.00 | 15.00 | 256K | 256K | 40.7 | 40.5 |
| Claude Sonnet 4.5 | 3.00 | 15.00 | 1.0M | 1.0M | 37.1 | 33.5 |
| MiniMax M2 | 0.255 | 1.02 | 205K | 205K | 36.1 | 29.2 |
| Claude Opus 4.5 | 5.00 | 25.00 | 200K | 32K | 35.3 | 42.9 |
| Gemini 3 Flash Preview | 0.500 | 3.00 | 1.0M | 66K | 35.0 | 37.8 |
| GPT-oss-120b | 0.180 | 0.800 | 131K | 33K | 33.3 | 28.6 |
| Claude Sonnet 4 | 3.00 | 15.00 | 1.0M | 64K | 33.0 | 30.6 |
| DeepSeek V3.2 | 0.280 | 0.400 | 164K | 164K | 32.1 | 34.6 |
| o1 | 15.00 | 60.00 | 200K | 100K | 30.8 | 20.5 |
| Claude 3.7 Sonnet | 3.00 | 15.00 | 200K | 128K | 30.8 | 26.7 |
| MiMo V2 Flash | 0.090 | 0.290 | 262K | 16K | 30.4 | 25.8 |
| Gemini 2.5 Pro | 1.25 | 10.00 | 1.0M | 8K | 30.3 | 46.7 |
| GLM 4.6 | 0.400 | 1.75 | 203K | 131K | 30.2 | 30.2 |
| GLM 4.7 Flash | 0.070 | 0.400 | 200K | 32K | 30.1 | 25.9 |
| DeepSeek R1 | 0.550 | 2.19 | 65K | 8K | 27.1 | 24.0 |
| GPT-5 nano | 0.050 | 0.400 | 272K | 128K | 26.8 | 20.3 |
| GPT-4.1 | 2.00 | 8.00 | 1.0M | 33K | 26.3 | 21.8 |
| o3 mini | 1.10 | 4.40 | 128K | 66K | 25.9 | 17.9 |
| Qwen3 235B A22B 2507 | 0.071 | 0.100 | 262K | 262K | 25.0 | 22.1 |
| GPT-oss-20b | 0.020 | 0.100 | 131K | 33K | 24.5 | 18.5 |
| Claude Opus 4.1 | 15.00 | 75.00 | 200K | 32K | 23.6 | N/A |
| GPT-4.1 mini | 0.400 | 1.60 | 1.0M | 33K | 22.2 | 18.5 |
| Claude Opus 4 | 15.00 | 75.00 | 200K | 32K | 22.2 | N/A |
| Claude Haiku 4.5 | 1.00 | 5.00 | 200K | 200K | 21.8 | 29.6 |
| Gemini 2.5 Flash | 0.300 | 2.50 | 1.0M | 8K | 21.1 | 17.8 |
| Gemini 2.0 Flash-001 | 0.100 | 0.400 | 1.0M | 8K | 17.6 | 13.6 |
| Ministral 14B 2512 | 0.200 | 0.200 | 262K | 262K | 16.2 | 10.9 |
| Claude 3.5 Sonnet | 3.00 | 15.00 | 200K | 8K | 15.9 | 30.2 |
| Ministral 8B 2512 | 0.150 | 0.150 | 262K | 262K | 15.3 | 10.0 |
| Mistral Small 3.2 24B Instruct | 0.100 | 0.300 | 32K | N/A | 15.1 | 13.3 |
| GPT-4.1 nano | 0.100 | 0.400 | 1.0M | 33K | 14.9 | 11.2 |
| GPT-4o | 2.50 | 10.00 | 128K | 4K | 14.8 | 16.7 |
| Mistral Small 3.1 24B Instruct | 0.100 | 0.300 | 32K | N/A | 14.0 | 13.9 |
| Qwen 2.5 Coder 32B Instruct | 0.180 | 0.180 | 34K | 34K | 12.9 | N/A |
| Ministral 3B 2512 | 0.100 | 0.100 | 131K | 131K | 12.9 | 4.8 |
| GPT-4 | 30.00 | 60.00 | 8K | N/A | 12.8 | 13.1 |
| Llama 3 70B Instruct | 0.590 | 0.790 | 8K | N/A | 10.2 | 6.8 |
| Mistral Large | 8.00 | 24.00 | 32K | N/A | 9.9 | N/A |
| Mixtral 8x22B Instruct | 0.650 | 0.650 | 66K | N/A | 9.8 | N/A |
| Claude 3 Haiku | 0.250 | 1.25 | 200K | N/A | 9.3 | 6.7 |
| GPT-3.5 Turbo | 1.50 | 2.00 | 4K | N/A | 9.0 | 10.7 |
| Mistral 7B Instruct | 0.130 | 0.130 | 8K | N/A | 7.4 | N/A |
| ReMM SLERP L2 13B | 1.88 | 1.88 | 6K | N/A | N/A | N/A |
| Router | 0.850 | 3.40 | 131K | 131K | N/A | N/A |
| Qwen3 Coder | 0.220 | 0.950 | 262K | 262K | N/A | N/A |
| Qwen3 235B A22B Thinking 2507 | 0.110 | 0.600 | 262K | 262K | N/A | N/A |
| Qwen VL Plus | 0.210 | 0.630 | 8K | 2K | N/A | N/A |
| GPT-5.2-pro | 21.00 | 168.00 | 272K | 128K | N/A | N/A |
| GPT-5.2-codex | 1.75 | 14.00 | 272K | 128K | N/A | 43.0 |
| Mistral Large 2512 | 0.500 | 1.50 | 262K | 262K | N/A | N/A |
| Devstral 2512 | 0.150 | 0.600 | 262K | 66K | N/A | N/A |
| MiniMax M2.1 | 0.270 | 1.20 | 204K | 64K | N/A | N/A |
| Weaver | 5.63 | 5.63 | 8K | N/A | N/A | N/A |
| MythoMax L2 13B | 1.88 | 1.88 | 8K | N/A | N/A | N/A |
| DeepSeek R1 0528 | 0.500 | 2.15 | 65K | 8K | N/A | N/A |
| DeepSeek Chat V3.1 | 0.200 | 0.800 | 164K | 164K | N/A | N/A |
| DeepSeek Chat V3 0324 | 0.140 | 0.280 | 66K | 8K | N/A | N/A |
| DeepSeek Chat | 0.140 | 0.280 | 66K | 8K | N/A | N/A |
| UI-TARS 1.5 7B | 0.100 | 0.200 | 131K | 2K | N/A | N/A |
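To find the most cost-effective model for a workload, the table can also be queried programmatically: filter by minimum context window and Intelligence score, then rank by a blended per-1M-token price. A minimal sketch, hard-coding a few rows from the table above; the 3:1 input-to-output blend ratio is an assumed weighting, not something the table defines:

```python
# Rank models by a blended per-1M-token price after filtering on requirements.
# Rows are a small sample from the table above.
MODELS = [
    # (name, input $/1M, output $/1M, context tokens, intelligence score)
    ("GPT-5 mini",        0.25,  2.00,   272_000, 41.2),
    ("MiniMax M2.5",      0.30,  1.10,   197_000, 41.9),
    ("Kimi K2.5",         0.60,  3.00,   262_000, 46.8),
    ("Claude Sonnet 4.5", 3.00, 15.00, 1_000_000, 37.1),
]

def blended_price(inp: float, out: float, ratio: float = 3.0) -> float:
    """Dollar cost per 1M tokens, weighting input tokens ratio:1 vs. output."""
    return (ratio * inp + out) / (ratio + 1)

def cheapest(min_context: int, min_intelligence: float):
    """Return the lowest-blended-price model meeting both requirements."""
    candidates = [
        m for m in MODELS
        if m[3] >= min_context and m[4] >= min_intelligence
    ]
    return min(candidates, key=lambda m: blended_price(m[1], m[2]), default=None)

print(cheapest(min_context=200_000, min_intelligence=40))
# -> ('GPT-5 mini', 0.25, 2.0, 272000, 41.2)
```

In practice you would load the full dataset (for example, LiteLLM's community-maintained pricing data) rather than hard-coding rows, and adjust the blend ratio to match your actual input-to-output traffic.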