# AI Models Comparison
Compare AI model pricing and benchmark scores across providers including OpenAI, Anthropic, Google, AWS Bedrock, Azure, Mistral, and more. Filter by model capabilities such as vision, function calling, and reasoning support to find the most cost-effective model for your use case. Currently tracking 1,909 models across 103 providers.
Pricing data is based on LiteLLM, maintained by the open-source community; benchmark data comes from Artificial Analysis. Last updated April 15, 2026 at 12:00 AM UTC.
| Model | Input Price ($/1M tokens) | Output Price ($/1M tokens) | Context | Max Output | Intelligence (rank) | Coding (rank) |
|---|---|---|---|---|---|---|
| Claude Opus 4.6 | 5.00 | 25.00 | 200K | 128K | 46.5 (#12) | 47.6 (#6) |
| Claude Sonnet 4.6 | 3.00 | 15.00 | 1.0M | 64K | 44.4 (#16) | 46.4 (#9) |
| Claude Opus 4.5 | 5.00 | 25.00 | 200K | 64K | 43.1 (#18) | 42.9 (#14) |
| Grok 4 | 3.00 | 15.00 | 131K | 131K | 41.5 (#25) | 40.5 (#16) |
| Claude Sonnet 4.5 | 3.00 | 15.00 | 200K | 64K | 37.1 (#32) | 33.5 (#35) |
| Claude Opus 4.1 | 15.00 | 75.00 | 200K | 32K | 36.0 (#35) | N/A |
| GPT-oss-120b | 0.150 | 0.600 | 131K | 131K | 33.3 (#39) | 28.6 (#45) |
| DeepSeek V3.2 | 0.580 | 1.68 | 164K | 164K | 32.1 (#43) | 34.6 (#33) |
| Claude Haiku 4.5 | 1.00 | 5.00 | 200K | 64K | 31.1 (#46) | 29.6 (#43) |
| DeepSeek V3.2 Speciale | 0.580 | 1.68 | 164K | 164K | 29.4 (#55) | 37.9 (#22) |
| Grok Code Fast 1 | 0.200 | 1.50 | 131K | 131K | 28.7 (#56) | 23.7 (#59) |
| DeepSeek R1 | 1.35 | 5.40 | 128K | 8K | 27.1 (#60) | 24.0 (#57) |
| Grok 3 | 3.00 | 15.00 | 131K | 131K | 25.2 (#69) | 19.8 (#74) |
| Grok 4.1 Fast Non Reasoning | 0.200 | 0.500 | 131K | 131K | 23.6 (#75) | 19.5 (#75) |
| Kimi K2.5 | 0.600 | 3.00 | 262K | 262K | 23.1 (#77) | 12.6 (#116) |
| Mistral Large 3 | 0.500 | 1.50 | 256K | 8K | 22.8 (#81) | 22.7 (#64) |
| Meta Llama 3.1 405B Instruct | 5.33 | 16.00 | 128K | 2K | 17.4 (#108) | 14.5 (#100) |
| DeepSeek V3 | 1.14 | 4.56 | 128K | 8K | 16.5 (#116) | 16.4 (#90) |
| Llama 3.3 70B Instruct | 0.710 | 0.710 | 128K | 2K | 14.5 (#137) | 10.7 (#131) |
| Mistral Large 2407 | 2.00 | 6.00 | 128K | 4K | 13.0 (#153) | N/A |
| Meta Llama 3.1 70B Instruct | 2.68 | 3.54 | 128K | 2K | 12.5 (#162) | 10.9 (#127) |
| Llama 3.2 90B Vision Instruct | 2.04 | 2.04 | 128K | 2K | 11.9 (#174) | N/A |
| Meta Llama 3.1 8B Instruct | 0.300 | 0.610 | 128K | 2K | 11.8 (#175) | 4.9 (#157) |
| Phi 4 | 0.125 | 0.500 | 16K | 16K | 10.4 (#184) | 11.2 (#122) |
| Mistral Small | 1.00 | 3.00 | 32K | 8K | 10.2 (#188) | N/A |
| Phi 3 Mini 128K Instruct | 0.130 | 0.520 | 128K | 4K | 10.1 (#189) | 3.0 (#166) |
| Phi 4 Multimodal Instruct | 0.080 | 0.320 | 131K | 4K | 10.0 (#192) | N/A |
| Mistral Large | 2.00 | 6.00 | 128K | 4K | 9.9 (#194) | N/A |
| Meta Llama 3 70B Instruct | 1.10 | 0.370 | 8K | 2K | 8.9 (#204) | 6.8 (#151) |
| Llama 3.2 11B Vision Instruct | 0.370 | 0.370 | 128K | 2K | 8.7 (#209) | 4.3 (#159) |
| Phi 4 Mini Instruct | 0.075 | 0.300 | 131K | 4K | 8.4 (#212) | 3.6 (#162) |
| Phi 4 Reasoning | 0.125 | 0.500 | 33K | 4K | N/A | N/A |
| Phi 4 Mini Reasoning | 0.080 | 0.320 | 131K | 4K | N/A | N/A |
| Phi 3.5 Vision Instruct | 0.130 | 0.520 | 128K | 4K | N/A | N/A |
| Phi 3.5 MoE Instruct | 0.160 | 0.640 | 128K | 4K | N/A | N/A |
| Phi 3.5 Mini Instruct | 0.130 | 0.520 | 128K | 4K | N/A | N/A |
| Phi 3 Small 128K Instruct | 0.150 | 0.600 | 128K | 4K | N/A | N/A |
| Phi 3 Medium 128K Instruct | 0.170 | 0.680 | 128K | 4K | N/A | N/A |
| Model Router | 0.140 | N/A | N/A | N/A | N/A | N/A |
| Mistral Small 2503 | 0.100 | 0.300 | 128K | 128K | N/A | N/A |
| Mistral Nemo | 0.150 | 0.150 | 131K | 4K | N/A | N/A |
| Mistral Medium 2505 | 0.400 | 2.00 | 131K | 8K | N/A | N/A |
| Ministral 3B | 0.040 | 0.040 | 128K | 4K | N/A | N/A |
| MAI DS R1 | 1.35 | 5.40 | 128K | 8K | N/A | N/A |
| Llama 4 Scout 17B 16E Instruct | 0.200 | 0.780 | 10.0M | 16K | N/A | N/A |
| Llama 4 Maverick 17B 128E Instruct FP8 | 1.41 | 0.350 | 1.0M | 16K | N/A | N/A |
| Jamba Instruct | 0.500 | 0.700 | 70K | 4K | N/A | N/A |
| JAIS 30B Chat | 0.0032 | 0.0097 | 8K | 8K | N/A | N/A |
| Grok 4 Fast Non Reasoning | 0.200 | 0.500 | 131K | 131K | N/A | N/A |
| Grok 3 Mini | 0.250 | 1.27 | 131K | 131K | N/A | N/A |
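To compare cost-effectiveness for your own workload, you can turn the table's prices into per-request costs. The sketch below assumes the prices are USD per 1M tokens (the convention most providers use, and consistent with the magnitudes above); the `PRICES` dictionary copies three example rows from the table, and the token counts in the usage example are hypothetical.

```python
# Estimate per-request cost from per-million-token prices.
# Assumption: the table lists USD per 1M tokens (typical provider convention).
# (input_price, output_price) pairs copied from three rows of the table above.
PRICES = {
    "Claude Sonnet 4.6": (3.00, 15.00),
    "GPT-oss-120b": (0.150, 0.600),
    "DeepSeek V3.2": (0.580, 1.68),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical workload: a 2,000-token prompt with a 500-token reply.
cost = request_cost("Claude Sonnet 4.6", 2_000, 500)
print(f"${cost:.4f}")  # prints $0.0135
```

Because output tokens are typically several times more expensive than input tokens, models with similar input prices can diverge sharply in total cost on generation-heavy workloads, which is why comparing per-request cost beats comparing input price alone.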