# AI Model Comparison
Compare AI model pricing and benchmark scores across providers including OpenAI, Anthropic, Google, Amazon Bedrock, Azure, and Mistral. Filter by model capabilities such as vision, function calling, and reasoning support to find the most cost-effective model for your use case. Currently tracking 2,529 models across 98 providers.
| Model | Input Price ($/1M tokens) | Output Price ($/1M tokens) | Inference Providers | Context | Max Output | Intelligence | Coding |
|---|---|---|---|---|---|---|---|
| Grok 4 | 3.00 | 15.00 | 6 | 2.0M | N/A | 41.5 (#1) | 40.5 (#1) |
| Gemini 3 Pro Preview | 2.00 | 12.00 | 2 | N/A | N/A | 41.3 (#2) | 39.4 (#2) |
| Claude Sonnet 4.5 | 3.00 | 15.00 | 10 | 1.0M | 64K | 37.1 (#3) | 33.5 (#3) |
| o4 Mini | 1.00 | 4.00 | 6 | 200K | 100K | 33.1 (#4) | 25.6 (#9) |
| Claude Sonnet 4 | 3.00 | 15.00 | 10 | 1.0M | 64K | 33.0 (#5) | 30.6 (#4) |
| Claude Haiku 4.5 | 1.00 | 5.00 | 9 | 200K | 64K | 31.1 (#6) | 29.6 (#5) |
| o1 | 15.00 | 60.00 | 5 | 200K | 100K | 30.8 (#8) | 20.5 (#13) |
| Claude Sonnet 3.7 | 3.00 | 15.00 | 10 | 200K | 128K | 30.8 (#7) | 26.7 (#7) |
| DeepSeek V3.1 | 0.150 | 0.750 | 10 | 164K | 66K | 27.7 (#9) | 28.4 (#6) |
| GPT-4.1 | 2.00 | 8.00 | 7 | 1.0M | 33K | 26.3 (#10) | 21.8 (#11) |
| GPT OSS 120B | 0.039 | 0.190 | 17 | 131K | 131K | 24.5 (#11) | 15.5 (#18) |
| GPT-4.1 Mini | 0.400 | 1.60 | 5 | 1.0M | 33K | 22.9 (#12) | 18.5 (#14) |
| GPT-5 | 1.25 | 10.00 | 9 | 410K | 128K | 21.8 (#13) | 21.2 (#12) |
| GPT OSS 20B | 0.030 | 0.140 | 13 | 131K | 131K | 20.8 (#14) | 14.4 (#19) |
| GPT-5 Mini | 0.250 | 2.00 | 8 | 400K | 128K | 20.7 (#15) | 21.9 (#10) |
| o1 Mini | 1.10 | 4.40 | 3 | 128K | 66K | 20.4 (#16) | N/A |
| Claude Haiku 3.5 | 0.800 | 4.00 | 7 | 200K | 8K | 18.7 (#17) | 10.7 (#23) |
| Gemini 2.5 Flash | 0.150 | 0.600 | 9 | 1.0M | 66K | 17.8 (#18) | 17.8 (#15) |
| Qwen3 235B A22B Instruct | 0.090 | 0.580 | 7 | 262K | 33K | 17.0 (#19) | 14.0 (#21) |
| DeepSeek V3 | 0.200 | 0.200 | 12 | 164K | 82K | 16.5 (#20) | 16.4 (#17) |
| DeepSeek R1 | 0.280 | 0.400 | 14 | 164K | 66K | 16.4 (#21) | 7.8 (#24) |
| Claude Sonnet 3.5 | 3.00 | 15.00 | 6 | 1.0M | 8K | 14.2 (#22) | 26.0 (#8) |
| GPT-4o | 2.50 | 10.00 | 7 | 131K | 16K | 14.1 (#23) | 16.6 (#16) |
| GPT-5 Nano | 0.050 | 0.400 | 7 | 400K | 128K | 13.8 (#24) | 14.2 (#20) |
| GPT-4.1 Nano | 0.100 | 0.400 | 5 | 1.0M | 33K | 13.0 (#25) | 11.2 (#22) |
| GPT-4o Mini | 0.150 | 0.600 | 7 | 131K | 16K | 12.6 (#26) | N/A |
| Llama 2 7B Chat | 0.050 | 0.200 | 3 | 4K | 4K | 9.7 (#27) | N/A |
| Llama 3 70B Instruct | 0.120 | 0.300 | 7 | 131K | 8K | 8.9 (#28) | 6.8 (#25) |
| Llama 2 70B Chat | 0.500 | 0.900 | 6 | 4K | 4K | 8.4 (#30) | N/A |
| Llama 2 13B Chat | 0.100 | 0.200 | 3 | 4K | 4K | 8.4 (#29) | N/A |
| Mixtral 8x7B Instruct | 0.070 | 0.280 | 9 | 33K | 16K | 7.7 (#31) | N/A |
| Mistral 7B Instruct | 0.010 | 0.100 | 8 | 127K | 16K | 7.4 (#32) | N/A |
| Granite 3.3 8B | 0.030 | 0.200 | 2 | 8K | 8K | 7.0 (#33) | 3.4 (#27) |
| Llama 3 8B Instruct | 0.030 | 0.040 | 7 | 32K | 8K | 6.4 (#34) | 4.0 (#26) |
| Llama 2 13B | 0.100 | 0.200 | 2 | 4K | 4K | N/A | N/A |
| Llama 2 70B | 0.100 | 0.100 | 2 | 4K | 4K | N/A | N/A |
| Llama 2 7B | 0.050 | 0.200 | 2 | 4K | 4K | N/A | N/A |
| Llama 3 70B | 0.590 | 0.790 | 3 | 8K | 8K | N/A | N/A |
| Llama 3 8B | 0.050 | 0.080 | 4 | 8K | 8K | N/A | N/A |
| Mistral 7B | 0.050 | 0.200 | 4 | 33K | 8K | N/A | N/A |

Intelligence and Coding columns show the benchmark score followed by the model's rank (e.g. "41.5 (#1)"); Inference Providers gives the number of providers serving the model.