# AI Model Comparison
Compare AI model pricing and benchmarks across providers, including OpenAI, Anthropic, Google, Amazon Bedrock, Azure, Mistral, and more. Filter by model capabilities such as vision, function calling, and reasoning support to find the most cost-effective model for your use case. Currently tracking 2,529 models across 98 providers.
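Capability filtering of the kind described above can be sketched in a few lines. The catalog below is a hypothetical stand-in (the names and capability flags are placeholders, not data from this page), assuming each model record carries boolean capability fields:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    input_price: float   # USD per 1M input tokens
    output_price: float  # USD per 1M output tokens
    vision: bool = False
    function_calling: bool = False
    reasoning: bool = False

# Small illustrative catalog; capability flags are placeholders, not real data.
catalog = [
    Model("model-a", 5.00, 25.00, vision=True, function_calling=True, reasoning=True),
    Model("model-b", 0.30, 1.40, vision=True),
    Model("model-c", 1.25, 10.00, function_calling=True, reasoning=True),
]

def filter_models(models, *, vision=False, function_calling=False, reasoning=False):
    """Keep models that support every required capability, cheapest input price first."""
    hits = [m for m in models
            if (not vision or m.vision)
            and (not function_calling or m.function_calling)
            and (not reasoning or m.reasoning)]
    return sorted(hits, key=lambda m: m.input_price)

print([m.name for m in filter_models(catalog, vision=True)])  # → ['model-b', 'model-a']
```

Requiring a capability only narrows the candidate set; sorting by price then surfaces the most cost-effective match.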
| Model | Creator | Input $/1M tokens | Output $/1M tokens | Inference Providers | Context | Max Output | Intelligence | Coding |
|---|---|---|---|---|---|---|---|---|
| Claude Opus 4.5 | Anthropic | 5.00 | 25.00 | 9 | 410K | 64K | 43.1 (#1) | 42.9 (#1) |
| Kimi K2 Thinking | Moonshot AI | 0.574 | 1.20 | 9 | 262K | 33K | 40.9 (#2) | 34.8 (#2) |
| MiniMax M2.1 | MiniMax | 0.290 | 0.950 | 7 | 1.0M | 131K | 39.4 (#3) | 32.8 (#6) |
| Claude Sonnet 4.5 | Anthropic | 3.00 | 15.00 | 10 | 1.0M | 64K | 37.1 (#4) | 33.5 (#5) |
| GPT-5.2 | OpenAI | 1.75 | 14.00 | 7 | 410K | 128K | 33.6 (#5) | 34.7 (#3) |
| Claude Opus 4 | Anthropic | 15.00 | 75.00 | 9 | 410K | 32K | 33.0 (#6) | 34.0 (#4) |
| Claude Sonnet 4 | Anthropic | 3.00 | 15.00 | 10 | 1.0M | 64K | 33.0 (#7) | 30.6 (#7) |
| DeepSeek V3.2 | DeepSeek | 0.259 | 0.400 | 11 | 164K | 66K | 28.4 (#8) | 30.0 (#8) |
| GPT-5.1 | OpenAI | 1.25 | 10.00 | 7 | 410K | 128K | 27.4 (#9) | 27.3 (#9) |
| DeepSeek V3 0324 | DeepSeek | 0.200 | 0.400 | 9 | 164K | 16K | 22.3 (#10) | 22.0 (#10) |
| GPT-5 | OpenAI | 1.25 | 10.00 | 9 | 410K | 128K | 21.8 (#11) | 21.2 (#11) |
| GPT-4o | OpenAI | 2.50 | 10.00 | 7 | 131K | 16K | 14.1 (#12) | 16.6 (#12) |
| GPT-4o Mini | OpenAI | 0.150 | 0.600 | 7 | 131K | 16K | 12.6 (#13) | N/A |
| Gemini 3 Flash Preview | Google | 0.500 | 3.00 | 4 | 1.0M | 66K | N/A | N/A |
| Gemini 3 Pro Preview | Google | 2.00 | 12.00 | 6 | 1.0M | 66K | N/A | N/A |
| GLM-4.7 FP8 | Z.ai | 0.400 | 2.00 | 1 | 203K | 16K | N/A | N/A |
| Qwen3 VL 235B A22B Instruct FP8 | Alibaba | 0.300 | 1.40 | 1 | 262K | 16K | N/A | N/A |
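Per-token prices like those above translate into per-request cost as `input_tokens × input_price + output_tokens × output_price`, with token counts scaled to the pricing unit (assumed here, as is conventional, to be 1M tokens). A minimal sketch:

```python
def request_cost(input_price: float, output_price: float,
                 input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request, with prices quoted in USD per 1M tokens."""
    return (input_tokens / 1_000_000) * input_price \
         + (output_tokens / 1_000_000) * output_price

# Example: a 2,000-token prompt with a 500-token reply at Claude Opus 4.5's
# listed rates ($5.00 in / $25.00 out):
print(f"${request_cost(5.00, 25.00, 2_000, 500):.4f}")  # → $0.0225
```

Because output tokens are typically several times more expensive than input tokens, models with long responses (e.g. reasoning models) can cost far more per request than their input price alone suggests.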