# AI Models Comparison
Compare AI model pricing and benchmarks across providers including OpenAI, Anthropic, Google, AWS Bedrock, Azure, Mistral, and more. Filter by model capabilities like vision, function calling, and reasoning support. Find the most cost-effective model for your use case. Currently tracking 1,863 models across 102 providers.
Pricing data comes from LiteLLM, maintained by the open-source community; benchmark data comes from Artificial Analysis. Last updated March 16, 2026 at 12:00 AM UTC.
| Model | Input Price ($ / 1M tokens) | Output Price ($ / 1M tokens) | Context | Max Output | Intelligence (rank) | Coding (rank) |
|---|---|---|---|---|---|---|
| GPT-5.2 | 1.75 | 14.00 | 410K | 32K | 51.3 (#4) | 48.7 (#4) |
| Gemini 3 Pro Preview | 2.00 | 12.00 | 1.0M | 66K | 48.4 (#7) | 46.5 (#7) |
| GPT-5.1 | 1.25 | 10.00 | 410K | 32K | 47.7 (#8) | 44.7 (#9) |
| GPT-5 | 1.25 | 10.00 | 410K | 32K | 44.6 (#12) | 36.0 (#26) |
| Claude Opus 4.5 | 5.00 | 25.00 | 410K | 32K | 43.1 (#15) | 42.9 (#12) |
| GLM 4.7 FP8 | 0.400 | 2.00 | 203K | 16K | 42.1 (#17) | 36.3 (#25) |
| Claude Sonnet 4.5 | 3.00 | 15.00 | 410K | 32K | 37.1 (#29) | 33.5 (#32) |
| Gemini 3 Flash Preview | 0.500 | 3.00 | 1.0M | 66K | 35.0 (#33) | 37.8 (#20) |
| Claude Sonnet 4 | 3.00 | 15.00 | 410K | 32K | 33.0 (#37) | 30.6 (#34) |
| DeepSeek V3.2 | 0.280 | 0.400 | 164K | 16K | 32.1 (#38) | 34.6 (#30) |
| Kimi K2 Thinking | 0.800 | 1.20 | 262K | 16K | 24.1 (#68) | 15.5 (#93) |
| DeepSeek V3 0324 | 0.280 | 0.880 | 164K | 16K | 22.3 (#78) | 22.0 (#64) |
| Claude Opus 4 | 15.00 | 75.00 | 410K | 32K | 22.2 (#79) | N/A |
| GPT-4o | 2.50 | 10.00 | 131K | 16K | 17.3 (#105) | 16.7 (#85) |
| Qwen3 VL 235B A22B Instruct FP8 | 0.300 | 1.40 | 262K | 16K | 17.0 (#109) | 14.0 (#102) |
| GPT-4o mini | 0.150 | 0.600 | 131K | 16K | 12.6 (#157) | N/A |
| MiniMax M2.1 | 0.300 | 1.20 | 197K | 16K | N/A | N/A |
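To compare cost-effectiveness across models, the per-token prices can be applied to a concrete workload. The sketch below is a minimal example, assuming the table's prices are USD per 1M tokens; the 50K-input / 5K-output workload and the three sampled models are arbitrary illustrations, not values from the source.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """USD cost for one request, given prices in $ per 1M tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Hypothetical workload: 50K input tokens, 5K output tokens per request.
# Prices taken from the table above ($ per 1M tokens, assumed).
models = {
    "GPT-5.1": (1.25, 10.00),
    "Claude Opus 4.5": (5.00, 25.00),
    "DeepSeek V3.2": (0.280, 0.400),
}

for name, (inp, outp) in sorted(models.items(),
                                key=lambda kv: request_cost(50_000, 5_000, *kv[1])):
    print(f"{name}: ${request_cost(50_000, 5_000, inp, outp):.4f} per request")
```

Output prices dominate for generation-heavy workloads, so a model with a low input price but a high output price can still be expensive; sorting by total workload cost rather than either column alone avoids that trap.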