AI Models Comparison
Compare AI model pricing and benchmarks across providers including OpenAI, Anthropic, Google, AWS Bedrock, Azure, Mistral, and more. Filter by model capabilities like vision, function calling, and reasoning support. Find the most cost-effective model for your use case. Currently tracking 1,872 models across 102 providers.
Pricing data is sourced from LiteLLM, maintained by the open-source community; benchmark data comes from Artificial Analysis. Last updated March 25, 2026 at 12:00 AM UTC.
| Model | Input Price, $/1M | Output Price, $/1M | Context | Max Output | Intelligence (rank) | Coding (rank) |
|---|---|---|---|---|---|---|
| GPT-oss-120b | 0.080 | 0.400 | 131K | 131K | 33.3 (#39) | 28.6 (#45) |
| GPT-oss-20b | 0.040 | 0.150 | 131K | 131K | 24.5 (#72) | 18.5 (#80) |
| DeepSeek R1 Distill Llama 70B | 0.670 | 0.670 | 131K | 131K | 16.0 (#119) | 11.4 (#120) |
| Mistral Small 3.2 24B Instruct 2506 | 0.090 | 0.280 | 128K | 128K | 15.1 (#128) | 13.3 (#112) |
| Qwen3 32B | 0.080 | 0.230 | 32K | 32K | 14.5 (#136) | N/A |
| Meta Llama 3.3 70B Instruct | 0.670 | 0.670 | 131K | 131K | 14.5 (#136) | 10.7 (#130) |
| Qwen2.5 Coder 32B Instruct | 0.870 | 0.870 | 32K | 32K | 12.9 (#154) | N/A |
| Meta Llama 3.1 70B Instruct | 0.670 | 0.670 | 131K | 131K | 12.5 (#161) | 10.9 (#126) |
| Llama 3.1 8B Instruct | 0.100 | 0.100 | 131K | 131K | 11.8 (#174) | 4.9 (#157) |
| Qwen2.5 VL 72B Instruct | 0.910 | 0.910 | 32K | 32K | N/A | N/A |
| Mixtral 8x7B Instruct v0.1 | 0.630 | 0.630 | 32K | 32K | N/A | N/A |
| Mistral Nemo Instruct 2407 | 0.130 | 0.130 | 118K | 118K | N/A | N/A |
| Mistral 7B Instruct v0.3 | 0.100 | 0.100 | 127K | 127K | N/A | N/A |
| Mamba-Codestral 7B v0.1 | 0.190 | 0.190 | 256K | 256K | N/A | N/A |
| LLaVA v1.6 Mistral 7B HF | 0.290 | 0.290 | 32K | 32K | N/A | N/A |
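To turn the per-token prices above into a per-request dollar cost, multiply token counts by the listed rates. A minimal sketch, assuming the prices are quoted in dollars per 1M tokens (the usual convention on pricing pages); the `request_cost` helper and the 10K-input / 2K-output workload are illustrative, not part of the source:

```python
# Sketch: estimate per-request cost from the pricing table, assuming
# the listed prices are dollars per 1M tokens.

def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Dollar cost of one request, given per-1M-token prices."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# A few (model, input price, output price) rows copied from the table.
MODELS = [
    ("GPT-oss-120b", 0.080, 0.400),
    ("GPT-oss-20b", 0.040, 0.150),
    ("Mistral Small 3.2 24B Instruct 2506", 0.090, 0.280),
    ("Llama 3.1 8B Instruct", 0.100, 0.100),
]

# Cheapest of these for a prompt-heavy workload: 10K input / 2K output tokens.
cheapest = min(MODELS, key=lambda m: request_cost(10_000, 2_000, m[1], m[2]))
```

Note that the ranking depends on the input/output mix: a model with a low output price (e.g. Llama 3.1 8B Instruct at a flat $0.100) can overtake one with a lower input price once generation dominates the token count.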