AI Models Comparison
Compare AI model pricing and benchmarks across providers including OpenAI, Anthropic, Google, AWS Bedrock, Azure, Mistral, and more. Filter by model capabilities like vision, function calling, and reasoning support. Find the most cost-effective model for your use case. Currently tracking 1,870 models across 102 providers.
Pricing data is based on LiteLLM, maintained by the open-source community; benchmark scores come from Artificial Analysis. Last updated: March 21, 2026 at 12:00 AM UTC.
| Model | Provider | Input Price, $ | Output Price, $ | Context | Max Output | Intelligence (rank) | Coding (rank) |
|---|---|---|---|---|---|---|---|
| DeepSeek R1 | Nebius | 0.800 | 2.40 | 128K | 128K | 27.1 (#60) | 24.0 (#57) |
| QwQ 32B | Nebius | 0.150 | 0.450 | 33K | 33K | 19.7 (#92) | N/A |
| Hermes 3 Llama 3.1 405B | Nebius | 1.00 | 3.00 | 128K | 128K | 17.6 (#106) | 18.1 (#84) |
| Meta Llama 3.1 405B Instruct | Nebius | 1.00 | 3.00 | 128K | 128K | 17.4 (#107) | 14.5 (#100) |
| Qwen3 235B A22B | Nebius | 0.200 | 0.600 | 262K | 262K | 17.0 (#112) | 14.0 (#105) |
| DeepSeek V3 | Nebius | 0.500 | 1.50 | 128K | 128K | 16.5 (#115) | 16.4 (#90) |
| DeepSeek R1 Distill Llama 70B | Nebius | 0.250 | 0.750 | 128K | 128K | 16.0 (#119) | 11.4 (#120) |
| Qwen2.5 72B Instruct | Nebius | 0.130 | 0.400 | 128K | 128K | 15.6 (#124) | 11.9 (#119) |
| Llama 3.1 Nemotron Ultra 253B V1 | Nebius | 0.600 | 1.80 | 128K | 128K | 15.0 (#130) | 13.1 (#114) |
| Qwen3 32B | Nebius | 0.100 | 0.300 | 33K | 33K | 14.5 (#136) | N/A |
| Llama 3.3 70B Instruct | Nebius | 0.130 | 0.400 | 128K | 128K | 14.5 (#136) | 10.7 (#130) |
| Llama 3.3 Nemotron Super 49B V1 | Nebius | 0.100 | 0.400 | 131K | 131K | 14.3 (#139) | 7.6 (#144) |
| Qwen2.5 32B Instruct | Nebius | 0.060 | 0.200 | 128K | 128K | 13.2 (#149) | N/A |
| Qwen3 4B | Nebius | 0.080 | 0.240 | 33K | 33K | 12.5 (#161) | N/A |
| Qwen3 30B A3B | Nebius | 0.100 | 0.300 | 33K | 33K | 12.5 (#161) | 13.3 (#112) |
| Meta Llama 3.1 70B Instruct | Nebius | 0.130 | 0.400 | 128K | 128K | 12.5 (#161) | 10.9 (#126) |
| Meta Llama 3.1 8B Instruct | Nebius | 0.020 | 0.060 | 128K | 128K | 11.8 (#174) | 4.9 (#157) |
| Gemma 3 27B It | Nebius | 0.060 | 0.200 | 128K | 128K | 10.3 (#184) | 9.6 (#138) |
| Qwen2.5 Coder 7B | Nebius | 0.010 | 0.030 | 33K | 33K | 10.0 (#191) | N/A |
| Qwen3 14B | Nebius | 0.080 | 0.240 | 33K | 33K | 7.4 (#225) | N/A |
| Qwen2.5 VL 72B Instruct | Nebius | 0.130 | 0.400 | 131K | 131K | N/A | N/A |
| Qwen2 VL 7B Instruct | Nebius | 0.020 | 0.060 | 131K | 131K | N/A | N/A |
| Qwen2 VL 72B Instruct | Nebius | 0.130 | 0.400 | 131K | 131K | N/A | N/A |
| Mistral Nemo Instruct 2407 | Nebius | 0.040 | 0.120 | 128K | 128K | N/A | N/A |
| Llama Guard 3 8B | Nebius | 0.020 | 0.060 | 128K | 128K | N/A | N/A |
| DeepSeek V3 0324 | Nebius | 0.500 | 1.50 | 128K | 128K | N/A | N/A |
| DeepSeek R1 0528 | Nebius | 0.800 | 2.40 | 164K | 164K | N/A | N/A |
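To turn the prices above into a per-request cost estimate, multiply token counts by the listed rates. A minimal sketch, assuming (as is conventional on pricing pages like this) that the listed prices are per 1 million tokens; the `request_cost` helper and the example token counts are illustrative, not part of any provider's API:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Estimated cost in USD for one request.

    input_price / output_price are assumed to be USD per 1M tokens,
    matching the table above.
    """
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000


# Example: DeepSeek R1 on Nebius ($0.800 input, $2.40 output per 1M tokens)
# for a request with a 4,000-token prompt and a 1,000-token completion.
cost = request_cost(4_000, 1_000, 0.800, 2.40)
print(f"${cost:.4f}")  # (4000*0.80 + 1000*2.40) / 1e6 = $0.0056
```

The same arithmetic makes the cheap end of the table concrete: Meta Llama 3.1 8B Instruct at $0.020/$0.060 would cost well under a hundredth of a cent for the same request.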