Compare AI model pricing and benchmarks across providers including OpenAI, Anthropic, Google, AWS Bedrock, Azure, Mistral, and more. Filter by model capabilities like vision, function calling, and reasoning support. Find the most cost-effective model for your use case. Currently tracking 1,872 models across 102 providers.

Pricing data is based on LiteLLM, maintained by the open-source community; benchmark data comes from Artificial Analysis. The latest update occurred on March 25, 2026 at 12:00 AM UTC.

| Model | Provider | Input Price ($/1M) | Output Price ($/1M) | Context | Max Output | Intelligence (rank) | Coding (rank) |
|---|---|---|---|---|---|---|---|
| Qwen3 235B A22B Thinking 2507 | Weights & Biases | 0.010 | 0.010 | 262K | 262K | 39.9 (#29) | 30.5 (#38) |
| GPT-oss-120b | Weights & Biases | 0.015 | 0.060 | 131K | 131K | 33.3 (#39) | 28.6 (#45) |
| DeepSeek V3.1 | Weights & Biases | 0.055 | 0.165 | 128K | 128K | 28.1 (#59) | 28.4 (#46) |
| GLM 4.5 | Weights & Biases | 0.055 | 0.200 | 131K | 131K | 26.4 (#64) | 26.3 (#49) |
| Kimi K2 Instruct | Weights & Biases | 0.600 | 2.50 | 128K | 128K | 26.3 (#65) | 22.1 (#65) |
| Qwen3 235B A22B Instruct 2507 | Weights & Biases | 0.010 | 0.010 | 262K | 262K | 25.0 (#70) | 22.1 (#65) |
| Qwen3 Coder 480B A35B Instruct | Weights & Biases | 0.100 | 0.150 | 262K | 262K | 24.8 (#71) | 24.6 (#56) |
| GPT-oss-20b | Weights & Biases | 0.0050 | 0.020 | 131K | 131K | 24.5 (#72) | 18.5 (#80) |
| DeepSeek V3 0324 | Weights & Biases | 0.114 | 0.275 | 161K | 161K | 22.3 (#82) | 22.0 (#67) |
| Llama 3.3 70B Instruct | Weights & Biases | 0.071 | 0.071 | 128K | 128K | 14.5 (#136) | 10.7 (#130) |
| Llama 3.1 8B Instruct | Weights & Biases | 0.022 | 0.022 | 128K | 128K | 11.8 (#174) | 4.9 (#157) |
| Phi 4 Mini Instruct | Weights & Biases | 0.0080 | 0.035 | 128K | 128K | 8.4 (#211) | 3.6 (#162) |
| Llama 4 Scout 17B 16E Instruct | Weights & Biases | 0.017 | 0.066 | 64K | 64K | N/A | N/A |
| DeepSeek R1 0528 | Weights & Biases | 0.135 | 0.540 | 161K | 161K | N/A | N/A |
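To estimate what a model actually costs for your workload, multiply your input and output token counts by the per-1M prices in the table. A minimal sketch (the function name and the example token counts are illustrative, not part of the comparison tool):

```python
def request_cost(input_price_per_m, output_price_per_m, input_tokens, output_tokens):
    """Estimated cost in USD for one request.

    Prices are quoted per 1M tokens, as in the table above.
    """
    return (input_tokens / 1_000_000) * input_price_per_m + \
           (output_tokens / 1_000_000) * output_price_per_m

# GPT-oss-120b: $0.015 input / $0.060 output per 1M tokens
cost = request_cost(0.015, 0.060, input_tokens=100_000, output_tokens=10_000)
print(f"${cost:.4f}")  # → $0.0021
```

Note that output tokens are typically several times more expensive than input tokens, so for long-generation workloads (e.g. reasoning models) the output price dominates the comparison.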