Compare AI model pricing and benchmarks across providers including OpenAI, Anthropic, Google, AWS Bedrock, Azure, Mistral, and more. Filter by model capabilities like vision, function calling, and reasoning support. Find the most cost-effective model for your use case. Currently tracking 1,771 models across 99 providers.

Pricing data is based on LiteLLM, maintained by the open-source community, and benchmark data comes from Artificial Analysis. The latest update occurred on February 27, 2026 at 12:00 AM UTC.

| Model | Provider | Input Price, $/1M | Output Price, $/1M | Context | Max Output | Intelligence | Coding |
|---|---|---|---|---|---|---|---|
| Sonar Reasoning | Perplexity | 1.00 | 5.00 | 128K | N/A | 17.9 | N/A |
| Sonar | Perplexity | 1.00 | 1.00 | 128K | N/A | 15.5 | N/A |
| Sonar Reasoning Pro | Perplexity | 2.00 | 8.00 | 128K | N/A | 15.2 | N/A |
| Llama 3.1 70B Instruct | Perplexity | 1.00 | 1.00 | 131K | 131K | 13.1 | 10.9 |
| Llama 3.1 8B Instruct | Perplexity | 0.200 | 0.200 | 131K | 131K | 11.3 | 4.9 |
| Llama 2 70B Chat | Perplexity | 0.700 | 2.80 | 4K | 4K | 8.4 | N/A |
| Mixtral 8x7B Instruct | Perplexity | 0.070 | 0.280 | 4K | 4K | 7.7 | N/A |
| Mistral 7B Instruct | Perplexity | 0.070 | 0.280 | 4K | 4K | 7.4 | N/A |
| Sonar Small Chat | Perplexity | 0.070 | 0.280 | 16K | 16K | N/A | N/A |
| Sonar Medium Chat | Perplexity | 0.600 | 1.80 | 16K | 16K | N/A | N/A |
| Sonar Deep Research | Perplexity | 2.00 | 8.00 | 128K | N/A | N/A | N/A |
| Pplx 7B Chat | Perplexity | 0.070 | 0.280 | 8K | 8K | N/A | N/A |
| Pplx 70B Chat | Perplexity | 0.700 | 2.80 | 4K | 4K | N/A | N/A |
| Llama 3.1 Sonar Small 128K Chat | Perplexity | 0.200 | 0.200 | 131K | 131K | N/A | N/A |
| Llama 3.1 Sonar Large 128K Chat | Perplexity | 1.00 | 1.00 | 131K | 131K | N/A | N/A |
| Llama 3.1 Sonar Huge 128K Online | Perplexity | 5.00 | 5.00 | 127K | 127K | N/A | N/A |
| Codellama 70B Instruct | Perplexity | 0.700 | 2.80 | 16K | 16K | N/A | N/A |
| Codellama 34B Instruct | Perplexity | 0.350 | 1.40 | 16K | 16K | N/A | N/A |
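Since prices are quoted per 1M tokens for input and output separately, the cost of a single request can be computed directly from the table. A minimal sketch (the `request_cost` helper is hypothetical; the example prices are the Sonar Reasoning row above):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Dollar cost of one request, given per-1M-token prices."""
    return (input_tokens / 1_000_000) * input_per_m + \
           (output_tokens / 1_000_000) * output_per_m

# Sonar Reasoning: $1.00 input / $5.00 output per 1M tokens,
# for a 2,000-token prompt and a 500-token completion.
cost = request_cost(2_000, 500, 1.00, 5.00)
print(f"${cost:.4f}")  # → $0.0045
```

Because output tokens often cost several times more than input tokens (5x for Sonar Reasoning), the cheapest model for long prompts with short answers may differ from the cheapest one for short prompts with long answers.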