Llama 4 Scout 17B 16e Instruct
Llama 4 Scout 17B 16e Instruct is a text model served by Groq with a context window of 131K tokens and a maximum output of 8K tokens. On Groq, pricing starts at $0.11 per million input tokens and $0.34 per million output tokens; across all providers, the cheapest is Lambda ($0.05 input / $0.10 output per million tokens).
Capabilities
✓ Vision · ✓ Function Calling · ✗ Reasoning · ✓ JSON Schema · ✗ System Messages · ✗ Web Search · ✗ Prompt Caching · ✗ Audio Input · ✗ Audio Output
Specifications
| Field | Value |
|---|---|
| Model Key | groq/meta-llama/llama-4-scout-17b-16e-instruct |
| Provider | Groq |
| Provider ID | groq |
| Mode | Text |
| Canonical Name | llama-scout-4-17b |
| Context Window | 131K tokens |
| Max Output | 8K tokens |
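The context window bounds the prompt plus the completion, so the usable prompt budget shrinks by however many output tokens you reserve. A minimal sketch, assuming the conventional reading of "131K" as 131,072 tokens and "8K" as 8,192 tokens:

```python
CONTEXT_WINDOW = 131_072  # "131K tokens" (assumed to mean 131,072)
MAX_OUTPUT = 8_192        # "8K tokens" (assumed to mean 8,192)

def max_prompt_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Prompt budget left after reserving room for the completion."""
    if reserved_output > MAX_OUTPUT:
        raise ValueError(f"model caps output at {MAX_OUTPUT} tokens")
    return CONTEXT_WINDOW - reserved_output
```

Reserving the full 8,192-token output leaves 122,880 tokens for the prompt; reserving less frees up correspondingly more prompt room.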
Pricing
| Type | $ per 1K Tokens | $ per 1M Tokens |
|---|---|---|
| Input Tokens | 0.000110 | 0.110 |
| Output Tokens | 0.000340 | 0.340 |
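Per-request cost is just a weighted sum of input and output token counts at the per-million rates above. A minimal sketch using the Groq rates from the pricing table:

```python
INPUT_PER_M = 0.11   # $ per 1M input tokens (Groq rate from the table above)
OUTPUT_PER_M = 0.34  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at Groq's listed rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M
```

For example, a request with 10,000 input tokens and 2,000 output tokens costs about $0.0018.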
Benchmarks
No benchmark data is available for this model.
Price Comparison by Provider
Compare prices for Llama 4 Scout 17B 16e Instruct across different providers. The same model may be available through multiple providers at different price points.
| Provider | Model Key | Input Price ($/1M) | Output Price ($/1M) |
|---|---|---|---|
| Together AI | together_ai/meta-llama/Llama-4-Scout-17B-16E-Instruct | 0.180 | 0.590 |
| SambaNova | sambanova/Llama-4-Scout-17B-16E-Instruct | 0.400 | 0.700 |
| OCI | oci/meta.llama-4-scout-17b-16e-instruct | 0.720 | 0.720 |
| Nscale | nscale/meta-llama/Llama-4-Scout-17B-16E-Instruct | 0.090 | 0.290 |
| Bedrock | meta.llama4-scout-17b-instruct-v1:0 | 0.170 | 0.660 |
| Meta Llama | meta_llama/Llama-4-Scout-17B-16E-Instruct-FP8 | N/A | N/A |
| Lambda | lambda_ai/llama-4-scout-17b-16e-instruct | 0.050 | 0.100 |
| Groq | groq/meta-llama/llama-4-scout-17b-16e-instruct | 0.110 | 0.340 |
| Fireworks AI | fireworks_ai/accounts/fireworks/models/llama4-scout-instruct-basic | 0.150 | 0.600 |
| DeepInfra | deepinfra/meta-llama/Llama-4-Scout-17B-16E-Instruct | 0.080 | 0.300 |
| Azure AI | azure_ai/Llama-4-Scout-17B-16E-Instruct | 0.200 | 0.780 |
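Because input and output rates differ per provider, the cheapest choice depends on your input/output mix. A minimal sketch that picks the lowest-cost provider for a given workload, using the rates from the comparison table (the `bedrock` label for the prefix-less `meta.llama4-...` key is an assumption; the N/A entry is omitted):

```python
# (input, output) prices in $ per 1M tokens, from the comparison table above
PRICES = {
    "together_ai":  (0.18, 0.59),
    "sambanova":    (0.40, 0.70),
    "oci":          (0.72, 0.72),
    "nscale":       (0.09, 0.29),
    "bedrock":      (0.17, 0.66),  # assumed label for meta.llama4-scout-17b-instruct-v1:0
    "lambda_ai":    (0.05, 0.10),
    "groq":         (0.11, 0.34),
    "fireworks_ai": (0.15, 0.60),
    "deepinfra":    (0.08, 0.30),
    "azure_ai":     (0.20, 0.78),
}

def cheapest(input_m: float, output_m: float) -> str:
    """Lowest-cost provider for a workload of input_m / output_m million tokens."""
    return min(PRICES, key=lambda p: PRICES[p][0] * input_m + PRICES[p][1] * output_m)
```

With these rates Lambda is cheapest on both axes, so it wins for any mix; the mix only matters when providers trade off input price against output price.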
All Variants
All available versions, regions, and API endpoints for Llama 4 Scout 17B 16e Instruct.