Llama 4 Scout 17B 16E Instruct
Llama 4 Scout 17B 16E Instruct is a text model from Nscale. Pricing starts at $0.05 per million input tokens and $0.10 per million output tokens (cheapest at Lambda).
Capabilities
- ✗ Vision
- ✗ Function Calling
- ✗ Reasoning
- ✗ JSON Schema
- ✗ System Messages
- ✗ Web Search
- ✗ Prompt Caching
- ✗ Audio Input
- ✗ Audio Output
Specifications
| Model Key | nscale/meta-llama/Llama-4-Scout-17B-16E-Instruct |
|---|---|
| Provider | Nscale |
| Provider ID | nscale |
| Mode | Text |
| Canonical Name | llama-scout-4-17b |
| Context Window | N/A |
| Max Output | N/A |
Pricing
| Type | Per 1K Tokens, $ | Per 1M Tokens, $ |
|---|---|---|
| Input Tokens | 0.000090 | 0.090 |
| Output Tokens | 0.000290 | 0.290 |
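At these rates, the cost of a request is a linear function of its token counts. A minimal sketch (the function name is illustrative; the default prices are the per-million figures from the table above):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float = 0.090,
                 output_per_m: float = 0.290) -> float:
    """Cost in dollars, given per-million-token prices."""
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000

# A 2M-input, 500K-output workload at these rates:
print(round(request_cost(2_000_000, 500_000), 4))  # 0.18 + 0.145 = 0.325
```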
Benchmarks
No benchmark data is available for this model.
Price Comparison by Provider
Compare prices for Llama 4 Scout 17B 16E Instruct across different providers. The same model may be available through multiple providers at different price points.
| Model Key | Input Price, $ | Output Price, $ |
|---|---|---|
| together_ai/meta-llama/Llama-4-Scout-17B-16E-Instruct | 0.180 | 0.590 |
| sambanova/Llama-4-Scout-17B-16E-Instruct | 0.400 | 0.700 |
| oci/meta.llama-4-scout-17b-16e-instruct | 0.720 | 0.720 |
| nscale/meta-llama/Llama-4-Scout-17B-16E-Instruct | 0.090 | 0.290 |
| meta.llama4-scout-17b-instruct-v1:0 | 0.170 | 0.660 |
| meta_llama/Llama-4-Scout-17B-16E-Instruct-FP8 | N/A | N/A |
| lambda_ai/llama-4-scout-17b-16e-instruct | 0.050 | 0.100 |
| groq/meta-llama/llama-4-scout-17b-16e-instruct | 0.110 | 0.340 |
| fireworks_ai/accounts/fireworks/models/llama4-scout-instruct-basic | 0.150 | 0.600 |
| deepinfra/meta-llama/Llama-4-Scout-17B-16E-Instruct | 0.080 | 0.300 |
| azure_ai/Llama-4-Scout-17B-16E-Instruct | 0.200 | 0.780 |
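Since the same weights are served at very different price points, picking a provider can be automated. A minimal sketch using a few rows from the comparison above (the dict is a hand-copied subset, and the 3:1 input-to-output token mix is an assumption, not a figure from this page):

```python
# Per-million-token (input, output) prices, copied from the comparison table.
PRICES = {
    "lambda_ai": (0.050, 0.100),
    "deepinfra": (0.080, 0.300),
    "nscale":    (0.090, 0.290),
    "groq":      (0.110, 0.340),
    "azure_ai":  (0.200, 0.780),
}

def blended_rate(inp: float, out: float, input_share: float = 0.75) -> float:
    """Effective $/1M tokens, assuming input_share of traffic is input tokens."""
    return input_share * inp + (1 - input_share) * out

cheapest = min(PRICES, key=lambda p: blended_rate(*PRICES[p]))
print(cheapest)  # lambda_ai
```

At a different input/output mix the ranking can change, since output tokens are priced two to four times higher than input tokens across these providers.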
All Variants
All available versions, regions, and API endpoints for Llama 4 Scout 17B 16E Instruct.