Llama 4 Maverick 17B 128E Instruct FP8
Llama 4 Maverick 17B 128E Instruct FP8 is a text model from Meta, available through Together AI. Via Together AI, pricing is $0.27 per million input tokens and $0.85 per million output tokens; the cheapest provider listed below is Lambda, at $0.05 per million input and $0.10 per million output tokens.
Capabilities
- ✗ Vision
- ✓ Function Calling
- ✗ Reasoning
- ✓ JSON Schema
- ✗ System Messages
- ✗ Web Search
- ✗ Prompt Caching
- ✗ Audio Input
- ✗ Audio Output
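The two supported capabilities, function calling and JSON Schema output, are typically exercised through an OpenAI-compatible chat-completions request. A minimal sketch of such a payload is below; the `get_weather` tool and the exact `response_format` shape are illustrative assumptions, and individual providers may accept slightly different request fields.

```python
# Hypothetical sketch of an OpenAI-compatible request exercising the two
# supported capabilities: function calling ("tools") and JSON-schema output
# ("response_format"). The tool name and schema are made up for illustration.

MODEL_KEY = "together_ai/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"

def build_request(user_message: str) -> dict:
    """Assemble a chat-completion payload with one tool and a JSON schema."""
    return {
        "model": MODEL_KEY,
        "messages": [{"role": "user", "content": user_message}],
        # Function calling: declare one callable tool (hypothetical).
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        # JSON Schema: constrain the completion to a fixed structure.
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "answer",
                "schema": {
                    "type": "object",
                    "properties": {"text": {"type": "string"}},
                },
            },
        },
    }

payload = build_request("What's the weather in Lisbon?")
print(sorted(payload))
```

The payload would then be sent to the provider's chat-completions endpoint with an HTTP client or SDK of your choice.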
Specifications
| Attribute | Value |
|---|---|
| Model Key | together_ai/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 |
| Provider | Together AI |
| Provider ID | together_ai |
| Mode | Text |
| Canonical Name | llama-maverick-4-17b |
| Context Window | N/A |
| Max Output | N/A |
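The model key above appears to follow the `provider_id/model-path` convention, with the provider ID as the segment before the first slash. A small sketch of splitting a key this way, assuming that convention holds:

```python
# Sketch: split a "provider_id/model-path" model key into its parts,
# assuming the provider ID is everything before the first "/".
def split_model_key(key: str) -> tuple[str, str]:
    provider_id, _, model_path = key.partition("/")
    return provider_id, model_path

provider_id, model_path = split_model_key(
    "together_ai/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"
)
print(provider_id)  # -> together_ai
```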
Pricing
| Type | Per 1K Tokens, $ | Per 1M Tokens, $ |
|---|---|---|
| Input Tokens | 0.000270 | 0.270 |
| Output Tokens | 0.000850 | 0.850 |
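Per-request cost follows directly from the per-million rates above: multiply each token count by its rate and divide by 1,000,000. A minimal sketch using the Together AI prices:

```python
# Sketch: estimate request cost from the per-million-token rates above
# (Together AI: $0.27/1M input, $0.85/1M output).
INPUT_PER_M = 0.27   # $ per 1M input tokens
OUTPUT_PER_M = 0.85  # $ per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token completion:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # -> $0.0044
```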
Benchmarks
No benchmark data is available for this model.
Price Comparison by Provider
Compare prices for Llama 4 Maverick 17B 128E Instruct FP8 across different providers. The same model may be available through multiple providers at different price points.
| Model Key | Input Price, $/1M | Output Price, $/1M |
|---|---|---|
| watsonx/meta-llama/llama-4-maverick-17b | 0.350 | 1.40 |
| together_ai/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | 0.270 | 0.850 |
| sambanova/Llama-4-Maverick-17B-128E-Instruct | 0.630 | 1.80 |
| oci/meta.llama-4-maverick-17b-128e-instruct-fp8 | 0.720 | 0.720 |
| meta.llama4-maverick-17b-instruct-v1:0 | 0.240 | 0.970 |
| meta_llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | N/A | N/A |
| lambda_ai/llama-4-maverick-17b-128e-instruct-fp8 | 0.050 | 0.100 |
| groq/meta-llama/llama-4-maverick-17b-128e-instruct | 0.200 | 0.600 |
| fireworks_ai/accounts/fireworks/models/llama4-maverick-instruct-basic | 0.220 | 0.880 |
| deepinfra/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | 0.150 | 0.600 |
| databricks/databricks-llama-4-maverick | 0.500 | 1.50 |
| azure_ai/Llama-4-Maverick-17B-128E-Instruct-FP8 | 1.41 | 0.350 |
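Because the cheapest provider depends on your input/output token mix, it can be worth computing the blended cost per provider rather than comparing a single column. A sketch below uses the priced rows from the comparison table (the row with N/A prices is omitted), keyed by the provider prefix of each model key:

```python
# Sketch: pick the cheapest provider for a given traffic mix, using the
# per-million-token prices from the comparison table above. Rows without
# prices are omitted; keys are the provider prefixes of the model keys.
PRICES = {  # provider key -> (input $/1M, output $/1M)
    "watsonx": (0.35, 1.40),
    "together_ai": (0.27, 0.85),
    "sambanova": (0.63, 1.80),
    "oci": (0.72, 0.72),
    "meta.llama4-maverick-17b-instruct-v1:0": (0.24, 0.97),
    "lambda_ai": (0.05, 0.10),
    "groq": (0.20, 0.60),
    "fireworks_ai": (0.22, 0.88),
    "deepinfra": (0.15, 0.60),
    "databricks": (0.50, 1.50),
    "azure_ai": (1.41, 0.35),
}

def cheapest(input_millions: float, output_millions: float) -> str:
    """Return the provider with the lowest total cost for this token mix."""
    return min(
        PRICES,
        key=lambda k: PRICES[k][0] * input_millions + PRICES[k][1] * output_millions,
    )

# 1M input + 1M output tokens:
print(cheapest(1.0, 1.0))  # -> lambda_ai
```

Here Lambda dominates on both columns, so it wins for every mix; with other price tables the answer can flip between input-heavy and output-heavy workloads.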
All Variants
All available versions, regions, and API endpoints for Llama 4 Maverick 17B 128E Instruct FP8.