Llama 4 Maverick 17B 128e Instruct Fp8
Llama 4 Maverick 17B 128e Instruct Fp8 is a text model from Novita AI with a 1.0M-token context window and a maximum output of 8K tokens. Pricing starts at $0.27 per million input tokens and $0.85 per million output tokens (cheapest at Novita AI).
Capabilities
- ✓ Vision
- ✗ Function Calling
- ✗ Reasoning
- ✗ JSON Schema
- ✓ System Messages
- ✗ Web Search
- ✗ Prompt Caching
- ✗ Audio Input
- ✗ Audio Output
Specifications
| Specification | Value |
|---|---|
| Model Key | novita/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 |
| Provider | Novita AI |
| Provider ID | novita |
| Mode | Text |
| Canonical Name | llama-maverick-4-17b-128e |
| Context Window | 1.0M tokens |
| Max Output | 8K tokens |
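The context window and max output limits above interact: a request's output is capped at 8K tokens, and prompt plus output together must stay within the 1.0M-token window. A minimal sketch of that check (token counts are illustrative; exact counts would require the model's tokenizer):

```python
# Limits taken from the specifications table.
CONTEXT_WINDOW = 1_000_000  # 1.0M tokens
MAX_OUTPUT = 8_000          # 8K tokens

def fits(prompt_tokens: int, requested_output: int) -> bool:
    """Return True if a request fits this model's limits:
    output is capped at MAX_OUTPUT, and prompt + output
    must not exceed the context window."""
    if requested_output > MAX_OUTPUT:
        return False
    return prompt_tokens + requested_output <= CONTEXT_WINDOW

print(fits(900_000, 8_000))   # large prompt, full output budget
print(fits(999_000, 8_000))   # prompt + output would overflow the window
```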
Pricing
| Type | Per 1K Tokens, $ | Per 1M Tokens, $ |
|---|---|---|
| Input Tokens | 0.000270 | 0.270 |
| Output Tokens | 0.000850 | 0.850 |
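Applying the per-1M rates above, the cost of a single request can be estimated from its token counts. A minimal sketch (the 100K/8K workload is illustrative):

```python
# Per-1M-token rates from the pricing table (USD).
INPUT_PER_M = 0.27
OUTPUT_PER_M = 0.85

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from token counts."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 100K-token prompt with an 8K-token completion:
print(round(cost_usd(100_000, 8_000), 4))  # 27000 + 6800 = 33800 micro-dollars
```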
Benchmarks
No benchmark data is available for this model.
Price Comparison by Provider
Compare prices for Llama 4 Maverick 17B 128e Instruct Fp8 across different providers. The same model may be available through multiple providers at different price points.
| Provider | Model Key | Input Price, $ | Output Price, $ |
|---|---|---|---|
| Vertex AI | vertex_ai/meta/llama-4-maverick-17b-128e-instruct-maas | 0.350 | 1.15 |
| Novita AI | novita/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 | 0.270 | 0.850 |
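Given the rates in the comparison table, the cheapest provider for a given workload can be computed directly. A minimal sketch (since Novita AI is cheaper on both input and output here, it wins for any workload; the function generalizes to providers that trade off input vs. output pricing):

```python
# Per-1M-token rates from the comparison table (USD): (input, output).
PROVIDERS = {
    "novita": (0.27, 0.85),
    "vertex_ai": (0.35, 1.15),
}

def cheapest(input_tokens: int, output_tokens: int) -> str:
    """Return the provider with the lowest total cost for this workload."""
    def total(rates: tuple[float, float]) -> float:
        inp, out = rates
        return (input_tokens * inp + output_tokens * out) / 1_000_000
    return min(PROVIDERS, key=lambda p: total(PROVIDERS[p]))

print(cheapest(1_000_000, 100_000))
```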
All Variants
All available versions, regions, and API endpoints for Llama 4 Maverick 17B 128e Instruct Fp8.
| Model Key | Provider | Mode | Input Price, $ | Output Price, $ | Context | Max Output | Vision | Functions |
|---|---|---|---|---|---|---|---|---|
| novita/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 | Novita AI | Text | 0.270 | 0.850 | 1.0M | 8K | yes | no |
| vertex_ai/meta/llama-4-maverick-17b-128e-instruct-maas | Vertex AI | Text | 0.350 | 1.15 | 1.0M | 1.0M | no | yes |