Mixtral 8x7B is Mistral AI's foundational sparse Mixture-of-Experts (MoE) language model, built from 8 experts of 7B parameters each. It offers a 33K-token context window and up to 8K output tokens, and is available from 3 providers, with pricing starting at $0.460 / 1M input tokens and $0.500 / 1M output tokens. Its sparse routing delivers strong generative performance with efficient active-parameter usage.
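To make the "efficient active-parameter usage" claim concrete, here is a back-of-the-envelope sketch. The 8-experts-of-7B figure comes from the card above; the top-2 routing value is an assumption reflecting Mixtral's published design, and shared (non-expert) layers are ignored, so these are rough bounds rather than exact counts.

```python
# Rough active-parameter arithmetic for a sparse MoE model.
# ASSUMPTION: top-2 routing (2 active experts per token), as in
# Mixtral's published configuration; shared layers are ignored.
EXPERTS = 8
EXPERT_PARAMS = 7e9        # 7B parameters per expert (from the card)
ACTIVE_EXPERTS = 2         # assumed top-2 routing

naive_total = EXPERTS * EXPERT_PARAMS          # upper bound on total size
naive_active = ACTIVE_EXPERTS * EXPERT_PARAMS  # params touched per token

print(f"naive total:  {naive_total / 1e9:.0f}B")   # → naive total:  56B
print(f"naive active: {naive_active / 1e9:.0f}B")  # → naive active: 14B
```

Only about a quarter of the naive total is exercised per token, which is why inference cost tracks the active count rather than the full parameter count.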
| Model ID | Type | Status | Context | Max output | Input | Output | Expert size |
|---|---|---|---|---|---|---|---|
| mistral-mixtral-8x7b | Language | Active | 33K tokens | 8K tokens | Text | Text | 7B |
Pricing by Provider
| Provider | Standard Input $ / 1M | Standard Output $ / 1M | Batch Input $ / 1M | Batch Output $ / 1M |
|---|---|---|---|---|
| Snowflake | $0.460 | $0.700 | $0.230 | $0.350 |
| Fireworks AI | $0.500 | $0.500 | — | — |
| Mistral AI | $0.700 | $0.700 | — | — |
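Per-request cost at these rates is a simple linear formula. The sketch below uses the Standard-tier prices from the table above; the `request_cost` helper is illustrative, not a provider API.

```python
# Standard-tier prices in USD per 1M tokens, from the pricing table above.
PRICES = {
    "Snowflake": (0.460, 0.700),
    "Fireworks AI": (0.500, 0.500),
    "Mistral AI": (0.700, 0.700),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at Standard-tier rates."""
    in_rate, out_rate = PRICES[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 10K input + 1K output tokens on Fireworks AI
print(round(request_cost("Fireworks AI", 10_000, 1_000), 6))  # → 0.0055
```

Note that Snowflake is cheapest on input but not on output, so the cheapest provider depends on a workload's input/output token ratio.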
Versions
| Version | Released | Context | Input / 1M | Output / 1M | Status |
|---|---|---|---|---|---|
| Dolphin Mixtral 2.7 8x7B | — | — | — | — | Available |
| Dolphin 2.6 Mixtral 8x7B | — | 33K | — | — | Available |
| Dolphin Mixtral 2.5 8x7B | — | — | — | — | Available |
| Hermes 2 8x7B DPO | — | 33K | — | — | Available |
| MiniMax M2.1 | — | 1.0M | $0.290 | $0.950 | Available |
| MiniMax M2 | — | 205K | $0.255 | $1.00 | Available |
| Mixtral 8x22B Instruct | — | 66K | $1.20 | $1.20 | Available |
| Mixtral 8x7B Instruct | — | 33K | $0.070 | $0.280 | Available |
| Mixtral 8x7B | — | 33K | $0.460 | $0.500 | Current |
| Hermes 2 Mixtral 8x7B DPO | — | 33K | $0.500 | $0.500 | Available |
| KARAKURI LM 8x7B Instruct | — | — | — | — | Available |