Mixtral 8x22B is Mistral AI's sparse mixture-of-experts (MoE) language model, with a 66K-token context window and up to 8K output tokens. It is available from 2 providers, starting at $1.20 / 1M input tokens and $1.20 / 1M output tokens. Each layer routes tokens across 8 experts of roughly 22B parameters each, so only ~39B of the model's 141B total parameters are active per token, giving cost-efficient, high-quality text generation.
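As a sanity check on those figures, here is a minimal back-of-the-envelope sketch of how the 141B-total / ~39B-active numbers fit together. It assumes Mixtral's published top-2 routing over 8 experts and a simple split into shared weights (attention, embeddings) plus per-expert FFN weights; the exact layer-by-layer breakdown is not listed on this page, so the derived numbers are illustrative.

```python
# Back-of-the-envelope MoE parameter math for Mixtral 8x22B.
# Assumptions: top-2 routing over 8 experts, and a simple split into
# "shared" weights (attention, embeddings) plus per-expert FFN weights.
TOTAL_PARAMS = 141e9    # total parameters (from the page)
ACTIVE_PARAMS = 39e9    # active parameters per token (from the page)
NUM_EXPERTS = 8
EXPERTS_PER_TOKEN = 2   # Mixtral uses top-2 routing

# total  = shared + NUM_EXPERTS       * expert_ffn
# active = shared + EXPERTS_PER_TOKEN * expert_ffn
# Subtracting the two equations isolates expert_ffn; shared follows.
expert_ffn = (TOTAL_PARAMS - ACTIVE_PARAMS) / (NUM_EXPERTS - EXPERTS_PER_TOKEN)
shared = ACTIVE_PARAMS - EXPERTS_PER_TOKEN * expert_ffn

print(f"per-expert FFN params: {expert_ffn / 1e9:.0f}B")  # ~17B
print(f"shared params:         {shared / 1e9:.0f}B")      # ~5B
```

Note that 8 × 22B exceeds the 141B total because the experts share the non-FFN weights; the "22B" in the name is nominal.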
| Model | `mistral-mixtral-8x22b` |
|---|---|
| Type | Language |
| Status | Active |
| Context window | 66K tokens |
| Max output | 8K tokens |
| Input modalities | Text |
| Output modalities | Text |
| Parameters (per expert) | 22B |
Pricing by Provider
Standard tier, USD per 1M tokens:

| Provider | Input $ / 1M | Output $ / 1M |
|---|---|---|
| Fireworks AI | $1.20 | $1.20 |
| Mistral AI | $2.00 | $6.00 |
Cost Calculator
Compares every provider and tier in USD.
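For reference, a minimal sketch of the computation behind such a calculator, using the Standard-tier prices from the table above; the function name and example token counts are illustrative.

```python
# Minimal per-request cost calculator over the providers listed above.
# Prices are USD per 1M tokens, copied from the Standard-tier pricing
# table; this page lists no other tiers.
PRICING = {
    "Fireworks AI": {"input": 1.20, "output": 1.20},
    "Mistral AI":   {"input": 2.00, "output": 6.00},
}

def cost_usd(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the given provider."""
    p = PRICING[provider]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 50K-token prompt with a 4K-token completion.
for provider in PRICING:
    print(f"{provider}: ${cost_usd(provider, 50_000, 4_000):.4f}")
# Fireworks AI: $0.0648
# Mistral AI:   $0.1240
```

Because Mistral AI charges 5× more per output token ($6.00 vs $1.20), the gap widens for completion-heavy workloads.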
Versions
| Version | Released | Context | Input $ / 1M | Output $ / 1M | Status |
|---|---|---|---|---|---|
| Dolphin Mixtral 2.7 8x7B | — | — | — | — | Available |
| Dolphin 2.6 Mixtral 8x7B | — | 33K | — | — | Available |
| Dolphin Mixtral 2.5 8x7B | — | — | — | — | Available |
| Hermes 2 8x7B DPO | — | 33K | — | — | Available |
| MiniMax M2.1 | — | 1.0M | $0.290 | $0.950 | Available |
| MiniMax M2 | — | 205K | $0.255 | $1.00 | Available |
| Mixtral 8x22B Instruct | — | 66K | $1.20 | $1.20 | Available |
| Mixtral 8x7B Instruct | — | 33K | $0.070 | $0.280 | Available |
| Mixtral 8x22B | — | 66K | $1.20 | $1.20 | Current |
| Hermes 2 Mixtral 8x7B DPO | — | 33K | $0.500 | $0.500 | Available |
| KARAKURI LM 8x7B Instruct | — | — | — | — | Available |