Mercury 2 is Inception's language model, with a 128K context window and up to 50K output tokens. It is available from 2 providers, starting at $0.250 / 1M input tokens and $0.750 / 1M output tokens. Mercury 2 is the first reasoning diffusion LLM (dLLM): it produces and refines multiple tokens in parallel, delivering fast reasoning with tool-use capabilities.
Capabilities

Mercury 2 supports 1 of 5 input modalities, 1 of 5 output modalities, and 2 of 13 listed capabilities (the individual capability labels are not recoverable from this page).
Pricing by Provider
| Provider | Input $ / 1M | Output $ / 1M | Cache Read $ / 1M |
|---|---|---|---|
| OpenRouter | $0.250 | $0.750 | $0.025 |
| Vercel AI Gateway | $0.250 | $0.750 | $0.025 |
Cost Calculator
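The per-token arithmetic behind the calculator can be sketched in a few lines. This is a minimal sketch using the rates from the pricing table above; the assumption that cache-read tokens are billed at the cache rate *instead of* the normal input rate is mine, not stated on this page.

```python
# Estimate Mercury 2 usage cost in USD from the per-1M-token rates
# in the provider pricing table above (both providers charge the same).
INPUT_PER_M = 0.250       # $ per 1M input tokens
OUTPUT_PER_M = 0.750      # $ per 1M output tokens
CACHE_READ_PER_M = 0.025  # $ per 1M cached input tokens

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return the estimated cost in USD for one request or a batch.

    Assumption: cached_tokens is the portion of input_tokens served from
    cache, billed at the cache-read rate rather than the input rate.
    """
    billable_input = input_tokens - cached_tokens
    return (billable_input * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a full 1M-token context with 100K output tokens, no cache hits.
print(round(estimate_cost(1_000_000, 100_000), 4))  # 0.325
```

With heavy cache reuse the input side becomes almost free: at $0.025 / 1M, a fully cached 1M-token prompt costs $0.025 instead of $0.25.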
Versions
| Version | Released | Context | Input / 1M | Output / 1M | Status |
|---|---|---|---|---|---|
| Mercury 2 | — | 128K | $0.250 | $0.750 | Current |
| Mercury Coder Small Beta | — | 32K | $0.250 | $1.00 | Available |
| Mercury | — | — | — | — | Available |
| Mercury Coder | — | — | — | — | Available |