Nemotron Nano 2 12B is NVIDIA's language model with a 131K context window and up to 8K output tokens, available from 2 providers. It is a 12B-parameter LLM trained from scratch by NVIDIA using a hybrid Transformer-Mamba architecture for unified reasoning and non-reasoning tasks.
Capabilities
Input modalities: 2 of 5 supported · Output modalities: 1 of 5 supported · Capabilities: 0 of 13 supported
Pricing by Provider
| Provider | Standard Input $ / 1M | Standard Output $ / 1M | Batch Input $ / 1M | Batch Output $ / 1M | Flex Input $ / 1M | Flex Output $ / 1M | Priority Input $ / 1M | Priority Output $ / 1M |
|---|---|---|---|---|---|---|---|---|
| Amazon Bedrock | $0.200 | $0.600 | $0.100 | $0.300 | $0.100 | $0.300 | $0.350 | $1.05 |
| Fireworks AI | $0.200 | $0.200 | — | — | — | — | — | — |
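The per-request cost implied by the table above is straightforward to compute: tokens divided by one million, multiplied by the tier rate. A minimal sketch in Python, using only the standard-tier rates from the table (the function name and structure are illustrative, not any provider's API):

```python
# Standard-tier prices in USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "Amazon Bedrock": {"input": 0.200, "output": 0.600},
    "Fireworks AI": {"input": 0.200, "output": 0.200},
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the provider's standard-tier rates."""
    rates = PRICES[provider]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Example: 100K input tokens and 8K output tokens (the model's maximum output).
print(round(estimate_cost("Amazon Bedrock", 100_000, 8_000), 4))  # → 0.0248
print(round(estimate_cost("Fireworks AI", 100_000, 8_000), 4))    # → 0.0216
```

Note that Fireworks AI's flat $0.200 rate makes it cheaper for output-heavy workloads, while Amazon Bedrock's batch tier ($0.100 / $0.300) undercuts it on input-heavy batch jobs.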
Other models
| Model | Tier | Released | Context | Input / 1M | Output / 1M |
|---|---|---|---|---|---|
| Nemotron Nano 2 12B | — | — | 131K | $0.200 | $0.200 |
| Nemotron Nano 2 9B | — | — | 131K | $0.040 | $0.160 |