Llama Guard is Meta's foundational content safety classifier built on Llama, designed to detect unsafe content in both LLM inputs and outputs.
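Llama Guard models report their classification as a short text verdict: the first line is `safe` or `unsafe`, and when content is flagged a following line lists the violated hazard category codes (such as `S1`). A minimal sketch of parsing that verdict, assuming this two-line output format (the exact category taxonomy varies by Llama Guard version):

```python
def parse_verdict(text: str) -> tuple[bool, list[str]]:
    """Parse a Llama Guard verdict string into (is_safe, categories).

    Assumes the model's documented output shape: "safe" on its own,
    or "unsafe" followed by a comma-separated line of category codes.
    """
    lines = [line.strip() for line in text.strip().splitlines() if line.strip()]
    is_safe = bool(lines) and lines[0].lower() == "safe"
    categories: list[str] = []
    if not is_safe and len(lines) > 1:
        # Category codes like "S1,S10" appear on the line after "unsafe".
        categories = [code.strip() for code in lines[1].split(",")]
    return is_safe, categories
```

For example, a raw completion of `"unsafe\nS1,S10"` would be parsed into a flagged result with categories `S1` and `S10`, which downstream moderation logic can then act on.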
Capabilities
- Input modalities: 1 of 5 supported
- Output modalities: 1 of 5 supported
- Additional capabilities: 0 of 13 supported
Versions
| Version | Released | Context | Input / 1M tokens | Output / 1M tokens | Status |
|---|---|---|---|---|---|
| Llama Guard 3 11B Vision | — | 128K | $0.350 | $0.350 | Available |
| Llama Guard | — | — | — | — | Current |