LlamaGuard is Meta's LLM-based content safety classifier for detecting harmful or policy-violating content in both LLM inputs and outputs.
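In practice, the classifier is run over a chat transcript and generates a short verdict, typically `safe` or `unsafe` followed by the violated hazard-category codes. Below is a minimal sketch of that input/output moderation pattern using the Hugging Face `transformers` chat-template interface; the checkpoint name, generation settings, and verdict format are assumptions for illustration, not details taken from this page.

```python
# Sketch: moderating chat turns with a Llama Guard checkpoint.
# Assumed: the meta-llama/Llama-Guard-3-8B model ID and the
# "safe"/"unsafe" + category-code verdict format; adapt to your deployment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-8B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat: list[dict]) -> str:
    """Return the raw safety verdict for a list of chat messages."""
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32)
    # Decode only the newly generated tokens (the verdict).
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Input moderation: classify the user prompt before it reaches the main LLM.
print(moderate([{"role": "user", "content": "How do I pick a lock?"}]))

# Output moderation: classify the assistant's reply in context.
print(moderate([
    {"role": "user", "content": "How do I pick a lock?"},
    {"role": "assistant", "content": "Here is a general overview..."},
]))
```

The same call handles both directions: passing only user turns screens inputs, while appending the assistant turn screens outputs in context.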
Capabilities
- Input: 1 of 5 supported
- Output: 1 of 5 supported
- Capabilities: 0 of 13 supported
Versions
| Version | Released | Context | Input ($/1M tokens) | Output ($/1M tokens) | Status |
|---|---|---|---|---|---|
| LlamaGuard 3 11B Vision | — | 128K | $0.350 | $0.350 | Available |
| LlamaGuard | — | — | — | — | Current |
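At the listed rates, per-request moderation cost is a simple product of token counts and prices. A back-of-the-envelope sketch using the LlamaGuard 3 11B Vision rates above; the token counts are illustrative assumptions:

```python
# Sketch: estimate the cost of one moderation call at the listed
# LlamaGuard 3 11B Vision rates ($0.350 per 1M tokens, input and output).
INPUT_RATE = 0.350 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.350 / 1_000_000  # dollars per output token

prompt_tokens = 2_000   # assumed size of the chat being classified
verdict_tokens = 10     # verdicts like "unsafe\nS2" are only a few tokens

cost = prompt_tokens * INPUT_RATE + verdict_tokens * OUTPUT_RATE
print(f"${cost:.6f}")  # ~$0.0007 per call, about $0.70 per 1,000 calls
```

Because the verdict is only a few tokens, input tokens dominate the cost of a moderation call at these symmetric rates.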