Meta's LlamaGuard 2 8B is an 8-billion-parameter safeguard model built on the Llama 3 architecture. Like its predecessor, the original Llama Guard, it classifies both prompts and responses for content safety. The model operates like a standard Large Language Model (LLM): it generates text indicating whether the given input or output is safe or unsafe, and when content is classified as unsafe, it also lists the content categories that were violated. For best results, send raw prompt input or use the `/completions` endpoint rather than the chat API.

In human evaluations, the model rivals leading closed-source alternatives. It offers an 8K-token context window, a 4K-token maximum output, and streaming support. On Multi AI it is priced at $0.20 per 1M input tokens and $0.20 per 1M output tokens. Usage is subject to Meta's Acceptable Use Policy.
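As a minimal sketch of consuming the model's output: Llama Guard models respond with a first line of `safe` or `unsafe`, and when unsafe, a second line of comma-separated category codes (e.g. `S1`). The helper below parses that format; the function name and return shape are illustrative, not part of any official SDK.

```python
def parse_guard_output(text: str) -> tuple[str, list[str]]:
    """Parse a Llama Guard-style completion into (verdict, categories).

    Assumes the documented output format: first line "safe" or "unsafe",
    and for unsafe content a second line of comma-separated codes.
    """
    lines = text.strip().split("\n")
    verdict = lines[0].strip().lower()
    categories: list[str] = []
    if verdict == "unsafe" and len(lines) > 1:
        categories = [c.strip() for c in lines[1].split(",") if c.strip()]
    return verdict, categories


# Example usage with the two response shapes the model card describes:
print(parse_guard_output("safe"))          # ("safe", [])
print(parse_guard_output("unsafe\nS1,S3")) # ("unsafe", ["S1", "S3"])
```

Routing the raw model text through a parser like this keeps downstream moderation logic independent of the exact completion string.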
Specifications

| Specification | Value |
| --- | --- |
| Provider | meta-llama |
| Context Window | 8,192 tokens |
| Max Output | 4,096 tokens |
| Minimum Plan | Economy |
Pricing

| Pricing | Value |
| --- | --- |
| Input Price | $0.20 / 1M tokens |
| Output Price | $0.20 / 1M tokens |
💡 With a PRO subscription, costs are reduced by 20%.
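The pricing above is simple enough to compute per request. The sketch below assumes the listed rates ($0.20 per 1M tokens for both input and output) and the 20% PRO discount; the function name is illustrative.

```python
def request_cost(input_tokens: int, output_tokens: int, pro: bool = False) -> float:
    """Estimate the cost in USD of one request at the listed rates."""
    PRICE_PER_TOKEN = 0.20 / 1_000_000  # $0.20 per 1M tokens, input and output
    cost = (input_tokens + output_tokens) * PRICE_PER_TOKEN
    if pro:
        cost *= 0.80  # PRO subscription: 20% discount
    return cost


# 1M input tokens at the base rate costs $0.20:
print(request_cost(1_000_000, 0))
# 500K in + 500K out with PRO: $0.20 * 0.80 = $0.16
print(request_cost(500_000, 500_000, pro=True))
```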