{ "title": "Model Providers", "content": "OpenClaw can use many LLM providers. Pick one, authenticate, then set the default\nmodel as `provider/model`.\n\n## Highlight: Venice (Venice AI)\n\nVenice is our recommended setup for privacy-first inference, with the option to use Opus for the hardest tasks.\n\n* Default: `venice/llama-3.3-70b`\n* Best overall: `venice/claude-opus-45` (Opus remains the strongest)\n\nSee [Venice AI](/providers/venice).\n\n## Quick start (two steps)\n\n1. Authenticate with the provider (usually via `openclaw onboard`).\n2. Set the default model to `provider/model` (for example, `venice/llama-3.3-70b`).\n\n## Supported providers (starter set)\n\n* [OpenAI (API + Codex)](/providers/openai)\n* [Anthropic (API + Claude Code CLI)](/providers/anthropic)\n* [OpenRouter](/providers/openrouter)\n* [Vercel AI Gateway](/providers/vercel-ai-gateway)\n* [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway)\n* [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot)\n* [Synthetic](/providers/synthetic)\n* [OpenCode Zen](/providers/opencode)\n* [Z.AI](/providers/zai)\n* [GLM models](/providers/glm)\n* [MiniMax](/providers/minimax)\n* [Venice (Venice AI)](/providers/venice)\n* [Amazon Bedrock](/bedrock)\n\nFor the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration,\nsee [Model providers](/concepts/model-providers).", "code_samples": [], "headings": [ { "level": "h2", "text": "Highlight: Venice (Venice AI)", "id": "highlight:-venice-(venice-ai)" }, { "level": "h2", "text": "Quick start (two steps)", "id": "quick-start-(two-steps)" }, { "level": "h2", "text": "Supported providers (starter set)", "id": "supported-providers-(starter-set)" } ], "url": "llms-txt#model-providers", "links": [] }