Initial commit: OpenClaw Skill Collection
6 custom skills (assign-task, dispatch-webhook, daily-briefing, task-capture, qmd-brain, tts-voice) with technical documentation. Compatible with Claude Code, OpenClaw, Codex CLI, and OpenCode.

openclaw-knowhow-skill/docs/models/anthropic.md (new file, 144 lines)

# Anthropic (Claude)

Anthropic builds the **Claude** model family and provides access via an API.
In OpenClaw you can authenticate with an API key or a **setup-token**.

## Option A: Anthropic API key

**Best for:** standard API access and usage-based billing.
Create your API key in the Anthropic Console.

### CLI setup

```bash
openclaw onboard
# choose: Anthropic API key

# or non-interactive
openclaw onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
```

### Config snippet

```json5
{
  env: { ANTHROPIC_API_KEY: "sk-ant-..." },
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
}
```

## Prompt caching (Anthropic API)

OpenClaw supports Anthropic's prompt caching feature. This is **API-only**; subscription auth does not honor cache settings.

### Configuration

Use the `cacheRetention` parameter in your model config:

| Value   | Cache Duration | Description                         |
| ------- | -------------- | ----------------------------------- |
| `none`  | No caching     | Disable prompt caching              |
| `short` | 5 minutes      | Default for API key auth            |
| `long`  | 1 hour         | Extended cache (requires beta flag) |

```json5
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-5": {
          params: { cacheRetention: "long" },
        },
      },
    },
  },
}
```

### Defaults

When using Anthropic API key authentication, OpenClaw automatically applies `cacheRetention: "short"` (5-minute cache) for all Anthropic models. You can override this by explicitly setting `cacheRetention` in your config.

### Legacy parameter

The older `cacheControlTtl` parameter is still supported for backwards compatibility:

* `"5m"` maps to `short`
* `"1h"` maps to `long`

We recommend migrating to the new `cacheRetention` parameter.

OpenClaw includes the `extended-cache-ttl-2025-04-11` beta flag for Anthropic API
requests; keep it if you override provider headers (see [/gateway/configuration](/gateway/configuration)).
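
The documented mapping can be expressed as a tiny helper. This is an illustrative sketch (the function name is hypothetical, not OpenClaw API):

```python
# Mapping from legacy cacheControlTtl values to cacheRetention, as documented above.
LEGACY_TTL_TO_RETENTION = {"5m": "short", "1h": "long"}

def migrate_cache_params(params: dict) -> dict:
    """Return a copy of params with legacy cacheControlTtl rewritten to cacheRetention."""
    migrated = dict(params)
    ttl = migrated.pop("cacheControlTtl", None)
    # An explicit cacheRetention always wins over the legacy parameter.
    if ttl is not None and "cacheRetention" not in migrated:
        migrated["cacheRetention"] = LEGACY_TTL_TO_RETENTION[ttl]
    return migrated
```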

## Option B: Claude setup-token

**Best for:** using your Claude subscription.

### Where to get a setup-token

Setup-tokens are created by the **Claude Code CLI**, not the Anthropic Console. You can run this on **any machine**:

```bash
claude setup-token
```

Paste the token into OpenClaw (wizard: **Anthropic token (paste setup-token)**), or run it on the gateway host:

```bash
openclaw models auth setup-token --provider anthropic
```

If you generated the token on a different machine, paste it:

```bash
openclaw models auth paste-token --provider anthropic
```

### CLI setup

```bash
# Paste a setup-token during onboarding
openclaw onboard --auth-choice setup-token
```

### Config snippet

```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
}
```

## Notes

* Generate the setup-token with `claude setup-token` and paste it, or run `openclaw models auth setup-token` on the gateway host.
* If you see "OAuth token refresh failed ..." on a Claude subscription, re-auth with a setup-token. See [/gateway/troubleshooting#oauth-token-refresh-failed-anthropic-claude-subscription](/gateway/troubleshooting#oauth-token-refresh-failed-anthropic-claude-subscription).
* Auth details + reuse rules are in [/concepts/oauth](/concepts/oauth).

## Troubleshooting

**401 errors / token suddenly invalid**

* Claude subscription auth can expire or be revoked. Re-run `claude setup-token`
  and paste it into the **gateway host**.
* If the Claude CLI login lives on a different machine, use
  `openclaw models auth paste-token --provider anthropic` on the gateway host.

**No API key found for provider "anthropic"**

* Auth is **per agent**. New agents don't inherit the main agent's keys.
* Re-run onboarding for that agent, or paste a setup-token / API key on the
  gateway host, then verify with `openclaw models status`.

**No credentials found for profile `anthropic:default`**

* Run `openclaw models status` to see which auth profile is active.
* Re-run onboarding, or paste a setup-token / API key for that profile.

**No available auth profile (all in cooldown/unavailable)**

* Check `openclaw models status --json` for `auth.unusableProfiles`.
* Add another Anthropic profile or wait for cooldown.

More: [/gateway/troubleshooting](/gateway/troubleshooting) and [/help/faq](/help/faq).

openclaw-knowhow-skill/docs/models/bedrock.md (new file, 168 lines)

# Amazon Bedrock

OpenClaw can use **Amazon Bedrock** models via pi-ai's **Bedrock Converse**
streaming provider. Bedrock auth uses the **AWS SDK default credential chain**,
not an API key.

## What pi-ai supports

* Provider: `amazon-bedrock`
* API: `bedrock-converse-stream`
* Auth: AWS credentials (env vars, shared config, or instance role)
* Region: `AWS_REGION` or `AWS_DEFAULT_REGION` (default: `us-east-1`)

## Automatic model discovery

If AWS credentials are detected, OpenClaw can automatically discover Bedrock
models that support **streaming** and **text output**. Discovery uses
`bedrock:ListFoundationModels` and is cached (default: 1 hour).

Config options live under `models.bedrockDiscovery`:

```json5
{
  models: {
    bedrockDiscovery: {
      enabled: true,
      region: "us-east-1",
      providerFilter: ["anthropic", "amazon"],
      refreshInterval: 3600,
      defaultContextWindow: 32000,
      defaultMaxTokens: 4096,
    },
  },
}
```

Notes:

* `enabled` defaults to `true` when AWS credentials are present.
* `region` defaults to `AWS_REGION` or `AWS_DEFAULT_REGION`, then `us-east-1`.
* `providerFilter` matches Bedrock provider names (for example `anthropic`).
* `refreshInterval` is seconds; set to `0` to disable caching.
* `defaultContextWindow` (default: `32000`) and `defaultMaxTokens` (default: `4096`)
  are used for discovered models (override if you know your model limits).
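
The `refreshInterval` semantics above (seconds; `0` disables caching) amount to a simple TTL cache. A sketch of the idea, purely illustrative and not OpenClaw's actual implementation:

```python
import time
from typing import Callable, Optional

class DiscoveryCache:
    """Illustrative TTL cache mirroring the documented refreshInterval semantics."""

    def __init__(self, refresh_interval: float):
        self.refresh_interval = refresh_interval  # seconds; 0 disables caching
        self._models: Optional[list] = None
        self._fetched_at: float = 0.0

    def get(self, fetch: Callable[[], list]) -> list:
        now = time.monotonic()
        expired = (
            self.refresh_interval == 0          # caching disabled: always refetch
            or self._models is None             # nothing cached yet
            or now - self._fetched_at >= self.refresh_interval
        )
        if expired:
            self._models = fetch()
            self._fetched_at = now
        return self._models
```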

## Setup (manual)

1. Ensure AWS credentials are available on the **gateway host**:

   ```bash
   export AWS_ACCESS_KEY_ID="AKIA..."
   export AWS_SECRET_ACCESS_KEY="..."
   export AWS_REGION="us-east-1"
   # Optional:
   export AWS_SESSION_TOKEN="..."
   export AWS_PROFILE="your-profile"
   # Optional (Bedrock API key/bearer token):
   export AWS_BEARER_TOKEN_BEDROCK="..."
   ```

2. Add a Bedrock provider and model to your config (no `apiKey` required):

   ```json5
   {
     models: {
       providers: {
         "amazon-bedrock": {
           baseUrl: "https://bedrock-runtime.us-east-1.amazonaws.com",
           api: "bedrock-converse-stream",
           auth: "aws-sdk",
           models: [
             {
               id: "anthropic.claude-opus-4-5-20251101-v1:0",
               name: "Claude Opus 4.5 (Bedrock)",
               reasoning: true,
               input: ["text", "image"],
               cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
               contextWindow: 200000,
               maxTokens: 8192,
             },
           ],
         },
       },
     },
     agents: {
       defaults: {
         model: { primary: "amazon-bedrock/anthropic.claude-opus-4-5-20251101-v1:0" },
       },
     },
   }
   ```

## EC2 Instance Roles

When running OpenClaw on an EC2 instance with an IAM role attached, the AWS SDK
will automatically use the instance metadata service (IMDS) for authentication.
However, OpenClaw's credential detection currently only checks for environment
variables, not IMDS credentials.

**Workaround:** Set `AWS_PROFILE=default` to signal that AWS credentials are
available. The actual authentication still uses the instance role via IMDS.

```bash
# Add to ~/.bashrc or your shell profile
export AWS_PROFILE=default
export AWS_REGION=us-east-1
```

**Required IAM permissions** for the EC2 instance role:

* `bedrock:InvokeModel`
* `bedrock:InvokeModelWithResponseStream`
* `bedrock:ListFoundationModels` (for automatic discovery)

Or attach the managed policy `AmazonBedrockFullAccess`.
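
If you prefer least privilege over the managed policy, the three permissions above can be granted with a minimal inline policy (the permissive `"Resource": "*"` is shown for brevity; tighten it to specific model ARNs where possible):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
        "bedrock:ListFoundationModels"
      ],
      "Resource": "*"
    }
  ]
}
```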

**Quick setup:**

```bash
# 1. Create IAM role and instance profile
aws iam create-role --role-name EC2-Bedrock-Access \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam attach-role-policy --role-name EC2-Bedrock-Access \
  --policy-arn arn:aws:iam::aws:policy/AmazonBedrockFullAccess

aws iam create-instance-profile --instance-profile-name EC2-Bedrock-Access
aws iam add-role-to-instance-profile \
  --instance-profile-name EC2-Bedrock-Access \
  --role-name EC2-Bedrock-Access

# 2. Attach to your EC2 instance
aws ec2 associate-iam-instance-profile \
  --instance-id i-xxxxx \
  --iam-instance-profile Name=EC2-Bedrock-Access

# 3. On the EC2 instance, enable discovery
openclaw config set models.bedrockDiscovery.enabled true
openclaw config set models.bedrockDiscovery.region us-east-1

# 4. Set the workaround env vars
echo 'export AWS_PROFILE=default' >> ~/.bashrc
echo 'export AWS_REGION=us-east-1' >> ~/.bashrc
source ~/.bashrc

# 5. Verify models are discovered
openclaw models list
```

## Notes

* Bedrock requires **model access** enabled in your AWS account/region.
* Automatic discovery needs the `bedrock:ListFoundationModels` permission.
* If you use profiles, set `AWS_PROFILE` on the gateway host.
* OpenClaw surfaces the credential source in this order: `AWS_BEARER_TOKEN_BEDROCK`,
  then `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY`, then `AWS_PROFILE`, then the
  default AWS SDK chain.
* Reasoning support depends on the model; check the Bedrock model card for
  current capabilities.
* If you prefer a managed key flow, you can also place an OpenAI-compatible
  proxy in front of Bedrock and configure it as an OpenAI provider instead.

openclaw-knowhow-skill/docs/models/glm.md (new file, 28 lines)

# GLM Models

## Overview

GLM is a model family accessible through the Z.AI platform. In OpenClaw, you access these models via the `zai` provider using identifiers such as `zai/glm-4.7`.

## Setup Instructions

**CLI Configuration:**

```bash
openclaw onboard --auth-choice zai-api-key
```

**Configuration File:**

```json5
{
  env: { ZAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "zai/glm-4.7" } } },
}
```

## Key Considerations

- Model versions and availability may change; consult Z.AI documentation for current options
- Supported model identifiers include `glm-4.7` and `glm-4.6`
- Additional provider information is available in the `/providers/zai` documentation
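
Since both `glm-4.7` and `glm-4.6` are supported identifiers, you can keep the older model as a fallback with the `model.fallbacks` pattern OpenClaw uses for other providers (a sketch; adjust model choices to taste):

```json5
{
  env: { ZAI_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: {
        primary: "zai/glm-4.7",
        fallbacks: ["zai/glm-4.6"],
      },
    },
  },
}
```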

openclaw-knowhow-skill/docs/models/index.md (new file, 23 lines)

# Model Providers

OpenClaw supports multiple LLM providers. Authenticate with your chosen provider, then configure a default model using the `provider/model` format.

## Featured Provider

Venice AI is the recommended privacy-first option, with `venice/llama-3.3-70b` as the default and `venice/claude-opus-45` noted as the strongest choice.

## Setup Steps

Configuration requires two actions: authenticate with a provider (typically through `openclaw onboard`), then set the default model in the configuration file.

## Available Providers

The platform integrates with major services including OpenAI, Anthropic, Qwen, and OpenRouter, alongside specialized options like Cloudflare AI Gateway, Moonshot AI, and Amazon Bedrock. Local model support is available through Ollama.

## Additional Services

Deepgram provides audio transcription. A community tool lets Claude subscribers expose their account as an OpenAI-compatible endpoint.

## Documentation Access

A complete provider catalog with advanced configuration details is available on the main Model Providers concepts page.

openclaw-knowhow-skill/docs/models/minimax.md (new file, 200 lines)

# MiniMax

MiniMax is an AI company that builds the **M2/M2.1** model family. The current
coding-focused release is **MiniMax M2.1** (December 23, 2025), built for
real-world complex tasks.

Source: [MiniMax M2.1 release note](https://www.minimax.io/news/minimax-m21)

## Model overview (M2.1)

MiniMax highlights these improvements in M2.1:

* Stronger **multi-language coding** (Rust, Java, Go, C++, Kotlin, Objective-C, TS/JS).
* Better **web/app development** and aesthetic output quality (including native mobile).
* Improved **composite instruction** handling for office-style workflows, building on
  interleaved thinking and integrated constraint execution.
* **More concise responses** with lower token usage and faster iteration loops.
* Stronger **tool/agent framework** compatibility and context management (Claude Code,
  Droid/Factory AI, Cline, Kilo Code, Roo Code, BlackBox).
* Higher-quality **dialogue and technical writing** outputs.

## MiniMax M2.1 vs MiniMax M2.1 Lightning

* **Speed:** Lightning is the "fast" variant in MiniMax's pricing docs.
* **Cost:** Pricing shows the same input cost, but Lightning has a higher output cost.
* **Coding plan routing:** The Lightning back-end isn't directly available on the MiniMax
  coding plan. MiniMax auto-routes most requests to Lightning, but falls back to the
  regular M2.1 back-end during traffic spikes.

## Choose a setup

### MiniMax OAuth (Coding Plan) — recommended

**Best for:** quick setup with the MiniMax Coding Plan via OAuth; no API key required.

Enable the bundled OAuth plugin and authenticate:

```bash
openclaw plugins enable minimax-portal-auth
openclaw gateway restart
openclaw onboard --auth-choice minimax-portal
```

You will be prompted to select an endpoint:

* **Global** - International users (`api.minimax.io`)
* **CN** - Users in China (`api.minimaxi.com`)

See the [MiniMax OAuth plugin README](https://github.com/openclaw/openclaw/tree/main/extensions/minimax-portal-auth) for details.

### MiniMax M2.1 (API key)

**Best for:** hosted MiniMax with an Anthropic-compatible API.

Configure via CLI:

* Run `openclaw configure`
* Select **Model/auth**
* Choose **MiniMax M2.1**

```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
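
With the `cost` entries above you can sanity-check spend per request. The snippet below assumes the `cost` values are priced per million tokens (an assumption; confirm the unit OpenClaw uses before relying on this):

```python
# Hypothetical cost estimator; assumes cost values are per 1M tokens.
COST = {"input": 15, "output": 60, "cacheRead": 2, "cacheWrite": 10}

def estimate_cost(input_tokens: int, output_tokens: int, cache_read_tokens: int = 0) -> float:
    """Estimate one request's cost in the same currency unit as the cost table."""
    weighted = (
        input_tokens * COST["input"]
        + output_tokens * COST["output"]
        + cache_read_tokens * COST["cacheRead"]
    )
    return weighted / 1_000_000
```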

### MiniMax M2.1 as fallback (Opus primary)

**Best for:** keeping Opus 4.5 as primary and failing over to MiniMax M2.1.

```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-5": { alias: "opus" },
        "minimax/MiniMax-M2.1": { alias: "minimax" },
      },
      model: {
        primary: "anthropic/claude-opus-4-5",
        fallbacks: ["minimax/MiniMax-M2.1"],
      },
    },
  },
}
```

### Optional: Local via LM Studio (manual)

**Best for:** local inference with LM Studio.
We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a
desktop/server) using LM Studio's local server.

Configure manually via `openclaw.json`:

```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } },
    },
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1 GS32",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

## Configure via `openclaw configure`

Use the interactive config wizard to set MiniMax without editing JSON:

1. Run `openclaw configure`.
2. Select **Model/auth**.
3. Choose **MiniMax M2.1**.
4. Pick your default model when prompted.

## Configuration options

* `models.providers.minimax.baseUrl`: prefer `https://api.minimax.io/anthropic` (Anthropic-compatible); `https://api.minimax.io/v1` is an option for OpenAI-compatible payloads.
* `models.providers.minimax.api`: prefer `anthropic-messages`; `openai-completions` is an option for OpenAI-compatible payloads.
* `models.providers.minimax.apiKey`: MiniMax API key (`MINIMAX_API_KEY`).
* `models.providers.minimax.models`: define `id`, `name`, `reasoning`, `contextWindow`, `maxTokens`, `cost`.
* `agents.defaults.models`: alias models you want in the allowlist.
* `models.mode`: keep `merge` if you want to add MiniMax alongside built-ins.

## Notes

* Model refs are `minimax/<model>`.
* Coding Plan usage API: `https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains` (requires a coding plan key).
* Update pricing values in `models.json` if you need exact cost tracking.
* Referral link for the MiniMax Coding Plan (10% off): [https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link](https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link)
* See [/concepts/model-providers](/concepts/model-providers) for provider rules.
* Use `openclaw models list` and `openclaw models set minimax/MiniMax-M2.1` to switch.

## Troubleshooting

### "Unknown model: minimax/MiniMax-M2.1"

This usually means the **MiniMax provider isn't configured** (no provider entry
and no MiniMax auth profile/env key found). A fix for this detection is in
**2026.1.12** (unreleased at the time of writing). Fix by:

* Upgrading to **2026.1.12** (or running from source `main`), then restarting the gateway, or
* Running `openclaw configure` and selecting **MiniMax M2.1**, or
* Adding the `models.providers.minimax` block manually, or
* Setting `MINIMAX_API_KEY` (or a MiniMax auth profile) so the provider can be injected.

Model ids are **case-sensitive**:

* `minimax/MiniMax-M2.1`
* `minimax/MiniMax-M2.1-lightning`

Then recheck with:

```bash
openclaw models list
```

openclaw-knowhow-skill/docs/models/models.md (new file, 45 lines)

# Model Provider Quickstart

OpenClaw can use many LLM providers. Pick one, authenticate, then set the default
model as `provider/model`.

## Highlight: Venice (Venice AI)

Venice is our recommended Venice AI setup for privacy-first inference, with the option to use Opus for the hardest tasks.

* Default: `venice/llama-3.3-70b`
* Best overall: `venice/claude-opus-45` (Opus remains the strongest)

See [Venice AI](/providers/venice).

## Quick start (two steps)

1. Authenticate with the provider (usually via `openclaw onboard`).
2. Set the default model:

   ```json5
   {
     agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
   }
   ```

## Supported providers (starter set)

* [OpenAI (API + Codex)](/providers/openai)
* [Anthropic (API + Claude Code CLI)](/providers/anthropic)
* [OpenRouter](/providers/openrouter)
* [Vercel AI Gateway](/providers/vercel-ai-gateway)
* [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway)
* [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot)
* [Synthetic](/providers/synthetic)
* [OpenCode Zen](/providers/opencode)
* [Z.AI](/providers/zai)
* [GLM models](/providers/glm)
* [MiniMax](/providers/minimax)
* [Venice (Venice AI)](/providers/venice)
* [Amazon Bedrock](/bedrock)

For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration,
see [Model providers](/concepts/model-providers).

openclaw-knowhow-skill/docs/models/moonshot.md (new file, 125 lines)

# Moonshot AI (Kimi)

Moonshot provides the Kimi API with OpenAI-compatible endpoints. Configure the
provider and set the default model to `moonshot/kimi-k2.5`, or use
Kimi Coding with `kimi-coding/k2p5`.

Current Kimi K2 model IDs:

* `kimi-k2.5`
* `kimi-k2-0905-preview`
* `kimi-k2-turbo-preview`
* `kimi-k2-thinking`
* `kimi-k2-thinking-turbo`

```bash
openclaw onboard --auth-choice moonshot-api-key
```

Kimi Coding:

```bash
openclaw onboard --auth-choice kimi-code-api-key
```

Note: Moonshot and Kimi Coding are separate providers. Keys are not interchangeable, endpoints differ, and model refs differ (Moonshot uses `moonshot/...`, Kimi Coding uses `kimi-coding/...`).

## Config snippet (Moonshot API)

```json5
{
  env: { MOONSHOT_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "moonshot/kimi-k2.5" },
      models: {
        "moonshot/kimi-k2.5": { alias: "Kimi K2.5" },
        "moonshot/kimi-k2-0905-preview": { alias: "Kimi K2" },
        "moonshot/kimi-k2-turbo-preview": { alias: "Kimi K2 Turbo" },
        "moonshot/kimi-k2-thinking": { alias: "Kimi K2 Thinking" },
        "moonshot/kimi-k2-thinking-turbo": { alias: "Kimi K2 Thinking Turbo" },
      },
    },
  },
  models: {
    mode: "merge",
    providers: {
      moonshot: {
        baseUrl: "https://api.moonshot.ai/v1",
        apiKey: "${MOONSHOT_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "kimi-k2.5",
            name: "Kimi K2.5",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 256000,
            maxTokens: 8192,
          },
          {
            id: "kimi-k2-0905-preview",
            name: "Kimi K2 0905 Preview",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 256000,
            maxTokens: 8192,
          },
          {
            id: "kimi-k2-turbo-preview",
            name: "Kimi K2 Turbo",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 256000,
            maxTokens: 8192,
          },
          {
            id: "kimi-k2-thinking",
            name: "Kimi K2 Thinking",
            reasoning: true,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 256000,
            maxTokens: 8192,
          },
          {
            id: "kimi-k2-thinking-turbo",
            name: "Kimi K2 Thinking Turbo",
            reasoning: true,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 256000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

## Kimi Coding

```json5
{
  env: { KIMI_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "kimi-coding/k2p5" },
      models: {
        "kimi-coding/k2p5": { alias: "Kimi K2.5" },
      },
    },
  },
}
```

## Notes

* Moonshot model refs use `moonshot/<modelId>`. Kimi Coding model refs use `kimi-coding/<modelId>`.
* Override pricing and context metadata in `models.providers` if needed.
* If Moonshot publishes different context limits for a model, adjust `contextWindow` accordingly.
* Use `https://api.moonshot.ai/v1` for the international endpoint, and `https://api.moonshot.cn/v1` for the China endpoint.
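
For the China endpoint, the only change from the Moonshot config snippet is the provider `baseUrl`. A sketch, assuming partial provider entries merge under `mode: "merge"` (if they don't in your version, repeat the full provider block with the CN URL):

```json5
{
  models: {
    mode: "merge",
    providers: {
      moonshot: {
        // China endpoint; use https://api.moonshot.ai/v1 internationally
        baseUrl: "https://api.moonshot.cn/v1",
        apiKey: "${MOONSHOT_API_KEY}",
        api: "openai-completions",
      },
    },
  },
}
```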

openclaw-knowhow-skill/docs/models/openai.md (new file, 36 lines)

# OpenAI

This page covers two authentication approaches for OpenAI access in OpenClaw.

## Authentication Options

**API Key Method**: Configure direct API access through OpenAI's platform with usage-based billing. Setup requires obtaining credentials from the OpenAI dashboard and running the onboarding wizard with the appropriate flag.

**ChatGPT Subscription Method**: Alternatively, leverage an existing ChatGPT or Codex subscription rather than direct API credentials. This method requires OAuth authentication through the Codex CLI or cloud interface.

## Configuration Details

Both approaches require specifying a model reference in the `provider/model` format.

## Setup Process

The onboarding CLI accepts either direct API key input or interactive authentication selection. You can bypass the interactive wizard by providing credentials as command-line arguments during setup.

```bash
openclaw onboard --auth-choice openai-api-key
```

Or with the API key directly:

```bash
openclaw onboard --openai-api-key "$OPENAI_API_KEY"
```

## Config snippet

```json5
{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-4o" } } },
}
```

openclaw-knowhow-skill/docs/models/opencode.md (new file, 36 lines)

# OpenCode Zen

## Overview

OpenCode Zen is a **curated list of models** recommended by the OpenCode team for coding agents. It is an optional hosted model-access pathway that requires an API key and the `opencode` provider, and is currently in beta.

## Setup Instructions

**CLI Configuration (Interactive):**

```bash
openclaw onboard --auth-choice opencode-zen
```

**CLI Configuration (Non-Interactive):**

```bash
openclaw onboard --opencode-zen-api-key "$OPENCODE_ZEN_API_KEY"
```

**Configuration File:**

```json5
{
  env: { OPENCODE_ZEN_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "opencode/claude-opus-4-5" } } },
}
```

## Key Details

The service accepts either `OPENCODE_ZEN_API_KEY` or `OPENCODE_API_KEY` for authentication. Create your account through Zen's login portal, configure billing, and retrieve your API credentials from the platform.
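
Since two environment variable names are accepted, key resolution can be sketched as below. The precedence shown (Zen-specific name wins) is an assumption, not documented behavior:

```python
def resolve_opencode_key(env: dict) -> "str | None":
    """Return the OpenCode Zen API key from either accepted variable.

    Assumed precedence: OPENCODE_ZEN_API_KEY wins if both are set.
    """
    return env.get("OPENCODE_ZEN_API_KEY") or env.get("OPENCODE_API_KEY")
```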

## Billing Model

The service operates on per-request pricing. Cost details and usage monitoring are available through the OpenCode dashboard.

openclaw-knowhow-skill/docs/models/openrouter.md (new file, 30 lines)

# OpenRouter

OpenRouter is a unified API service that consolidates access to many language models behind a single endpoint and API key. The platform is OpenAI-compatible, so existing OpenAI SDKs work by simply changing the base URL.

## Setup Instructions

**Command line onboarding:**

```bash
openclaw onboard --auth-choice apiKey --token-provider openrouter --token "$OPENROUTER_API_KEY"
```

**Configuration example:**

```json5
{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: {
    defaults: {
      model: { primary: "openrouter/anthropic/claude-sonnet-4-5" },
    },
  },
}
```

## Key Points

- Model identifiers follow the format `openrouter/<provider>/<model>`
- Additional model and provider options are documented in the model-providers concepts section
- Authentication uses a Bearer token with your API key
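
Note that the upstream provider is part of the model id, so an OpenRouter ref should be split on the first `/` only, leaving `<provider>/<model>` intact. A small illustration (the helper name is hypothetical):

```python
def split_model_ref(ref: str) -> tuple:
    """Split "gateway/rest" on the first slash only, since OpenRouter model
    ids themselves contain a slash (e.g. anthropic/claude-sonnet-4-5)."""
    gateway, _, model = ref.partition("/")
    return gateway, model
```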

openclaw-knowhow-skill/docs/models/synthetic.md (new file, 91 lines)

# Synthetic

Synthetic exposes Anthropic-compatible endpoints. OpenClaw registers it as the
`synthetic` provider and uses the Anthropic Messages API.

## Quick setup

1. Set `SYNTHETIC_API_KEY` (or run the wizard below).
2. Run onboarding:

```bash
openclaw onboard --auth-choice synthetic-api-key
```

The default model is set to:

```
synthetic/hf:MiniMaxAI/MiniMax-M2.1
```

## Config example

```json5
{
  env: { SYNTHETIC_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.1" },
      models: { "synthetic/hf:MiniMaxAI/MiniMax-M2.1": { alias: "MiniMax M2.1" } },
    },
  },
  models: {
    mode: "merge",
    providers: {
      synthetic: {
        baseUrl: "https://api.synthetic.new/anthropic",
        apiKey: "${SYNTHETIC_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "hf:MiniMaxAI/MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 192000,
            maxTokens: 65536,
          },
        ],
      },
    },
  },
}
```

Note: OpenClaw's Anthropic client appends `/v1` to the base URL, so use
`https://api.synthetic.new/anthropic` (not `/anthropic/v1`). If Synthetic changes
its base URL, override `models.providers.synthetic.baseUrl`.

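To see why the trailing `/v1` must be omitted, join the pieces the way the note describes. This sketch is based only on that note; `messages` as the endpoint path follows the Anthropic Messages API convention:

```shell
# The client appends "/v1" itself, so a baseUrl that already ends in
# "/v1" would yield ".../v1/v1/..." (sketch based on the note above).
base="https://api.synthetic.new/anthropic"
echo "${base}/v1/messages"
# -> https://api.synthetic.new/anthropic/v1/messages
```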
## Model catalog

All models below use cost `0` (input/output/cache).

| Model ID | Context window | Max tokens | Reasoning | Input |
| ------------------------------------------------------ | -------------- | ---------- | --------- | ------------ |
| `hf:MiniMaxAI/MiniMax-M2.1` | 192000 | 65536 | false | text |
| `hf:moonshotai/Kimi-K2-Thinking` | 256000 | 8192 | true | text |
| `hf:zai-org/GLM-4.7` | 198000 | 128000 | false | text |
| `hf:deepseek-ai/DeepSeek-R1-0528` | 128000 | 8192 | false | text |
| `hf:deepseek-ai/DeepSeek-V3-0324` | 128000 | 8192 | false | text |
| `hf:deepseek-ai/DeepSeek-V3.1` | 128000 | 8192 | false | text |
| `hf:deepseek-ai/DeepSeek-V3.1-Terminus` | 128000 | 8192 | false | text |
| `hf:deepseek-ai/DeepSeek-V3.2` | 159000 | 8192 | false | text |
| `hf:meta-llama/Llama-3.3-70B-Instruct` | 128000 | 8192 | false | text |
| `hf:meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 524000 | 8192 | false | text |
| `hf:moonshotai/Kimi-K2-Instruct-0905` | 256000 | 8192 | false | text |
| `hf:openai/gpt-oss-120b` | 128000 | 8192 | false | text |
| `hf:Qwen/Qwen3-235B-A22B-Instruct-2507` | 256000 | 8192 | false | text |
| `hf:Qwen/Qwen3-Coder-480B-A35B-Instruct` | 256000 | 8192 | false | text |
| `hf:Qwen/Qwen3-VL-235B-A22B-Instruct` | 250000 | 8192 | false | text + image |
| `hf:zai-org/GLM-4.5` | 128000 | 128000 | false | text |
| `hf:zai-org/GLM-4.6` | 198000 | 128000 | false | text |
| `hf:deepseek-ai/DeepSeek-V3` | 128000 | 8192 | false | text |
| `hf:Qwen/Qwen3-235B-A22B-Thinking-2507` | 256000 | 8192 | true | text |

## Notes

* Model refs use `synthetic/<modelId>`.
* If you enable a model allowlist (`agents.defaults.models`), add every model you
  plan to use.
* See [Model providers](/concepts/model-providers) for provider rules.
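
For example, an allowlist covering two catalog models might look like this. The shape follows the config example above; the alias strings are made up for illustration:

```json5
{
  agents: {
    defaults: {
      // Allowlist: only models listed here are selectable by agents.
      models: {
        "synthetic/hf:MiniMaxAI/MiniMax-M2.1": { alias: "MiniMax M2.1" },
        "synthetic/hf:moonshotai/Kimi-K2-Thinking": { alias: "Kimi K2 Thinking" },
      },
    },
  },
}
```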
42
openclaw-knowhow-skill/docs/models/vercel-ai-gateway.md
Normal file
@@ -0,0 +1,42 @@

# Vercel AI Gateway

The [Vercel AI Gateway](https://vercel.com/ai-gateway) provides a unified API to access hundreds of models through a single endpoint.

* Provider: `vercel-ai-gateway`
* Auth: `AI_GATEWAY_API_KEY`
* API: Anthropic Messages compatible

## Quick start

1. Set the API key (recommended: store it for the Gateway):

```bash
openclaw onboard --auth-choice ai-gateway-api-key
```

2. Set a default model:

```json5
{
  agents: {
    defaults: {
      model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.5" },
    },
  },
}
```

## Non-interactive example

```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice ai-gateway-api-key \
  --ai-gateway-api-key "$AI_GATEWAY_API_KEY"
```

## Environment note

If the Gateway runs as a daemon (launchd/systemd), make sure `AI_GATEWAY_API_KEY`
is available to that process (for example, in `~/.openclaw/.env` or via
`env.shellEnv`).
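
One way to persist the key for a daemon is a `KEY=value` env file. This sketch writes to a temporary stand-in directory instead of the real `~/.openclaw`, and assumes the file format implied by the note above:

```shell
# Append the key to an env file the daemon can read at startup.
# A temp dir stands in for ~/.openclaw so the sketch is side-effect free.
conf_dir="$(mktemp -d)"
printf 'AI_GATEWAY_API_KEY=%s\n' "sk-example" >> "$conf_dir/.env"
cat "$conf_dir/.env"   # -> AI_GATEWAY_API_KEY=sk-example
```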
28
openclaw-knowhow-skill/docs/models/zai.md
Normal file
@@ -0,0 +1,28 @@

# Z.AI

Z.AI is the API platform for **GLM** models. It provides REST APIs for GLM and uses API keys
for authentication. Create your API key in the Z.AI console. OpenClaw uses the `zai` provider
with a Z.AI API key.

## CLI setup

```bash
openclaw onboard --auth-choice zai-api-key
# or non-interactive
openclaw onboard --zai-api-key "$ZAI_API_KEY"
```

## Config snippet

```json5
{
  env: { ZAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "zai/glm-4.7" } } },
}
```

## Notes

* GLM models are available as `zai/<model>` (example: `zai/glm-4.7`).
* See [/providers/glm](/providers/glm) for the model family overview.
* Z.AI uses Bearer auth with your API key.
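
Bearer auth means each request carries an `Authorization` header built from the key. A generic sketch of the header shape only; the request URL is deliberately omitted since it is not specified here:

```shell
# Construct the Bearer header shape that API-key auth implies.
# (Header shape only; no endpoint URL is assumed in this sketch.)
ZAI_API_KEY="sk-example"
auth_header="Authorization: Bearer ${ZAI_API_KEY}"
echo "$auth_header"   # -> Authorization: Bearer sk-example
```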