Initial commit: OpenClaw Skill Collection

6 custom skills (assign-task, dispatch-webhook, daily-briefing,
task-capture, qmd-brain, tts-voice) with technical documentation.
Compatible with Claude Code, OpenClaw, Codex CLI, and OpenCode.
2026-03-13 10:58:30 +08:00
commit 4c966a3ad2
884 changed files with 140761 additions and 0 deletions

# agent
# `openclaw agent`
Run an agent turn via the Gateway (use `--local` for embedded).
Use `--agent <id>` to target a configured agent directly.
Related:
* Agent send tool: [Agent send](/tools/agent-send)
## Examples
```bash
openclaw agent --to +15555550123 --message "status update" --deliver
openclaw agent --agent ops --message "Summarize logs"
openclaw agent --session-id 1234 --message "Summarize inbox" --thinking medium
openclaw agent --agent ops --message "Generate report" --deliver --reply-channel slack --reply-to "#reports"
```

# agents
# `openclaw agents`
Manage isolated agents (workspaces + auth + routing).
## Key Features
**Agent Management Commands:**
The tool supports several operations including listing agents, adding new ones to specific workspaces, configuring identities, and deleting agents.
**Identity Configuration:**
Each agent workspace can contain an `IDENTITY.md` file at its root. Agents can be customized with fields such as name, theme, emoji, and avatar. Avatar files can reference workspace-relative paths, URLs, or data URIs.
**Identity Setup Methods:**
Users can load identity settings from an `IDENTITY.md` file or override specific properties directly via command-line arguments.
**Configuration Structure:**
Identity information is stored in the configuration under `agents.list[].identity`, containing customizable display properties that personalize each agent's presentation.
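As an illustrative sketch of that structure (the `id` field and exact nesting are assumptions; the `identity` fields are those listed above):

```json
{
  "agents": {
    "list": [
      {
        "id": "ops",
        "identity": {
          "name": "Ops Bot",
          "theme": "dark",
          "emoji": "🛠️",
          "avatar": "avatars/ops.png"
        }
      }
    ]
  }
}
```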
## Commands
```bash
openclaw agents list
openclaw agents add <name> --workspace <path>
openclaw agents delete <id>
```
Related concepts: multi-agent routing and agent workspace architecture.

# approvals
# `openclaw approvals`
Manage exec approvals for the **local host**, **gateway host**, or a **node host**.
By default, commands target the local approvals file on disk. Use `--gateway` to target the gateway, or `--node` to target a specific node.
Related:
* Exec approvals: [Exec approvals](/tools/exec-approvals)
* Nodes: [Nodes](/nodes)
## Common commands
```bash
openclaw approvals get
openclaw approvals get --node <id|name|ip>
openclaw approvals get --gateway
```
## Replace approvals from a file
```bash
openclaw approvals set --file ./exec-approvals.json
openclaw approvals set --node <id|name|ip> --file ./exec-approvals.json
openclaw approvals set --gateway --file ./exec-approvals.json
```
## Allowlist helpers
```bash
openclaw approvals allowlist add "~/Projects/**/bin/rg"
openclaw approvals allowlist add --agent main --node <id|name|ip> "/usr/bin/uptime"
openclaw approvals allowlist add --agent "*" "/usr/bin/uname"
openclaw approvals allowlist remove "~/Projects/**/bin/rg"
```
## Notes
* `--node` uses the same resolver as `openclaw nodes` (id, name, ip, or id prefix).
* `--agent` defaults to `"*"`, which applies to all agents.
* The node host must advertise `system.execApprovals.get/set` (macOS app or headless node host).
* Approvals files are stored per host at `~/.openclaw/exec-approvals.json`.

# browser
# `openclaw browser`
Manage browser control servers and enable automated browser interactions.
## Overview
The system supports two primary browser profiles:
- **openclaw**: Launches a dedicated, isolated Chrome instance managed by OpenClaw
- **chrome**: Controls existing Chrome tabs through a Chrome extension relay
## Key Capabilities
The tool supports standard browser automation tasks:
- Tab management (list, open, focus, close)
- Navigation
- Visual capture via screenshots and snapshots
- UI element interaction (clicking, typing) through reference-based automation
- Remote browser control through node host proxies when the browser runs on a different machine
## Commands
```bash
openclaw browser status
openclaw browser start
openclaw browser stop
openclaw browser screenshot
openclaw browser snapshot
openclaw browser navigate <url>
openclaw browser click --ref <ref>
openclaw browser type --ref <ref> --text "content"
```
## Configuration
Users can create custom browser profiles with specific names and colors, and specify profiles via the `--browser-profile` flag. Standard options include timeout settings, gateway URLs, and output formatting.
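For example, to target a specific profile with the `--browser-profile` flag (using only the two profiles documented above):

```shell
# Take a screenshot via the "chrome" extension-relay profile
openclaw browser screenshot --browser-profile chrome

# Check status of the dedicated managed instance
openclaw browser status --browser-profile openclaw
```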
## Extension Integration
The Chrome extension relay allows manual attachment of existing Chrome tabs, requiring installation through the unpacked extension method rather than auto-attachment.
For comprehensive setup details, see additional guides covering remote access, security considerations, and Tailscale integration.

# channels
# `openclaw channels`
Manage chat channel accounts and their runtime status on the Gateway.
Related docs:
* Channel guides: [Channels](/channels/index)
* Gateway configuration: [Configuration](/gateway/configuration)
## Common commands
```bash
openclaw channels list
openclaw channels status
openclaw channels capabilities
openclaw channels capabilities --channel discord --target channel:123
openclaw channels resolve --channel slack "#general" "@jane"
openclaw channels logs --channel all
```
## Add / remove accounts
```bash
openclaw channels add --channel telegram --token <bot-token>
openclaw channels remove --channel telegram --delete
```
Tip: `openclaw channels add --help` shows per-channel flags (token, app token, signal-cli paths, etc.).
## Login / logout (interactive)
```bash
openclaw channels login --channel whatsapp
openclaw channels logout --channel whatsapp
```
## Troubleshooting
* Run `openclaw status --deep` for a broad probe.
* Use `openclaw doctor` for guided fixes.
* If `openclaw channels list` prints `Claude: HTTP 403 ... user:profile`, the usage snapshot needs the `user:profile` scope. Use `--no-usage`, provide a claude.ai session key (`CLAUDE_WEB_SESSION_KEY` / `CLAUDE_WEB_COOKIE`), or re-auth via the Claude Code CLI.
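For the 403 case, either skip the usage snapshot or supply a claude.ai session key (flag and variable names as documented above; the key value is a placeholder):

```shell
# Skip the usage snapshot entirely
openclaw channels list --no-usage

# Or provide a claude.ai session key for the usage probe
export CLAUDE_WEB_SESSION_KEY="<session-key>"
openclaw channels list
```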
## Capabilities probe
Fetch provider capability hints (intents/scopes where available) plus static feature support:
```bash
openclaw channels capabilities
openclaw channels capabilities --channel discord --target channel:123
```
Notes:
* `--channel` is optional; omit it to list every channel (including extensions).
* `--target` accepts `channel:<id>` or a raw numeric channel id and only applies to Discord.
* Probes are provider-specific: Discord intents + optional channel permissions; Slack bot + user scopes; Telegram bot flags + webhook; Signal daemon version; MS Teams app token + Graph roles/scopes (annotated where known). Channels without probes report `Probe: unavailable`.
## Resolve names to IDs
Resolve channel/user names to IDs using the provider directory:
```bash
openclaw channels resolve --channel slack "#general" "@jane"
openclaw channels resolve --channel discord "My Server/#support" "@someone"
openclaw channels resolve --channel matrix "Project Room"
```
Notes:
* Use `--kind user|group|auto` to force the target type.
* Resolution prefers active matches when multiple entries share the same name.

# configure
# `openclaw configure`
Interactive prompt to set up credentials, devices, and agent defaults.
Note: The **Model** section now includes a multi-select for the
`agents.defaults.models` allowlist (what shows up in `/model` and the model picker).
Tip: `openclaw config` without a subcommand opens the same wizard. Use
`openclaw config get|set|unset` for non-interactive edits.
Related:
* Gateway configuration reference: [Configuration](/gateway/configuration)
* Config CLI: [Config](/cli/config)
Notes:
* Choosing where the Gateway runs always updates `gateway.mode`. You can select "Continue" without other sections if that is all you need.
* Channel-oriented services (Slack/Discord/Matrix/Microsoft Teams) prompt for channel/room allowlists during setup. You can enter names or IDs; the wizard resolves names to IDs when possible.
## Examples
```bash
openclaw configure
openclaw configure --section models --section channels
```

# cron
# `openclaw cron`
Manage cron jobs for the Gateway scheduler.
Related:
* Cron jobs: [Cron jobs](/automation/cron-jobs)
Tip: run `openclaw cron --help` for the full command surface.
Note: isolated `cron add` jobs default to `--announce` delivery. Use `--no-deliver` to keep
output internal. `--deliver` remains as a deprecated alias for `--announce`.
Note: one-shot (`--at`) jobs delete after success by default. Use `--keep-after-run` to keep them.
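A hedged sketch of a one-shot job using the flags above (the `--message` flag is an assumption; check `openclaw cron add --help` for the exact flag set):

```shell
# One-shot job; deleted after success unless --keep-after-run is set
openclaw cron add --at "2026-03-14T09:00" \
  --message "Morning briefing" \
  --announce --channel telegram --to "123456789" \
  --keep-after-run
```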
## Common edits
Update delivery settings without changing the message:
```bash
openclaw cron edit <job-id> --announce --channel telegram --to "123456789"
```
Disable delivery for an isolated job:
```bash
openclaw cron edit <job-id> --no-deliver
```
Announce to a specific channel:
```bash
openclaw cron edit <job-id> --announce --channel slack --to "channel:C1234567890"
```

# dashboard
# `openclaw dashboard`
Open the Control UI using your current auth.
```bash
openclaw dashboard
openclaw dashboard --no-open
```

# directory
# `openclaw directory`
Directory lookups for channels that support it (contacts/peers, groups, and "me").
## Common flags
* `--channel <name>`: channel id/alias (required when multiple channels are configured; auto when only one is configured)
* `--account <id>`: account id (default: channel default)
* `--json`: output JSON
## Notes
* `directory` is meant to help you find IDs you can paste into other commands (especially `openclaw message send --target ...`).
* For many channels, results are config-backed (allowlists / configured groups) rather than a live provider directory.
* Default output is `id` (and sometimes `name`) separated by a tab; use `--json` for scripting.
## Using results with `message send`
```bash
openclaw directory peers list --channel slack --query "U0"
openclaw message send --channel slack --target user:U012ABCDEF --message "hello"
```
## ID formats (by channel)
* WhatsApp: `+15551234567` (DM), `1234567890-1234567890@g.us` (group)
* Telegram: `@username` or numeric chat id; groups are numeric ids
* Slack: `user:U…` and `channel:C…`
* Discord: `user:<id>` and `channel:<id>`
* Matrix (plugin): `user:@user:server`, `room:!roomId:server`, or `#alias:server`
* Microsoft Teams (plugin): `user:<id>` and `conversation:<id>`
* Zalo (plugin): user id (Bot API)
* Zalo Personal / `zalouser` (plugin): thread id (DM/group) from `zca` (`me`, `friend list`, `group list`)
## Self ("me")
```bash
openclaw directory self --channel zalouser
```
## Peers (contacts/users)
```bash
openclaw directory peers list --channel zalouser
openclaw directory peers list --channel zalouser --query "name"
openclaw directory peers list --channel zalouser --limit 50
```
## Groups
```bash
openclaw directory groups list --channel zalouser
openclaw directory groups list --channel zalouser --query "work"
openclaw directory groups members --channel zalouser --group-id <id>
```

# dns
# `openclaw dns`
DNS helpers for wide-area discovery (Tailscale + CoreDNS). Currently focused on macOS + Homebrew CoreDNS.
Related:
* Gateway discovery: [Discovery](/gateway/discovery)
* Wide-area discovery config: [Configuration](/gateway/configuration)
## Setup
```bash
openclaw dns setup
openclaw dns setup --apply
```

# docs
# `openclaw docs`
Search the live docs index.
```bash
openclaw docs browser extension
openclaw docs sandbox allowHostControl
```

# doctor
# `openclaw doctor`
Health checks + quick fixes for the gateway and channels.
Related:
* Troubleshooting: [Troubleshooting](/gateway/troubleshooting)
* Security audit: [Security](/gateway/security)
## Examples
```bash
openclaw doctor
openclaw doctor --repair
openclaw doctor --deep
```
Notes:
* Interactive prompts (like keychain/OAuth fixes) only run when stdin is a TTY and `--non-interactive` is **not** set. Headless runs (cron, Telegram, no terminal) will skip prompts.
* `--fix` (alias for `--repair`) writes a backup to `~/.openclaw/openclaw.json.bak` and drops unknown config keys, listing each removal.
## macOS: `launchctl` env overrides
If you previously ran `launchctl setenv OPENCLAW_GATEWAY_TOKEN ...` (or `...PASSWORD`), that value overrides your config file and can cause persistent "unauthorized" errors.
```bash
launchctl getenv OPENCLAW_GATEWAY_TOKEN
launchctl getenv OPENCLAW_GATEWAY_PASSWORD
launchctl unsetenv OPENCLAW_GATEWAY_TOKEN
launchctl unsetenv OPENCLAW_GATEWAY_PASSWORD
```

# gateway
# `openclaw gateway`
The Gateway is OpenClaw's WebSocket server; it manages channels, nodes, sessions, and hooks. All subcommands operate under the `openclaw gateway` namespace.
## Running the Gateway
Launch a local Gateway with:
```bash
openclaw gateway
```
Startup normally requires `gateway.mode=local` in configuration, though `--allow-unconfigured` bypasses the check for development. As a safety measure, the system blocks loopback binding without authentication.
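Concretely, using the non-interactive config helper documented elsewhere in this reference:

```shell
# Set the required mode, then run the Gateway locally
openclaw config set gateway.mode local
openclaw gateway

# Development shortcut: skip the gateway.mode check
openclaw gateway --allow-unconfigured
```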
## Query Commands
Gateway queries use WebSocket RPC with flexible output formatting (human-readable by default, JSON via `--json` flag).
**Available commands:**
- `gateway health`: checks Gateway connectivity
- `gateway status`: displays service status plus optional RPC probe
- `gateway probe`: comprehensive debug command scanning configured and localhost gateways
- `gateway call <method>`: low-level RPC helper for custom operations
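For example (`--json` is the global output flag mentioned above; `<method>` is a placeholder):

```shell
openclaw gateway health
openclaw gateway status --json
openclaw gateway probe
# Low-level RPC helper
openclaw gateway call <method>
```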
## Service Management
Standard lifecycle commands include:
```bash
openclaw gateway install
openclaw gateway start
openclaw gateway stop
openclaw gateway restart
openclaw gateway uninstall
```
## Gateway Discovery
The `gateway discover` command scans for Gateway beacons using multicast DNS-SD (`_openclaw-gw._tcp`). Discovery records include details like WebSocket ports, SSH configuration, and TLS fingerprints when applicable.
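To scan for beacons on the local network (assuming discovery is enabled on the gateway side):

```shell
# Scan for _openclaw-gw._tcp beacons via multicast DNS-SD
openclaw gateway discover
```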
Related documentation sections: Bonjour configuration, discovery protocols, and system configuration settings.

# health
# `openclaw health`
Fetch health from the running Gateway.
```bash
openclaw health
openclaw health --json
openclaw health --verbose
```
Notes:
* `--verbose` runs live probes and prints per-account timings when multiple accounts are configured.
* Output includes per-agent session stores when multiple agents are configured.

# hooks
# `openclaw hooks`
Manage event-driven automations for gateway commands and startup events.
## Core Commands
**Listing and Information:**
- `openclaw hooks list` displays all discovered hooks with readiness status
- `openclaw hooks info <name>` provides detailed information about specific hooks
- `openclaw hooks check` shows overall eligibility summary
**Management:**
- `openclaw hooks enable <name>` activates a hook in your config
- `openclaw hooks disable <name>` deactivates a hook
- `openclaw hooks install <path-or-spec>` adds new hook packs from local directories, archives, or npm
- `openclaw hooks update <id>` refreshes installed npm-based hook packs
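Putting the commands together with the bundled hook names listed below (the local install path is illustrative):

```shell
openclaw hooks list
openclaw hooks info session-memory
openclaw hooks enable boot-md
openclaw hooks disable soul-evil
# Install a hook pack from a local directory, archive, or npm spec
openclaw hooks install ./my-hooks
```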
## Available Bundled Hooks
The system includes four pre-built hooks:
1. **session-memory** - Saves session context when `/new` command executes, storing output to `~/.openclaw/workspace/memory/`
2. **command-logger** - Records all command events to a centralized audit file at `~/.openclaw/logs/commands.log`
3. **soul-evil** - Swaps `SOUL.md` content with `SOUL_EVIL.md` during specified windows or randomly
4. **boot-md** - Executes `BOOT.md` upon gateway startup (triggered by `gateway:startup` event)
## Important Notes
After enabling or disabling hooks, restart your gateway for changes to take effect. Plugin-managed hooks cannot be toggled directly through these commands; manage the parent plugin instead.

# Complete CLI Reference Documentation
## Overview
Reference material for OpenClaw's command-line interface: all available commands, options, and usage patterns.
## Core Structure
The CLI organizes functionality through a primary command with subcommands:
```
openclaw [--dev] [--profile <name>] <command>
```
### Global Options
The system supports several flags applicable across commands:
- `--dev`: Isolates state to `~/.openclaw-dev` with shifted default ports
- `--profile <name>`: Isolates state to `~/.openclaw-<name>`
- `--no-color`: Disables ANSI styling
- `--version`: Displays version information
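For example, combining the global flags above with ordinary commands:

```shell
# Run against an isolated named profile (state in ~/.openclaw-work)
openclaw --profile work status

# Dev mode: state in ~/.openclaw-dev with shifted default ports
openclaw --dev gateway
```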
### Output Formatting
ANSI colors and progress indicators only render in TTY sessions. OSC-8 hyperlinks render as clickable links in supported terminals; otherwise we fall back to plain URLs.
## Command Categories
### Setup & Configuration
**`setup`** initializes configuration and workspace with options for workspace path, wizard mode, and remote gateway configuration.
**`onboard`** provides an interactive wizard supporting multiple flows (quickstart, advanced, manual) with authentication provider selection and gateway binding options.
**`configure`** launches an interactive configuration wizard for models, channels, skills, and gateway settings.
**`config`** offers non-interactive helpers for retrieving, setting, or removing configuration values using dot/bracket path notation.
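A sketch of the dot/bracket path notation, using config keys that appear elsewhere in this reference (the bracket-index example is illustrative):

```shell
openclaw config get agents.defaults.models
openclaw config set gateway.mode local
# Bracket notation for list entries
openclaw config get "agents.list[0].identity.name"
openclaw config unset agents.defaults.models
```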
**`doctor`** performs health checks and applies quick fixes for configuration, gateway, and legacy services.
### Channel Management
**`channels`** manages chat channel accounts across platforms including WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and Microsoft Teams.
Subcommands include:
- `list`: Display configured channels
- `status`: Check gateway reachability
- `add`: Wizard-style or non-interactive setup
- `remove`: Disable or delete configurations
- `login`/`logout`: Interactive authentication (platform-dependent)
- `logs`: Display recent channel activity
### Skill & Plugin Management
**`skills`** lists available skills and readiness information:
- `list`: Enumerate all skills
- `info <name>`: Display specific skill details
- `check`: Summary of ready versus missing requirements
**`plugins`** manages extensions and their configuration:
- `list`: Discover available plugins
- `install`: Add plugins from various sources
- `enable`/`disable`: Toggle activation
- `doctor`: Report load errors
### Messaging & Agent Control
**`message`** provides unified outbound messaging with subcommands for sending, polling, reacting, editing, deleting, and managing permissions across channels.
**`agent`** executes a single agent turn via the Gateway with options for message content, destination, session tracking, and verbose output.
**`agents`** manages isolated agent workspaces:
- `list`: Display configured agents
- `add`: Create new isolated agent
- `delete`: Remove agent and prune state
### Gateway Operations
**`gateway`** runs the WebSocket Gateway with binding, authentication, and Tailscale options.
Gateway service management includes:
- `status`: Probe gateway health
- `install`: Install service
- `start`/`stop`/`restart`: Control service state
**`logs`** tails Gateway file logs via RPC, with support for colorized structured output in TTY sessions and JSON formatting.
### System & Monitoring
**`status`** displays linked session health and recent recipients with options for comprehensive diagnostics and provider usage information.
**`health`** fetches current health status from the running Gateway.
**`sessions`** lists stored conversation sessions with filtering by activity duration.
**`system`** manages system-level operations:
- `event`: Enqueue system events
- `heartbeat`: Control heartbeat functionality
- `presence`: List system presence entries
### Model Configuration
**`models`** manages AI model selection and authentication:
- `list`: Enumerate available models
- `status`: Display current configuration
- `set`: Designate primary model
- `scan`: Discover new models with filtering options
- `auth`: Configure authentication credentials
- `aliases`: Create model shortcuts
- `fallbacks`: Define backup models
### Automation
**`cron`** manages scheduled jobs with support for:
- Time-based scheduling (`--at`, `--every`, `--cron`)
- System events or messaging payloads
- Job lifecycle management (enable/disable/edit)
### Node Management
**`node`** operates headless node hosts or manages them as background services.
**`nodes`** communicates with Gateway-paired nodes supporting:
- Status monitoring and connection filtering
- Command invocation with timeout control
- Camera operations (snap, clip)
- Canvas and screen management
- Location tracking
### Browser Control
**`browser`** controls dedicated Chrome/Brave/Edge/Chromium instances:
Management: `status`, `start`, `stop`, `reset-profile`, `profiles`, `create-profile`, `delete-profile`
Inspection: `screenshot`, `snapshot` with format and selector options
Actions: `navigate`, `click`, `type`, `press`, `hover`, `drag`, `select`, `upload`, `fill`, `dialog`, `wait`, `evaluate`, `console`, `pdf`
### Additional Tools
**`memory`** enables vector search over markdown files:
- `status`: Display index statistics
- `index`: Rebuild index
- `search`: Execute semantic queries
**`approvals`** manages approval configurations with allowlist operations.
**`security`** provides audit functionality with optional deep probing and automatic fixes.
**`tui`** opens an interactive terminal user interface connected to the Gateway.
**`docs`** searches live documentation.
## Reset & Cleanup
**`reset`** clears local configuration and state with scoping options (config only, config+credentials+sessions, or full reset).
**`uninstall`** removes the gateway service and associated data while preserving the CLI installation.
**`update`** manages version upgrades.
## Color Palette
OpenClaw employs a distinctive color scheme for terminal output:
- accent (#FF5A2D): headings, labels, primary highlights
- accentBright (#FF7A3D): command names, emphasis
- success (#2FBF71): success states
- error (#E23D2D): errors, failures
This reference covers the complete command taxonomy; see each command's dedicated page for full option details and usage patterns.

# logs
# `openclaw logs`
Tail Gateway file logs over RPC (works in remote mode).
Related:
* Logging overview: [Logging](/logging)
## Examples
```bash
openclaw logs
openclaw logs --follow
openclaw logs --json
openclaw logs --limit 500
```

# memory
# `openclaw memory`
Semantic memory indexing and search via the active memory plugin.
## Core Functionality
This tool provides three primary capabilities:
1. **Status Monitoring**: Check memory system health with `openclaw memory status`
2. **Indexing**: Build or rebuild the semantic index via `openclaw memory index`
3. **Search**: Query indexed content with `openclaw memory search`
## Examples
```bash
openclaw memory status
openclaw memory status --deep
openclaw memory index
openclaw memory search "query terms"
```
## Key Command Options
Users can scope operations to individual agents using `--agent <id>` or apply verbose logging with the `--verbose` flag for detailed diagnostic output.
## Advanced Features
The `--deep` flag enables probes for vector + embedding availability, while combining `--deep --index` triggers automatic reindexing when the storage is marked as dirty. The system also respects extra paths configured through `memorySearch.extraPaths`.
## Plugin Architecture
Memory functionality depends on the active memory plugin (defaulting to `memory-core`), which can be disabled by setting `plugins.slots.memory = "none"`.
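Using the config path above, the slot can be emptied non-interactively:

```shell
# Disable the memory plugin slot (turns off `openclaw memory`)
openclaw config set plugins.slots.memory none
```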

# message
# `openclaw message`
Single outbound command for sending messages and channel actions
(Discord/Google Chat/Slack/Mattermost (plugin)/Telegram/WhatsApp/Signal/iMessage/MS Teams).
## Usage
```
openclaw message <subcommand> [flags]
```
Channel selection:
* `--channel` required if more than one channel is configured.
* If exactly one channel is configured, it becomes the default.
* Values: `whatsapp|telegram|discord|googlechat|slack|mattermost|signal|imessage|msteams` (Mattermost requires plugin)
Target formats (`--target`):
* WhatsApp: E.164 or group JID
* Telegram: chat id or `@username`
* Discord: `channel:<id>` or `user:<id>` (or `<@id>` mention; raw numeric ids are treated as channels)
* Google Chat: `spaces/<spaceId>` or `users/<userId>`
* Slack: `channel:<id>` or `user:<id>` (raw channel id is accepted)
* Mattermost (plugin): `channel:<id>`, `user:<id>`, or `@username` (bare ids are treated as channels)
* Signal: `+E.164`, `group:<id>`, `signal:+E.164`, `signal:group:<id>`, or `username:<name>`/`u:<name>`
* iMessage: handle, `chat_id:<id>`, `chat_guid:<guid>`, or `chat_identifier:<id>`
* MS Teams: conversation id (`19:...@thread.tacv2`) or `conversation:<id>` or `user:<aad-object-id>`
Name lookup:
* For supported providers (Discord/Slack/etc), channel names like `Help` or `#help` are resolved via the directory cache.
* On cache miss, OpenClaw will attempt a live directory lookup when the provider supports it.
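For instance, sending to a channel by name rather than ID (resolution behavior as described above; the channel names are placeholders):

```shell
# "#help" resolves via the directory cache, with a live lookup on miss
openclaw message send --channel discord --target "#help" --message "hi"
openclaw message send --channel slack --target "#general" --message "hi"
```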
## Common flags
* `--channel <name>`
* `--account <id>`
* `--target <dest>` (target channel or user for send/poll/read/etc)
* `--targets <name>` (repeat; broadcast only)
* `--json`
* `--dry-run`
* `--verbose`
## Actions
### Core
* `send`
* Channels: WhatsApp/Telegram/Discord/Google Chat/Slack/Mattermost (plugin)/Signal/iMessage/MS Teams
* Required: `--target`, plus `--message` or `--media`
* Optional: `--media`, `--reply-to`, `--thread-id`, `--gif-playback`
* Telegram only: `--buttons` (requires `channels.telegram.capabilities.inlineButtons` to allow it)
* Telegram only: `--thread-id` (forum topic id)
* Slack only: `--thread-id` (thread timestamp; `--reply-to` uses the same field)
* WhatsApp only: `--gif-playback`
* `poll`
* Channels: WhatsApp/Discord/MS Teams
* Required: `--target`, `--poll-question`, `--poll-option` (repeat)
* Optional: `--poll-multi`
* Discord only: `--poll-duration-hours`, `--message`
* `react`
* Channels: Discord/Google Chat/Slack/Telegram/WhatsApp/Signal
* Required: `--message-id`, `--target`
* Optional: `--emoji`, `--remove`, `--participant`, `--from-me`, `--target-author`, `--target-author-uuid`
* Note: `--remove` requires `--emoji` (omit `--emoji` to clear own reactions where supported; see /tools/reactions)
* WhatsApp only: `--participant`, `--from-me`
* Signal group reactions: `--target-author` or `--target-author-uuid` required
* `reactions`
* Channels: Discord/Google Chat/Slack
* Required: `--message-id`, `--target`
* Optional: `--limit`
* `read`
* Channels: Discord/Slack
* Required: `--target`
* Optional: `--limit`, `--before`, `--after`
* Discord only: `--around`
* `edit`
* Channels: Discord/Slack
* Required: `--message-id`, `--message`, `--target`
* `delete`
* Channels: Discord/Slack/Telegram
* Required: `--message-id`, `--target`
* `pin` / `unpin`
* Channels: Discord/Slack
* Required: `--message-id`, `--target`
* `pins` (list)
* Channels: Discord/Slack
* Required: `--target`
* `permissions`
* Channels: Discord
* Required: `--target`
* `search`
* Channels: Discord
* Required: `--guild-id`, `--query`
* Optional: `--channel-id`, `--channel-ids` (repeat), `--author-id`, `--author-ids` (repeat), `--limit`
### Threads
* `thread create`
* Channels: Discord
* Required: `--thread-name`, `--target` (channel id)
* Optional: `--message-id`, `--auto-archive-min`
* `thread list`
* Channels: Discord
* Required: `--guild-id`
* Optional: `--channel-id`, `--include-archived`, `--before`, `--limit`
* `thread reply`
* Channels: Discord
* Required: `--target` (thread id), `--message`
* Optional: `--media`, `--reply-to`
### Emojis
* `emoji list`
* Discord: `--guild-id`
* Slack: no extra flags
* `emoji upload`
* Channels: Discord
* Required: `--guild-id`, `--emoji-name`, `--media`
* Optional: `--role-ids` (repeat)
### Stickers
* `sticker send`
* Channels: Discord
* Required: `--target`, `--sticker-id` (repeat)
* Optional: `--message`
* `sticker upload`
* Channels: Discord
* Required: `--guild-id`, `--sticker-name`, `--sticker-desc`, `--sticker-tags`, `--media`
### Roles / Channels / Members / Voice
* `role info` (Discord): `--guild-id`
* `role add` / `role remove` (Discord): `--guild-id`, `--user-id`, `--role-id`
* `channel info` (Discord): `--target`
* `channel list` (Discord): `--guild-id`
* `member info` (Discord/Slack): `--user-id` (+ `--guild-id` for Discord)
* `voice status` (Discord): `--guild-id`, `--user-id`
### Events
* `event list` (Discord): `--guild-id`
* `event create` (Discord): `--guild-id`, `--event-name`, `--start-time`
* Optional: `--end-time`, `--desc`, `--channel-id`, `--location`, `--event-type`
### Moderation (Discord)
* `timeout`: `--guild-id`, `--user-id` (optional `--duration-min` or `--until`; omit both to clear timeout)
* `kick`: `--guild-id`, `--user-id` (+ `--reason`)
* `ban`: `--guild-id`, `--user-id` (+ `--delete-days`, `--reason`)
* `timeout` also supports `--reason`
### Broadcast
* `broadcast`
* Channels: any configured channel; use `--channel all` to target all providers
* Required: `--targets` (repeat)
* Optional: `--message`, `--media`, `--dry-run`
## Examples
Send a Discord reply:
```bash
openclaw message send --channel discord \
--target channel:123 --message "hi" --reply-to 456
```
Create a Discord poll:
```bash
openclaw message poll --channel discord \
--target channel:123 \
--poll-question "Snack?" \
--poll-option Pizza --poll-option Sushi \
--poll-multi --poll-duration-hours 48
```
Send a Teams proactive message:
```bash
openclaw message send --channel msteams \
--target conversation:19:abc@thread.tacv2 --message "hi"
```
Create a Teams poll:
```bash
openclaw message poll --channel msteams \
--target conversation:19:abc@thread.tacv2 \
--poll-question "Lunch?" \
--poll-option Pizza --poll-option Sushi
```
React in Slack:
```bash
openclaw message react --channel slack \
--target C123 --message-id 456 --emoji "check"
```
React in a Signal group:
```bash
openclaw message react --channel signal \
--target signal:group:abc123 --message-id 1737630212345 \
--emoji "check" --target-author-uuid 123e4567-e89b-12d3-a456-426614174000
```
Send Telegram inline buttons:
```bash
openclaw message send --channel telegram --target @mychat --message "Choose:" \
--buttons '[ [{"text":"Yes","callback_data":"cmd:yes"}], [{"text":"No","callback_data":"cmd:no"}] ]'
```

# models
# `openclaw models`
Model discovery, scanning, and configuration (default model, fallbacks, auth profiles).
Related:
* Providers + models: [Models](/providers/models)
* Provider auth setup: [Getting started](/start/getting-started)
## Common commands
```bash
openclaw models status
openclaw models list
openclaw models set <model-or-alias>
openclaw models scan
```
`openclaw models status` shows the resolved default/fallbacks plus an auth overview.
When provider usage snapshots are available, the OAuth/token status section includes
provider usage headers.
Add `--probe` to run live auth probes against each configured provider profile.
Probes are real requests (may consume tokens and trigger rate limits).
Use `--agent <id>` to inspect a configured agent's model/auth state. When omitted,
the command uses `OPENCLAW_AGENT_DIR`/`PI_CODING_AGENT_DIR` if set, otherwise the
configured default agent.
Notes:
* `models set <model-or-alias>` accepts `provider/model` or an alias.
* Model refs are parsed by splitting on the **first** `/`. If the model ID includes `/` (OpenRouter-style), include the provider prefix (example: `openrouter/moonshotai/kimi-k2`).
* If you omit the provider, OpenClaw treats the input as an alias or a model for the **default provider** (only works when there is no `/` in the model ID).
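
The split rule can be sketched in a few lines (illustrative helper, not OpenClaw's actual implementation; the default provider name is a placeholder):

```python
def parse_model_ref(ref: str, default_provider: str = "default"):
    """Split a model ref on the FIRST '/' only."""
    if "/" in ref:
        provider, model = ref.split("/", 1)  # model may itself contain "/"
        return provider, model
    # No "/": treat as an alias or a bare model ID for the default provider.
    return default_provider, ref

parse_model_ref("openrouter/moonshotai/kimi-k2")  # ("openrouter", "moonshotai/kimi-k2")
```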
### `models status`
Options:
* `--json`
* `--plain`
* `--check` (exit 1=expired/missing, 2=expiring)
* `--probe` (live probe of configured auth profiles)
* `--probe-provider <name>` (probe one provider)
* `--probe-profile <id>` (repeat or comma-separated profile ids)
* `--probe-timeout <ms>`
* `--probe-concurrency <n>`
* `--probe-max-tokens <n>`
* `--agent <id>` (configured agent id; overrides `OPENCLAW_AGENT_DIR`/`PI_CODING_AGENT_DIR`)
## Aliases + fallbacks
```bash
openclaw models aliases list
openclaw models fallbacks list
```
## Auth profiles
```bash
openclaw models auth add
openclaw models auth login --provider <id>
openclaw models auth setup-token
openclaw models auth paste-token
```
`models auth login` runs a provider plugin's auth flow (OAuth/API key). Use
`openclaw plugins list` to see which providers are installed.
Notes:
* `setup-token` prompts for a setup-token value (generate it with `claude setup-token` on any machine).
* `paste-token` accepts a token string generated elsewhere or from automation.
@@ -0,0 +1,48 @@
# nodes
# `openclaw nodes`
Manage paired devices and enable invocation of node capabilities.
## Overview
The `openclaw nodes` command manages paired devices and enables invocation of node capabilities. This includes listing nodes, approving pending connections, checking status, and executing commands remotely.
## Key Commands
**Listing and Status:**
```bash
openclaw nodes list
openclaw nodes list --pending
openclaw nodes list --connected
openclaw nodes status <id|name|ip>
```
The list commands display pending and paired nodes. Filters narrow the output by connection status (for example, only currently-connected nodes) or by recency, such as nodes seen within the last 24 hours or 7 days.
**Execution:**
Remote command execution is available through two approaches:
```bash
openclaw nodes invoke <id|name|ip> --method <method> --params <json>
openclaw nodes run <id|name|ip> -- <command>
```
The `invoke` command uses structured parameters, while the `run` command uses shell-style syntax. The system supports both direct commands and raw shell strings.
## Important Features
**Exec-style Defaults:**
The `nodes run` command mirrors standard execution behavior by reading configuration from `tools.exec.*` settings and implementing approval workflows before invoking system commands.
**Customization Options:**
Users can specify working directories, environment variables, command timeouts, and security levels. Additional flags allow requiring screen recording permissions and agent-scoped approvals.
**Node Identification:**
Commands accept node references by ID, name, or IP address for flexible targeting.
## Notes
A compatible node must advertise `system.run` capabilities, such as a macOS companion application or headless node host.
@@ -0,0 +1,36 @@
# onboard
# `openclaw onboard`
Interactive onboarding wizard (local or remote Gateway setup).
## Related guides
* CLI onboarding hub: [Onboarding Wizard (CLI)](/start/wizard)
* CLI onboarding reference: [CLI Onboarding Reference](/start/wizard-cli-reference)
* CLI automation: [CLI Automation](/start/wizard-cli-automation)
* macOS onboarding: [Onboarding (macOS App)](/start/onboarding)
## Examples
```bash
openclaw onboard
openclaw onboard --flow quickstart
openclaw onboard --flow manual
openclaw onboard --mode remote --remote-url ws://gateway-host:18789
```
Flow notes:
* `quickstart`: minimal prompts, auto-generates a gateway token.
* `manual`: full prompts for port/bind/auth (alias of `advanced`).
* Fastest first chat: `openclaw dashboard` (Control UI, no channel setup).
## Common follow-up commands
```bash
openclaw configure
openclaw agents add <name>
```
Note: `--json` does not imply non-interactive mode. Use `--non-interactive` for scripts.
@@ -0,0 +1,16 @@
# pairing
# `openclaw pairing`
Approve or inspect DM pairing requests (for channels that support pairing).
Related:
* Pairing flow: [Pairing](/start/pairing)
## Commands
```bash
openclaw pairing list whatsapp
openclaw pairing approve whatsapp <code> --notify
```
@@ -0,0 +1,43 @@
# plugins
# `openclaw plugins`
Manage gateway extensions that load in-process.
## Overview
The OpenClaw plugins system manages gateway extensions that load in-process. Bundled plugins ship with OpenClaw but start disabled. Use `plugins enable` to activate them.
## Key Commands
```bash
openclaw plugins list
openclaw plugins info <name>
openclaw plugins enable <name>
openclaw plugins disable <name>
openclaw plugins install <path-or-spec>
openclaw plugins install <path> --link
openclaw plugins update <id>
openclaw plugins update --all
openclaw plugins doctor
```
## Installation Requirements
Plugins must include a manifest file (`openclaw.plugin.json`) containing inline JSON Schema specifications. Missing/invalid manifests or schemas prevent the plugin from loading and fail config validation.
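A minimal manifest might look like the following (field names here are illustrative assumptions, not the actual manifest schema — check the plugin docs for the real shape):

```json5
{
  // Hypothetical shape: the real manifest may use different field names.
  id: "my-plugin",
  version: "1.0.0",
  // Inline JSON Schema for the plugin's config; a missing or invalid
  // schema prevents the plugin from loading and fails config validation.
  configSchema: {
    type: "object",
    properties: {
      enabled: { type: "boolean" },
    },
  },
}
```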
## Installation Methods
Users can install plugins via:
1. **Direct path or specification**: `openclaw plugins install <path-or-spec>`
2. **Linked local directory**: Using the `--link` flag to reference a local folder without copying
3. **Supported archive formats**: ZIP, TGZ, TAR.GZ, and TAR files
## Security Considerations
Treat plugin installs like running code. Prefer pinned versions.
## Update Capabilities
Updates apply only to plugins installed from npm and tracked in the configuration. A dry-run option allows users to preview changes before applying them.
@@ -0,0 +1,28 @@
# reset
# `openclaw reset`
Clear local configuration and state (CLI remains).
## Overview
The `openclaw reset` command clears local configuration and state while preserving the CLI installation itself.
## Examples
```bash
openclaw reset
openclaw reset --dry-run
openclaw reset --scope config+creds+sessions --yes --non-interactive
```
## Key Options
* `--dry-run`: Preview changes without applying them
* `--scope`: Specify what to reset (config, credentials, sessions)
* `--yes`: Skip confirmation prompts
* `--non-interactive`: Run without user interaction
## Purpose
This command helps users clear their local settings and stored data while keeping the CLI tool itself installed and functional.
@@ -0,0 +1,44 @@
# sandbox
# `openclaw sandbox`
Manage Docker-based isolated containers for secure agent execution.
## Overview
The OpenClaw sandbox system manages Docker-based isolated containers for secure agent execution. The CLI provides tools to inspect, list, and recreate these containers when configurations or images change.
## Key Commands
**`openclaw sandbox explain`** displays effective sandbox settings, including mode, scope, workspace access, and tool policies with relevant configuration paths.
**`openclaw sandbox list`** enumerates all sandbox containers, showing their operational status, Docker image details, creation time, idle duration, and associated session/agent information.
**`openclaw sandbox recreate`** forcefully removes containers to trigger fresh initialization with current images and configurations. Supports filtering by session, agent, or container type.
## Examples
```bash
openclaw sandbox explain
openclaw sandbox list
openclaw sandbox recreate
openclaw sandbox recreate --session <id>
openclaw sandbox recreate --agent <id>
```
## Primary Use Cases
After updating Docker images or changing sandbox configuration, run `recreate` so containers pick up the new settings. Without it, existing containers keep running with their old configuration until automatic pruning, which can take up to 24 hours.
## Configuration Location
Sandbox settings reside in `~/.openclaw/openclaw.json` under `agents.defaults.sandbox`, with per-agent overrides available in `agents.list[].sandbox`. Key parameters include:
* Execution mode (off/non-main/all)
* Scope level (session/agent/shared)
* Docker image specification
* Pruning thresholds
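A sketch of the shape (`mode` and `scope` values are from this page; the per-agent override mirrors `agents.list[].sandbox`, and other details are omitted):

```json5
{
  agents: {
    defaults: {
      sandbox: {
        mode: "non-main", // "off" | "non-main" | "all"
        scope: "session", // "session" | "agent" | "shared"
      },
    },
    list: [
      // Per-agent override (takes precedence over defaults).
      { id: "ops", sandbox: { mode: "all" } },
    ],
  },
}
```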
## Related Resources
See additional documentation covering broader sandboxing concepts, agent workspace configuration, and the doctor command for sandbox diagnostics verification.
@@ -0,0 +1,21 @@
# security
# `openclaw security`
Security tools (audit + optional fixes).
Related:
* Security guide: [Security](/gateway/security)
## Audit
```bash
openclaw security audit
openclaw security audit --deep
openclaw security audit --fix
```
The audit warns when multiple DM senders share the main session and recommends **secure DM mode**: `session.dmScope="per-channel-peer"` (or `per-account-channel-peer` for multi-account channels) for shared inboxes.
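Enabling secure DM mode in `openclaw.json` looks roughly like this:

```json5
{
  session: {
    // Use "per-account-channel-peer" for multi-account channels.
    dmScope: "per-channel-peer",
  },
}
```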
It also warns when small models (`<=300B`) are used without sandboxing and with web/browser tools enabled.
@@ -0,0 +1,16 @@
# sessions
# `openclaw sessions`
List stored conversation sessions.
```bash
openclaw sessions
openclaw sessions --active 120
openclaw sessions --json
```
Notes:
* `--active <minutes>` filters to sessions active within the last N minutes (e.g. `--active 120` shows sessions active in the last two hours).
* Use `--json` for programmatic consumption.
@@ -0,0 +1,29 @@
# setup
# `openclaw setup`
Initialize the OpenClaw configuration file and agent workspace environment.
## Overview
The `openclaw setup` command establishes `~/.openclaw/openclaw.json` and sets up the agent workspace.
## Basic Usage
```bash
openclaw setup
openclaw setup --workspace ~/.openclaw/workspace
```
## Wizard Mode
To launch the interactive onboarding wizard during setup:
```bash
openclaw setup --wizard
```
## Related Resources
* Getting started guide: [Getting Started](/start/getting-started)
* Onboarding wizard documentation: [Onboarding](/start/onboarding)
@@ -0,0 +1,20 @@
# skills
# `openclaw skills`
Inspect skills (bundled + workspace + managed overrides) and see what's eligible vs missing requirements.
Related:
* Skills system: [Skills](/tools/skills)
* Skills config: [Skills config](/tools/skills-config)
* ClawHub installs: [ClawHub](/tools/clawhub)
## Commands
```bash
openclaw skills list
openclaw skills list --eligible
openclaw skills info <name>
openclaw skills check
```
@@ -0,0 +1,20 @@
# status
# `openclaw status`
Diagnostics for channels + sessions.
```bash
openclaw status
openclaw status --all
openclaw status --deep
openclaw status --usage
```
Notes:
* `--deep` runs live probes (WhatsApp Web + Telegram + Discord + Google Chat + Slack + Signal).
* Output includes per-agent session stores when multiple agents are configured.
* Overview includes Gateway + node host service install/runtime status when available.
* Overview includes update channel + git SHA (for source checkouts).
* Update info surfaces in the Overview; if an update is available, status prints a hint to run `openclaw update` (see [Updating](/install/updating)).
@@ -0,0 +1,32 @@
# system
# `openclaw system`
System-level helpers for the Gateway: enqueue system events, control heartbeats, and view presence.
## Key Capabilities
**System Events**: Enqueue messages that inject into prompts as system lines, with options to trigger immediately or await the next scheduled heartbeat.
**Heartbeat Management**: Enable, disable, or check the status of periodic heartbeat events.
**Presence Monitoring**: Display current system presence entries including nodes and instance statuses.
## Commands
```bash
openclaw system event --text "message" --mode now
openclaw system event --text "message" --mode next-heartbeat
openclaw system heartbeat status
openclaw system heartbeat enable
openclaw system heartbeat disable
openclaw system presence
```
## Notable Parameters
The system event command accepts text content, execution mode selection (`now` or `next-heartbeat`), and optional JSON output formatting. Similarly, heartbeat and presence commands support JSON output for programmatic use.
## Requirements
A running Gateway reachable by your current config (local or remote) is required. System events are held in memory only and do not persist across restarts.
@@ -0,0 +1,17 @@
# tui
# `openclaw tui`
Open the terminal UI connected to the Gateway.
Related:
* TUI guide: [TUI](/tui)
## Examples
```bash
openclaw tui
openclaw tui --url ws://127.0.0.1:18789 --token <token>
openclaw tui --session main --deliver
```
@@ -0,0 +1,11 @@
# uninstall
# `openclaw uninstall`
Uninstall the gateway service + local data (CLI remains).
```bash
openclaw uninstall
openclaw uninstall --all --yes
openclaw uninstall --dry-run
```
@@ -0,0 +1,39 @@
# update
# `openclaw update`
Manage OpenClaw updates across stable, beta, and development channels.
## Overview
The `openclaw update` command manages OpenClaw updates across stable, beta, and development channels. This tool handles version switching while maintaining configuration integrity.
## Key Capabilities
The update system supports multiple installation methods. When switching channels explicitly, OpenClaw also keeps the install method aligned with your chosen channel - dev uses git checkouts, while stable/beta use npm distribution tags.
## Primary Commands
```bash
openclaw update
openclaw update status
openclaw update wizard
openclaw update --channel beta
openclaw update --channel dev
```
## Important Safeguards
The update process includes verification steps:
* For git-based installations, the system requires a clean worktree (no uncommitted changes) before proceeding.
* Downgrades require confirmation because older versions can break configuration.
## Update Workflow Details
For dev channel users, the system performs preflight checks in a temporary workspace and can walk back up to 10 commits to find the newest clean build if the latest version has issues. All update paths conclude with running `openclaw doctor` as a validation step.
## Additional Options
* `--no-restart`: Skip Gateway service restart
* `--json`: Output machine-readable results for automation purposes
@@ -0,0 +1,44 @@
# voicecall
# `openclaw voicecall`
Plugin-provided voice call functionality (requires voice-call plugin).
## Overview
The `voicecall` command is a plugin-provided feature available when the voice-call plugin is installed and enabled.
## Key Commands
```bash
# Check call status
openclaw voicecall status --call-id <id>
# Initiate a call
openclaw voicecall call --to "+15555550123" --message "Hello" --mode notify
# Continue a call
openclaw voicecall continue --call-id <id> --message "Any questions?"
# End a call
openclaw voicecall end --call-id <id>
```
## Webhook Exposure
Expose webhooks using Tailscale:
```bash
# Serve mode
openclaw voicecall expose --mode serve
# Funnel mode
openclaw voicecall expose --mode funnel
# Disable exposure
openclaw voicecall unexpose
```
## Security Guidance
Only expose the webhook endpoint to networks you trust. Prefer Tailscale Serve over Funnel when feasible due to security considerations.
@@ -0,0 +1,107 @@
# Gateway Architecture
Last updated: 2026-01-22
## Overview
- A single long-lived **Gateway** owns all messaging surfaces (WhatsApp via Baileys, Telegram via grammY, Slack, Discord, Signal, iMessage, WebChat).
- Control-plane clients (macOS app, CLI, web UI, automations) connect to the Gateway over **WebSocket** on the configured bind host (default `127.0.0.1:18789`).
- **Nodes** (macOS/iOS/Android/headless) also connect over **WebSocket**, but declare `role: node` with explicit caps/commands.
- One Gateway per host; it is the only place that opens a WhatsApp session.
- A **canvas host** (default `18793`) serves agent-editable HTML and A2UI.
## Components and flows
### Gateway (daemon)
- Maintains provider connections.
- Exposes a typed WS API (requests, responses, server-push events).
- Validates inbound frames against JSON Schema.
- Emits events like `agent`, `chat`, `presence`, `health`, `heartbeat`, `cron`.
### Clients (mac app / CLI / web admin)
- One WS connection per client.
- Send requests (`health`, `status`, `send`, `agent`, `system-presence`).
- Subscribe to events (`tick`, `agent`, `presence`, `shutdown`).
### Nodes (macOS / iOS / Android / headless)
- Connect to the **same WS server** with `role: node`.
- Provide a device identity in `connect`; pairing is **device-based** (role `node`) and approval lives in the device pairing store.
- Expose commands like `canvas.*`, `camera.*`, `screen.record`, `location.get`.
Protocol details: [Gateway protocol](/gateway/protocol)
### WebChat
- Static UI that uses the Gateway WS API for chat history and sends.
- In remote setups, connects through the same SSH/Tailscale tunnel as other clients.
## Connection lifecycle (single client)
```
Client Gateway
| |
|---- req:connect -------->|
|<------ res (ok) ---------| (or res error + close)
| (payload=hello-ok carries snapshot: presence + health)
| |
|<------ event:presence ---|
|<------ event:tick -------|
| |
|------- req:agent ------->|
|<------ res:agent --------| (ack: {runId,status:"accepted"})
|<------ event:agent ------| (streaming)
|<------ res:agent --------| (final: {runId,status,summary})
| |
```
## Wire protocol (summary)
- Transport: WebSocket, text frames with JSON payloads.
- First frame **must** be `connect`.
- After handshake:
- Requests: `{type:"req", id, method, params}` -> `{type:"res", id, ok, payload|error}`
- Events: `{type:"event", event, payload, seq?, stateVersion?}`
- If `OPENCLAW_GATEWAY_TOKEN` (or `--token`) is set, `connect.params.auth.token` must match or the socket closes.
- Idempotency keys are required for side-effecting methods (`send`, `agent`) to safely retry; the server keeps a short-lived dedupe cache.
- Nodes must include `role: "node"` plus caps/commands/permissions in `connect`.
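
As a sketch, a client-side request frame with an idempotency key could be built like this (the field name `idempotencyKey` and its placement in `params` are assumptions; see the protocol reference for the real shape):

```python
import json
import uuid

def make_request(method: str, params: dict) -> str:
    """Build a req frame; side-effecting methods carry an idempotency key."""
    frame = {
        "type": "req",
        "id": str(uuid.uuid4()),
        "method": method,
        # Retries reuse the same key so the server's dedupe cache drops duplicates.
        "params": {**params, "idempotencyKey": str(uuid.uuid4())},
    }
    return json.dumps(frame)

frame = json.loads(make_request("send", {"to": "+15555550123", "message": "hi"}))
```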
## Pairing + local trust
- All WS clients (operators + nodes) include a **device identity** on `connect`.
- New device IDs require pairing approval; the Gateway issues a **device token** for subsequent connects.
- **Local** connects (loopback or the gateway host's own tailnet address) can be auto-approved to keep same-host UX smooth.
- **Non-local** connects must sign the `connect.challenge` nonce and require explicit approval.
- Gateway auth (`gateway.auth.*`) still applies to **all** connections, local or remote.
Details: [Gateway protocol](/gateway/protocol), [Pairing](/start/pairing), [Security](/gateway/security).
## Protocol typing and codegen
- TypeBox schemas define the protocol.
- JSON Schema is generated from those schemas.
- Swift models are generated from the JSON Schema.
## Remote access
- Preferred: Tailscale or VPN.
- Alternative: SSH tunnel
```bash
ssh -N -L 18789:127.0.0.1:18789 user@host
```
- The same handshake + auth token apply over the tunnel.
- TLS + optional pinning can be enabled for WS in remote setups.
## Operations snapshot
- Start: `openclaw gateway` (foreground, logs to stdout).
- Health: `health` over WS (also included in `hello-ok`).
- Supervision: launchd/systemd for auto-restart.
## Invariants
- Exactly one Gateway controls a single Baileys session per host.
- Handshake is mandatory; any non-JSON or non-connect first frame is a hard close.
- Events are not replayed; clients must refresh on gaps.
@@ -0,0 +1,34 @@
# Channel Routing
OpenClaw's channel routing system deterministically directs replies back to their originating channel. The system uses agents as isolated workspaces that handle messages across multiple platforms including WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and WebChat.
## Key Components
### Channels & Session Management
The platform organizes conversations through session keys that vary by context. Direct messages use a main session structure (`agent:main:main`), while group conversations and threaded discussions receive isolated keys incorporating channel and conversation identifiers.
### Routing Priority
Message routing follows a hierarchical matching system:
1. Exact peer match (highest priority)
2. Guild/team matching
3. Account-level routing
4. Channel-level defaults
5. Fallback to the primary agent configuration
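
The priority order can be sketched as a first-match walk from most to least specific (field names are illustrative, not the actual binding schema):

```python
def route(bindings: list, msg: dict, default_agent: str = "main") -> str:
    # Most specific tier wins; bindings are checked per tier, not in list order.
    for tier in ("peer", "guild", "account", "channel"):
        for b in bindings:
            if tier in b and b[tier] == msg.get(tier):
                return b["agent"]
    return default_agent  # fallback to the primary agent

bindings = [
    {"channel": "whatsapp", "agent": "family"},
    {"peer": "+15555550123", "channel": "whatsapp", "agent": "vip"},
]
route(bindings, {"peer": "+15555550123", "channel": "whatsapp"})  # "vip"
```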
### Multi-Agent Broadcasting
For scenarios requiring simultaneous agent responses, the broadcast groups feature enables "parallel" execution across multiple agents for a single message - useful for support workflows combining human agents with logging systems.
## Storage & Configuration
Session data persists in `~/.openclaw/agents/<agentId>/sessions/`, supporting both JSON session stores and JSONL transcripts.
Configuration relies on two primary structures:
- `agents.list` for agent definitions
- `bindings` for mapping inbound channels to specific agents
### WebChat Integration
WebChat instances attach to selected agents and default to main sessions, enabling cross-channel context visibility within a single interface.
@@ -0,0 +1,36 @@
# Compaction
OpenClaw's compaction feature manages context window limitations by summarizing older conversation history while preserving recent messages.
## What Compaction Does
Compaction summarizes older conversation into a compact summary entry and keeps recent messages intact. The summaries remain stored in session history for future reference.
## Two Compaction Types
### Auto-compaction
Triggers automatically when sessions approach or exceed the model's context limits. Users see an "Auto-compaction complete" notification in verbose mode.
### Manual compaction
Initiated via the `/compact` command, optionally with custom instructions like "Focus on decisions and open questions."
## Compaction vs Session Pruning
| Feature | Compaction | Session Pruning |
|---------|------------|-----------------|
| Action | Summarizes and persists in JSONL | Trims old tool results only |
| Scope | Full conversation history | In-memory, per request |
| Persistence | Permanent | Temporary |
## Practical Guidance
- Use `/compact` when a long session's context has grown bloated or stale
- Use `/new` or `/reset` when starting fresh sessions is preferred
## Related Documentation
- [Session Management](/concepts/session)
- [Session Pruning](/concepts/session-pruning)
- [Context](/concepts/context)
@@ -0,0 +1,56 @@
# Context
OpenClaw's "Context" represents everything the model receives for a run, constrained by the model's token limit. It encompasses the system prompt, conversation history, tool calls, and attachments.
## Key Components
The system breaks down into several parts:
- **System prompt** (built by OpenClaw): includes rules, tools, skills, time/runtime data, and workspace files
- **Conversation history**: user and assistant messages within the session
- **Tool results and attachments**: command outputs, file reads, media
## Context Inspection Commands
Users can monitor context usage via:
| Command | Description |
|---------|-------------|
| `/status` | Shows window fullness and session settings |
| `/context list` | Displays injected files with approximate token counts |
| `/context detail` | Provides granular breakdown by file and tool schemas |
| `/usage tokens` | Appends token usage to replies |
| `/compact` | Summarizes older messages to free space |
## What Counts Toward the Window
Everything sent to the model consumes tokens:
- System prompt sections
- Conversation history
- Tool calls and results
- Attachments and transcripts
- Compaction summaries
- Provider wrappers
## Workspace File Injection
OpenClaw automatically injects these files (if present):
- `AGENTS.md`
- `SOUL.md`
- `TOOLS.md`
- `IDENTITY.md`
- `USER.md`
- `HEARTBEAT.md`
- `BOOTSTRAP.md`
Files exceeding `bootstrapMaxChars` (default 20,000) are truncated, with truncation status indicated in context reports.
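
The truncation behavior amounts to the following (a sketch with a hypothetical helper name):

```python
def inject_workspace_file(text: str, bootstrap_max_chars: int = 20_000):
    """Return (possibly truncated) content plus a flag for context reports."""
    truncated = len(text) > bootstrap_max_chars
    return text[:bootstrap_max_chars], truncated

body, truncated = inject_workspace_file("x" * 25_000)  # truncated to 20,000 chars
```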
## Skills and Tools
Skills include metadata in the system prompt but load their full instructions only when the model reads the skill file (e.g., via the read tool).
Tools incur dual costs:
1. Text descriptions in the system prompt
2. JSON schemas that count toward context separately
@@ -0,0 +1,33 @@
# Features
## Highlights Overview
The platform provides integrated communication capabilities across multiple channels. Users can connect WhatsApp, Telegram, Discord, and iMessage with a single Gateway. Additional functionality includes plugin support for services like Mattermost, advanced multi-agent routing with isolated sessions, and comprehensive media handling.
## Key Capabilities
The system supports messaging through several popular platforms using different underlying technologies:
- **Discord**: Integration uses discord.js
- **Telegram**: Operates via grammY
- **WhatsApp**: Connectivity achieved through WhatsApp Web with Baileys
Beyond basic messaging, the platform handles:
- Multi-agent routing for isolated sessions per workspace or sender
- Voice transcription capabilities for audio content
## Interface and Mobile Support
The offering extends to native applications, including:
- WebChat and macOS menu bar app
- Mobile nodes for iOS and Android
- Pairing functionality and Canvas surface support
## Important Update
Legacy code paths have been removed: the Claude, Codex, Gemini, and OpenCode paths are gone, and Pi is the only coding agent path.
## Documentation
For comprehensive documentation, see the documentation index at https://docs.openclaw.ai/llms.txt
@@ -0,0 +1,56 @@
# Group Messages
This documentation covers WhatsApp group chat functionality for Clawd, enabling the agent to participate in groups while remaining dormant until activated.
## Key Features
### Activation Modes
The system supports two modes:
- `mention` (default): requires @-ping to respond
- `always`: responds to every message
### Group Policy Control
Access is managed through `groupPolicy` settings with three options:
- `open`
- `disabled`
- `allowlist` (default)
The default `allowlist` blocks messages until senders are explicitly permitted.
### Separate Sessions
Each group maintains its own session context independent from direct messages. Session keys look like `agent:<agentId>:whatsapp:group:<jid>` to keep group and DM conversations isolated.
### Context Injection
Unread group messages (up to 50 by default) are automatically included in prompts, labeled as "[Chat messages since your last reply - for context]" with the current message marked separately.
### Sender Attribution
Each message batch includes `[from: Sender Name (+E164)]` so the agent knows who is speaking.
## Configuration
The setup requires adding mention patterns and group settings to `openclaw.json`, including regex patterns for display-name recognition and numerical fallbacks.
```json5
{
agents: {
list: [
{
id: "main",
groupChat: {
mentionPatterns: ["@openclaw", "openclaw", "\\+15555550123"],
historyLimit: 50,
},
},
],
},
}
```
## Usage
Simply @-mention the bot in a group (using `@openclaw` or the phone number), and only allowlisted senders can trigger responses unless open policy is enabled. Group-specific commands like `/verbose on` apply only to that session.
@@ -0,0 +1,365 @@
# Groups
OpenClaw treats group chats consistently across surfaces: WhatsApp, Telegram, Discord, Slack, Signal, iMessage, Microsoft Teams.
## Beginner intro (2 minutes)
OpenClaw "lives" on your own messaging accounts. There is no separate WhatsApp bot user.
If **you** are in a group, OpenClaw can see that group and respond there.
Default behavior:
- Groups are restricted (`groupPolicy: "allowlist"`).
- Replies require a mention unless you explicitly disable mention gating.
Translation: allowlisted senders can trigger OpenClaw by mentioning it.
> TL;DR
>
> - **DM access** is controlled by `*.allowFrom`.
> - **Group access** is controlled by `*.groupPolicy` + allowlists (`*.groups`, `*.groupAllowFrom`).
> - **Reply triggering** is controlled by mention gating (`requireMention`, `/activation`).
Quick flow (what happens to a group message):
```
groupPolicy? disabled -> drop
groupPolicy? allowlist -> group allowed? no -> drop
requireMention? yes -> mentioned? no -> store for context only
otherwise -> reply
```
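The same flow as a small decision function (illustrative only):

```python
def handle_group_message(policy: str, group_allowed: bool,
                         require_mention: bool, mentioned: bool) -> str:
    if policy == "disabled":
        return "drop"
    if policy == "allowlist" and not group_allowed:
        return "drop"
    if require_mention and not mentioned:
        return "store-for-context"  # kept for history, no reply
    return "reply"
```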
If you want...
| Goal | What to set |
| -------------------------------------------- | ---------------------------------------------------------- |
| Allow all groups but only reply on @mentions | `groups: { "*": { requireMention: true } }` |
| Disable all group replies | `groupPolicy: "disabled"` |
| Only specific groups | `groups: { "<group-id>": { ... } }` (no `"*"` key) |
| Only you can trigger in groups | `groupPolicy: "allowlist"`, `groupAllowFrom: ["+1555..."]` |
## Session keys
- Group sessions use `agent:<agentId>:<channel>:group:<id>` session keys (rooms/channels use `agent:<agentId>:<channel>:channel:<id>`).
- Telegram forum topics add `:topic:<threadId>` to the group id so each topic has its own session.
- Direct chats use the main session (or per-sender if configured).
- Heartbeats are skipped for group sessions.
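
Key construction, sketched (hypothetical helper, mirroring the formats above):

```python
def session_key(agent_id: str, channel: str, kind: str,
                chat_id: str, topic_id: str = None) -> str:
    """kind is "group" for group chats or "channel" for rooms/channels."""
    key = f"agent:{agent_id}:{channel}:{kind}:{chat_id}"
    if topic_id is not None:
        key += f":topic:{topic_id}"  # Telegram forum topics get their own session
    return key
```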
## Pattern: personal DMs + public groups (single agent)
Yes - this works well if your "personal" traffic is **DMs** and your "public" traffic is **groups**.
Why: in single-agent mode, DMs typically land in the **main** session key (`agent:main:main`), while groups always use **non-main** session keys (`agent:main:<channel>:group:<id>`). If you enable sandboxing with `mode: "non-main"`, those group sessions run in Docker while your main DM session stays on-host.
This gives you one agent "brain" (shared workspace + memory), but two execution postures:
- **DMs**: full tools (host)
- **Groups**: sandbox + restricted tools (Docker)
> If you need truly separate workspaces/personas ("personal" and "public" must never mix), use a second agent + bindings. See [Multi-Agent Routing](/concepts/multi-agent).
Example (DMs on host, groups sandboxed + messaging-only tools):
```json5
{
agents: {
defaults: {
sandbox: {
mode: "non-main", // groups/channels are non-main -> sandboxed
scope: "session", // strongest isolation (one container per group/channel)
workspaceAccess: "none",
},
},
},
tools: {
sandbox: {
tools: {
// If allow is non-empty, everything else is blocked (deny still wins).
allow: ["group:messaging", "group:sessions"],
deny: ["group:runtime", "group:fs", "group:ui", "nodes", "cron", "gateway"],
},
},
},
}
```
Want "groups can only see folder X" instead of "no host access"? Keep `workspaceAccess: "none"` and mount only allowlisted paths into the sandbox:
```json5
{
agents: {
defaults: {
sandbox: {
mode: "non-main",
scope: "session",
workspaceAccess: "none",
docker: {
binds: [
// hostPath:containerPath:mode
"~/FriendsShared:/data:ro",
],
},
},
},
},
}
```
Related:
- Configuration keys and defaults: [Gateway configuration](/gateway/configuration#agentsdefaultssandbox)
- Debugging why a tool is blocked: [Sandbox vs Tool Policy vs Elevated](/gateway/sandbox-vs-tool-policy-vs-elevated)
- Bind mounts details: [Sandboxing](/gateway/sandboxing#custom-bind-mounts)
## Display labels
- UI labels use `displayName` when available, formatted as `<channel>:<token>`.
- `#room` is reserved for rooms/channels; group chats use `g-<slug>` (lowercase, spaces -> `-`, keep `#@+._-`).
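
The `g-<slug>` rule can be approximated as follows (a sketch; the exact normalization may differ):

```python
import re

def group_label(display_name: str) -> str:
    slug = display_name.lower().replace(" ", "-")
    slug = re.sub(r"[^a-z0-9#@+._-]", "", slug)  # keep alphanumerics plus #@+._-
    return f"g-{slug}"

group_label("Family Chat")  # "g-family-chat"
```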
## Group policy
Control how group/room messages are handled per channel:
```json5
{
channels: {
whatsapp: {
groupPolicy: "disabled", // "open" | "disabled" | "allowlist"
groupAllowFrom: ["+15551234567"],
},
telegram: {
groupPolicy: "disabled",
groupAllowFrom: ["123456789", "@username"],
},
signal: {
groupPolicy: "disabled",
groupAllowFrom: ["+15551234567"],
},
imessage: {
groupPolicy: "disabled",
groupAllowFrom: ["chat_id:123"],
},
msteams: {
groupPolicy: "disabled",
groupAllowFrom: ["user@org.com"],
},
discord: {
groupPolicy: "allowlist",
guilds: {
GUILD_ID: { channels: { help: { allow: true } } },
},
},
slack: {
groupPolicy: "allowlist",
channels: { "#general": { allow: true } },
},
matrix: {
groupPolicy: "allowlist",
groupAllowFrom: ["@owner:example.org"],
groups: {
"!roomId:example.org": { allow: true },
"#alias:example.org": { allow: true },
},
},
},
}
```
| Policy | Behavior |
| ------------- | ------------------------------------------------------------ |
| `"open"` | Groups bypass allowlists; mention-gating still applies. |
| `"disabled"` | Block all group messages entirely. |
| `"allowlist"` | Only allow groups/rooms that match the configured allowlist. |
Notes:
- `groupPolicy` is separate from mention-gating (which requires @mentions).
- WhatsApp/Telegram/Signal/iMessage/Microsoft Teams: use `groupAllowFrom` (fallback: explicit `allowFrom`).
- Discord: allowlist uses `channels.discord.guilds.<id>.channels`.
- Slack: allowlist uses `channels.slack.channels`.
- Matrix: allowlist uses `channels.matrix.groups` (room IDs, aliases, or names). Use `channels.matrix.groupAllowFrom` to restrict senders; per-room `users` allowlists are also supported.
- Group DMs are controlled separately (`channels.discord.dm.*`, `channels.slack.dm.*`).
- Telegram allowlist can match user IDs (`"123456789"`, `"telegram:123456789"`, `"tg:123456789"`) or usernames (`"@alice"` or `"alice"`); prefixes are case-insensitive.
- Default is `groupPolicy: "allowlist"`; if your group allowlist is empty, group messages are blocked.
Quick mental model (evaluation order for group messages):
1. `groupPolicy` (open/disabled/allowlist)
2. group allowlists (`*.groups`, `*.groupAllowFrom`, channel-specific allowlist)
3. mention gating (`requireMention`, `/activation`)
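The evaluation order above can be sketched in code; the semantics here are an illustration of the documented order, not the actual implementation:

```python
def allow_group_message(policy, group_id, group_allowlist, sender,
                        group_allow_from, was_mentioned, require_mention):
    """Illustrative sketch of the group-message evaluation order."""
    # 1. groupPolicy
    if policy == "disabled":
        return False
    if policy == "allowlist":
        # 2. group allowlists (group ids and/or allowed senders)
        if group_allowlist and group_id not in group_allowlist:
            return False
        if group_allow_from and sender not in group_allow_from:
            return False
    # "open" bypasses allowlists; mention gating still applies
    # 3. mention gating
    if require_mention and not was_mentioned:
        return False
    return True
```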
## Mention gating (default)
Group messages require a mention unless overridden per group. Defaults live per subsystem under `*.groups."*"`.
Replying to a bot message counts as an implicit mention (when the channel supports reply metadata). This applies to Telegram, WhatsApp, Slack, Discord, and Microsoft Teams.
```json5
{
channels: {
whatsapp: {
groups: {
"*": { requireMention: true },
"123@g.us": { requireMention: false },
},
},
telegram: {
groups: {
"*": { requireMention: true },
"123456789": { requireMention: false },
},
},
imessage: {
groups: {
"*": { requireMention: true },
"123": { requireMention: false },
},
},
},
agents: {
list: [
{
id: "main",
groupChat: {
mentionPatterns: ["@openclaw", "openclaw", "\\+15555550123"],
historyLimit: 50,
},
},
],
},
}
```
Notes:
- `mentionPatterns` are case-insensitive regexes.
- Surfaces that provide explicit mentions still pass; patterns are a fallback.
- Per-agent override: `agents.list[].groupChat.mentionPatterns` (useful when multiple agents share a group).
- Mention gating is only enforced when mention detection is possible (native mentions or `mentionPatterns` are configured).
- Discord defaults live in `channels.discord.guilds."*"` (overridable per guild/channel).
- Group history context is wrapped uniformly across channels and is **pending-only** (messages skipped due to mention gating); use `messages.groupChat.historyLimit` for the global default and `channels.<channel>.historyLimit` (or `channels.<channel>.accounts.*.historyLimit`) for overrides. Set `0` to disable.
## Group/channel tool restrictions (optional)
Some channel configs support restricting which tools are available **inside a specific group/room/channel**.
- `tools`: allow/deny tools for the whole group.
- `toolsBySender`: per-sender overrides within the group (keys are sender IDs/usernames/emails/phone numbers depending on the channel). Use `"*"` as a wildcard.
Resolution order (most specific wins):
1. group/channel `toolsBySender` match
2. group/channel `tools`
3. default (`"*"`) `toolsBySender` match
4. default (`"*"`) `tools`
Example (Telegram):
```json5
{
channels: {
telegram: {
groups: {
"*": { tools: { deny: ["exec"] } },
"-1001234567890": {
tools: { deny: ["exec", "read", "write"] },
toolsBySender: {
"123456789": { alsoAllow: ["exec"] },
},
},
},
},
},
}
```
Notes:
- Group/channel tool restrictions are applied in addition to global/agent tool policy (deny still wins).
- Some channels use different nesting for rooms/channels (e.g., Discord `guilds.*.channels.*`, Slack `channels.*`, MS Teams `teams.*.channels.*`).
## Group allowlists
When `channels.whatsapp.groups`, `channels.telegram.groups`, or `channels.imessage.groups` is configured, the keys act as a group allowlist. Use `"*"` to allow all groups while still setting default mention behavior.
Common intents (copy/paste):
### 1. Disable all group replies
```json5
{
channels: { whatsapp: { groupPolicy: "disabled" } },
}
```
### 2. Allow only specific groups (WhatsApp)
```json5
{
channels: {
whatsapp: {
groups: {
"123@g.us": { requireMention: true },
"456@g.us": { requireMention: false },
},
},
},
}
```
### 3. Allow all groups but require mention (explicit)
```json5
{
channels: {
whatsapp: {
groups: { "*": { requireMention: true } },
},
},
}
```
### 4. Only the owner can trigger in groups (WhatsApp)
```json5
{
channels: {
whatsapp: {
groupPolicy: "allowlist",
groupAllowFrom: ["+15551234567"],
groups: { "*": { requireMention: true } },
},
},
}
```
## Activation (owner-only)
Group owners can toggle per-group activation:
- `/activation mention`
- `/activation always`
Owner is determined by `channels.whatsapp.allowFrom` (or the bot's self E.164 when unset). Send the command as a standalone message. Other surfaces currently ignore `/activation`.
## Context fields
Group inbound payloads set:
- `ChatType=group`
- `GroupSubject` (if known)
- `GroupMembers` (if known)
- `WasMentioned` (mention gating result)
- Telegram forum topics also include `MessageThreadId` and `IsForum`.
The agent system prompt includes a group intro on the first turn of a new group session. It reminds the model to respond like a human, avoid Markdown tables, and avoid typing literal `\n` sequences.
## iMessage specifics
- Prefer `chat_id:<id>` when routing or allowlisting.
- List chats: `imsg chats --limit 20`.
- Group replies always go back to the same `chat_id`.
## WhatsApp specifics
See [Group messages](/concepts/group-messages) for WhatsApp-only behavior (history injection, mention handling details).
# Markdown Formatting
OpenClaw processes Markdown through an intermediate representation (IR) system that maintains consistent formatting across multiple chat platforms including Slack, Telegram, and Signal.
## Core Architecture
The system operates in three stages:
1. Parsing Markdown into an IR format
2. Chunking the IR text before rendering
3. Converting to channel-specific output
The IR preserves plain text plus style spans (bold/italic/strike/code/spoiler) and link spans, using UTF-16 code units for offset compatibility.
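UTF-16 code units matter because astral code points (emoji and others beyond the BMP) occupy two units, so span offsets diverge from code-point counts. A quick illustration in Python:

```python
def utf16_units(s: str) -> int:
    """Length of s in UTF-16 code units (what the IR offsets count)."""
    return len(s.encode("utf-16-le")) // 2

# A thumbs-up emoji is one code point but two UTF-16 units, so a style
# span that starts right after it begins at offset 2, not 1.
```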
## Key Design Principles
The approach aims to achieve:
- **Consistency**: Single parsing step with multiple renderers
- **Safe chunking**: Avoid splitting inline formatting
- **Adaptability**: Same IR works across different platform requirements without re-parsing
## Channel-Specific Rendering
Each platform receives tailored output:
| Platform | Formatting |
|----------|------------|
| **Slack** | Uses mrkdwn formatting with `<url\|label>` link syntax |
| **Telegram** | Applies HTML tags for styling and links |
| **Signal** | Employs plain text with style ranges; links display as "label (url)" |
## Table Handling
Tables support three modes:
- Code blocks (default)
- Bullet-point conversion
- Disabled parsing
Configuration allows per-channel and per-account customization.
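As an illustration of the bullet-point conversion mode (a minimal sketch, not the actual renderer), each data row can become a bullet of `Header: value` pairs:

```python
def table_to_bullets(markdown_table: str) -> str:
    """Illustrative bullet-point conversion for a simple Markdown table."""
    lines = [l.strip() for l in markdown_table.strip().splitlines()]
    split = lambda row: [c.strip() for c in row.strip("|").split("|")]
    headers = split(lines[0])
    bullets = []
    for row in lines[2:]:  # skip the |---| separator line
        cells = split(row)
        pairs = ", ".join(f"{h}: {c}" for h, c in zip(headers, cells))
        bullets.append(f"- {pairs}")
    return "\n".join(bullets)
```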
## Implementation Guidance
Adding formatters requires:
1. Parsing with appropriate options
2. Implementing channel-specific renderers
3. Calling the chunking function before rendering
4. Updating the adapter
5. Adding test coverage for both formatting and delivery
# Memory
OpenClaw's memory system uses plain Markdown in the agent workspace as the foundational approach. Files serve as the authoritative source rather than RAM-based storage.
## Memory File Structure
The system organizes information across two layers:
### Daily logs (`memory/YYYY-MM-DD.md`)
Append-only daily entries, with today's and yesterday's files loaded at session start.
### Long-term memory (`MEMORY.md`)
Curated persistent information, loaded only in private sessions.
## Writing to Memory
Recommended storage patterns:
- **Decisions, preferences, and durable facts** go to `MEMORY.md`
- **Ephemeral notes and contextual information** in daily logs
- **Explicit requests to remember something** should be written immediately
## Automatic Memory Management
When sessions approach token limits, OpenClaw triggers a silent agentic turn prompting memory consolidation before context compaction occurs.
This flush mechanism can be configured via `agents.defaults.compaction.memoryFlush` settings:
```json5
{
agents: {
defaults: {
compaction: {
memoryFlush: {
enabled: true,
softThresholdTokens: 4000,
prompt: "...",
systemPrompt: "..."
}
}
}
}
}
```
## Search Capabilities
The system supports vector-based semantic search across memory files, with configurable backends including:
| Backend | Description |
|---------|-------------|
| Built-in SQLite | Optional vector acceleration |
| QMD sidecar | Local-first search combining BM25 + vectors + reranking |
| Hybrid search | Merges both keyword and semantic signals |
### Tools
- `memory_search` - Semantic queries across memory files
- `memory_get` - Direct file retrieval
# Messages
This page ties together how OpenClaw handles inbound messages, sessions, queueing, streaming, and reasoning visibility.
## Message flow (high level)
```
Inbound message
-> routing/bindings -> session key
-> queue (if a run is active)
-> agent run (streaming + tools)
-> outbound replies (channel limits + chunking)
```
Key knobs live in configuration:
- `messages.*` for prefixes, queueing, and group behavior.
- `agents.defaults.*` for block streaming and chunking defaults.
- Channel overrides (`channels.whatsapp.*`, `channels.telegram.*`, etc.) for caps and streaming toggles.
See [Configuration](/gateway/configuration) for full schema.
## Inbound dedupe
Channels can redeliver the same message after reconnects. OpenClaw keeps a short-lived cache keyed by channel/account/peer/session/message id so duplicate deliveries do not trigger another agent run.
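A minimal sketch of such a dedupe cache; the key shape follows the description above, while the TTL value and eviction strategy are assumptions:

```python
import time

class DedupeCache:
    """Short-lived inbound dedupe: drop redeliveries of the same
    (channel, account, peer, session, message id) key within a TTL window."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.seen = {}  # key -> expiry timestamp

    def is_duplicate(self, channel, account, peer, session, message_id) -> bool:
        now = time.monotonic()
        # evict expired entries
        self.seen = {k: exp for k, exp in self.seen.items() if exp > now}
        key = (channel, account, peer, session, message_id)
        if key in self.seen:
            return True
        self.seen[key] = now + self.ttl
        return False
```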
## Inbound debouncing
Rapid consecutive messages from the **same sender** can be batched into a single agent turn via `messages.inbound`. Debouncing is scoped per channel + conversation and uses the most recent message for reply threading/IDs.
Config (global default + per-channel overrides):
```json5
{
messages: {
inbound: {
debounceMs: 2000,
byChannel: {
whatsapp: 5000,
slack: 1500,
discord: 1500,
},
},
},
}
```
Notes:
- Debounce applies to **text-only** messages; media/attachments flush immediately.
- Control commands bypass debouncing so they remain standalone.
## Sessions and devices
Sessions are owned by the gateway, not by clients.
- Direct chats collapse into the agent main session key.
- Groups/channels get their own session keys.
- The session store and transcripts live on the gateway host.
Multiple devices/channels can map to the same session, but history is not fully synced back to every client. Recommendation: use one primary device for long conversations to avoid divergent context. The Control UI and TUI always show the gateway-backed session transcript, so they are the source of truth.
Details: [Session management](/concepts/session).
## Inbound bodies and history context
OpenClaw separates the **prompt body** from the **command body**:
- `Body`: prompt text sent to the agent. This may include channel envelopes and optional history wrappers.
- `CommandBody`: raw user text for directive/command parsing.
- `RawBody`: legacy alias for `CommandBody` (kept for compatibility).
When a channel supplies history, it uses a shared wrapper:
- `[Chat messages since your last reply - for context]`
- `[Current message - respond to this]`
For **non-direct chats** (groups/channels/rooms), the **current message body** is prefixed with the sender label (same style used for history entries). This keeps real-time and queued/history messages consistent in the agent prompt.
History buffers are **pending-only**: they include group messages that did *not* trigger a run (for example, mention-gated messages) and **exclude** messages already in the session transcript.
Directive stripping only applies to the **current message** section so history remains intact. Channels that wrap history should set `CommandBody` (or `RawBody`) to the original message text and keep `Body` as the combined prompt. History buffers are configurable via `messages.groupChat.historyLimit` (global default) and per-channel overrides like `channels.slack.historyLimit` or `channels.telegram.accounts.<id>.historyLimit` (set `0` to disable).
## Queueing and followups
If a run is already active, inbound messages can be queued, steered into the current run, or collected for a followup turn.
- Configure via `messages.queue` (and `messages.queue.byChannel`).
- Modes: `interrupt`, `steer`, `followup`, `collect`, plus backlog variants.
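For example, a global default with per-channel overrides (the `mode` subkey name is an assumption; check the configuration schema for the exact shape):

```json5
{
  messages: {
    queue: {
      mode: "collect",
      byChannel: {
        whatsapp: "steer",
        slack: "followup",
      },
    },
  },
}
```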
Details: [Queueing](/concepts/queue).
## Streaming, chunking, and batching
Block streaming sends partial replies as the model produces text blocks. Chunking respects channel text limits and avoids splitting fenced code.
Key settings:
- `agents.defaults.blockStreamingDefault` (`on|off`, default off)
- `agents.defaults.blockStreamingBreak` (`text_end|message_end`)
- `agents.defaults.blockStreamingChunk` (`minChars|maxChars|breakPreference`)
- `agents.defaults.blockStreamingCoalesce` (idle-based batching)
- `agents.defaults.humanDelay` (human-like pause between block replies)
- Channel overrides: `*.blockStreaming` and `*.blockStreamingCoalesce` (non-Telegram channels require explicit `*.blockStreaming: true`)
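Putting a few of these settings together (values are illustrative, not recommendations):

```json5
{
  agents: {
    defaults: {
      blockStreamingDefault: "on",
      blockStreamingBreak: "text_end",
      blockStreamingChunk: { minChars: 200, maxChars: 1500 },
    },
  },
  channels: {
    // non-Telegram channels require the explicit opt-in
    slack: { blockStreaming: true },
  },
}
```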
Details: [Streaming + chunking](/concepts/streaming).
## Reasoning visibility and tokens
OpenClaw can expose or hide model reasoning:
- `/reasoning on|off|stream` controls visibility.
- Reasoning content still counts toward token usage when produced by the model.
- Telegram supports reasoning stream into the draft bubble.
Details: [Thinking + reasoning directives](/tools/thinking) and [Token use](/token-use).
## Prefixes, threading, and replies
Outbound message formatting is centralized in `messages`:
- `messages.responsePrefix`, `channels.<channel>.responsePrefix`, and `channels.<channel>.accounts.<id>.responsePrefix` (outbound prefix cascade), plus `channels.whatsapp.messagePrefix` (WhatsApp inbound prefix)
- Reply threading via `replyToMode` and per-channel defaults
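The prefix cascade in practice (most specific wins; values are illustrative):

```json5
{
  messages: { responsePrefix: "[bot] " }, // global default
  channels: {
    whatsapp: {
      responsePrefix: "[wa] ", // per-channel override
      accounts: {
        work: { responsePrefix: "" }, // per-account override wins
      },
    },
  },
}
```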
Details: [Configuration](/gateway/configuration#messages) and channel docs.
> ## Documentation Index
> Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt
> Use this file to discover all available pages before exploring further.
# Model failover
OpenClaw handles failures in two stages:
1. **Auth profile rotation** within the current provider.
2. **Model fallback** to the next model in `agents.defaults.model.fallbacks`.
This doc explains the runtime rules and the data that backs them.
## Auth storage (keys + OAuth)
OpenClaw uses **auth profiles** for both API keys and OAuth tokens.
* Secrets live in `~/.openclaw/agents/<agentId>/agent/auth-profiles.json` (legacy: `~/.openclaw/agent/auth-profiles.json`).
* Config `auth.profiles` / `auth.order` are **metadata + routing only** (no secrets).
* Legacy import-only OAuth file: `~/.openclaw/credentials/oauth.json` (imported into `auth-profiles.json` on first use).
More detail: [/concepts/oauth](/concepts/oauth)
Credential types:
* `type: "api_key"` → `{ provider, key }`
* `type: "oauth"` → `{ provider, access, refresh, expires, email? }` (+ `projectId`/`enterpriseUrl` for some providers)
## Profile IDs
OAuth logins create distinct profiles so multiple accounts can coexist.
* Default: `provider:default` when no email is available.
* OAuth with email: `provider:<email>` (for example `google-antigravity:user@gmail.com`).
Profiles live in `~/.openclaw/agents/<agentId>/agent/auth-profiles.json` under `profiles`.
## Rotation order
When a provider has multiple profiles, OpenClaw chooses an order like this:
1. **Explicit config**: `auth.order[provider]` (if set).
2. **Configured profiles**: `auth.profiles` filtered by provider.
3. **Stored profiles**: entries in `auth-profiles.json` for the provider.
If no explicit order is configured, OpenClaw uses a round-robin order:
* **Primary key:** profile type (**OAuth before API keys**).
* **Secondary key:** `usageStats.lastUsed` (oldest first, within each type).
* **Cooldown/disabled profiles** are moved to the end, ordered by soonest expiry.
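A sketch of that default ordering (assumed semantics, not the actual implementation):

```python
def rotation_order(profiles, now):
    """Order auth profiles: OAuth before API keys, oldest lastUsed first
    within each type, cooldown profiles last by soonest expiry."""
    def in_cooldown(p):
        return p.get("cooldownUntil", 0) > now
    healthy = [p for p in profiles if not in_cooldown(p)]
    cooling = [p for p in profiles if in_cooldown(p)]
    # primary key: type (oauth sorts first); secondary key: lastUsed
    healthy.sort(key=lambda p: (p["type"] != "oauth", p.get("lastUsed", 0)))
    # cooldown/disabled profiles move to the end, soonest expiry first
    cooling.sort(key=lambda p: p["cooldownUntil"])
    return healthy + cooling
```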
### Session stickiness (cache-friendly)
OpenClaw **pins the chosen auth profile per session** to keep provider caches warm.
It does **not** rotate on every request. The pinned profile is reused until:
* the session is reset (`/new` / `/reset`)
* a compaction completes (compaction count increments)
* the profile is in cooldown/disabled
Manual selection via `/model …@<profileId>` sets a **user override** for that session
and is not auto-rotated until a new session starts.
Auto-pinned profiles (selected by the session router) are treated as a **preference**:
they are tried first, but OpenClaw may rotate to another profile on rate limits/timeouts.
User-pinned profiles stay locked to that profile; if it fails and model fallbacks
are configured, OpenClaw moves to the next model instead of switching profiles.
### Why OAuth can “look lost”
If you have both an OAuth profile and an API key profile for the same provider, round-robin can switch between them across messages unless pinned. To force a single profile:
* Pin with `auth.order[provider] = ["provider:profileId"]`, or
* Use a per-session override via `/model …` with a profile override (when supported by your UI/chat surface).
## Cooldowns
When a profile fails due to auth/rate-limit errors (or a timeout that looks
like rate limiting), OpenClaw marks it in cooldown and moves to the next profile.
Format/invalid-request errors (for example Cloud Code Assist tool call ID
validation failures) are treated as failover-worthy and use the same cooldowns.
Cooldowns use exponential backoff:
* 1 minute
* 5 minutes
* 25 minutes
* 1 hour (cap)
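That schedule corresponds to multiplying by 5 per failure, capped at one hour; treating `error_count` as zero-based is an assumption:

```python
def cooldown_seconds(error_count: int) -> int:
    """Backoff schedule above: 1 min, 5 min, 25 min, then capped at 1 hour."""
    return min(60 * 5 ** error_count, 3600)
```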
State is stored in `auth-profiles.json` under `usageStats`:
```json
{
"usageStats": {
"provider:profile": {
"lastUsed": 1736160000000,
"cooldownUntil": 1736160600000,
"errorCount": 2
}
}
}
```
## Billing disables
Billing/credit failures (for example “insufficient credits” / “credit balance too low”) are treated as failover-worthy, but they're usually not transient. Instead of a short cooldown, OpenClaw marks the profile as **disabled** (with a longer backoff) and rotates to the next profile/provider.
State is stored in `auth-profiles.json`:
```json
{
"usageStats": {
"provider:profile": {
"disabledUntil": 1736178000000,
"disabledReason": "billing"
}
}
}
```
Defaults:
* Billing backoff starts at **5 hours**, doubles per billing failure, and caps at **24 hours**.
* Backoff counters reset if the profile hasn't failed for **24 hours** (configurable).
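Equivalently, with the defaults above (a zero-based failure count is assumed):

```python
def billing_disable_hours(failures: int, start=5, cap=24) -> int:
    """Billing backoff: starts at 5h, doubles per billing failure, caps at 24h."""
    return min(start * 2 ** failures, cap)
```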
## Model fallback
If all profiles for a provider fail, OpenClaw moves to the next model in
`agents.defaults.model.fallbacks`. This applies to auth failures, rate limits, and
timeouts that exhausted profile rotation (other errors do not advance fallback).
When a run starts with a model override (hooks or CLI), fallbacks still end at
`agents.defaults.model.primary` after trying any configured fallbacks.
## Related config
See [Gateway configuration](/gateway/configuration) for:
* `auth.profiles` / `auth.order`
* `auth.cooldowns.billingBackoffHours` / `auth.cooldowns.billingBackoffHoursByProvider`
* `auth.cooldowns.billingMaxHours` / `auth.cooldowns.failureWindowHours`
* `agents.defaults.model.primary` / `agents.defaults.model.fallbacks`
* `agents.defaults.imageModel` routing
See [Models](/concepts/models) for the broader model selection and fallback overview.
# Model providers
This page covers **LLM/model providers** (not chat channels like WhatsApp/Telegram).
For model selection rules, see [/concepts/models](/concepts/models).
## Quick rules
* Model refs use `provider/model` (example: `opencode/claude-opus-4-6`).
* If you set `agents.defaults.models`, it becomes the allowlist.
* CLI helpers: `openclaw onboard`, `openclaw models list`, `openclaw models set <provider/model>`.
## Built-in providers (pi-ai catalog)
OpenClaw ships with the pi-ai catalog. These providers require **no**
`models.providers` config; just set auth + pick a model.
### OpenAI
* Provider: `openai`
* Auth: `OPENAI_API_KEY`
* Example model: `openai/gpt-5.1-codex`
* CLI: `openclaw onboard --auth-choice openai-api-key`
```json5
{
agents: { defaults: { model: { primary: "openai/gpt-5.1-codex" } } },
}
```
### Anthropic
* Provider: `anthropic`
* Auth: `ANTHROPIC_API_KEY` or `claude setup-token`
* Example model: `anthropic/claude-opus-4-6`
* CLI: `openclaw onboard --auth-choice token` (paste setup-token) or `openclaw models auth paste-token --provider anthropic`
```json5
{
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```
### OpenAI Code (Codex)
* Provider: `openai-codex`
* Auth: OAuth (ChatGPT)
* Example model: `openai-codex/gpt-5.3-codex`
* CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
```json5
{
agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
}
```
### OpenCode Zen
* Provider: `opencode`
* Auth: `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`)
* Example model: `opencode/claude-opus-4-6`
* CLI: `openclaw onboard --auth-choice opencode-zen`
```json5
{
agents: { defaults: { model: { primary: "opencode/claude-opus-4-6" } } },
}
```
### Google Gemini (API key)
* Provider: `google`
* Auth: `GEMINI_API_KEY`
* Example model: `google/gemini-3-pro-preview`
* CLI: `openclaw onboard --auth-choice gemini-api-key`
### Google Vertex, Antigravity, and Gemini CLI
* Providers: `google-vertex`, `google-antigravity`, `google-gemini-cli`
* Auth: Vertex uses gcloud ADC; Antigravity/Gemini CLI use their respective auth flows
* Antigravity OAuth is shipped as a bundled plugin (`google-antigravity-auth`, disabled by default).
* Enable: `openclaw plugins enable google-antigravity-auth`
* Login: `openclaw models auth login --provider google-antigravity --set-default`
* Gemini CLI OAuth is shipped as a bundled plugin (`google-gemini-cli-auth`, disabled by default).
* Enable: `openclaw plugins enable google-gemini-cli-auth`
* Login: `openclaw models auth login --provider google-gemini-cli --set-default`
* Note: you do **not** paste a client id or secret into `openclaw.json`. The CLI login flow stores
tokens in auth profiles on the gateway host.
### Z.AI (GLM)
* Provider: `zai`
* Auth: `ZAI_API_KEY`
* Example model: `zai/glm-4.7`
* CLI: `openclaw onboard --auth-choice zai-api-key`
* Aliases: `z.ai/*` and `z-ai/*` normalize to `zai/*`
### Vercel AI Gateway
* Provider: `vercel-ai-gateway`
* Auth: `AI_GATEWAY_API_KEY`
* Example model: `vercel-ai-gateway/anthropic/claude-opus-4.6`
* CLI: `openclaw onboard --auth-choice ai-gateway-api-key`
### Other built-in providers
* OpenRouter: `openrouter` (`OPENROUTER_API_KEY`)
* Example model: `openrouter/anthropic/claude-sonnet-4-5`
* xAI: `xai` (`XAI_API_KEY`)
* Groq: `groq` (`GROQ_API_KEY`)
* Cerebras: `cerebras` (`CEREBRAS_API_KEY`)
* GLM models on Cerebras use ids `zai-glm-4.7` and `zai-glm-4.6`.
* OpenAI-compatible base URL: `https://api.cerebras.ai/v1`.
* Mistral: `mistral` (`MISTRAL_API_KEY`)
* GitHub Copilot: `github-copilot` (`COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN`)
## Providers via `models.providers` (custom/base URL)
Use `models.providers` (or `models.json`) to add **custom** providers or
OpenAI/Anthropic-compatible proxies.
### Moonshot AI (Kimi)
Moonshot uses OpenAI-compatible endpoints, so configure it as a custom provider:
* Provider: `moonshot`
* Auth: `MOONSHOT_API_KEY`
* Example model: `moonshot/kimi-k2.5`
Kimi K2 model IDs:
* `moonshot/kimi-k2.5`
* `moonshot/kimi-k2-0905-preview`
* `moonshot/kimi-k2-turbo-preview`
* `moonshot/kimi-k2-thinking`
* `moonshot/kimi-k2-thinking-turbo`
```json5
{
agents: {
defaults: { model: { primary: "moonshot/kimi-k2.5" } },
},
models: {
mode: "merge",
providers: {
moonshot: {
baseUrl: "https://api.moonshot.ai/v1",
apiKey: "${MOONSHOT_API_KEY}",
api: "openai-completions",
models: [{ id: "kimi-k2.5", name: "Kimi K2.5" }],
},
},
},
}
```
### Kimi Coding
Kimi Coding uses Moonshot AI's Anthropic-compatible endpoint:
* Provider: `kimi-coding`
* Auth: `KIMI_API_KEY`
* Example model: `kimi-coding/k2p5`
```json5
{
env: { KIMI_API_KEY: "sk-..." },
agents: {
defaults: { model: { primary: "kimi-coding/k2p5" } },
},
}
```
### Qwen OAuth (free tier)
Qwen provides OAuth access to Qwen Coder + Vision via a device-code flow.
Enable the bundled plugin, then log in:
```bash
openclaw plugins enable qwen-portal-auth
openclaw models auth login --provider qwen-portal --set-default
```
Model refs:
* `qwen-portal/coder-model`
* `qwen-portal/vision-model`
See [/providers/qwen](/providers/qwen) for setup details and notes.
### Synthetic
Synthetic provides Anthropic-compatible models behind the `synthetic` provider:
* Provider: `synthetic`
* Auth: `SYNTHETIC_API_KEY`
* Example model: `synthetic/hf:MiniMaxAI/MiniMax-M2.1`
* CLI: `openclaw onboard --auth-choice synthetic-api-key`
```json5
{
agents: {
defaults: { model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.1" } },
},
models: {
mode: "merge",
providers: {
synthetic: {
baseUrl: "https://api.synthetic.new/anthropic",
apiKey: "${SYNTHETIC_API_KEY}",
api: "anthropic-messages",
models: [{ id: "hf:MiniMaxAI/MiniMax-M2.1", name: "MiniMax M2.1" }],
},
},
},
}
```
### MiniMax
MiniMax is configured via `models.providers` because it uses custom endpoints:
* MiniMax (Anthropic-compatible): `--auth-choice minimax-api`
* Auth: `MINIMAX_API_KEY`
See [/providers/minimax](/providers/minimax) for setup details, model options, and config snippets.
### Ollama
Ollama is a local LLM runtime that provides an OpenAI-compatible API:
* Provider: `ollama`
* Auth: None required (local server)
* Example model: `ollama/llama3.3`
* Installation: [https://ollama.ai](https://ollama.ai)
```bash
# Install Ollama, then pull a model:
ollama pull llama3.3
```
```json5
{
agents: {
defaults: { model: { primary: "ollama/llama3.3" } },
},
}
```
Ollama is automatically detected when running locally at `http://127.0.0.1:11434/v1`. See [/providers/ollama](/providers/ollama) for model recommendations and custom configuration.
### Local proxies (LM Studio, vLLM, LiteLLM, etc.)
Example (OpenAI-compatible):
```json5
{
agents: {
defaults: {
model: { primary: "lmstudio/minimax-m2.1-gs32" },
models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } },
},
},
models: {
providers: {
lmstudio: {
baseUrl: "http://localhost:1234/v1",
apiKey: "LMSTUDIO_KEY",
api: "openai-completions",
models: [
{
id: "minimax-m2.1-gs32",
name: "MiniMax M2.1",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 200000,
maxTokens: 8192,
},
],
},
},
},
}
```
Notes:
* For custom providers, `reasoning`, `input`, `cost`, `contextWindow`, and `maxTokens` are optional.
When omitted, OpenClaw defaults to:
* `reasoning: false`
* `input: ["text"]`
* `cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 }`
* `contextWindow: 200000`
* `maxTokens: 8192`
* Recommended: set explicit values that match your proxy/model limits.
## CLI examples
```bash
openclaw onboard --auth-choice opencode-zen
openclaw models set opencode/claude-opus-4-6
openclaw models list
```
See also: [/gateway/configuration](/gateway/configuration) for full configuration examples.
# Models CLI
See [/concepts/model-failover](/concepts/model-failover) for auth profile
rotation, cooldowns, and how that interacts with fallbacks.
Quick provider overview + examples: [/concepts/model-providers](/concepts/model-providers).
## How model selection works
OpenClaw selects models in this order:
1. **Primary** model (`agents.defaults.model.primary` or `agents.defaults.model`).
2. **Fallbacks** in `agents.defaults.model.fallbacks` (in order).
3. **Provider auth failover** happens inside a provider before moving to the
next model.
Related:
* `agents.defaults.models` is the allowlist/catalog of models OpenClaw can use (plus aliases).
* `agents.defaults.imageModel` is used **only when** the primary model can't accept images.
* Per-agent defaults can override `agents.defaults.model` via `agents.list[].model` plus bindings (see [/concepts/multi-agent](/concepts/multi-agent)).
## Quick model picks (anecdotal)
* **GLM**: a bit better for coding/tool calling.
* **MiniMax**: better for writing and vibes.
## Setup wizard (recommended)
If you don't want to hand-edit config, run the onboarding wizard:
```bash
openclaw onboard
```
It can set up model + auth for common providers, including **OpenAI Code (Codex)
subscription** (OAuth) and **Anthropic** (API key recommended; `claude
setup-token` also supported).
## Config keys (overview)
* `agents.defaults.model.primary` and `agents.defaults.model.fallbacks`
* `agents.defaults.imageModel.primary` and `agents.defaults.imageModel.fallbacks`
* `agents.defaults.models` (allowlist + aliases + provider params)
* `models.providers` (custom providers written into `models.json`)
Model refs are normalized to lowercase. Provider aliases like `z.ai/*` normalize
to `zai/*`.
Provider configuration examples (including OpenCode Zen) live in
[/gateway/configuration](/gateway/configuration#opencode-zen-multi-model-proxy).
## “Model is not allowed” (and why replies stop)
If `agents.defaults.models` is set, it becomes the **allowlist** for `/model` and for
session overrides. When a user selects a model that isnt in that allowlist,
OpenClaw returns:
```
Model "provider/model" is not allowed. Use /model to list available models.
```
This happens **before** a normal reply is generated, so the message can feel
like it “didn't respond.” The fix is to either:
* Add the model to `agents.defaults.models`, or
* Clear the allowlist (remove `agents.defaults.models`), or
* Pick a model from `/model list`.
Example allowlist config:
```json5
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-sonnet-4-5" },
      models: {
        "anthropic/claude-sonnet-4-5": { alias: "Sonnet" },
        "anthropic/claude-opus-4-6": { alias: "Opus" },
      },
    },
  },
}
```
## Switching models in chat (`/model`)
You can switch models for the current session without restarting:
```
/model
/model list
/model 3
/model openai/gpt-5.2
/model status
```
Notes:
* `/model` (and `/model list`) is a compact, numbered picker (model family + available providers).
* `/model <#>` selects from that picker.
* `/model status` is the detailed view (auth candidates and, when configured, provider endpoint `baseUrl` + `api` mode).
* Model refs are parsed by splitting on the **first** `/`. Use `provider/model` when typing `/model <ref>`.
* If the model ID itself contains `/` (OpenRouter-style), you must include the provider prefix (example: `/model openrouter/moonshotai/kimi-k2`).
* If you omit the provider, OpenClaw treats the input as an alias or a model for the **default provider** (only works when there is no `/` in the model ID).
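The parsing rule above can be sketched as:

```python
def parse_model_ref(ref: str, default_provider: str):
    """Split on the FIRST '/'; refs are normalized to lowercase.
    Without a '/', the input is an alias/model on the default provider."""
    ref = ref.lower()
    if "/" not in ref:
        return default_provider, ref
    provider, model = ref.split("/", 1)
    return provider, model
```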
Full command behavior/config: [Slash commands](/tools/slash-commands).
## CLI commands
```bash theme={null}
openclaw models list
openclaw models status
openclaw models set <provider/model>
openclaw models set-image <provider/model>
openclaw models aliases list
openclaw models aliases add <alias> <provider/model>
openclaw models aliases remove <alias>
openclaw models fallbacks list
openclaw models fallbacks add <provider/model>
openclaw models fallbacks remove <provider/model>
openclaw models fallbacks clear
openclaw models image-fallbacks list
openclaw models image-fallbacks add <provider/model>
openclaw models image-fallbacks remove <provider/model>
openclaw models image-fallbacks clear
```
`openclaw models` (no subcommand) is a shortcut for `models status`.
### `models list`
Shows configured models by default. Useful flags:
* `--all`: full catalog
* `--local`: local providers only
* `--provider <name>`: filter by provider
* `--plain`: one model per line
* `--json`: machine-readable output
### `models status`
Shows the resolved primary model, fallbacks, image model, and an auth overview
of configured providers. It also surfaces OAuth expiry status for profiles found
in the auth store (warns within 24h by default). `--plain` prints only the
resolved primary model.
OAuth status is always shown (and included in `--json` output). If a configured
provider has no credentials, `models status` prints a **Missing auth** section.
JSON includes `auth.oauth` (warn window + profiles) and `auth.providers`
(effective auth per provider).
Use `--check` for automation (exit `1` when missing/expired, `2` when expiring).
Preferred Anthropic auth is the Claude Code CLI setup-token (run anywhere; paste on the gateway host if needed):
```bash theme={null}
claude setup-token
openclaw models status
```
## Scanning (OpenRouter free models)
`openclaw models scan` inspects OpenRouter's **free model catalog** and can
optionally probe models for tool and image support.
Key flags:
* `--no-probe`: skip live probes (metadata only)
* `--min-params <b>`: minimum parameter size (billions)
* `--max-age-days <days>`: skip older models
* `--provider <name>`: provider prefix filter
* `--max-candidates <n>`: fallback list size
* `--set-default`: set `agents.defaults.model.primary` to the first selection
* `--set-image`: set `agents.defaults.imageModel.primary` to the first image selection
Probing requires an OpenRouter API key (from auth profiles or
`OPENROUTER_API_KEY`). Without a key, use `--no-probe` to list candidates only.
Scan results are ranked by:
1. Image support
2. Tool latency
3. Context size
4. Parameter count
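The ranking above maps naturally onto a comparator. This is a hedged sketch with hypothetical field names (not OpenClaw's real candidate type):

```typescript
// Rank scan candidates: image support first, then lower tool latency,
// then larger context, then larger parameter count.
interface ScanCandidate {
  id: string;
  supportsImages: boolean;
  toolLatencyMs: number; // lower is better
  contextTokens: number; // higher is better
  paramsB: number;       // parameter count in billions, higher is better
}

function rankCandidates(cands: ScanCandidate[]): ScanCandidate[] {
  return [...cands].sort((a, b) =>
    Number(b.supportsImages) - Number(a.supportsImages) ||
    a.toolLatencyMs - b.toolLatencyMs ||
    b.contextTokens - a.contextTokens ||
    b.paramsB - a.paramsB
  );
}
```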
Inputs:
* OpenRouter `/models` list (filter `:free`)
* Requires OpenRouter API key from auth profiles or `OPENROUTER_API_KEY` (see [/environment](/help/environment))
* Optional filters: `--max-age-days`, `--min-params`, `--provider`, `--max-candidates`
* Probe controls: `--timeout`, `--concurrency`
When run in a TTY, you can select fallbacks interactively. In non-interactive
mode, pass `--yes` to accept defaults.
## Models registry (`models.json`)
Custom providers in `models.providers` are written into `models.json` under the
agent directory (default `~/.openclaw/agents/<agentId>/models.json`). This file
is merged by default unless `models.mode` is set to `replace`.

> ## Documentation Index
> Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt
> Use this file to discover all available pages before exploring further.
# OAuth
OpenClaw supports “subscription auth” via OAuth for providers that offer it (notably **OpenAI Codex (ChatGPT OAuth)**). For Anthropic subscriptions, use the **setup-token** flow. This page explains:
* how the OAuth **token exchange** works (PKCE)
* where tokens are **stored** (and why)
* how to handle **multiple accounts** (profiles + per-session overrides)
OpenClaw also supports **provider plugins** that ship their own OAuth or API-key
flows. Run them via:
```bash theme={null}
openclaw models auth login --provider <id>
```
## The token sink (why it exists)
OAuth providers commonly mint a **new refresh token** during login/refresh flows. Some providers (or OAuth clients) can invalidate older refresh tokens when a new one is issued for the same user/app.
Practical symptom:
* you log in via OpenClaw *and* via Claude Code / Codex CLI → one of them randomly gets “logged out” later
To reduce that, OpenClaw treats `auth-profiles.json` as a **token sink**:
* the runtime reads credentials from **one place**
* we can keep multiple profiles and route them deterministically
## Storage (where tokens live)
Secrets are stored **per-agent**:
* Auth profiles (OAuth + API keys): `~/.openclaw/agents/<agentId>/agent/auth-profiles.json`
* Runtime cache (managed automatically; don't edit): `~/.openclaw/agents/<agentId>/agent/auth.json`
Legacy import-only file (still supported, but not the main store):
* `~/.openclaw/credentials/oauth.json` (imported into `auth-profiles.json` on first use)
All of the above also respect `$OPENCLAW_STATE_DIR` (state dir override). Full reference: [/gateway/configuration](/gateway/configuration#auth-storage-oauth--api-keys)
## Anthropic setup-token (subscription auth)
Run `claude setup-token` on any machine, then paste it into OpenClaw:
```bash theme={null}
openclaw models auth setup-token --provider anthropic
```
If you generated the token elsewhere, paste it manually:
```bash theme={null}
openclaw models auth paste-token --provider anthropic
```
Verify:
```bash theme={null}
openclaw models status
```
## OAuth exchange (how login works)
OpenClaw's interactive login flows are implemented in `@mariozechner/pi-ai` and wired into the wizards/commands.
### Anthropic (Claude Pro/Max) setup-token
Flow shape:
1. run `claude setup-token`
2. paste the token into OpenClaw
3. store as a token auth profile (no refresh)
The wizard path is `openclaw onboard` → auth choice `setup-token` (Anthropic).
### OpenAI Codex (ChatGPT OAuth)
Flow shape (PKCE):
1. generate PKCE verifier/challenge + random `state`
2. open `https://auth.openai.com/oauth/authorize?...`
3. try to capture callback on `http://127.0.0.1:1455/auth/callback`
4. if callback can't bind (or you're remote/headless), paste the redirect URL/code
5. exchange at `https://auth.openai.com/oauth/token`
6. extract `accountId` from the access token and store `{ access, refresh, expires, accountId }`
Wizard path is `openclaw onboard` → auth choice `openai-codex`.
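Step 1 of the PKCE flow can be sketched like this (assumed helper names; the real flow lives in `@mariozechner/pi-ai`). The only fixed relationship is that the challenge is the base64url-encoded SHA-256 of the verifier (RFC 7636 `S256`):

```typescript
import { createHash, randomBytes } from "node:crypto";

// base64url: standard base64 with +/ swapped for -_ and padding stripped.
function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

function makePkcePair() {
  const verifier = base64url(randomBytes(32)); // high-entropy secret, kept locally
  const challenge = base64url(createHash("sha256").update(verifier).digest()); // sent in the authorize URL
  const state = base64url(randomBytes(16));    // CSRF protection for the callback
  return { verifier, challenge, state };
}
```

The authorize URL carries `code_challenge` + `state`; the token exchange in step 5 sends the original `verifier` so the server can check the hash.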
## Refresh + expiry
Profiles store an `expires` timestamp.
At runtime:
* if `expires` is in the future → use the stored access token
* if expired → refresh (under a file lock) and overwrite the stored credentials
The refresh flow is automatic; you generally don't need to manage tokens manually.
## Multiple accounts (profiles) + routing
Two patterns:
### 1) Preferred: separate agents
If you want “personal” and “work” to never interact, use isolated agents (separate sessions + credentials + workspace):
```bash theme={null}
openclaw agents add work
openclaw agents add personal
```
Then configure auth per-agent (wizard) and route chats to the right agent.
### 2) Advanced: multiple profiles in one agent
`auth-profiles.json` supports multiple profile IDs for the same provider.
Pick which profile is used:
* globally via config ordering (`auth.order`)
* per-session via `/model ...@<profileId>`
Example (session override):
* `/model Opus@anthropic:work`
How to see what profile IDs exist:
* `openclaw channels list --json` (shows `auth[]`)
Related docs:
* [/concepts/model-failover](/concepts/model-failover) (rotation + cooldown rules)
* [/tools/slash-commands](/tools/slash-commands) (command surface)

# Presence
OpenClaw “presence” is a lightweight, best-effort view of:
* the **Gateway** itself, and
* **clients connected to the Gateway** (mac app, WebChat, CLI, etc.)
Presence is used primarily to render the macOS app's **Instances** tab and to
provide quick operator visibility.
## Presence fields (what shows up)
Presence entries are structured objects with fields like:
* `instanceId` (optional but strongly recommended): stable client identity (usually `connect.client.instanceId`)
* `host`: human-friendly host name
* `ip`: best-effort IP address
* `version`: client version string
* `deviceFamily` / `modelIdentifier`: hardware hints
* `mode`: `ui`, `webchat`, `cli`, `backend`, `probe`, `test`, `node`, ...
* `lastInputSeconds`: “seconds since last user input” (if known)
* `reason`: `self`, `connect`, `node-connected`, `periodic`, ...
* `ts`: last update timestamp (ms since epoch)
## Producers (where presence comes from)
Presence entries are produced by multiple sources and **merged**.
### 1) Gateway self entry
The Gateway always seeds a “self” entry at startup so UIs show the gateway host
even before any clients connect.
### 2) WebSocket connect
Every WS client begins with a `connect` request. On successful handshake the
Gateway upserts a presence entry for that connection.
#### Why one-off CLI commands don't show up
The CLI often connects for short, one-off commands. To avoid spamming the
Instances list, `client.mode === "cli"` is **not** turned into a presence entry.
### 3) `system-event` beacons
Clients can send richer periodic beacons via the `system-event` method. The mac
app uses this to report host name, IP, and `lastInputSeconds`.
### 4) Node connects (role: node)
When a node connects over the Gateway WebSocket with `role: node`, the Gateway
upserts a presence entry for that node (same flow as other WS clients).
## Merge + dedupe rules (why `instanceId` matters)
Presence entries are stored in a single in-memory map:
* Entries are keyed by a **presence key**.
* The best key is a stable `instanceId` (from `connect.client.instanceId`) that survives restarts.
* Keys are case-insensitive.
If a client reconnects without a stable `instanceId`, it may show up as a
**duplicate** row.
## TTL and bounded size
Presence is intentionally ephemeral:
* **TTL:** entries older than 5 minutes are pruned
* **Max entries:** 200 (oldest dropped first)
This keeps the list fresh and avoids unbounded memory growth.
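The merge, TTL, and size rules can be sketched in a few lines (illustrative names and shapes only; the constants match the values stated above):

```typescript
interface PresenceEntry { instanceId?: string; host?: string; ts: number }

const TTL_MS = 5 * 60 * 1000; // entries older than 5 minutes are pruned
const MAX_ENTRIES = 200;      // oldest dropped first

function upsert(map: Map<string, PresenceEntry>, key: string, e: PresenceEntry, now: number) {
  const k = key.toLowerCase(); // keys are case-insensitive
  map.set(k, { ...map.get(k), ...e, ts: now });
  // TTL prune
  for (const [id, v] of map) if (now - v.ts > TTL_MS) map.delete(id);
  // Bounded size: evict the oldest entry until under the cap
  while (map.size > MAX_ENTRIES) {
    let oldest: string | undefined;
    let oldestTs = Infinity;
    for (const [id, v] of map) if (v.ts < oldestTs) { oldestTs = v.ts; oldest = id; }
    if (oldest === undefined) break;
    map.delete(oldest);
  }
}
```

A client that reconnects under the same `instanceId` (any casing) lands on the same row instead of duplicating.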
## Remote/tunnel caveat (loopback IPs)
When a client connects over an SSH tunnel / local port forward, the Gateway may
see the remote address as `127.0.0.1`. To avoid overwriting a good client-reported
IP, loopback remote addresses are ignored.
## Consumers
### macOS Instances tab
The macOS app renders the output of `system-presence` and applies a small status
indicator (Active/Idle/Stale) based on the age of the last update.
## Debugging tips
* To see the raw list, call `system-presence` against the Gateway.
* If you see duplicates:
* confirm clients send a stable `client.instanceId` in the handshake
* confirm periodic beacons use the same `instanceId`
* check whether the connection-derived entry is missing `instanceId` (duplicates are expected)

# Command Queue (2026-01-16)
We serialize inbound auto-reply runs (all channels) through a tiny in-process queue to prevent multiple agent runs from colliding, while still allowing safe parallelism across sessions.
## Why
* Auto-reply runs can be expensive (LLM calls) and can collide when multiple inbound messages arrive close together.
* Serializing avoids competing for shared resources (session files, logs, CLI stdin) and reduces the chance of upstream rate limits.
## How it works
* A lane-aware FIFO queue drains each lane with a configurable concurrency cap (default 1 for unconfigured lanes; main defaults to 4, subagent to 8).
* `runEmbeddedPiAgent` enqueues by **session key** (lane `session:<key>`) to guarantee only one active run per session.
* Each session run is then queued into a **global lane** (`main` by default) so overall parallelism is capped by `agents.defaults.maxConcurrent`.
* When verbose logging is enabled, queued runs emit a short notice if they waited more than \~2s before starting.
* Typing indicators still fire immediately on enqueue (when supported by the channel) so user experience is unchanged while we wait our turn.
## Queue modes (per channel)
Inbound messages can steer the current run, wait for a followup turn, or do both:
* `steer`: inject immediately into the current run (cancels pending tool calls after the next tool boundary). If not streaming, falls back to followup.
* `followup`: enqueue for the next agent turn after the current run ends.
* `collect`: coalesce all queued messages into a **single** followup turn (default). If messages target different channels/threads, they drain individually to preserve routing.
* `steer-backlog` (aka `steer+backlog`): steer now **and** preserve the message for a followup turn.
* `interrupt` (legacy): abort the active run for that session, then run the newest message.
* `queue` (legacy alias): same as `steer`.
Steer-backlog means you can get a followup response after the steered run, so on
streaming surfaces the two responses can look like duplicates. Prefer `collect`/`steer`
if you want one response per inbound message.
Send `/queue collect` as a standalone command (per-session) or set `messages.queue.byChannel.discord: "collect"`.
Defaults (when unset in config):
* All surfaces → `collect`
Configure globally or per channel via `messages.queue`:
```json5 theme={null}
{
messages: {
queue: {
mode: "collect",
debounceMs: 1000,
cap: 20,
drop: "summarize",
byChannel: { discord: "collect" },
},
},
}
```
## Queue options
Options apply to `followup`, `collect`, and `steer-backlog` (and to `steer` when it falls back to followup):
* `debounceMs`: wait for quiet before starting a followup turn (prevents “continue, continue”).
* `cap`: max queued messages per session.
* `drop`: overflow policy (`old`, `new`, `summarize`).
Summarize keeps a short bullet list of dropped messages and injects it as a synthetic followup prompt.
Defaults: `debounceMs: 1000`, `cap: 20`, `drop: summarize`.
## Per-session overrides
* Send `/queue <mode>` as a standalone command to store the mode for the current session.
* Options can be combined: `/queue collect debounce:2s cap:25 drop:summarize`
* `/queue default` or `/queue reset` clears the session override.
## Scope and guarantees
* Applies to auto-reply agent runs across all inbound channels that use the gateway reply pipeline (WhatsApp web, Telegram, Slack, Discord, Signal, iMessage, webchat, etc.).
* Default lane (`main`) is process-wide for inbound + main heartbeats; set `agents.defaults.maxConcurrent` to allow multiple sessions in parallel.
* Additional lanes may exist (e.g. `cron`, `subagent`) so background jobs can run in parallel without blocking inbound replies.
* Per-session lanes guarantee that only one agent run touches a given session at a time.
* No external dependencies or background worker threads; pure TypeScript + promises.
## Troubleshooting
* If commands seem stuck, enable verbose logs and look for “queued for …ms” lines to confirm the queue is draining.
* If you need queue depth, enable verbose logs and watch for queue timing lines.

# Retry Policy
## Goals
* Retry per HTTP request, not per multi-step flow.
* Preserve ordering by retrying only the current step.
* Avoid duplicating non-idempotent operations.
## Defaults
* Attempts: 3
* Max delay cap: 30000 ms
* Jitter: 0.1 (10 percent)
* Provider defaults:
* Telegram min delay: 400 ms
* Discord min delay: 500 ms
## Behavior
### Discord
* Retries only on rate-limit errors (HTTP 429).
* Uses Discord `retry_after` when available, otherwise exponential backoff.
### Telegram
* Retries on transient errors (429, timeout, connect/reset/closed, temporarily unavailable).
* Uses `retry_after` when available, otherwise exponential backoff.
* Markdown parse errors are not retried; they fall back to plain text.
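Combining the defaults and behavior above, the delay for a given attempt might be computed like this (a sketch, not the actual implementation; function and field names are assumptions):

```typescript
interface RetryPolicy {
  attempts: number;
  minDelayMs: number; // base of the exponential backoff
  maxDelayMs: number; // hard cap
  jitter: number;     // e.g. 0.1 => ±10%
}

// attempt is 1-based. retryAfterMs is the provider-supplied retry_after,
// which wins over the computed backoff when present.
function retryDelayMs(policy: RetryPolicy, attempt: number, retryAfterMs?: number): number {
  const base = retryAfterMs ?? policy.minDelayMs * 2 ** (attempt - 1); // exponential growth
  const capped = Math.min(base, policy.maxDelayMs);
  const spread = capped * policy.jitter;
  return capped + (Math.random() * 2 - 1) * spread; // symmetric jitter
}
```

With the Telegram defaults (400 ms base, 30 s cap), attempts back off as 400 → 800 → 1600 ms, ±10% jitter.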
## Configuration
Set retry policy per provider in `~/.openclaw/openclaw.json`:
```json5 theme={null}
{
channels: {
telegram: {
retry: {
attempts: 3,
minDelayMs: 400,
maxDelayMs: 30000,
jitter: 0.1,
},
},
discord: {
retry: {
attempts: 3,
minDelayMs: 500,
maxDelayMs: 30000,
jitter: 0.1,
},
},
},
}
```
## Notes
* Retries apply per request (message send, media upload, reaction, poll, sticker).
* Composite flows do not retry completed steps.

# Session Pruning
Session pruning trims **old tool results** from the in-memory context right before each LLM call. It does **not** rewrite the on-disk session history (`*.jsonl`).
## When it runs
* When `mode: "cache-ttl"` is enabled and the last Anthropic call for the session is older than `ttl`.
* Only affects the messages sent to the model for that request.
* Only active for Anthropic API calls (and OpenRouter Anthropic models).
* For best results, match `ttl` to your model `cacheControlTtl`.
* After a prune, the TTL window resets so subsequent requests keep cache until `ttl` expires again.
## Smart defaults (Anthropic)
* **OAuth or setup-token** profiles: enable `cache-ttl` pruning and set heartbeat to `1h`.
* **API key** profiles: enable `cache-ttl` pruning, set heartbeat to `30m`, and default `cacheControlTtl` to `1h` on Anthropic models.
* If you set any of these values explicitly, OpenClaw does **not** override them.
## What this improves (cost + cache behavior)
* **Why prune:** Anthropic prompt caching only applies within the TTL. If a session goes idle past the TTL, the next request re-caches the full prompt unless you trim it first.
* **What gets cheaper:** pruning reduces the **cacheWrite** size for that first request after the TTL expires.
* **Why the TTL reset matters:** once pruning runs, the cache window resets, so followup requests can reuse the freshly cached prompt instead of re-caching the full history again.
* **What it does not do:** pruning doesn't add tokens or “double” costs; it only changes what gets cached on that first post-TTL request.
## What can be pruned
* Only `toolResult` messages.
* User + assistant messages are **never** modified.
* The last `keepLastAssistants` assistant messages are protected; tool results after that cutoff are not pruned.
* If there aren't enough assistant messages to establish the cutoff, pruning is skipped.
* Tool results containing **image blocks** are skipped (never trimmed/cleared).
## Context window estimation
Pruning uses an estimated context window (chars ≈ tokens × 4). The base window is resolved in this order:
1. `models.providers.*.models[].contextWindow` override.
2. Model definition `contextWindow` (from the model registry).
3. Default `200000` tokens.
If `agents.defaults.contextTokens` is set, it caps the resolved window (the smaller of the two values is used).
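The resolution order can be written out as a tiny helper (option names below mirror the config paths but are otherwise illustrative):

```typescript
const CHARS_PER_TOKEN = 4; // chars ≈ tokens × 4

function resolveContextTokens(opts: {
  providerOverride?: number; // models.providers.*.models[].contextWindow
  modelDefinition?: number;  // contextWindow from the model registry
  agentCap?: number;         // agents.defaults.contextTokens (acts as a cap)
}): number {
  const base = opts.providerOverride ?? opts.modelDefinition ?? 200_000;
  return opts.agentCap !== undefined ? Math.min(base, opts.agentCap) : base;
}

const estimatedChars = (tokens: number) => tokens * CHARS_PER_TOKEN;
```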
## Mode
### cache-ttl
* Pruning only runs if the last Anthropic call is older than `ttl` (default `5m`).
* When it runs: the soft-trim + hard-clear behavior described below.
## Soft vs hard pruning
* **Soft-trim**: only for oversized tool results.
* Keeps head + tail, inserts `...`, and appends a note with the original size.
* Skips results with image blocks.
* **Hard-clear**: replaces the entire tool result with `hardClear.placeholder`.
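A soft-trim along these lines keeps the head and tail of an oversized tool result (a sketch using the default `softTrim` sizes; the exact note text is illustrative):

```typescript
// Keep the first headChars and last tailChars of an oversized result,
// with "..." in between and a note recording the original size.
function softTrim(text: string, maxChars = 4000, headChars = 1500, tailChars = 1500): string {
  if (text.length <= maxChars) return text; // small results pass through untouched
  const head = text.slice(0, headChars);
  const tail = text.slice(-tailChars);
  return `${head}\n...\n${tail}\n[trimmed: original was ${text.length} chars]`;
}
```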
## Tool selection
* `tools.allow` / `tools.deny` support `*` wildcards.
* Deny wins.
* Matching is case-insensitive.
* Empty allow list => all tools allowed.
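These four rules compose into a short matcher (an illustrative sketch, not the shipped code):

```typescript
// Compile a "*" wildcard pattern into a case-insensitive anchored regex.
function globToRegExp(pattern: string): RegExp {
  const escaped = pattern
    .replace(/[.+?^${}()|[\]\\]/g, "\\$&") // escape regex metachars except *
    .replace(/\*/g, ".*");
  return new RegExp(`^${escaped}$`, "i");
}

function isToolAllowed(tool: string, allow: string[], deny: string[]): boolean {
  if (deny.some((p) => globToRegExp(p).test(tool))) return false; // deny wins
  if (allow.length === 0) return true; // empty allow list => all tools allowed
  return allow.some((p) => globToRegExp(p).test(tool));
}
```

So with `allow: ["exec", "read"]` and `deny: ["*image*"]`, `exec` passes and anything containing `image` is blocked regardless of the allow list.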
## Interaction with other limits
* Built-in tools already truncate their own output; session pruning is an extra layer that prevents long-running chats from accumulating too much tool output in the model context.
* Compaction is separate: compaction summarizes and persists, pruning is transient per request. See [/concepts/compaction](/concepts/compaction).
## Defaults (when enabled)
* `ttl`: `"5m"`
* `keepLastAssistants`: `3`
* `softTrimRatio`: `0.3`
* `hardClearRatio`: `0.5`
* `minPrunableToolChars`: `50000`
* `softTrim`: `{ maxChars: 4000, headChars: 1500, tailChars: 1500 }`
* `hardClear`: `{ enabled: true, placeholder: "[Old tool result content cleared]" }`
## Examples
Default (off):
```json5 theme={null}
{
agent: {
contextPruning: { mode: "off" },
},
}
```
Enable TTL-aware pruning:
```json5 theme={null}
{
agent: {
contextPruning: { mode: "cache-ttl", ttl: "5m" },
},
}
```
Restrict pruning to specific tools:
```json5 theme={null}
{
agent: {
contextPruning: {
mode: "cache-ttl",
tools: { allow: ["exec", "read"], deny: ["*image*"] },
},
},
}
```
See config reference: [Gateway Configuration](/gateway/configuration)

# Session Tools
Goal: small, hard-to-misuse tool set so agents can list sessions, fetch history, and send to another session.
## Tool Names
* `sessions_list`
* `sessions_history`
* `sessions_send`
* `sessions_spawn`
## Key Model
* Main direct chat bucket is always the literal key `"main"` (resolved to the current agent's main key).
* Group chats use `agent:<agentId>:<channel>:group:<id>` or `agent:<agentId>:<channel>:channel:<id>` (pass the full key).
* Cron jobs use `cron:<job.id>`.
* Hooks use `hook:<uuid>` unless explicitly set.
* Node sessions use `node-<nodeId>` unless explicitly set.
`global` and `unknown` are reserved values and are never listed. If `session.scope = "global"`, we alias it to `main` for all tools so callers never see `global`.
## sessions\_list
List sessions as an array of rows.
Parameters:
* `kinds?: string[]` filter: any of `"main" | "group" | "cron" | "hook" | "node" | "other"`
* `limit?: number` max rows (default: server default, clamp e.g. 200)
* `activeMinutes?: number` only sessions updated within N minutes
* `messageLimit?: number` 0 = no messages (default 0); >0 = include last N messages
Behavior:
* `messageLimit > 0` fetches `chat.history` per session and includes the last N messages.
* Tool results are filtered out in list output; use `sessions_history` for tool messages.
* When running in a **sandboxed** agent session, session tools default to **spawned-only visibility** (see below).
Row shape (JSON):
* `key`: session key (string)
* `kind`: `main | group | cron | hook | node | other`
* `channel`: `whatsapp | telegram | discord | signal | imessage | webchat | internal | unknown`
* `displayName` (group display label if available)
* `updatedAt` (ms)
* `sessionId`
* `model`, `contextTokens`, `totalTokens`
* `thinkingLevel`, `verboseLevel`, `systemSent`, `abortedLastRun`
* `sendPolicy` (session override if set)
* `lastChannel`, `lastTo`
* `deliveryContext` (normalized `{ channel, to, accountId }` when available)
* `transcriptPath` (best-effort path derived from store dir + sessionId)
* `messages?` (only when `messageLimit > 0`)
## sessions\_history
Fetch transcript for one session.
Parameters:
* `sessionKey` (required; accepts session key or `sessionId` from `sessions_list`)
* `limit?: number` max messages (server clamps)
* `includeTools?: boolean` (default false)
Behavior:
* `includeTools=false` filters `role: "toolResult"` messages.
* Returns messages array in the raw transcript format.
* When given a `sessionId`, OpenClaw resolves it to the corresponding session key (missing ids error).
## sessions\_send
Send a message into another session.
Parameters:
* `sessionKey` (required; accepts session key or `sessionId` from `sessions_list`)
* `message` (required)
* `timeoutSeconds?: number` (default >0; 0 = fire-and-forget)
Behavior:
* `timeoutSeconds = 0`: enqueue and return `{ runId, status: "accepted" }`.
* `timeoutSeconds > 0`: wait up to N seconds for completion, then return `{ runId, status: "ok", reply }`.
* If wait times out: `{ runId, status: "timeout", error }`. Run continues; call `sessions_history` later.
* If the run fails: `{ runId, status: "error", error }`.
* Announce delivery runs after the primary run completes and is best-effort; `status: "ok"` does not guarantee the announce was delivered.
* Waits via gateway `agent.wait` (server-side) so reconnects don't drop the wait.
* Agent-to-agent message context is injected for the primary run.
* After the primary run completes, OpenClaw runs a **reply-back loop**:
* Round 2+ alternates between requester and target agents.
* Reply exactly `REPLY_SKIP` to stop the ping-pong.
* Max turns is `session.agentToAgent.maxPingPongTurns` (0–5, default 5).
* Once the loop ends, OpenClaw runs the **agent-to-agent announce step** (target agent only):
* Reply exactly `ANNOUNCE_SKIP` to stay silent.
* Any other reply is sent to the target channel.
* Announce step includes the original request + round-1 reply + latest ping-pong reply.
## Channel Field
* For groups, `channel` is the channel recorded on the session entry.
* For direct chats, `channel` maps from `lastChannel`.
* For cron/hook/node, `channel` is `internal`.
* If missing, `channel` is `unknown`.
## Security / Send Policy
Policy-based blocking by channel/chat type (not per session id).
```json theme={null}
{
"session": {
"sendPolicy": {
"rules": [
{
"match": { "channel": "discord", "chatType": "group" },
"action": "deny"
}
],
"default": "allow"
}
}
}
```
Runtime override (per session entry):
* `sendPolicy: "allow" | "deny"` (unset = inherit config)
* Settable via `sessions.patch` or owner-only `/send on|off|inherit` (standalone message).
Enforcement points:
* `chat.send` / `agent` (gateway)
* auto-reply delivery logic
## sessions\_spawn
Spawn a sub-agent run in an isolated session and announce the result back to the requester chat channel.
Parameters:
* `task` (required)
* `label?` (optional; used for logs/UI)
* `agentId?` (optional; spawn under another agent id if allowed)
* `model?` (optional; overrides the sub-agent model; invalid values error)
* `runTimeoutSeconds?` (default 0; when set, aborts the sub-agent run after N seconds)
* `cleanup?` (`delete|keep`, default `keep`)
Allowlist:
* `agents.list[].subagents.allowAgents`: list of agent ids allowed via `agentId` (`["*"]` to allow any). Default: only the requester agent.
Discovery:
* Use `agents_list` to discover which agent ids are allowed for `sessions_spawn`.
Behavior:
* Starts a new `agent:<agentId>:subagent:<uuid>` session with `deliver: false`.
* Sub-agents default to the full tool set **minus session tools** (configurable via `tools.subagents.tools`).
* Sub-agents are not allowed to call `sessions_spawn` (no sub-agent → sub-agent spawning).
* Always non-blocking: returns `{ status: "accepted", runId, childSessionKey }` immediately.
* After completion, OpenClaw runs a sub-agent **announce step** and posts the result to the requester chat channel.
* Reply exactly `ANNOUNCE_SKIP` during the announce step to stay silent.
* Announce replies are normalized to `Status`/`Result`/`Notes`; `Status` comes from runtime outcome (not model text).
* Sub-agent sessions are auto-archived after `agents.defaults.subagents.archiveAfterMinutes` (default: 60).
* Announce replies include a stats line (runtime, tokens, sessionKey/sessionId, transcript path, and optional cost).
## Sandbox Session Visibility
Sandboxed sessions can use session tools, but by default they only see sessions they spawned via `sessions_spawn`.
Config:
```json5 theme={null}
{
agents: {
defaults: {
sandbox: {
// default: "spawned"
sessionToolsVisibility: "spawned", // or "all"
},
},
},
}
```

# Session Management
OpenClaw treats **one direct-chat session per agent** as primary. Direct chats collapse to `agent:<agentId>:<mainKey>` (default `main`), while group/channel chats get their own keys. `session.mainKey` is honored.
Use `session.dmScope` to control how **direct messages** are grouped:
* `main` (default): all DMs share the main session for continuity.
* `per-peer`: isolate by sender id across channels.
* `per-channel-peer`: isolate by channel + sender (recommended for multi-user inboxes).
* `per-account-channel-peer`: isolate by account + channel + sender (recommended for multi-account inboxes).
Use `session.identityLinks` to map provider-prefixed peer ids to a canonical identity so the same person shares a DM session across channels when using `per-peer`, `per-channel-peer`, or `per-account-channel-peer`.
## Secure DM mode (recommended for multi-user setups)
> **Security Warning:** If your agent can receive DMs from **multiple people**, you should strongly consider enabling secure DM mode. Without it, all users share the same conversation context, which can leak private information between users.
**Example of the problem with default settings:**
* Alice (`<SENDER_A>`) messages your agent about a private topic (for example, a medical appointment)
* Bob (`<SENDER_B>`) messages your agent asking "What were we talking about?"
* Because both DMs share the same session, the model may answer Bob using Alice's prior context.
**The fix:** Set `dmScope` to isolate sessions per user:
```json5 theme={null}
// ~/.openclaw/openclaw.json
{
session: {
// Secure DM mode: isolate DM context per channel + sender.
dmScope: "per-channel-peer",
},
}
```
**When to enable this:**
* You have pairing approvals for more than one sender
* You use a DM allowlist with multiple entries
* You set `dmPolicy: "open"`
* Multiple phone numbers or accounts can message your agent
Notes:
* Default is `dmScope: "main"` for continuity (all DMs share the main session). This is fine for single-user setups.
* For multi-account inboxes on the same channel, prefer `per-account-channel-peer`.
* If the same person contacts you on multiple channels, use `session.identityLinks` to collapse their DM sessions into one canonical identity.
* You can verify your DM settings with `openclaw security audit` (see [security](/cli/security)).
## Gateway is the source of truth
All session state is **owned by the gateway** (the “master” OpenClaw). UI clients (macOS app, WebChat, etc.) must query the gateway for session lists and token counts instead of reading local files.
* In **remote mode**, the session store you care about lives on the remote gateway host, not your Mac.
* Token counts shown in UIs come from the gateway's store fields (`inputTokens`, `outputTokens`, `totalTokens`, `contextTokens`). Clients do not parse JSONL transcripts to “fix up” totals.
## Where state lives
* On the **gateway host**:
* Store file: `~/.openclaw/agents/<agentId>/sessions/sessions.json` (per agent).
* Transcripts: `~/.openclaw/agents/<agentId>/sessions/<SessionId>.jsonl` (Telegram topic sessions use `.../<SessionId>-topic-<threadId>.jsonl`).
* The store is a map `sessionKey -> { sessionId, updatedAt, ... }`. Deleting entries is safe; they are recreated on demand.
* Group entries may include `displayName`, `channel`, `subject`, `room`, and `space` to label sessions in UIs.
* Session entries include `origin` metadata (label + routing hints) so UIs can explain where a session came from.
* OpenClaw does **not** read legacy Pi/Tau session folders.
## Session pruning
OpenClaw trims **old tool results** from the in-memory context right before LLM calls by default.
This does **not** rewrite JSONL history. See [/concepts/session-pruning](/concepts/session-pruning).
## Pre-compaction memory flush
When a session nears auto-compaction, OpenClaw can run a **silent memory flush**
turn that reminds the model to write durable notes to disk. This only runs when
the workspace is writable. See [Memory](/concepts/memory) and
[Compaction](/concepts/compaction).
## Mapping transports → session keys
* Direct chats follow `session.dmScope` (default `main`).
* `main`: `agent:<agentId>:<mainKey>` (continuity across devices/channels).
* Multiple phone numbers and channels can map to the same agent main key; they act as transports into one conversation.
* `per-peer`: `agent:<agentId>:dm:<peerId>`.
* `per-channel-peer`: `agent:<agentId>:<channel>:dm:<peerId>`.
* `per-account-channel-peer`: `agent:<agentId>:<channel>:<accountId>:dm:<peerId>` (accountId defaults to `default`).
* If `session.identityLinks` matches a provider-prefixed peer id (for example `telegram:123`), the canonical key replaces `<peerId>` so the same person shares a session across channels.
* Group chats isolate state: `agent:<agentId>:<channel>:group:<id>` (rooms/channels use `agent:<agentId>:<channel>:channel:<id>`).
* Telegram forum topics append `:topic:<threadId>` to the group id for isolation.
* Legacy `group:<id>` keys are still recognized for migration.
* Inbound contexts may still use `group:<id>`; the channel is inferred from `Provider` and normalized to the canonical `agent:<agentId>:<channel>:group:<id>` form.
* Other sources:
* Cron jobs: `cron:<job.id>`
* Webhooks: `hook:<uuid>` (unless explicitly set by the hook)
* Node runs: `node-<nodeId>`
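The DM key mapping above can be sketched as a small pure function. This is illustrative only; the type and function names are assumptions, not OpenClaw internals:

```typescript
type DmScope = "main" | "per-peer" | "per-channel-peer" | "per-account-channel-peer";

// Illustrative sketch of the DM session-key derivation described above.
function dmSessionKey(opts: {
  agentId: string;
  dmScope: DmScope;
  peerId: string;     // possibly replaced by a canonical identityLinks key
  mainKey?: string;   // defaults to "main"
  channel?: string;
  accountId?: string; // defaults to "default"
}): string {
  const { agentId, dmScope, peerId } = opts;
  switch (dmScope) {
    case "main":
      return `agent:${agentId}:${opts.mainKey ?? "main"}`;
    case "per-peer":
      return `agent:${agentId}:dm:${peerId}`;
    case "per-channel-peer":
      return `agent:${agentId}:${opts.channel}:dm:${peerId}`;
    case "per-account-channel-peer":
      return `agent:${agentId}:${opts.channel}:${opts.accountId ?? "default"}:dm:${peerId}`;
  }
}
```
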
## Lifecycle
* Reset policy: sessions are reused until they expire, and expiry is evaluated on the next inbound message.
* Daily reset: defaults to **4:00 AM local time on the gateway host**. A session is stale once its last update is earlier than the most recent daily reset time.
* Idle reset (optional): `idleMinutes` adds a sliding idle window. When both daily and idle resets are configured, **whichever expires first** forces a new session.
* Legacy idle-only: if you set `session.idleMinutes` without any `session.reset`/`resetByType` config, OpenClaw stays in idle-only mode for backward compatibility.
* Per-type overrides (optional): `resetByType` lets you override the policy for `direct`, `group`, and `thread` sessions (thread = Slack/Discord threads, Telegram topics, Matrix threads when provided by the connector).
* Per-channel overrides (optional): `resetByChannel` overrides the reset policy for a channel (applies to all session types for that channel and takes precedence over `reset`/`resetByType`).
* Reset triggers: exact `/new` or `/reset` (plus any extras in `resetTriggers`) start a fresh session id and pass the remainder of the message through. `/new <model>` accepts a model alias, `provider/model`, or provider name (fuzzy match) to set the new session model. If `/new` or `/reset` is sent alone, OpenClaw runs a short “hello” greeting turn to confirm the reset.
* Manual reset: delete specific keys from the store or remove the JSONL transcript; the next message recreates them.
* Isolated cron jobs always mint a fresh `sessionId` per run (no idle reuse).
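The “whichever expires first” rule can be sketched as follows. This is a simplified model (timestamps in milliseconds, host-local time via `Date`), not the actual implementation:

```typescript
interface ResetPolicy {
  mode: "daily" | "idle";
  atHour?: number;      // daily reset hour, gateway-local time (default 4)
  idleMinutes?: number; // optional sliding idle window
}

// A session is stale if the idle window elapsed OR its last update predates
// the most recent daily reset time - whichever condition trips first wins.
function isSessionStale(lastUpdateMs: number, nowMs: number, policy: ResetPolicy): boolean {
  if (policy.idleMinutes !== undefined) {
    if (nowMs - lastUpdateMs >= policy.idleMinutes * 60_000) return true;
  }
  if (policy.mode === "daily") {
    const reset = new Date(nowMs);
    reset.setHours(policy.atHour ?? 4, 0, 0, 0);
    if (reset.getTime() > nowMs) reset.setDate(reset.getDate() - 1); // last reset was yesterday
    if (lastUpdateMs < reset.getTime()) return true;
  }
  return false;
}
```
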
## Send policy (optional)
Block delivery for specific session types without listing individual ids.
```json5 theme={null}
{
session: {
sendPolicy: {
rules: [
{ action: "deny", match: { channel: "discord", chatType: "group" } },
{ action: "deny", match: { keyPrefix: "cron:" } },
],
default: "allow",
},
},
}
```
Runtime override (owner only):
* `/send on` → allow for this session
* `/send off` → deny for this session
* `/send inherit` → clear override and use config rules
Send these as standalone messages so they register.
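The precedence described above (per-session override beats config rules; first matching rule wins, else the default) can be sketched like this. Names and shapes are assumptions for illustration:

```typescript
type Action = "allow" | "deny";
interface SendRule {
  action: Action;
  match: { channel?: string; chatType?: string; keyPrefix?: string };
}
interface SendPolicy { rules: SendRule[]; default: Action; }

// Illustrative evaluation: /send on|off sets sessionOverride,
// /send inherit clears it so config rules apply again.
function resolveSendAction(
  ctx: { channel: string; chatType: string; sessionKey: string },
  policy: SendPolicy,
  sessionOverride?: Action,
): Action {
  if (sessionOverride) return sessionOverride;
  for (const rule of policy.rules) {
    const m = rule.match;
    if (m.channel && m.channel !== ctx.channel) continue;
    if (m.chatType && m.chatType !== ctx.chatType) continue;
    if (m.keyPrefix && !ctx.sessionKey.startsWith(m.keyPrefix)) continue;
    return rule.action; // first matching rule wins
  }
  return policy.default;
}
```
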
## Configuration (optional rename example)
```json5 theme={null}
// ~/.openclaw/openclaw.json
{
session: {
scope: "per-sender", // keep group keys separate
dmScope: "main", // DM continuity (set per-channel-peer/per-account-channel-peer for shared inboxes)
identityLinks: {
alice: ["telegram:123456789", "discord:987654321012345678"],
},
reset: {
// Defaults: mode=daily, atHour=4 (gateway host local time).
// If you also set idleMinutes, whichever expires first wins.
mode: "daily",
atHour: 4,
idleMinutes: 120,
},
resetByType: {
thread: { mode: "daily", atHour: 4 },
direct: { mode: "idle", idleMinutes: 240 },
group: { mode: "idle", idleMinutes: 120 },
},
resetByChannel: {
discord: { mode: "idle", idleMinutes: 10080 },
},
resetTriggers: ["/new", "/reset"],
store: "~/.openclaw/agents/{agentId}/sessions/sessions.json",
mainKey: "main",
},
}
```
## Inspecting
* `openclaw status` — shows store path and recent sessions.
* `openclaw sessions --json` — dumps every entry (filter with `--active <minutes>`).
* `openclaw gateway call sessions.list --params '{}'` — fetch sessions from the running gateway (use `--url`/`--token` for remote gateway access).
* Send `/status` as a standalone message in chat to see whether the agent is reachable, how much of the session context is used, current thinking/verbose toggles, and when your WhatsApp web creds were last refreshed (helps spot relink needs).
* Send `/context list` or `/context detail` to see what's in the system prompt and injected workspace files (and the biggest context contributors).
* Send `/stop` as a standalone message to abort the current run, clear queued followups for that session, and stop any sub-agent runs spawned from it (the reply includes the stopped count).
* Send `/compact` (optional instructions) as a standalone message to summarize older context and free up window space. See [/concepts/compaction](/concepts/compaction).
* JSONL transcripts can be opened directly to review full turns.
## Tips
* Keep the primary key dedicated to 1:1 traffic; let groups keep their own keys.
* When automating cleanup, delete individual keys instead of the whole store to preserve context elsewhere.
## Session origin metadata
Each session entry records where it came from (best-effort) in `origin`:
* `label`: human label (resolved from conversation label + group subject/channel)
* `provider`: normalized channel id (including extensions)
* `from`/`to`: raw routing ids from the inbound envelope
* `accountId`: provider account id (when multi-account)
* `threadId`: thread/topic id when the channel supports it
The origin fields are populated for direct messages, channels, and groups. If a
connector only updates delivery routing (for example, to keep a DM main session
fresh), it should still provide inbound context so the session keeps its
explainer metadata. Extensions can do this by sending `ConversationLabel`,
`GroupSubject`, `GroupChannel`, `GroupSpace`, and `SenderName` in the inbound
context and calling `recordSessionMetaFromInbound` (or passing the same context
to `updateLastRoute`).
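As a rough TypeScript shape (field names taken from the list above; optionality is an assumption):

```typescript
// Rough shape of the per-session origin metadata described above (illustrative).
interface SessionOrigin {
  label?: string;     // human label: conversation label + group subject/channel
  provider?: string;  // normalized channel id, including extensions
  from?: string;      // raw routing id from the inbound envelope
  to?: string;        // raw routing id from the inbound envelope
  accountId?: string; // provider account id (multi-account setups)
  threadId?: string;  // thread/topic id when the channel supports it
}
```
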
> ## Documentation Index
> Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt
> Use this file to discover all available pages before exploring further.
# Sessions
Canonical session management docs live in [Session management](/concepts/session).
# Streaming + chunking
OpenClaw has two separate “streaming” layers:
* **Block streaming (channels):** emit completed **blocks** as the assistant writes. These are normal channel messages (not token deltas).
* **Token-ish streaming (Telegram only):** update a **draft bubble** with partial text while generating; final message is sent at the end.
There is **no real token streaming** to external channel messages today. Telegram draft streaming is the only partial-stream surface.
## Block streaming (channel messages)
Block streaming sends assistant output in coarse chunks as it becomes available.
```
Model output
└─ text_delta/events
├─ (blockStreamingBreak=text_end)
│ └─ chunker emits blocks as buffer grows
└─ (blockStreamingBreak=message_end)
└─ chunker flushes at message_end
└─ channel send (block replies)
```
Legend:
* `text_delta/events`: model stream events (may be sparse for non-streaming models).
* `chunker`: `EmbeddedBlockChunker` applying min/max bounds + break preference.
* `channel send`: actual outbound messages (block replies).
**Controls:**
* `agents.defaults.blockStreamingDefault`: `"on"`/`"off"` (default off).
* Channel overrides: `*.blockStreaming` (and per-account variants) to force `"on"`/`"off"` per channel.
* `agents.defaults.blockStreamingBreak`: `"text_end"` or `"message_end"`.
* `agents.defaults.blockStreamingChunk`: `{ minChars, maxChars, breakPreference? }`.
* `agents.defaults.blockStreamingCoalesce`: `{ minChars?, maxChars?, idleMs? }` (merge streamed blocks before send).
* Channel hard cap: `*.textChunkLimit` (e.g., `channels.whatsapp.textChunkLimit`).
* Channel chunk mode: `*.chunkMode` (`length` default, `newline` splits on blank lines (paragraph boundaries) before length chunking).
* Discord soft cap: `channels.discord.maxLinesPerMessage` (default 17) splits tall replies to avoid UI clipping.
**Boundary semantics:**
* `text_end`: stream blocks as soon as chunker emits; flush on each `text_end`.
* `message_end`: wait until assistant message finishes, then flush buffered output.
`message_end` still uses the chunker if the buffered text exceeds `maxChars`, so it can emit multiple chunks at the end.
## Chunking algorithm (low/high bounds)
Block chunking is implemented by `EmbeddedBlockChunker`:
* **Low bound:** don't emit until buffer >= `minChars` (unless forced).
* **High bound:** prefer splits before `maxChars`; if forced, split at `maxChars`.
* **Break preference:** `paragraph` → `newline` → `sentence` → `whitespace` → hard break.
* **Code fences:** never split inside fences; when forced at `maxChars`, close + reopen the fence to keep Markdown valid.
`maxChars` is clamped to the channel `textChunkLimit`, so you can't exceed per-channel caps.
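A simplified sketch of the low/high-bound split (illustrative only; the real `EmbeddedBlockChunker` also handles sentence breaks, code fences, and incremental streaming):

```typescript
// Split text into blocks: don't emit below minChars, prefer a natural break
// before maxChars, and hard-split at maxChars when forced.
function chunkBlocks(text: string, minChars: number, maxChars: number): string[] {
  const out: string[] = [];
  let rest = text;
  while (rest.length > maxChars) {
    const window = rest.slice(0, maxChars);
    const cut =
      lastBreak(window, "\n\n", minChars) ?? // paragraph
      lastBreak(window, "\n", minChars) ??   // newline
      lastBreak(window, " ", minChars) ??    // whitespace
      maxChars;                              // hard break
    out.push(rest.slice(0, cut).trimEnd());
    rest = rest.slice(cut).trimStart();
  }
  if (rest.length > 0) out.push(rest);
  return out;
}

function lastBreak(window: string, sep: string, min: number): number | undefined {
  const i = window.lastIndexOf(sep);
  // Respect the low bound (and guarantee forward progress).
  return i >= Math.max(min, 1) ? i : undefined;
}
```
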
## Coalescing (merge streamed blocks)
When block streaming is enabled, OpenClaw can **merge consecutive block chunks**
before sending them out. This reduces “single-line spam” while still providing
progressive output.
* Coalescing waits for **idle gaps** (`idleMs`) before flushing.
* Buffers are capped by `maxChars` and will flush if they exceed it.
* `minChars` prevents tiny fragments from sending until enough text accumulates
(final flush always sends remaining text).
* Joiner is derived from `blockStreamingChunk.breakPreference`
(`paragraph` → `\n\n`, `newline` → `\n`, `sentence` → space).
* Channel overrides are available via `*.blockStreamingCoalesce` (including per-account configs).
* Default coalesce `minChars` is bumped to 1500 for Signal/Slack/Discord unless overridden.
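A minimal sketch of the coalescing behavior (illustrative; the real implementation is timer-driven and channel-aware, and here `onIdle` stands in for the `idleMs` timer firing):

```typescript
// Merge streamed blocks until an idle gap, a size cap, or a final flush.
class BlockCoalescer {
  private buf: string[] = [];
  constructor(
    private opts: { minChars: number; maxChars: number; joiner: string },
    private send: (text: string) => void,
  ) {}

  push(block: string) {
    this.buf.push(block);
    if (this.size() >= this.opts.maxChars) this.flush(true); // cap exceeded
  }

  onIdle() {
    if (this.size() >= this.opts.minChars) this.flush(true); // idle gap elapsed
  }

  // Final flush always sends remaining text (force = true).
  flush(force = false) {
    if (this.buf.length === 0) return;
    if (!force && this.size() < this.opts.minChars) return; // avoid tiny fragments
    this.send(this.buf.join(this.opts.joiner));
    this.buf = [];
  }

  private size() {
    return this.buf.reduce((n, b) => n + b.length, 0);
  }
}
```
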
## Human-like pacing between blocks
When block streaming is enabled, you can add a **randomized pause** between
block replies (after the first block). This makes multi-bubble responses feel
more natural.
* Config: `agents.defaults.humanDelay` (override per agent via `agents.list[].humanDelay`).
* Modes: `off` (default), `natural` (800–2500ms), `custom` (`minMs`/`maxMs`).
* Applies only to **block replies**, not final replies or tool summaries.
## “Stream chunks or everything”
This maps to:
* **Stream chunks:** `blockStreamingDefault: "on"` + `blockStreamingBreak: "text_end"` (emit as you go). Non-Telegram channels also need `*.blockStreaming: true`.
* **Stream everything at end:** `blockStreamingBreak: "message_end"` (flush once, possibly multiple chunks if very long).
* **No block streaming:** `blockStreamingDefault: "off"` (only final reply).
**Channel note:** For non-Telegram channels, block streaming is **off unless**
`*.blockStreaming` is explicitly set to `true`. Telegram can stream drafts
(`channels.telegram.streamMode`) without block replies.
Config location reminder: the `blockStreaming*` defaults live under
`agents.defaults`, not the root config.
## Telegram draft streaming (token-ish)
Telegram is the only channel with draft streaming:
* Uses Bot API `sendMessageDraft` in **private chats with topics**.
* `channels.telegram.streamMode: "partial" | "block" | "off"`.
* `partial`: draft updates with the latest stream text.
* `block`: draft updates in chunked blocks (same chunker rules).
* `off`: no draft streaming.
* Draft chunk config (only for `streamMode: "block"`): `channels.telegram.draftChunk` (defaults: `minChars: 200`, `maxChars: 800`).
* Draft streaming is separate from block streaming; block replies are off by default and only enabled by `*.blockStreaming: true` on non-Telegram channels.
* Final reply is still a normal message.
* `/reasoning stream` writes reasoning into the draft bubble (Telegram only).
When draft streaming is active, OpenClaw disables block streaming for that reply to avoid double-streaming.
```
Telegram (private + topics)
└─ sendMessageDraft (draft bubble)
├─ streamMode=partial → update latest text
└─ streamMode=block → chunker updates draft
└─ final reply → normal message
```
Legend:
* `sendMessageDraft`: Telegram draft bubble (not a real message).
* `final reply`: normal Telegram message send.
# System Prompt
OpenClaw builds a custom system prompt for every agent run. The prompt is **OpenClaw-owned** and does not use the p-coding-agent default prompt.
The prompt is assembled by OpenClaw and injected into each agent run.
## Structure
The prompt is intentionally compact and uses fixed sections:
* **Tooling**: current tool list + short descriptions.
* **Safety**: short guardrail reminder to avoid power-seeking behavior or bypassing oversight.
* **Skills** (when available): tells the model how to load skill instructions on demand.
* **OpenClaw Self-Update**: how to run `config.apply` and `update.run`.
* **Workspace**: working directory (`agents.defaults.workspace`).
* **Documentation**: local path to OpenClaw docs (repo or npm package) and when to read them.
* **Workspace Files (injected)**: indicates bootstrap files are included below.
* **Sandbox** (when enabled): indicates sandboxed runtime, sandbox paths, and whether elevated exec is available.
* **Current Date & Time**: user-local time, timezone, and time format.
* **Reply Tags**: optional reply tag syntax for supported providers.
* **Heartbeats**: heartbeat prompt and ack behavior.
* **Runtime**: host, OS, node, model, repo root (when detected), thinking level (one line).
* **Reasoning**: current visibility level + /reasoning toggle hint.
Safety guardrails in the system prompt are advisory. They guide model behavior but do not enforce policy. Use tool policy, exec approvals, sandboxing, and channel allowlists for hard enforcement; operators can disable these by design.
## Prompt modes
OpenClaw can render smaller system prompts for sub-agents. The runtime sets a
`promptMode` for each run (not a user-facing config):
* `full` (default): includes all sections above.
* `minimal`: used for sub-agents; omits **Skills**, **Memory Recall**, **OpenClaw
Self-Update**, **Model Aliases**, **User Identity**, **Reply Tags**,
**Messaging**, **Silent Replies**, and **Heartbeats**. Tooling, **Safety**,
Workspace, Sandbox, Current Date & Time (when known), Runtime, and injected
context stay available.
* `none`: returns only the base identity line.
When `promptMode=minimal`, extra injected prompts are labeled **Subagent
Context** instead of **Group Chat Context**.
## Workspace bootstrap injection
Bootstrap files are trimmed and appended under **Project Context** so the model sees identity and profile context without needing explicit reads:
* `AGENTS.md`
* `SOUL.md`
* `TOOLS.md`
* `IDENTITY.md`
* `USER.md`
* `HEARTBEAT.md`
* `BOOTSTRAP.md` (only on brand-new workspaces)
Large files are truncated with a marker. The max per-file size is controlled by
`agents.defaults.bootstrapMaxChars` (default: 20000). Missing files inject a
short missing-file marker.
Internal hooks can intercept this step via `agent:bootstrap` to mutate or replace
the injected bootstrap files (for example swapping `SOUL.md` for an alternate persona).
To inspect how much each injected file contributes (raw vs injected, truncation, plus tool schema overhead), use `/context list` or `/context detail`. See [Context](/concepts/context).
## Time handling
The system prompt includes a dedicated **Current Date & Time** section when the
user timezone is known. To keep the prompt cache-stable, it now only includes
the **time zone** (no dynamic clock or time format).
Use `session_status` when the agent needs the current time; the status card
includes a timestamp line.
Configure with:
* `agents.defaults.userTimezone`
* `agents.defaults.timeFormat` (`auto` | `12` | `24`)
See [Date & Time](/date-time) for full behavior details.
## Skills
When eligible skills exist, OpenClaw injects a compact **available skills list**
(`formatSkillsForPrompt`) that includes the **file path** for each skill. The
prompt instructs the model to use `read` to load the SKILL.md at the listed
location (workspace, managed, or bundled). If no skills are eligible, the
Skills section is omitted.
```
<available_skills>
<skill>
<name>...</name>
<description>...</description>
<location>...</location>
</skill>
</available_skills>
```
This keeps the base prompt small while still enabling targeted skill usage.
## Documentation
When available, the system prompt includes a **Documentation** section that points to the
local OpenClaw docs directory (either `docs/` in the repo workspace or the bundled npm
package docs) and also notes the public mirror, source repo, community Discord, and
ClawHub ([https://clawhub.com](https://clawhub.com)) for skills discovery. The prompt instructs the model to consult local docs first
for OpenClaw behavior, commands, configuration, or architecture, and to run
`openclaw status` itself when possible (asking the user only when it lacks access).
# Timezones
OpenClaw standardizes timestamps so the model sees a **single reference time**.
## Message envelopes (local by default)
Inbound messages are wrapped in an envelope like:
```
[Provider ... 2026-01-05 16:26 PST] message text
```
The timestamp in the envelope is **host-local by default**, with minutes precision.
You can override this with:
```json5 theme={null}
{
agents: {
defaults: {
envelopeTimezone: "local", // "utc" | "local" | "user" | IANA timezone
envelopeTimestamp: "on", // "on" | "off"
envelopeElapsed: "on", // "on" | "off"
},
},
}
```
* `envelopeTimezone: "utc"` uses UTC.
* `envelopeTimezone: "user"` uses `agents.defaults.userTimezone` (falls back to host timezone).
* Use an explicit IANA timezone (e.g., `"Europe/Vienna"`) for a fixed offset.
* `envelopeTimestamp: "off"` removes absolute timestamps from envelope headers.
* `envelopeElapsed: "off"` removes elapsed time suffixes (the `+2m` style).
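Rendering such a timestamp for an IANA zone can be sketched with `Intl.DateTimeFormat` (illustrative only; not the actual formatter OpenClaw uses):

```typescript
// Format an envelope timestamp (minutes precision) for a target zone.
// Omitting timeZone falls back to the host-local zone (the default).
function envelopeTimestamp(tsMs: number, timeZone?: string): string {
  const parts = new Intl.DateTimeFormat("en-CA", {
    timeZone,
    year: "numeric",
    month: "2-digit",
    day: "2-digit",
    hour: "2-digit",
    minute: "2-digit",
    hourCycle: "h23",
    timeZoneName: "short", // e.g. "PST", "GMT+1", "UTC"
  }).formatToParts(new Date(tsMs));
  const get = (type: string) => parts.find((p) => p.type === type)?.value ?? "";
  return `${get("year")}-${get("month")}-${get("day")} ${get("hour")}:${get("minute")} ${get("timeZoneName")}`;
}
```
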
### Examples
**Local (default):**
```
[Signal Alice +1555 2026-01-18 00:19 PST] hello
```
**Fixed timezone:**
```
[Signal Alice +1555 2026-01-18 06:19 GMT+1] hello
```
**Elapsed time:**
```
[Signal Alice +1555 +2m 2026-01-18T05:19Z] follow-up
```
## Tool payloads (raw provider data + normalized fields)
Tool calls (`channels.discord.readMessages`, `channels.slack.readMessages`, etc.) return **raw provider timestamps**.
We also attach normalized fields for consistency:
* `timestampMs` (UTC epoch milliseconds)
* `timestampUtc` (ISO 8601 UTC string)
Raw provider fields are preserved.
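Normalization can be sketched like this (illustrative; the seconds-vs-milliseconds heuristic here is an assumption for raw values like Slack-style epoch-seconds strings):

```typescript
// Attach normalized timestamp fields alongside a raw provider timestamp.
function normalizeTimestamp(raw: string | number): { timestampMs: number; timestampUtc: string } {
  const n = typeof raw === "number" ? raw : Number(raw);
  // Assumption: values below 1e12 are epoch seconds, larger ones milliseconds.
  const timestampMs = Math.round(n < 1e12 ? n * 1000 : n);
  return {
    timestampMs,                                      // UTC epoch milliseconds
    timestampUtc: new Date(timestampMs).toISOString(), // ISO 8601 UTC string
  };
}
```
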
## User timezone for the system prompt
Set `agents.defaults.userTimezone` to tell the model the user's local time zone. If it is
unset, OpenClaw resolves the **host timezone at runtime** (no config write).
```json5 theme={null}
{
agents: { defaults: { userTimezone: "America/Chicago" } },
}
```
The system prompt includes:
* `Current Date & Time` section with local time and timezone
* `Time format: 12-hour` or `24-hour`
You can control the prompt format with `agents.defaults.timeFormat` (`auto` | `12` | `24`).
See [Date & Time](/date-time) for the full behavior and examples.
# TypeBox as protocol source of truth
Last updated: 2026-01-10
TypeBox is a TypeScript-first schema library. We use it to define the **Gateway
WebSocket protocol** (handshake, request/response, server events). Those schemas
drive **runtime validation**, **JSON Schema export**, and **Swift codegen** for
the macOS app. One source of truth; everything else is generated.
If you want the higher-level protocol context, start with
[Gateway architecture](/concepts/architecture).
## Mental model (30 seconds)
Every Gateway WS message is one of three frames:
* **Request**: `{ type: "req", id, method, params }`
* **Response**: `{ type: "res", id, ok, payload | error }`
* **Event**: `{ type: "event", event, payload, seq?, stateVersion? }`
The first frame **must** be a `connect` request. After that, clients can call
methods (e.g. `health`, `send`, `chat.send`) and subscribe to events (e.g.
`presence`, `tick`, `agent`).
Connection flow (minimal):
```
Client Gateway
|---- req:connect -------->|
|<---- res:hello-ok --------|
|<---- event:tick ----------|
|---- req:health ---------->|
|<---- res:health ----------|
```
Common methods + events:
| Category | Examples | Notes |
| --------- | --------------------------------------------------------- | ---------------------------------- |
| Core | `connect`, `health`, `status` | `connect` must be first |
| Messaging | `send`, `poll`, `agent`, `agent.wait` | side-effects need `idempotencyKey` |
| Chat | `chat.history`, `chat.send`, `chat.abort`, `chat.inject` | WebChat uses these |
| Sessions | `sessions.list`, `sessions.patch`, `sessions.delete` | session admin |
| Nodes | `node.list`, `node.invoke`, `node.pair.*` | Gateway WS + node actions |
| Events | `tick`, `presence`, `agent`, `chat`, `health`, `shutdown` | server push |
Authoritative list lives in `src/gateway/server.ts` (`METHODS`, `EVENTS`).
## Where the schemas live
* Source: `src/gateway/protocol/schema.ts`
* Runtime validators (AJV): `src/gateway/protocol/index.ts`
* Server handshake + method dispatch: `src/gateway/server.ts`
* Node client: `src/gateway/client.ts`
* Generated JSON Schema: `dist/protocol.schema.json`
* Generated Swift models: `apps/macos/Sources/OpenClawProtocol/GatewayModels.swift`
## Current pipeline
* `pnpm protocol:gen`
* writes JSON Schema (draft07) to `dist/protocol.schema.json`
* `pnpm protocol:gen:swift`
* generates Swift gateway models
* `pnpm protocol:check`
* runs both generators and verifies the output is committed
## How the schemas are used at runtime
* **Server side**: every inbound frame is validated with AJV. The handshake only
accepts a `connect` request whose params match `ConnectParams`.
* **Client side**: the JS client validates event and response frames before
using them.
* **Method surface**: the Gateway advertises the supported `methods` and
`events` in `hello-ok`.
## Example frames
Connect (first message):
```json theme={null}
{
"type": "req",
"id": "c1",
"method": "connect",
"params": {
"minProtocol": 2,
"maxProtocol": 2,
"client": {
"id": "openclaw-macos",
"displayName": "macos",
"version": "1.0.0",
"platform": "macos 15.1",
"mode": "ui",
"instanceId": "A1B2"
}
}
}
```
Hello-ok response:
```json theme={null}
{
"type": "res",
"id": "c1",
"ok": true,
"payload": {
"type": "hello-ok",
"protocol": 2,
"server": { "version": "dev", "connId": "ws-1" },
"features": { "methods": ["health"], "events": ["tick"] },
"snapshot": {
"presence": [],
"health": {},
"stateVersion": { "presence": 0, "health": 0 },
"uptimeMs": 0
},
"policy": { "maxPayload": 1048576, "maxBufferedBytes": 1048576, "tickIntervalMs": 30000 }
}
}
```
Request + response:
```json theme={null}
{ "type": "req", "id": "r1", "method": "health" }
```
```json theme={null}
{ "type": "res", "id": "r1", "ok": true, "payload": { "ok": true } }
```
Event:
```json theme={null}
{ "type": "event", "event": "tick", "payload": { "ts": 1730000000 }, "seq": 12 }
```
## Minimal client (Node.js)
Smallest useful flow: connect + health.
```ts theme={null}
import { WebSocket } from "ws";
const ws = new WebSocket("ws://127.0.0.1:18789");
ws.on("open", () => {
ws.send(
JSON.stringify({
type: "req",
id: "c1",
method: "connect",
params: {
minProtocol: 3,
maxProtocol: 3,
client: {
id: "cli",
displayName: "example",
version: "dev",
platform: "node",
mode: "cli",
},
},
}),
);
});
ws.on("message", (data) => {
const msg = JSON.parse(String(data));
if (msg.type === "res" && msg.id === "c1" && msg.ok) {
ws.send(JSON.stringify({ type: "req", id: "h1", method: "health" }));
}
if (msg.type === "res" && msg.id === "h1") {
console.log("health:", msg.payload);
ws.close();
}
});
```
## Worked example: add a method end-to-end
Example: add a new `system.echo` request that returns `{ ok: true, text }`.
1. **Schema (source of truth)**
Add to `src/gateway/protocol/schema.ts`:
```ts theme={null}
export const SystemEchoParamsSchema = Type.Object(
{ text: NonEmptyString },
{ additionalProperties: false },
);
export const SystemEchoResultSchema = Type.Object(
{ ok: Type.Boolean(), text: NonEmptyString },
{ additionalProperties: false },
);
```
Add both to `ProtocolSchemas` and export types:
```ts theme={null}
SystemEchoParams: SystemEchoParamsSchema,
SystemEchoResult: SystemEchoResultSchema,
```
```ts theme={null}
export type SystemEchoParams = Static<typeof SystemEchoParamsSchema>;
export type SystemEchoResult = Static<typeof SystemEchoResultSchema>;
```
2. **Validation**
In `src/gateway/protocol/index.ts`, export an AJV validator:
```ts theme={null}
export const validateSystemEchoParams = ajv.compile<SystemEchoParams>(SystemEchoParamsSchema);
```
3. **Server behavior**
Add a handler in `src/gateway/server-methods/system.ts`:
```ts theme={null}
export const systemHandlers: GatewayRequestHandlers = {
"system.echo": ({ params, respond }) => {
const text = String(params.text ?? "");
respond(true, { ok: true, text });
},
};
```
Register it in `src/gateway/server-methods.ts` (already merges `systemHandlers`),
then add `"system.echo"` to `METHODS` in `src/gateway/server.ts`.
4. **Regenerate**
```bash theme={null}
pnpm protocol:check
```
5. **Tests + docs**
Add a server test in `src/gateway/server.*.test.ts` and note the method in docs.
## Swift codegen behavior
The Swift generator emits:
* `GatewayFrame` enum with `req`, `res`, `event`, and `unknown` cases
* Strongly typed payload structs/enums
* `ErrorCode` values and `GATEWAY_PROTOCOL_VERSION`
Unknown frame types are preserved as raw payloads for forward compatibility.
## Versioning + compatibility
* `PROTOCOL_VERSION` lives in `src/gateway/protocol/schema.ts`.
* Clients send `minProtocol` + `maxProtocol`; the server rejects mismatches.
* The Swift models keep unknown frame types to avoid breaking older clients.
## Schema patterns and conventions
* Most objects use `additionalProperties: false` for strict payloads.
* `NonEmptyString` is the default for IDs and method/event names.
* The top-level `GatewayFrame` uses a **discriminator** on `type`.
* Methods with side effects usually require an `idempotencyKey` in params
(example: `send`, `poll`, `agent`, `chat.send`).
## Live schema JSON
Generated JSON Schema is in the repo at `dist/protocol.schema.json`. The
published raw file is typically available at:
* [https://raw.githubusercontent.com/openclaw/openclaw/main/dist/protocol.schema.json](https://raw.githubusercontent.com/openclaw/openclaw/main/dist/protocol.schema.json)
## When you change schemas
1. Update the TypeBox schemas.
2. Run `pnpm protocol:check`.
3. Commit the regenerated schema + Swift models.
# Typing indicators
Typing indicators are sent to the chat channel while a run is active. Use
`agents.defaults.typingMode` to control **when** typing starts and `typingIntervalSeconds`
to control **how often** it refreshes.
## Defaults
When `agents.defaults.typingMode` is **unset**, OpenClaw keeps the legacy behavior:
* **Direct chats**: typing starts immediately once the model loop begins.
* **Group chats with a mention**: typing starts immediately.
* **Group chats without a mention**: typing starts only when message text begins streaming.
* **Heartbeat runs**: typing is disabled.
## Modes
Set `agents.defaults.typingMode` to one of:
* `never` — no typing indicator, ever.
* `instant` — start typing **as soon as the model loop begins**, even if the run
later returns only the silent reply token.
* `thinking` — start typing on the **first reasoning delta** (requires
`reasoningLevel: "stream"` for the run).
* `message` — start typing on the **first non-silent text delta** (ignores
the `NO_REPLY` silent token).
Order of “how early it fires”:
`never` → `message` → `thinking` → `instant`
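The trigger rules can be sketched as a small predicate (illustrative; the event names are assumptions, not OpenClaw internals):

```typescript
type TypingMode = "never" | "instant" | "thinking" | "message";
type RunEvent = "loop-start" | "reasoning-delta" | "text-delta" | "silent-text-delta";

// Should this run event start the typing indicator for the given mode?
function startsTyping(mode: TypingMode, event: RunEvent): boolean {
  switch (mode) {
    case "never":
      return false;
    case "instant":
      return event === "loop-start"; // as soon as the model loop begins
    case "thinking":
      return event === "reasoning-delta"; // requires streamed reasoning
    case "message":
      return event === "text-delta"; // silent NO_REPLY deltas are ignored
  }
}
```
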
## Configuration
```json5 theme={null}
{
  agents: {
    defaults: {
      typingMode: "thinking",
      typingIntervalSeconds: 6,
    },
  },
}
```
You can override mode or cadence per session:
```json5 theme={null}
{
session: {
typingMode: "message",
typingIntervalSeconds: 4,
},
}
```
## Notes
* `message` mode won't show typing for silent-only replies (e.g. the `NO_REPLY`
  token used to suppress output).
* `thinking` only fires if the run streams reasoning (`reasoningLevel: "stream"`).
  If the model doesn't emit reasoning deltas, typing won't start.
* Heartbeats never show typing, regardless of mode.
* `typingIntervalSeconds` controls the **refresh cadence**, not the start time.
The default is 6 seconds.
# Usage tracking
## What it is
* Pulls provider usage/quota directly from their usage endpoints.
* No estimated costs; only the provider-reported windows.
## Where it shows up
* `/status` in chats: emoji-rich status card with session tokens + estimated cost (API key only). Provider usage shows for the **current model provider** when available.
* `/usage off|tokens|full` in chats: per-response usage footer (OAuth shows tokens only).
* `/usage cost` in chats: local cost summary aggregated from OpenClaw session logs.
* CLI: `openclaw status --usage` prints a full per-provider breakdown.
* CLI: `openclaw channels list` prints the same usage snapshot alongside provider config (use `--no-usage` to skip).
* macOS menu bar: “Usage” section under Context (only if available).
## Providers + credentials
* **Anthropic (Claude)**: OAuth tokens in auth profiles.
* **GitHub Copilot**: OAuth tokens in auth profiles.
* **Gemini CLI**: OAuth tokens in auth profiles.
* **Antigravity**: OAuth tokens in auth profiles.
* **OpenAI Codex**: OAuth tokens in auth profiles (accountId used when present).
* **MiniMax**: API key (coding plan key; `MINIMAX_CODE_PLAN_KEY` or `MINIMAX_API_KEY`); uses the 5-hour coding plan window.
* **z.ai**: API key via env/config/auth store.
Usage is hidden if no matching OAuth/API credentials exist.
# Credits
## The name
OpenClaw = CLAW + TARDIS, because every space lobster needs a time and space machine.
## Credits
* **Peter Steinberger** ([@steipete](https://x.com/steipete)) - Creator, lobster whisperer
* **Mario Zechner** ([@badlogicgames](https://x.com/badlogicgames)) - Pi creator, security pen tester
* **Clawd** - The space lobster who demanded a better name
## Core contributors
* **Maxim Vovshin** (@Hyaxia, [36747317+Hyaxia@users.noreply.github.com](mailto:36747317+Hyaxia@users.noreply.github.com)) - Blogwatcher skill
* **Nacho Iacovino** (@nachoiacovino, [nacho.iacovino@gmail.com](mailto:nacho.iacovino@gmail.com)) - Location parsing (Telegram and WhatsApp)
## License
MIT - Free as a lobster in the ocean.
> "We are all just playing with our own prompts." (An AI, probably high on tokens)

View File

@@ -0,0 +1,45 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt
> Use this file to discover all available pages before exploring further.
# Device model database (friendly names)
The macOS companion app shows friendly Apple device model names in the **Instances** UI by mapping Apple model identifiers (e.g. `iPad16,6`, `Mac16,6`) to human-readable names.
The mapping is vendored as JSON under:
* `apps/macos/Sources/OpenClaw/Resources/DeviceModels/`
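The mapping files are plain JSON objects keyed by Apple model identifier. A minimal sketch of the expected shape (placeholder names, not the vendored data):

```json theme={null}
{
  "iPad16,6": "<friendly iPad name>",
  "Mac16,6": "<friendly Mac name>"
}
```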
## Data source
We currently vendor the mapping from the MIT-licensed repository:
* `kyle-seongwoo-jun/apple-device-identifiers`
To keep builds deterministic, the JSON files are pinned to specific upstream commits (recorded in `apps/macos/Sources/OpenClaw/Resources/DeviceModels/NOTICE.md`).
## Updating the database
1. Pick the upstream commits you want to pin to (one for iOS, one for macOS).
2. Update the commit hashes in `apps/macos/Sources/OpenClaw/Resources/DeviceModels/NOTICE.md`.
3. Re-download the JSON files, pinned to those commits:
```bash theme={null}
IOS_COMMIT="<commit sha for ios-device-identifiers.json>"
MAC_COMMIT="<commit sha for mac-device-identifiers.json>"
curl -fsSL "https://raw.githubusercontent.com/kyle-seongwoo-jun/apple-device-identifiers/${IOS_COMMIT}/ios-device-identifiers.json" \
-o apps/macos/Sources/OpenClaw/Resources/DeviceModels/ios-device-identifiers.json
curl -fsSL "https://raw.githubusercontent.com/kyle-seongwoo-jun/apple-device-identifiers/${MAC_COMMIT}/mac-device-identifiers.json" \
-o apps/macos/Sources/OpenClaw/Resources/DeviceModels/mac-device-identifiers.json
```
4. Ensure `apps/macos/Sources/OpenClaw/Resources/DeviceModels/LICENSE.apple-device-identifiers.txt` still matches upstream (replace it if the upstream license changes).
5. Verify the macOS app builds cleanly (no warnings):
```bash theme={null}
swift build --package-path apps/macos
```

View File

@@ -0,0 +1,41 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt
> Use this file to discover all available pages before exploring further.
# RPC adapters
OpenClaw integrates external CLIs via JSON-RPC. Two patterns are used today.
## Pattern A: HTTP daemon (signal-cli)
* `signal-cli` runs as a daemon with JSON-RPC over HTTP.
* Event stream is SSE (`/api/v1/events`).
* Health probe: `/api/v1/check`.
* OpenClaw owns lifecycle when `channels.signal.autoStart=true`.
See [Signal](/channels/signal) for setup and endpoints.
## Pattern B: stdio child process (legacy: imsg)
> **Note:** For new iMessage setups, use [BlueBubbles](/channels/bluebubbles) instead.
* OpenClaw spawns `imsg rpc` as a child process (legacy iMessage integration).
* JSON-RPC is line-delimited over stdin/stdout (one JSON object per line).
* No TCP port, no daemon required.
Core methods used:
* `watch.subscribe` → notifications (`method: "message"`)
* `watch.unsubscribe`
* `send`
* `chats.list` (probe/diagnostics)
See [iMessage](/channels/imessage) for legacy setup and addressing (`chat_id` preferred).
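On the wire, Pattern B is one JSON-RPC object per line over stdin/stdout. An illustrative exchange (method names from the list above; param shapes and values are placeholders, not the real schema):

```json theme={null}
{"jsonrpc":"2.0","id":1,"method":"watch.subscribe","params":{}}
{"jsonrpc":"2.0","method":"message","params":{"chat_id":"<chat-id>","text":"<incoming text>"}}
{"jsonrpc":"2.0","id":2,"method":"send","params":{"chat_id":"<chat-id>","text":"<reply text>"}}
```

Note that the inbound `message` notification carries no `id` (per JSON-RPC 2.0, notifications expect no response), while requests like `send` do.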
## Adapter guidelines
* Gateway owns the process (start/stop tied to provider lifecycle).
* Keep RPC clients resilient: timeouts, restart on exit.
* Prefer stable IDs (e.g., `chat_id`) over display strings.

View File

@@ -0,0 +1,282 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt
> Use this file to discover all available pages before exploring further.
# Session Management & Compaction (Deep Dive)
This document explains how OpenClaw manages sessions end-to-end:
* **Session routing** (how inbound messages map to a `sessionKey`)
* **Session store** (`sessions.json`) and what it tracks
* **Transcript persistence** (`*.jsonl`) and its structure
* **Transcript hygiene** (provider-specific fixups before runs)
* **Context limits** (context window vs tracked tokens)
* **Compaction** (manual + auto-compaction) and where to hook pre-compaction work
* **Silent housekeeping** (e.g. memory writes that shouldn't produce user-visible output)
If you want a higher-level overview first, start with:
* [/concepts/session](/concepts/session)
* [/concepts/compaction](/concepts/compaction)
* [/concepts/session-pruning](/concepts/session-pruning)
* [/reference/transcript-hygiene](/reference/transcript-hygiene)
***
## Source of truth: the Gateway
OpenClaw is designed around a single **Gateway process** that owns session state.
* UIs (macOS app, web Control UI, TUI) should query the Gateway for session lists and token counts.
* In remote mode, session files are on the remote host; “checking your local Mac files” won't reflect what the Gateway is using.
***
## Two persistence layers
OpenClaw persists sessions in two layers:
1. **Session store (`sessions.json`)**
* Key/value map: `sessionKey -> SessionEntry`
* Small, mutable, safe to edit (or delete entries)
* Tracks session metadata (current session id, last activity, toggles, token counters, etc.)
2. **Transcript (`<sessionId>.jsonl`)**
* Append-only transcript with tree structure (entries have `id` + `parentId`)
* Stores the actual conversation + tool calls + compaction summaries
* Used to rebuild the model context for future turns
***
## On-disk locations
Per agent, on the Gateway host:
* Store: `~/.openclaw/agents/<agentId>/sessions/sessions.json`
* Transcripts: `~/.openclaw/agents/<agentId>/sessions/<sessionId>.jsonl`
* Telegram topic sessions: `.../<sessionId>-topic-<threadId>.jsonl`
OpenClaw resolves these via `src/config/sessions.ts`.
***
## Session keys (`sessionKey`)
A `sessionKey` identifies *which conversation bucket* you're in (routing + isolation).
Common patterns:
* Main/direct chat (per agent): `agent:<agentId>:<mainKey>` (default `main`)
* Group: `agent:<agentId>:<channel>:group:<id>`
* Room/channel (Discord/Slack): `agent:<agentId>:<channel>:channel:<id>` or `...:room:<id>`
* Cron: `cron:<job.id>`
* Webhook: `hook:<uuid>` (unless overridden)
The canonical rules are documented at [/concepts/session](/concepts/session).
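The patterns above can be sketched as tiny helpers (the function names are hypothetical; the real key construction lives in the Gateway):

```typescript theme={null}
// Illustrative builders for the documented sessionKey patterns.
function mainSessionKey(agentId: string, mainKey = "main"): string {
  // Main/direct chat bucket for an agent.
  return `agent:${agentId}:${mainKey}`;
}

function groupSessionKey(agentId: string, channel: string, groupId: string): string {
  // Group chat bucket, scoped by channel + group id.
  return `agent:${agentId}:${channel}:group:${groupId}`;
}

function cronSessionKey(jobId: string): string {
  // Cron jobs get their own bucket per job id.
  return `cron:${jobId}`;
}

// mainSessionKey("ops")                    → "agent:ops:main"
// groupSessionKey("ops", "telegram", "42") → "agent:ops:telegram:group:42"
```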
***
## Session ids (`sessionId`)
Each `sessionKey` points at a current `sessionId` (the transcript file that continues the conversation).
Rules of thumb:
* **Reset** (`/new`, `/reset`) creates a new `sessionId` for that `sessionKey`.
* **Daily reset** (default 4:00 AM local time on the gateway host) creates a new `sessionId` on the next message after the reset boundary.
* **Idle expiry** (`session.reset.idleMinutes` or legacy `session.idleMinutes`) creates a new `sessionId` when a message arrives after the idle window. When daily + idle are both configured, whichever expires first wins.
Implementation detail: the decision happens in `initSessionState()` in `src/auto-reply/reply/session.ts`.
***
## Session store schema (`sessions.json`)
The store's value type is `SessionEntry` in `src/config/sessions.ts`.
Key fields (not exhaustive):
* `sessionId`: current transcript id (filename is derived from this unless `sessionFile` is set)
* `updatedAt`: last activity timestamp
* `sessionFile`: optional explicit transcript path override
* `chatType`: `direct | group | room` (helps UIs and send policy)
* `provider`, `subject`, `room`, `space`, `displayName`: metadata for group/channel labeling
* Toggles:
* `thinkingLevel`, `verboseLevel`, `reasoningLevel`, `elevatedLevel`
* `sendPolicy` (per-session override)
* Model selection:
* `providerOverride`, `modelOverride`, `authProfileOverride`
* Token counters (best-effort / provider-dependent):
* `inputTokens`, `outputTokens`, `totalTokens`, `contextTokens`
* `compactionCount`: how often auto-compaction completed for this session key
* `memoryFlushAt`: timestamp for the last pre-compaction memory flush
* `memoryFlushCompactionCount`: compaction count when the last flush ran
The store is safe to edit, but the Gateway is the authority: it may rewrite or rehydrate entries as sessions run.
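A sketch of what a single entry can look like (illustrative values only; field presence and exact value types vary by provider, session type, and version):

```json5 theme={null}
{
  "agent:ops:main": {
    sessionId: "<uuid>",
    updatedAt: "2026-01-10T12:00:00Z",
    chatType: "direct",
    inputTokens: 1200,
    outputTokens: 800,
    totalTokens: 2000,
    contextTokens: 1800,
    compactionCount: 1,
  },
}
```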
***
## Transcript structure (`*.jsonl`)
Transcripts are managed by `@mariozechner/pi-coding-agent`'s `SessionManager`.
The file is JSONL:
* First line: session header (`type: "session"`, includes `id`, `cwd`, `timestamp`, optional `parentSession`)
* Then: session entries with `id` + `parentId` (tree)
Notable entry types:
* `message`: user/assistant/toolResult messages
* `custom_message`: extension-injected messages that *do* enter model context (can be hidden from UI)
* `custom`: extension state that does *not* enter model context
* `compaction`: persisted compaction summary with `firstKeptEntryId` and `tokensBefore`
* `branch_summary`: persisted summary when navigating a tree branch
OpenClaw intentionally does **not** “fix up” transcripts; the Gateway uses `SessionManager` to read/write them.
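A minimal sketch of the JSONL layout (field subsets only; real entries carry message content, timestamps, and more):

```json theme={null}
{"type":"session","id":"<session-id>","cwd":"<workspace path>","timestamp":"2026-01-10T12:00:00Z"}
{"type":"message","id":"e1","parentId":null,"role":"user"}
{"type":"message","id":"e2","parentId":"e1","role":"assistant"}
{"type":"compaction","id":"e3","parentId":"e2","firstKeptEntryId":"e2","tokensBefore":48000}
```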
***
## Context windows vs tracked tokens
Two different concepts matter:
1. **Model context window**: hard cap per model (tokens visible to the model)
2. **Session store counters**: rolling stats written into `sessions.json` (used for /status and dashboards)
If you're tuning limits:
* The context window comes from the model catalog (and can be overridden via config).
* `contextTokens` in the store is a runtime estimate/reporting value; don't treat it as a strict guarantee.
For more, see [/token-use](/reference/token-use).
***
## Compaction: what it is
Compaction summarizes older conversation into a persisted `compaction` entry in the transcript and keeps recent messages intact.
After compaction, future turns see:
* The compaction summary
* Messages after `firstKeptEntryId`
Compaction is **persistent** (unlike session pruning). See [/concepts/session-pruning](/concepts/session-pruning).
***
## When auto-compaction happens (Pi runtime)
In the embedded Pi agent, auto-compaction triggers in two cases:
1. **Overflow recovery**: the model returns a context overflow error → compact → retry.
2. **Threshold maintenance**: after a successful turn, when:
`contextTokens > contextWindow - reserveTokens`
Where:
* `contextWindow` is the model's context window
* `reserveTokens` is headroom reserved for prompts + the next model output
These are Pi runtime semantics (OpenClaw consumes the events, but Pi decides when to compact).
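The threshold check reduces to a one-line predicate (illustrative; Pi owns the real implementation):

```typescript theme={null}
// Threshold maintenance: compact when tracked context exceeds the
// window minus the reserved headroom.
function shouldCompact(
  contextTokens: number,
  contextWindow: number,
  reserveTokens: number,
): boolean {
  return contextTokens > contextWindow - reserveTokens;
}

// shouldCompact(190_000, 200_000, 16_384) → true  (only 10k free, 16k reserved)
// shouldCompact(100_000, 200_000, 16_384) → false
```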
***
## Compaction settings (`reserveTokens`, `keepRecentTokens`)
Pi's compaction settings live in its settings file:
```json5 theme={null}
{
compaction: {
enabled: true,
reserveTokens: 16384,
keepRecentTokens: 20000,
},
}
```
OpenClaw also enforces a safety floor for embedded runs:
* If `compaction.reserveTokens < reserveTokensFloor`, OpenClaw bumps it.
* Default floor is `20000` tokens.
* Set `agents.defaults.compaction.reserveTokensFloor: 0` to disable the floor.
* If it's already higher, OpenClaw leaves it alone.
Why: leave enough headroom for multi-turn “housekeeping” (like memory writes) before compaction becomes unavoidable.
Implementation: `ensurePiCompactionReserveTokens()` in `src/agents/pi-settings.ts`
(called from `src/agents/pi-embedded-runner.ts`).
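The floor behavior can be sketched as follows (illustrative only; the real logic is `ensurePiCompactionReserveTokens()`):

```typescript theme={null}
// Bump reserveTokens up to the floor; a floor of 0 disables the check.
function applyReserveTokensFloor(reserveTokens: number, floor = 20_000): number {
  return floor > 0 && reserveTokens < floor ? floor : reserveTokens;
}

// applyReserveTokensFloor(16_384)    → 20_000 (bumped to the default floor)
// applyReserveTokensFloor(32_000)    → 32_000 (already higher, left alone)
// applyReserveTokensFloor(16_384, 0) → 16_384 (floor disabled)
```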
***
## User-visible surfaces
You can observe compaction and session state via:
* `/status` (in any chat session)
* `openclaw status` (CLI)
* `openclaw sessions` / `sessions --json`
* Verbose mode: `🧹 Auto-compaction complete` + compaction count
***
## Silent housekeeping (`NO_REPLY`)
OpenClaw supports “silent” turns for background tasks where the user should not see intermediate output.
Convention:
* The assistant starts its output with `NO_REPLY` to indicate “do not deliver a reply to the user”.
* OpenClaw strips/suppresses this in the delivery layer.
As of `2026.1.10`, OpenClaw also suppresses **draft/typing streaming** when a partial chunk begins with `NO_REPLY`, so silent operations don't leak partial output mid-turn.
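A minimal sketch of the convention (hypothetical helper; OpenClaw's delivery layer does the equivalent internally):

```typescript theme={null}
const NO_REPLY = "NO_REPLY";

// Returns null when the reply should be suppressed entirely.
function toDeliverable(reply: string): string | null {
  return reply.startsWith(NO_REPLY) ? null : reply;
}

// toDeliverable("NO_REPLY memory flushed") → null
// toDeliverable("Done!")                   → "Done!"
```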
***
## Pre-compaction “memory flush” (implemented)
Goal: before auto-compaction happens, run a silent agentic turn that writes durable
state to disk (e.g. `memory/YYYY-MM-DD.md` in the agent workspace) so compaction can't
erase critical context.
OpenClaw uses the **pre-threshold flush** approach:
1. Monitor session context usage.
2. When it crosses a “soft threshold” (below Pis compaction threshold), run a silent
“write memory now” directive to the agent.
3. Use `NO_REPLY` so the user sees nothing.
Config (`agents.defaults.compaction.memoryFlush`):
* `enabled` (default: `true`)
* `softThresholdTokens` (default: `4000`)
* `prompt` (user message for the flush turn)
* `systemPrompt` (extra system prompt appended for the flush turn)
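Assembled as an explicit config block, the defaults above look roughly like this (a sketch based on the fields listed; exact nesting follows `agents.defaults.compaction.memoryFlush`):

```json5 theme={null}
{
  agents: {
    defaults: {
      compaction: {
        memoryFlush: {
          enabled: true,
          softThresholdTokens: 4000,
          // prompt and systemPrompt are optional string overrides;
          // the defaults already include a NO_REPLY hint.
        },
      },
    },
  },
}
```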
Notes:
* The default prompt/system prompt include a `NO_REPLY` hint to suppress delivery.
* The flush runs once per compaction cycle (tracked in `sessions.json`).
* The flush runs only for embedded Pi sessions (CLI backends skip it).
* The flush is skipped when the session workspace is read-only (`workspaceAccess: "ro"` or `"none"`).
* See [Memory](/concepts/memory) for the workspace file layout and write patterns.
Pi also exposes a `session_before_compact` hook in the extension API, but OpenClaw's
flush logic lives on the Gateway side today.
***
## Troubleshooting checklist
* Session key wrong? Start with [/concepts/session](/concepts/session) and confirm the `sessionKey` in `/status`.
* Store vs transcript mismatch? Confirm the Gateway host and the store path from `openclaw status`.
* Compaction spam? Check:
* model context window (too small)
* compaction settings (`reserveTokens` too high for the model window can cause earlier compaction)
* tool-result bloat: enable/tune session pruning
* Silent turns leaking? Confirm the reply starts with `NO_REPLY` (exact token) and you're on a build that includes the streaming suppression fix.

View File

@@ -0,0 +1,52 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt
> Use this file to discover all available pages before exploring further.
# Tests
* Full testing kit (suites, live, Docker): [Testing](/help/testing)
* `pnpm test:force`: Kills any lingering gateway process holding the default control port, then runs the full Vitest suite with an isolated gateway port so server tests don't collide with a running instance. Use this when a prior gateway run left port 18789 occupied.
* `pnpm test:coverage`: Runs Vitest with V8 coverage. Global thresholds are 70% lines/branches/functions/statements. Coverage excludes integration-heavy entrypoints (CLI wiring, gateway/telegram bridges, webchat static server) to keep the target focused on unit-testable logic.
* `pnpm test:e2e`: Runs gateway end-to-end smoke tests (multi-instance WS/HTTP/node pairing).
* `pnpm test:live`: Runs provider live tests (minimax/zai). Requires API keys and `LIVE=1` (or provider-specific `*_LIVE_TEST=1`) to unskip.
## Model latency bench (local keys)
Script: [`scripts/bench-model.ts`](https://github.com/openclaw/openclaw/blob/main/scripts/bench-model.ts)
Usage:
* `source ~/.profile && pnpm tsx scripts/bench-model.ts --runs 10`
* Optional env: `MINIMAX_API_KEY`, `MINIMAX_BASE_URL`, `MINIMAX_MODEL`, `ANTHROPIC_API_KEY`
* Default prompt: “Reply with a single word: ok. No punctuation or extra text.”
Last run (2025-12-31, 20 runs):
* minimax median 1279ms (min 1114, max 2431)
* opus median 2454ms (min 1224, max 3170)
## Onboarding E2E (Docker)
Docker is optional; this is only needed for containerized onboarding smoke tests.
Full cold-start flow in a clean Linux container:
```bash theme={null}
scripts/e2e/onboard-docker.sh
```
This script drives the interactive wizard via a pseudo-tty, verifies config/workspace/session files, then starts the gateway and runs `openclaw health`.
## QR import smoke (Docker)
Ensures `qrcode-terminal` loads under Node 22+ in Docker:
```bash theme={null}
pnpm test:docker:qr
```