openclaw-skill/openclaw-knowhow-skill/output/openclaw-docs_data/pages/Agent_Runtime__bb9503e662.json
Selig 4c966a3ad2 Initial commit: OpenClaw Skill Collection
6 custom skills (assign-task, dispatch-webhook, daily-briefing,
task-capture, qmd-brain, tts-voice) with technical documentation.
Compatible with Claude Code, OpenClaw, Codex CLI, and OpenCode.
2026-03-13 10:58:30 +08:00

{
"title": "Agent Runtime 🤖",
"content": "OpenClaw runs a single embedded agent runtime derived from **pi-mono**.\n\n## Workspace (required)\n\nOpenClaw uses a single agent workspace directory (`agents.defaults.workspace`) as the agents **only** working directory (`cwd`) for tools and context.\n\nRecommended: use `openclaw setup` to create `~/.openclaw/openclaw.json` if missing and initialize the workspace files.\n\nFull workspace layout + backup guide: [Agent workspace](/concepts/agent-workspace)\n\nIf `agents.defaults.sandbox` is enabled, non-main sessions can override this with\nper-session workspaces under `agents.defaults.sandbox.workspaceRoot` (see\n[Gateway configuration](/gateway/configuration)).\n\n## Bootstrap files (injected)\n\nInside `agents.defaults.workspace`, OpenClaw expects these user-editable files:\n\n* `AGENTS.md` — operating instructions + “memory”\n* `SOUL.md` — persona, boundaries, tone\n* `TOOLS.md` — user-maintained tool notes (e.g. `imsg`, `sag`, conventions)\n* `BOOTSTRAP.md` — one-time first-run ritual (deleted after completion)\n* `IDENTITY.md` — agent name/vibe/emoji\n* `USER.md` — user profile + preferred address\n\nOn the first turn of a new session, OpenClaw injects the contents of these files directly into the agent context.\n\nBlank files are skipped. Large files are trimmed and truncated with a marker so prompts stay lean (read the file for full content).\n\nIf a file is missing, OpenClaw injects a single “missing file” marker line (and `openclaw setup` will create a safe default template).\n\n`BOOTSTRAP.md` is only created for a **brand new workspace** (no other bootstrap files present). If you delete it after completing the ritual, it should not be recreated on later restarts.\n\nTo disable bootstrap file creation entirely (for pre-seeded workspaces), set:\n\nCore tools (read/exec/edit/write and related system tools) are always available,\nsubject to tool policy. `apply_patch` is optional and gated by\n`tools.exec.applyPatch`. 
`TOOLS.md` does **not** control which tools exist; it's\nguidance for how *you* want them used.\n\n## Skills\n\nOpenClaw loads skills from three locations (workspace wins on name conflict):\n\n* Bundled (shipped with the install)\n* Managed/local: `~/.openclaw/skills`\n* Workspace: `<workspace>/skills`\n\nSkills can be gated by config/env (see `skills` in [Gateway configuration](/gateway/configuration)).\n\n## pi-mono integration\n\nOpenClaw reuses pieces of the pi-mono codebase (models/tools), but **session management, discovery, and tool wiring are OpenClaw-owned**.\n\n* No pi-coding agent runtime.\n* No `~/.pi/agent` or `<workspace>/.pi` settings are consulted.\n\n## Sessions\n\nSession transcripts are stored as JSONL at:\n\n* `~/.openclaw/agents/<agentId>/sessions/<sessionId>.jsonl`\n\nThe session ID is stable and chosen by OpenClaw.\nLegacy Pi/Tau session folders are **not** read.\n\n## Steering while streaming\n\nWhen queue mode is `steer`, inbound messages are injected into the current run.\nThe queue is checked **after each tool call**; if a queued message is present,\nremaining tool calls from the current assistant message are skipped (error tool\nresults with \"Skipped due to queued user message.\"), then the queued user\nmessage is injected before the next assistant response.\n\nWhen queue mode is `followup` or `collect`, inbound messages are held until the\ncurrent turn ends, then a new agent turn starts with the queued payloads.
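A hedged sketch of pinning a queue mode; the mode names (`steer`, `followup`, `collect`) come from this page, but the config key shown is hypothetical (see [Queue](/concepts/queue) for the real knobs):

```json5
// Hypothetical key name; the mode names are from this page.
agents: { defaults: { queueMode: 'steer' } }
```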
See\n[Queue](/concepts/queue) for mode + debounce/cap behavior.\n\nBlock streaming sends completed assistant blocks as soon as they finish; it is\n**off by default** (`agents.defaults.blockStreamingDefault: \"off\"`).\nTune the boundary via `agents.defaults.blockStreamingBreak` (`text_end` vs `message_end`; defaults to text\\_end).\nControl soft block chunking with `agents.defaults.blockStreamingChunk` (defaults to\n8001200 chars; prefers paragraph breaks, then newlines; sentences last).\nCoalesce streamed chunks with `agents.defaults.blockStreamingCoalesce` to reduce\nsingle-line spam (idle-based merging before send). Non-Telegram channels require\nexplicit `*.blockStreaming: true` to enable block replies.\nVerbose tool summaries are emitted at tool start (no debounce); Control UI\nstreams tool output via agent events when available.\nMore details: [Streaming + chunking](/concepts/streaming).\n\nModel refs in config (for example `agents.defaults.model` and `agents.defaults.models`) are parsed by splitting on the **first** `/`.\n\n* Use `provider/model` when configuring models.\n* If the model ID itself contains `/` (OpenRouter-style), include the provider prefix (example: `openrouter/moonshotai/kimi-k2`).\n* If you omit the provider, OpenClaw treats the input as an alias or a model for the **default provider** (only works when there is no `/` in the model ID).\n\n## Configuration (minimal)\n\n* `agents.defaults.workspace`\n* `channels.whatsapp.allowFrom` (strongly recommended)\n\n*Next: [Group Chats](/concepts/group-messages)* 🦞",
"code_samples": [],
"headings": [
{
"level": "h2",
"text": "Workspace (required)",
"id": "workspace-(required)"
},
{
"level": "h2",
"text": "Bootstrap files (injected)",
"id": "bootstrap-files-(injected)"
},
{
"level": "h2",
"text": "Built-in tools",
"id": "built-in-tools"
},
{
"level": "h2",
"text": "Skills",
"id": "skills"
},
{
"level": "h2",
"text": "pi-mono integration",
"id": "pi-mono-integration"
},
{
"level": "h2",
"text": "Sessions",
"id": "sessions"
},
{
"level": "h2",
"text": "Steering while streaming",
"id": "steering-while-streaming"
},
{
"level": "h2",
"text": "Model refs",
"id": "model-refs"
},
{
"level": "h2",
"text": "Configuration (minimal)",
"id": "configuration-(minimal)"
}
],
"url": "llms-txt#agent-runtime-🤖",
"links": []
}