{ "title": "Messages", "content": "This page ties together how OpenClaw handles inbound messages, sessions, queueing,\nstreaming, and reasoning visibility.\n\n## Message flow (high level)\n\nKey knobs live in configuration:\n\n* `messages.*` for prefixes, queueing, and group behavior.\n* `agents.defaults.*` for block streaming and chunking defaults.\n* Channel overrides (`channels.whatsapp.*`, `channels.telegram.*`, etc.) for caps and streaming toggles.\n\nSee [Configuration](/gateway/configuration) for full schema.\n\nChannels can redeliver the same message after reconnects. OpenClaw keeps a\nshort-lived cache keyed by channel/account/peer/session/message id so duplicate\ndeliveries do not trigger another agent run.\n\n## Inbound debouncing\n\nRapid consecutive messages from the **same sender** can be batched into a single\nagent turn via `messages.inbound`. Debouncing is scoped per channel + conversation\nand uses the most recent message for reply threading/IDs.\n\nConfig (global default + per-channel overrides):\n\n* Debounce applies to **text-only** messages; media/attachments flush immediately.\n* Control commands bypass debouncing so they remain standalone.\n\n## Sessions and devices\n\nSessions are owned by the gateway, not by clients.\n\n* Direct chats collapse into the agent main session key.\n* Groups/channels get their own session keys.\n* The session store and transcripts live on the gateway host.\n\nMultiple devices/channels can map to the same session, but history is not fully\nsynced back to every client. Recommendation: use one primary device for long\nconversations to avoid divergent context. The Control UI and TUI always show the\ngateway-backed session transcript, so they are the source of truth.\n\nDetails: [Session management](/concepts/session).\n\n## Inbound bodies and history context\n\nOpenClaw separates the **prompt body** from the **command body**:\n\n* `Body`: prompt text sent to the agent. 
This may include channel envelopes and\noptional history wrappers.\n* `CommandBody`: raw user text for directive/command parsing.\n* `RawBody`: legacy alias for `CommandBody` (kept for compatibility).\n\nWhen a channel supplies history, it uses a shared wrapper:\n\n* `[Chat messages since your last reply - for context]`\n* `[Current message - respond to this]`\n\nFor **non-direct chats** (groups/channels/rooms), the **current message body** is prefixed with the\nsender label (same style used for history entries). This keeps real-time and queued/history\nmessages consistent in the agent prompt.\n\nHistory buffers are **pending-only**: they include group messages that did *not*\ntrigger a run (for example, mention-gated messages) and **exclude** messages\nalready in the session transcript.\n\nDirective stripping only applies to the **current message** section so history\nremains intact. Channels that wrap history should set `CommandBody` (or\n`RawBody`) to the original message text and keep `Body` as the combined prompt.\n\nHistory buffers are configurable via `messages.groupChat.historyLimit` (global\ndefault) and per-channel overrides like `channels.slack.historyLimit` or\n`channels.telegram.accounts..historyLimit` (set `0` to disable).\n\n## Queueing and followups\n\nIf a run is already active, inbound messages can be queued, steered into the\ncurrent run, or collected for a followup turn.\n\n* Configure via `messages.queue` (and `messages.queue.byChannel`).\n* Modes: `interrupt`, `steer`, `followup`, `collect`, plus backlog variants.\n\nDetails: [Queueing](/concepts/queue).\n\n## Streaming, chunking, and batching\n\nBlock streaming sends partial replies as the model produces text blocks.\nChunking respects channel text limits and avoids splitting fenced code.\n\n* `agents.defaults.blockStreamingDefault` (`on|off`, default off)\n* `agents.defaults.blockStreamingBreak` (`text_end|message_end`)\n* `agents.defaults.blockStreamingChunk` 
(`minChars|maxChars|breakPreference`)\n* `agents.defaults.blockStreamingCoalesce` (idle-based batching)\n* `agents.defaults.humanDelay` (human-like pause between block replies)\n* Channel overrides: `*.blockStreaming` and `*.blockStreamingCoalesce` (non-Telegram channels require explicit `*.blockStreaming: true`)\n\nDetails: [Streaming + chunking](/concepts/streaming).\n\n## Reasoning visibility and tokens\n\nOpenClaw can expose or hide model reasoning:\n\n* `/reasoning on|off|stream` controls visibility.\n* Reasoning content still counts toward token usage when produced by the model.\n* Telegram supports streaming reasoning into the draft bubble.\n\nDetails: [Thinking + reasoning directives](/tools/thinking) and [Token use](/token-use).\n\n## Prefixes, threading, and replies\n\nOutbound message formatting is centralized in `messages`:\n\n* `messages.responsePrefix`, `channels..responsePrefix`, and `channels..accounts..responsePrefix` (outbound prefix cascade), plus `channels.whatsapp.messagePrefix` (WhatsApp inbound prefix)\n* Reply threading via `replyToMode` and per-channel defaults\n\nDetails: [Configuration](/gateway/configuration#messages) and channel docs.", "code_samples": [ { "code": "Inbound message\n -> routing/bindings -> session key\n -> queue (if a run is active)\n -> agent run (streaming + tools)\n -> outbound replies (channel limits + chunking)", "language": "unknown" } ], "headings": [ { "level": "h2", "text": "Message flow (high level)", "id": "message-flow-(high-level)" }, { "level": "h2", "text": "Inbound dedupe", "id": "inbound-dedupe" }, { "level": "h2", "text": "Inbound debouncing", "id": "inbound-debouncing" }, { "level": "h2", "text": "Sessions and devices", "id": "sessions-and-devices" }, { "level": "h2", "text": "Inbound bodies and history context", "id": "inbound-bodies-and-history-context" }, { "level": "h2", "text": "Queueing and followups", "id": "queueing-and-followups" }, { "level": "h2", "text": "Streaming, chunking, and batching", 
"id": "streaming,-chunking,-and-batching" }, { "level": "h2", "text": "Reasoning visibility and tokens", "id": "reasoning-visibility-and-tokens" }, { "level": "h2", "text": "Prefixes, threading, and replies", "id": "prefixes,-threading,-and-replies" } ], "url": "llms-txt#messages", "links": [] }