16 Commits

Author SHA1 Message Date
8bacc868bd Add 5 missing skills to repo for sync coverage
github-repo-search, gooddays-calendar, luxtts,
openclaw-tavily-search, skill-vetter — previously only
in workspace, now tracked in Gitea for full sync.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-14 20:36:30 +08:00
6451d73732 Merge pull request 'improve(daily-briefing): 優化輸出格式與排版,提升閱讀體驗' (#3) from kaiwu/openclaw-skill:improve/daily-briefing-ux into main 2026-03-14 20:27:50 +08:00
394492168b Merge pull request 'improve(tts-voice): avoid shell-based curl execution in auth/health checks' (#4) from yucheng/openclaw-skill:improve/tts-voice-safe-curl-spawn into main 2026-03-14 20:27:41 +08:00
c0be5c46b8 Merge pull request 'improve(qmd-brain): 強化命令執行安全,降低注入風險' (#5) from tiangong/openclaw-skill:improve/qmd-brain-command-injection-hardening into main 2026-03-14 20:27:27 +08:00
b48fd7259b Add obsidian-official-cli skill from ClawHub
Obsidian CLI (v1.12+) reference skill for all agents — covers file ops,
search, tasks, templates, plugins, sync, and dev tools.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-14 20:23:00 +08:00
4cc98e7c0f improve(qmd-brain): harden command execution against injection 2026-03-14 15:03:34 +08:00
yucheng d144f5641e improve(tts-voice): use spawnSync for curl auth/health checks 2026-03-14 12:03:52 +08:00
20801b19be improve(daily-briefing): 優化輸出格式與排版,提升閱讀體驗 2026-03-14 09:00:54 +08:00
f1a6df4ca4 add 6 skills to repo + update skill-review for xiaoming
- Add code-interpreter, kokoro-tts, remotion-best-practices,
  research-to-paper-slides, summarize, tavily-tool to source repo
- skill-review: add main/xiaoming agent mapping in handler.ts + SKILL.md
- tts-voice: handler.ts updates from agent workspace

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 22:59:43 +08:00
da6e932d51 Merge pull request 'improve(dispatch-webhook): 強化輸入驗證與參數邊界防護' (#2) from tiangong/openclaw-skill:improve/dispatch-webhook-input-guardrails into main 2026-03-13 22:03:25 +08:00
76a2d97563 Merge pull request 'improve(tts-voice): 改用 spawnSync 參數陣列避免 shell quoting 問題' (#1) from yucheng/openclaw-skill:improve/tts-voice-safe-curl-args into main 2026-03-13 22:03:17 +08:00
5918d671bb improve(dispatch-webhook): 強化輸入驗證與參數邊界防護 2026-03-13 15:04:29 +08:00
36e0d349b2 improve(tts-voice): use spawnSync args to avoid shell quoting issues 2026-03-13 12:02:57 +08:00
16c8ddcfa6 feat: add ProjectManager skill package
包含 5 個專案管理 skills、2 個主專案定義、4 個共用規則及 dashboard。

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 12:02:15 +08:00
9df7c7c4cb docs: add skill-review to README 2026-03-13 11:11:52 +08:00
9ab15e99e5 feat(skill-review): Agent PR workflow skill
Enables agents (tiangong, kaiwu, yucheng) to review skills
and submit improvement PRs via Gitea fork → branch → PR workflow.
2026-03-13 11:11:27 +08:00
59 changed files with 4304 additions and 72 deletions

ProjectManager/README.md Normal file

@@ -0,0 +1,12 @@
# OpenClaw Project Package
此資料夾為可直接匯入或參考的原始檔案結構,包含:
- `skills/`:5 個 OpenClaw skills
- `rules/`:附件、文件編號、SOP、工作記錄模板
- `dashboards/`:專案總覽
- `projects/`:2 個主專案的 `project.yaml`
## 主專案
- `P-ONLINE-DOC-AGENT`
- `P-ETL-VISUAL-PLATFORM`


@@ -0,0 +1,30 @@
# Projects Overview
## 主專案總覽
| Project ID | 名稱 | 狀態 | 進度 | 下一步 |
|---|---|---:|---:|---|
| P-ONLINE-DOC-AGENT | Agent 操作線上文件系統 | active | 25% | 建立 EFLOW 附件規則 |
| P-ETL-VISUAL-PLATFORM | ETL 資料庫轉換可視化網頁 | active | 35% | 建立 SP 與目標欄位 mapping v1 |
## 子專案總覽
### P-ONLINE-DOC-AGENT
| Subproject ID | 名稱 | 狀態 | 備註 |
|---|---|---|---|
| SP-OFFICIAL-DOC | 線上公文操作 | doing | 建立 SOP v1 |
| SP-EFLOW | EFLOW 操作 | doing | 整理送件與附件流程 |
| SP-FORMBUILD | FORMBUILD 操作 | todo | 尚待蒐集畫面與欄位 |
| SP-ATTACHMENT-RULES | 附件規則 | doing | 定義資料夾與命名 |
| SP-DOC-ID-RULES | 文件編號規則 | todo | 定義 CASE 與系統編號 |
### P-ETL-VISUAL-PLATFORM
| Subproject ID | 名稱 | 狀態 | 備註 |
|---|---|---|---|
| SP-ETL-PIPELINE | ETL 流程整理 | doing | 蒐集流程與來源資料 |
| SP-DB-SP-ANALYSIS | SP 分析 | doing | 匯整 SP 清單 |
| SP-TARGET-FIELD-MAPPING | 目標欄位 mapping | todo | 建立欄位對照表 |
| SP-LLM-DIFF-CHECK | LLM 差異比對 | todo | 定義比較規則 |
| SP-VISUAL-WEB | 可視化頁面 | todo | 定義顯示模組 |


@@ -0,0 +1,13 @@
project_id: P-ETL-VISUAL-PLATFORM
name: ETL 資料庫轉換可視化網頁
status: active
objective: >-
建立 ETL 資料庫轉換可視化網頁,並以 LLM 協助比對 Stored Procedure 與目標文件欄位差異。
current_phase: analysis
next_action: 建立 SP 與目標欄位 mapping 表 v1
subprojects:
- SP-ETL-PIPELINE
- SP-DB-SP-ANALYSIS
- SP-TARGET-FIELD-MAPPING
- SP-LLM-DIFF-CHECK
- SP-VISUAL-WEB


@@ -0,0 +1,14 @@
project_id: P-ONLINE-DOC-AGENT
name: Agent 操作線上文件系統
status: active
objective: >-
建立一套可由 Agent 依 SOP 操作的線上文件流程,涵蓋線上公文、EFLOW、FORMBUILD,
並整合附件規則、文件編號規則與工作記錄。
current_phase: definition
next_action: 建立 EFLOW 附件規則與操作步驟 v1
subprojects:
- SP-OFFICIAL-DOC
- SP-EFLOW
- SP-FORMBUILD
- SP-ATTACHMENT-RULES
- SP-DOC-ID-RULES


@@ -0,0 +1,27 @@
# 附件歸檔規則
## 目錄結構
```text
attachments/
official-doc/
DOC-YYYY-NNN/
eflow/
EFL-YYYY-NNN/
formbuild/
FBD-YYYY-NNN/
```
## 檔名格式
`[文件編號]_[附件類型]_[版本]_[日期].[副檔名]`
### 範例
- `DOC-2026-001_申請書_v1_2026-03-12.pdf`
- `EFL-2026-003_核准函_v2_2026-03-12.pdf`
## Agent 上傳前檢查
- 是否有文件編號
- 是否放在正確資料夾
- 檔名是否正確
- 版本是否正確
- 是否缺少必要附件
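下面附上一段上傳前檢查的示意(Python,非正式工具;`DOC/EFL/FBD` 前綴與日期格式依上述規則假設):
```python
import re

# 假設:檔名格式為 [文件編號]_[附件類型]_[版本]_[日期].[副檔名]
FILENAME_PATTERN = re.compile(
    r'^(?P<doc_id>(DOC|EFL|FBD)-\d{4}-\d{3})'  # 文件編號,如 DOC-2026-001
    r'_(?P<type>[^_]+)'                        # 附件類型,如 申請書
    r'_(?P<version>v\d+)'                      # 版本,如 v1
    r'_(?P<date>\d{4}-\d{2}-\d{2})'            # 日期,如 2026-03-12
    r'\.(?P<ext>\w+)$'                         # 副檔名
)

def check_attachment_name(filename: str) -> dict:
    """檢查檔名是否符合命名規則,並回傳解析出的欄位(僅為示意)。"""
    m = FILENAME_PATTERN.match(filename)
    if not m:
        return {'ok': False, 'reason': '檔名不符合命名格式'}
    return {'ok': True, **m.groupdict()}

print(check_attachment_name('DOC-2026-001_申請書_v1_2026-03-12.pdf'))
```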


@@ -0,0 +1,20 @@
# 文件編號規則
## 系統文件編號
- 線上公文:`DOC-YYYY-NNN`
- EFLOW:`EFL-YYYY-NNN`
- FORMBUILD:`FBD-YYYY-NNN`
## 跨系統案件編號
- `CASE-YYYY-NNN`
## 範例
- `CASE-2026-015`
- `DOC-2026-021`
- `EFL-2026-009`
## 規則
1. 新建文件前先判斷系統類型
2. 若屬同一案件,優先掛既有 CASE-ID
3. 所有附件檔名必須以前述文件編號開頭
4. 文件記錄中需保留 CASE 與系統文件編號的對應


@@ -0,0 +1,40 @@
# Web Operation SOP Template
## 系統名稱
- 線上公文 / EFLOW / FORMBUILD
## 操作目的
- 例如:建立新案件、送件、補附件、更新欄位
## 前置資料
- 帳號/登入條件
- 文件編號
- CASE-ID(若有)
- 附件清單
- 欄位資料
## 操作步驟
1. 進入系統
2. 開啟目標頁面
3. 輸入或確認必要欄位
4. 上傳附件
5. 執行送出或儲存
6. 確認結果
## 上傳附件檢查
- 路徑:
- 檔名:
- 版本:
- 是否齊全:
## 結果記錄
- 成功 / 失敗 / 阻塞
- 畫面狀態
- 文件編號
- 下一步
## 常見錯誤
- 登入失敗
- 欄位缺漏
- 附件格式不符
- 權限不足


@@ -0,0 +1,25 @@
# Worklog Format
## 日期
- YYYY-MM-DD
## 所屬主專案
- P-ONLINE-DOC-AGENT / P-ETL-VISUAL-PLATFORM
## 所屬子專案
- 例如:SP-EFLOW
## 本次完成
-
## 本次阻塞
-
## 使用文件 / 附件
-
## 文件編號 / CASE-ID
-
## 下一步
-


@@ -0,0 +1,62 @@
---
name: attachment-filing-rules
description: 用於規範不同線上系統所需附件的資料夾放置、命名、版本與 Agent 取用方式。
---
# Purpose
本 skill 用來統一附件管理,讓 Agent 能穩定找到正確檔案並完成上傳。
# Scope
適用系統:
- 線上公文
- EFLOW
- FORMBUILD
# Folder Structure
```text
attachments/
official-doc/
DOC-YYYY-NNN/
eflow/
EFL-YYYY-NNN/
formbuild/
FBD-YYYY-NNN/
```
# File Naming Convention
格式:
`[文件編號]_[附件類型]_[版本]_[日期].[副檔名]`
範例:
- `DOC-2026-001_申請書_v1_2026-03-12.pdf`
- `EFL-2026-003_核准函_v2_2026-03-12.pdf`
- `FBD-2026-007_附件清單_v1_2026-03-12.xlsx`
# Rules
## 1. 上傳前檢查
Agent 上傳附件前必須確認:
1. 有文件編號
2. 位於正確系統資料夾
3. 檔名格式正確
4. 版本號正確
5. 是最新檔案
6. 必要附件已齊全
## 2. 禁止事項
- 不可上傳檔名不明或臨時檔
- 不可上傳無版本資訊之重複檔案
- 不可跨系統誤用資料夾中的附件
## 3. 回應格式
當被要求尋找或準備附件時,回應:
- 系統類型
- 文件編號
- 預期附件類型
- 建議資料夾路徑
- 檔名檢查結果
- 缺漏項目


@@ -0,0 +1,54 @@
---
name: doc-id-convention
description: 用於規範線上公文、EFLOW、FORMBUILD 的文件編號與跨系統案件識別方式。
---
# Purpose
本 skill 用來讓文件、附件、案件、流程之間能一致對應。
# ID Types
## 1. 系統文件編號
- 線上公文:`DOC-YYYY-NNN`
- EFLOW:`EFL-YYYY-NNN`
- FORMBUILD:`FBD-YYYY-NNN`
範例:
- `DOC-2026-001`
- `EFL-2026-014`
- `FBD-2026-007`
## 2. 案件編號(跨系統)
若同一件事情會跨多個系統,建立案件編號:
- `CASE-YYYY-NNN`
範例:
- `CASE-2026-015`
- `DOC-2026-021`
- `EFL-2026-009`
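以下是一段依上述前綴規則判斷編號所屬系統的示意(Python,僅供參考):
```python
import re

# 假設:前綴對應表依上述 ID Types 規則
PREFIX_SYSTEM = {
    'DOC': '線上公文',
    'EFL': 'EFLOW',
    'FBD': 'FORMBUILD',
    'CASE': '跨系統案件',
}
ID_PATTERN = re.compile(r'^(DOC|EFL|FBD|CASE)-(\d{4})-(\d{3})$')

def classify_doc_id(doc_id: str) -> str:
    """回傳編號所屬系統;格式不符時回傳提示字串。"""
    m = ID_PATTERN.match(doc_id)
    return PREFIX_SYSTEM[m.group(1)] if m else '格式不符'

assert classify_doc_id('EFL-2026-009') == 'EFLOW'
assert classify_doc_id('CASE-2026-015') == '跨系統案件'
```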
# Rules
## 1. 新建文件
- 先判斷系統類型
- 分配對應前綴編號
- 確認是否屬於既有 CASE
## 2. 附件命名
所有附件檔名必須以前述文件編號開頭。
## 3. 記錄關聯
若文件跨系統,必須明確記錄:
- CASE-ID
- 各系統文件編號
- 關聯說明
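跨系統關聯可以記錄成類似下面的結構(Python 示意;欄位名稱為假設,實際格式依工作記錄規範調整):
```python
# 一筆假設性的跨系統關聯紀錄:CASE-ID 對應各系統文件編號與說明
case_link = {
    'case_id': 'CASE-2026-015',
    'documents': {
        '線上公文': 'DOC-2026-021',
        'EFLOW': 'EFL-2026-009',
    },
    'note': '同一申請案:線上公文核定後轉入 EFLOW 送件',
}
```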
# Response Format
當需要建立或查詢文件編號時,輸出:
- 系統類型
- 是否已有 CASE-ID
- 建議文件編號
- 關聯文件
- 建議資料夾名稱


@@ -0,0 +1,76 @@
---
name: etl-visual-project
description: 用於管理 ETL 資料庫轉換可視化網頁專案,並包含 SP 分析、目標欄位 mapping 與 LLM 差異比對。
---
# Purpose
此 skill 用於處理:
- ETL 資料流程整理
- Stored Procedure 分析
- 目標文件欄位 mapping
- LLM 比較 SP 與目標欄位差異
- 可視化網頁的需求與模組規劃
# Project
主專案:
`P-ETL-VISUAL-PLATFORM`
子專案:
- `SP-ETL-PIPELINE`
- `SP-DB-SP-ANALYSIS`
- `SP-TARGET-FIELD-MAPPING`
- `SP-LLM-DIFF-CHECK`
- `SP-VISUAL-WEB`
# Core Objective
建立一套能呈現 ETL 資料庫轉換流程的可視化網頁。
# Branch Objective
使用 LLM 比較:
- Stored Procedure 輸出或邏輯
- 目標文件定義的欄位
- 欄位差異與缺漏
- 型別差異
- 命名差異
- 規則差異
# Workflow
## 1. 資料蒐集
- 蒐集 SP 清單
- 蒐集目標文件欄位定義
- 蒐集資料表與 ETL 流向
## 2. Mapping
- 建立欄位對照表
- 標記來源欄位、目標欄位、轉換規則
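一筆欄位對照紀錄大致如下(Python 示意;SP 名稱、欄位與轉換規則皆為虛構假設):
```python
# 假設性的 mapping 紀錄:一筆代表「來源欄位 → 目標欄位」的對應
mapping_row = {
    'source_sp': 'usp_ExportOrders',        # 來源 Stored Procedure(假設名稱)
    'source_column': 'cust_no',             # SP 輸出欄位
    'target_field': 'CustomerNumber',       # 目標文件定義的欄位
    'target_type': 'varchar(20)',           # 目標型別
    'transform_rule': 'trim 後左補零至 10 碼',  # 轉換規則
    'status': 'pending',                    # pending / confirmed / mismatch
}
```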
## 3. LLM Diff Check
- 比較 SP 欄位與目標文件欄位
- 輸出缺漏、命名不一致、型別不一致
## 4. Visual Web
- 定義頁面模組
- 呈現流程圖、欄位 mapping、差異報表
# Standard Output
每次回應時,輸出:
1. 所屬子專案
2. 本次目標
3. 所需輸入資料
4. 產出物
5. 差異或阻塞
6. 下一步
# Example Tasks
- 匯出 SP 欄位清單
- 建立目標文件欄位表
- 建立 mapping 表 v1
- 用 LLM 產出欄位差異報告
- 設計 ETL 可視化頁面需求草稿


@@ -0,0 +1,88 @@
---
name: online-doc-agent-ops
description: 用於協助 Agent 操作線上文件系統(包括線上公文、EFLOW、FORMBUILD),並依 SOP、附件規則、文件編號規則執行。
---
# Purpose
此 skill 用於管理與執行以下系統的 Agent 操作:
- 線上公文
- EFLOW
- FORMBUILD
# Supported Scope
## 主專案
`P-ONLINE-DOC-AGENT`
## 子專案
- `SP-OFFICIAL-DOC`
- `SP-EFLOW`
- `SP-FORMBUILD`
- `SP-ATTACHMENT-RULES`
- `SP-DOC-ID-RULES`
# Execution Rules
## 1. 操作前必做檢查
每次操作前,先確認:
1. 目標系統是什麼
2. 這次操作的目的為何
3. 是否需要附件
4. 是否已有文件編號
5. 是否需要建立操作記錄
## 2. 系統判斷
### 線上公文
適用於:公文建立、上傳、送簽、帶附件等流程
### EFLOW
適用於:流程送件、附件上傳、流程狀態追蹤等
### FORMBUILD
適用於:表單填寫、欄位輸入、附件上傳、資料提交等
## 3. 附件處理規則
若系統需要附件:
- 先取得文件編號
- 依系統類型到指定資料夾找檔案
- 驗證檔名是否符合規則
- 驗證是否為最新版本
- 驗證附件是否齊全
## 4. 操作輸出
每次完成操作後,至少記錄:
- 日期時間
- 操作系統
- 文件編號
- 操作目的
- 使用附件
- 結果(成功 / 失敗 / 阻塞)
- 下一步
# Standard Response Format
當使用此 skill 時,請依下列格式回應:
1. 系統名稱
2. 本次操作目的
3. 需要的前置資料
4. 執行步驟
5. 附件檢查
6. 完成後記錄
7. 可能錯誤與處理方式
# Subsystem Notes
## SP-OFFICIAL-DOC
重點:公文流程、附件、送件狀態
## SP-EFLOW
重點:流程節點、附件、送件前檢查
## SP-FORMBUILD
重點:表單欄位、填寫規則、送出驗證


@@ -0,0 +1,98 @@
---
name: project-governance
description: 用於判斷新工作應歸屬哪個主專案、子專案或共用規則,並維持專案、子專案、任務、規則四個層級的邊界清楚。
---
# Purpose
此 skill 用來避免將「類別、專案、子專案、任務」混淆。
適用於以下情境:
- 使用者提出新的工作、想法或需求
- 需要判斷是否建立新專案或新子專案
- 需要把任務正確掛到既有專案
- 需要判斷某件事是共用規則,而不是專案內容
# Core Rules
## 1. 四層模型
1. 規則(Rules)
- 共用命名規範
- 文件編號規則
- 附件歸檔規則
- SOP 與記錄格式
2. 主專案(Projects)
- `P-ONLINE-DOC-AGENT`
- `P-ETL-VISUAL-PLATFORM`
3. 子專案(Subprojects)
- 某主專案下的模組、工作流、系統分支
4. 任務(Tasks)
- 可執行、可完成、可驗收的最小工作單位
## 2. 分類判斷規則
當收到新工作時,依序判斷:
### A. 這是共用規則嗎?
若是關於以下內容,歸入 Rules
- 附件資料夾如何放
- 文件怎麼編號
- Agent 如何記錄工作
- 線上操作 SOP 的共同格式
### B. 這是屬於哪個主專案?
- 若與線上公文、EFLOW、FORMBUILD 的操作、自動化、附件、送件流程有關,歸入 `P-ONLINE-DOC-AGENT`
- 若與 ETL、Stored Procedure、欄位 mapping、LLM 比對、資料轉換可視化頁面有關,歸入 `P-ETL-VISUAL-PLATFORM`
### C. 是否應建立子專案?
符合以下任一條件時,建立子專案:
- 有獨立系統或獨立頁面
- 有獨立流程或 SOP
- 有獨立輸入輸出或文件產出
- 有明顯可拆分的技術模組
### D. 是否只是任務?
若該工作可在單次工作期內完成,且有明確完成條件,則建立為任務,不建立子專案。
# Required Output Format
在分析新工作時,固定輸出:
1. 歸屬層級:Rule / Project / Subproject / Task
2. 所屬主專案
3. 所屬子專案(若有)
4. 建議任務名稱
5. 下一步最小可執行動作
6. 是否需要寫入工作記錄
# Examples
## Example 1
輸入:EFLOW 送件時要帶 PDF 附件,並確認檔名格式
輸出:
- 層級:Rule + Task
- 主專案:P-ONLINE-DOC-AGENT
- 子專案:SP-EFLOW / SP-ATTACHMENT-RULES
- 任務:定義 EFLOW 附件命名與上傳前檢查規則
- 下一步:建立 EFLOW 附件規則草稿 v1
## Example 2
輸入:比較 Stored Procedure 輸出欄位與目標文件欄位差異
輸出:
- 層級:Subproject + Task
- 主專案:P-ETL-VISUAL-PLATFORM
- 子專案:SP-LLM-DIFF-CHECK
- 任務:建立 SP 與目標文件欄位對照表 v1
- 下一步:蒐集 SP 欄位清單與目標文件欄位清單
# Response Style
- 優先做歸類,再做執行建議
- 不把技術類別直接當成主專案
- 若工作本質是規則,必須明確指出它不是獨立主專案


@@ -23,6 +23,7 @@
| `task-capture` | Telegram 快速記錄待辦(自動優先級 + 截止日) | 生活安排 |
| `qmd-brain` | 知識庫搜尋(BM25 + pgvector 向量檢索) | 知識庫 |
| `tts-voice` | 文字轉語音(LuxTTS 聲音克隆) | 多媒體 |
| `skill-review` | Agent 自動審查 skills 並提交 Gitea PR | DevOps |
## 目錄結構
@@ -37,7 +38,8 @@ openclaw-skill/
│ ├── daily-briefing/ # 每日簡報
│ ├── task-capture/ # 快速記錄待辦
│ ├── qmd-brain/ # 知識庫搜尋
│ └── tts-voice/ # 文字轉語音
│ ├── tts-voice/ # 文字轉語音
│ └── skill-review/ # Agent PR 審查工作流
├── chapters/ # 技術手冊分章
└── openclaw-knowhow-skill/ # OpenClaw 官方文件與範本
```


@@ -0,0 +1,150 @@
---
name: code-interpreter
description: Local Python code execution for calculations, tabular data inspection, CSV/JSON processing, simple plotting, text transformation, quick experiments, and reproducible analysis inside the OpenClaw workspace. Use when the user wants ChatGPT-style code interpreter behavior locally: run Python, analyze files, compute exact answers, transform data, inspect tables, or generate output files/artifacts. Prefer this for low-risk local analysis; do not use it for untrusted code, secrets handling, privileged actions, or network-dependent tasks.
---
# Code Interpreter
Run local Python code through the bundled runner.
## Safety boundary
This is **local execution**, not a hardened container. Treat it as a convenience tool for trusted, low-risk tasks.
Always:
- Keep work inside the OpenClaw workspace when possible.
- Prefer reading/writing files under the current task directory or an explicit artifact directory.
- Keep timeouts short by default.
- Avoid network access unless the user explicitly asks and the task truly needs it.
- Do not execute untrusted code copied from the web or other people.
- Do not expose secrets, tokens, SSH keys, browser cookies, or system files to the script.
Do not use this skill for:
- system administration
- package installation loops
- long-running servers
- privileged operations
- destructive file changes outside the workspace
- executing arbitrary third-party code verbatim
## Runner
Run from the OpenClaw workspace:
```bash
python3 {baseDir}/scripts/run_code.py --code 'print(2 + 2)'
```
Or pass a script file:
```bash
python3 {baseDir}/scripts/run_code.py --file path/to/script.py
```
Or pipe code via stdin:
```bash
cat my_script.py | python3 {baseDir}/scripts/run_code.py --stdin
```
## Useful options
```bash
# set timeout seconds (default 20)
python3 {baseDir}/scripts/run_code.py --code '...' --timeout 10
# run from a specific working directory inside workspace
python3 {baseDir}/scripts/run_code.py --file script.py --cwd /home/selig/.openclaw/workspace/project
# keep outputs in a known artifact directory inside workspace
python3 {baseDir}/scripts/run_code.py --file script.py --artifact-dir /home/selig/.openclaw/workspace/.tmp/my-analysis
# save full stdout / stderr
python3 {baseDir}/scripts/run_code.py --code '...' --stdout-file out.txt --stderr-file err.txt
```
## Built-in environment
The runner uses the dedicated interpreter at:
- `/home/selig/.openclaw/workspace/.venv-code-interpreter/bin/python` (use the venv path directly; do not resolve the symlink to system Python)
This keeps plotting/data-analysis dependencies stable without touching the system Python.
The runner exposes these variables to the script:
- `OPENCLAW_WORKSPACE`
- `CODE_INTERPRETER_RUN_DIR`
- `CODE_INTERPRETER_ARTIFACT_DIR`
It also writes a helper file in the run directory:
```python
from ci_helpers import save_text, save_json
```
Use those helpers to save artifacts into `CODE_INTERPRETER_ARTIFACT_DIR`.
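For example, a script submitted through the runner can persist its outputs like this (illustrative sketch; `ci_helpers` is the helper file the runner writes into the run directory):
```python
# Example script passed via --file or --stdin. ci_helpers is importable because
# the runner puts the run directory on PYTHONPATH.
from ci_helpers import save_text, save_json

rows = [{"name": "alpha", "value": 3}, {"name": "beta", "value": 7}]
total = sum(r["value"] for r in rows)

save_json("totals.json", {"total": total, "rows": rows})  # lands in CODE_INTERPRETER_ARTIFACT_DIR
save_text("summary.txt", f"{len(rows)} rows, total value = {total}")
print("total:", total)
```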
## V4 automatic data analysis
For automatic profiling/report generation from a local data file, use:
- `scripts/analyze_data.py`
- Reference: `references/v4-usage.md`
This flow is ideal when the user wants a fast "analyze this CSV/JSON/Excel and give me a report + plots" result.
## Output
The runner prints compact JSON:
```json
{
"ok": true,
"exitCode": 0,
"timeout": false,
"runDir": "...",
"artifactDir": "...",
"packageStatus": {"pandas": true, "numpy": true, "matplotlib": false},
"artifacts": [{"path": "...", "bytes": 123}],
"stdout": "...",
"stderr": "..."
}
```
## Workflow
1. Decide whether the task is a good fit for local trusted execution.
2. Write the smallest script that solves the problem.
3. Use `--artifact-dir` when the user may want generated files preserved.
4. Run with a short timeout.
5. Inspect `stdout`, `stderr`, and `artifacts`.
6. If producing files, mention their exact paths in the reply.
## Patterns
### Exact calculation
Use a one-liner with `--code`.
### File analysis
Read input files from workspace, then write summaries/derived files back to `artifactDir`.
### Automatic report bundle
When the user wants a quick profiling pass, run `scripts/analyze_data.py` against the file and return the generated `summary.json`, `report.md`, `preview.csv`, and any PNG plots.
### Table inspection
Prefer pandas when available; otherwise fall back to csv/json stdlib.
### Plotting
If `matplotlib` is available, write PNG files to `artifactDir`. Use a forced CJK font strategy for Chinese charts. The bundled default is Google Noto Sans CJK TC under `assets/fonts/` when present, then system fallbacks. Apply the chosen font not only via rcParams but also directly to titles, axis labels, tick labels, and legend text through FontProperties. This avoids tofu/garbled Chinese and suppresses missing-glyph warnings reliably. If plotting is unavailable, continue with tabular/text output.
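A minimal sketch of that font strategy (the font paths are assumptions; use whatever CJK font is actually bundled or installed):
```python
from pathlib import Path
from matplotlib import font_manager, rcParams
import matplotlib.pyplot as plt

# Assumed candidate fonts: the bundled Noto CJK file first, then a system fallback.
candidates = [
    "assets/fonts/NotoSansCJKtc-Regular.otf",
    "/usr/share/fonts/truetype/droid/DroidSansFallbackFull.ttf",
]
font_prop = None
for path in candidates:
    if Path(path).exists():
        font_manager.fontManager.addfont(path)            # register the font file
        font_prop = font_manager.FontProperties(fname=path)
        rcParams["font.family"] = [font_prop.get_name()]  # global default
        rcParams["axes.unicode_minus"] = False
        break

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(["一月", "二月", "三月"], [3, 5, 2])
title_kwargs = {"fontproperties": font_prop} if font_prop else {}
ax.set_title("每月數量", **title_kwargs)
if font_prop:
    # Apply the font directly to tick labels too, not only via rcParams.
    for label in ax.get_xticklabels() + ax.get_yticklabels():
        label.set_fontproperties(font_prop)
fig.tight_layout()
fig.savefig("chart.png", dpi=160)
```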
### Reusable logic
Write a small `.py` file in the current task area, run with `--file`, then keep it if it may be reused.
## Notes
- The runner launches `python3 -B` with a minimal environment.
- It creates an isolated temp run directory under `workspace/.tmp/code-interpreter-runs/`.
- `stdout` / `stderr` are truncated in the JSON preview if very large; save to files when needed.
- `MPLBACKEND=Agg` is set so headless plotting works when matplotlib is installed.
- If a task needs stronger isolation than this local runner provides, do not force it—use a real sandbox/container approach instead.


@@ -0,0 +1,29 @@
# V4 Usage
## Purpose
Generate an automatic data analysis bundle from a local data file.
## Command
```bash
/home/selig/.openclaw/workspace/.venv-code-interpreter/bin/python \
/home/selig/.openclaw/workspace/skills/code-interpreter/scripts/analyze_data.py \
/path/to/input.csv \
--artifact-dir /home/selig/.openclaw/workspace/.tmp/my-analysis
```
## Outputs
- `summary.json` — machine-readable profile
- `report.md` — human-readable summary
- `preview.csv` — first 50 rows after parsing
- `*.png` — generated plots when matplotlib is available
## Supported inputs
- `.csv`
- `.tsv`
- `.json`
- `.xlsx`
- `.xls`


@@ -0,0 +1,285 @@
#!/usr/bin/env python3
import argparse
import json
import math
import os
from pathlib import Path
try:
import pandas as pd
except ImportError:
raise SystemExit(
'pandas is required. Run with the code-interpreter venv:\n'
' ~/.openclaw/workspace/.venv-code-interpreter/bin/python analyze_data.py ...'
)
try:
import matplotlib
import matplotlib.pyplot as plt
HAS_MPL = True
except Exception:
HAS_MPL = False
ZH_FONT_CANDIDATES = [
'/home/selig/.openclaw/workspace/skills/code-interpreter/assets/fonts/NotoSansCJKtc-Regular.otf',
'/usr/share/fonts/truetype/droid/DroidSansFallbackFull.ttf',
]
def configure_matplotlib_fonts() -> tuple[str | None, object | None]:
if not HAS_MPL:
return None, None
chosen = None
chosen_prop = None
for path in ZH_FONT_CANDIDATES:
if Path(path).exists():
try:
from matplotlib import font_manager
font_manager.fontManager.addfont(path)
font_prop = font_manager.FontProperties(fname=path)
font_name = font_prop.get_name()
matplotlib.rcParams['font.family'] = [font_name]
matplotlib.rcParams['axes.unicode_minus'] = False
chosen = font_name
chosen_prop = font_prop
break
except Exception:
continue
return chosen, chosen_prop
def apply_font(ax, font_prop) -> None:
if not font_prop:
return
title = ax.title
if title:
title.set_fontproperties(font_prop)
ax.xaxis.label.set_fontproperties(font_prop)
ax.yaxis.label.set_fontproperties(font_prop)
for label in ax.get_xticklabels():
label.set_fontproperties(font_prop)
for label in ax.get_yticklabels():
label.set_fontproperties(font_prop)
legend = ax.get_legend()
if legend:
for text in legend.get_texts():
text.set_fontproperties(font_prop)
legend.get_title().set_fontproperties(font_prop)
def detect_format(path: Path) -> str:
ext = path.suffix.lower()
if ext in {'.csv', '.tsv', '.txt'}:
return 'delimited'
if ext == '.json':
return 'json'
if ext in {'.xlsx', '.xls'}:
return 'excel'
raise SystemExit(f'Unsupported file type: {ext}')
def load_df(path: Path) -> pd.DataFrame:
fmt = detect_format(path)
if fmt == 'delimited':
sep = '\t' if path.suffix.lower() == '.tsv' else ','
return pd.read_csv(path, sep=sep)
if fmt == 'json':
try:
return pd.read_json(path)
except ValueError:
return pd.DataFrame(json.loads(path.read_text(encoding='utf-8')))
if fmt == 'excel':
return pd.read_excel(path)
raise SystemExit('Unsupported format')
def safe_name(s: str) -> str:
keep = []
for ch in s:
if ch.isalnum() or ch in ('-', '_'):
keep.append(ch)
elif ch in (' ', '/'):
keep.append('_')
out = ''.join(keep).strip('_')
return out[:80] or 'column'
def series_stats(s: pd.Series) -> dict:
non_null = s.dropna()
result = {
'dtype': str(s.dtype),
'nonNull': int(non_null.shape[0]),
'nulls': int(s.isna().sum()),
'unique': int(non_null.nunique()) if len(non_null) else 0,
}
if pd.api.types.is_numeric_dtype(s):
result.update({
'min': None if non_null.empty else float(non_null.min()),
'max': None if non_null.empty else float(non_null.max()),
'mean': None if non_null.empty else float(non_null.mean()),
'sum': None if non_null.empty else float(non_null.sum()),
})
else:
top = non_null.astype(str).value_counts().head(5)
result['topValues'] = [{
'value': str(idx),
'count': int(val),
} for idx, val in top.items()]
return result
def maybe_parse_dates(df: pd.DataFrame) -> tuple[pd.DataFrame, list[str]]:
parsed = []
out = df.copy()
for col in out.columns:
if out[col].dtype == 'object':
sample = out[col].dropna().astype(str).head(20)
if sample.empty:
continue
parsed_col = pd.to_datetime(out[col], errors='coerce')
success_ratio = float(parsed_col.notna().mean()) if len(out[col]) else 0.0
if success_ratio >= 0.6:
out[col] = parsed_col
parsed.append(str(col))
return out, parsed
def write_report(df: pd.DataFrame, summary: dict, out_dir: Path) -> Path:
lines = []
lines.append('# Data Analysis Report')
lines.append('')
lines.append(f"- Source: `{summary['source']}`")
lines.append(f"- Rows: **{summary['rows']}**")
lines.append(f"- Columns: **{summary['columns']}**")
lines.append(f"- Generated plots: **{len(summary['plots'])}**")
if summary['parsedDateColumns']:
lines.append(f"- Parsed date columns: {', '.join(summary['parsedDateColumns'])}")
lines.append('')
lines.append('## Columns')
lines.append('')
for name, meta in summary['columnProfiles'].items():
lines.append(f"### {name}")
lines.append(f"- dtype: `{meta['dtype']}`")
lines.append(f"- non-null: {meta['nonNull']}")
lines.append(f"- nulls: {meta['nulls']}")
lines.append(f"- unique: {meta['unique']}")
if 'mean' in meta:
lines.append(f"- min / max: {meta['min']} / {meta['max']}")
lines.append(f"- mean / sum: {meta['mean']} / {meta['sum']}")
elif meta.get('topValues'):
preview = ', '.join([f"{x['value']} ({x['count']})" for x in meta['topValues'][:5]])
lines.append(f"- top values: {preview}")
lines.append('')
report = out_dir / 'report.md'
report.write_text('\n'.join(lines).strip() + '\n', encoding='utf-8')
return report
def generate_plots(df: pd.DataFrame, out_dir: Path, font_prop=None) -> list[str]:
if not HAS_MPL:
return []
plots = []
numeric_cols = [c for c in df.columns if pd.api.types.is_numeric_dtype(df[c])]
date_cols = [c for c in df.columns if pd.api.types.is_datetime64_any_dtype(df[c])]
cat_cols = [c for c in df.columns if not pd.api.types.is_numeric_dtype(df[c]) and not pd.api.types.is_datetime64_any_dtype(df[c])]
if numeric_cols:
col = numeric_cols[0]
plt.figure(figsize=(7, 4))
bins = min(20, max(5, int(math.sqrt(max(1, df[col].dropna().shape[0])))))
df[col].dropna().hist(bins=bins)
plt.title(f'Histogram of {col}', fontproperties=font_prop)
plt.xlabel(str(col), fontproperties=font_prop)
plt.ylabel('Count', fontproperties=font_prop)
apply_font(plt.gca(), font_prop)
path = out_dir / f'hist_{safe_name(str(col))}.png'
plt.tight_layout()
plt.savefig(path, dpi=160)
plt.close()
plots.append(str(path))
if cat_cols and numeric_cols:
cat, num = cat_cols[0], numeric_cols[0]
grp = df.groupby(cat, dropna=False)[num].sum().sort_values(ascending=False).head(12)
if not grp.empty:
plt.figure(figsize=(8, 4.5))
grp.plot(kind='bar')
plt.title(f'{num} by {cat}', fontproperties=font_prop)
plt.xlabel(str(cat), fontproperties=font_prop)
plt.ylabel(f'Sum of {num}', fontproperties=font_prop)
apply_font(plt.gca(), font_prop)
plt.tight_layout()
path = out_dir / f'bar_{safe_name(str(num))}_by_{safe_name(str(cat))}.png'
plt.savefig(path, dpi=160)
plt.close()
plots.append(str(path))
if date_cols and numeric_cols:
date_col, num = date_cols[0], numeric_cols[0]
grp = df[[date_col, num]].dropna().sort_values(date_col)
if not grp.empty:
plt.figure(figsize=(8, 4.5))
plt.plot(grp[date_col], grp[num], marker='o')
plt.title(f'{num} over time', fontproperties=font_prop)
plt.xlabel(str(date_col), fontproperties=font_prop)
plt.ylabel(str(num), fontproperties=font_prop)
apply_font(plt.gca(), font_prop)
plt.tight_layout()
path = out_dir / f'line_{safe_name(str(num))}_over_time.png'
plt.savefig(path, dpi=160)
plt.close()
plots.append(str(path))
return plots
def main() -> int:
parser = argparse.ArgumentParser(description='Automatic data analysis report generator')
parser.add_argument('input', help='Input data file (csv/json/xlsx)')
parser.add_argument('--artifact-dir', required=True, help='Output artifact directory')
args = parser.parse_args()
input_path = Path(args.input).expanduser().resolve()
artifact_dir = Path(args.artifact_dir).expanduser().resolve()
artifact_dir.mkdir(parents=True, exist_ok=True)
df = load_df(input_path)
original_columns = [str(c) for c in df.columns]
df, parsed_dates = maybe_parse_dates(df)
chosen_font, chosen_font_prop = configure_matplotlib_fonts()
preview_path = artifact_dir / 'preview.csv'
df.head(50).to_csv(preview_path, index=False)
summary = {
'source': str(input_path),
'rows': int(df.shape[0]),
'columns': int(df.shape[1]),
'columnNames': original_columns,
'parsedDateColumns': parsed_dates,
'columnProfiles': {str(c): series_stats(df[c]) for c in df.columns},
'plots': [],
'plotFont': chosen_font,
}
summary['plots'] = generate_plots(df, artifact_dir, chosen_font_prop)
summary_path = artifact_dir / 'summary.json'
summary_path.write_text(json.dumps(summary, ensure_ascii=False, indent=2), encoding='utf-8')
report_path = write_report(df, summary, artifact_dir)
result = {
'ok': True,
'input': str(input_path),
'artifactDir': str(artifact_dir),
'summary': str(summary_path),
'report': str(report_path),
'preview': str(preview_path),
'plots': summary['plots'],
}
print(json.dumps(result, ensure_ascii=False, indent=2))
return 0
if __name__ == '__main__':
raise SystemExit(main())


@@ -0,0 +1,241 @@
#!/usr/bin/env python3
import argparse
import importlib.util
import json
import os
import pathlib
import shutil
import subprocess
import sys
import tempfile
import time
from typing import Optional
WORKSPACE = pathlib.Path('/home/selig/.openclaw/workspace').resolve()
RUNS_DIR = WORKSPACE / '.tmp' / 'code-interpreter-runs'
MAX_PREVIEW = 12000
ARTIFACT_SCAN_LIMIT = 100
PACKAGE_PROBES = ['pandas', 'numpy', 'matplotlib']
PYTHON_BIN = str(WORKSPACE / '.venv-code-interpreter' / 'bin' / 'python')
def current_python_paths(run_dir_path: pathlib.Path) -> str:
"""Build PYTHONPATH: run_dir (for ci_helpers) only.
Venv site-packages are already on sys.path when using PYTHON_BIN."""
return str(run_dir_path)
def read_code(args: argparse.Namespace) -> str:
sources = [bool(args.code), bool(args.file), bool(args.stdin)]
if sum(sources) != 1:
raise SystemExit('Provide exactly one of --code, --file, or --stdin')
if args.code:
return args.code
if args.file:
return pathlib.Path(args.file).read_text(encoding='utf-8')
return sys.stdin.read()
def ensure_within_workspace(path_str: Optional[str], must_exist: bool = True) -> pathlib.Path:
if not path_str:
return WORKSPACE
p = pathlib.Path(path_str).expanduser().resolve()
if p != WORKSPACE and WORKSPACE not in p.parents:
raise SystemExit(f'Path must stay inside workspace: {WORKSPACE}')
if must_exist and (not p.exists() or not p.is_dir()):
raise SystemExit(f'Path not found or not a directory: {p}')
return p
def ensure_output_path(path_str: Optional[str]) -> Optional[pathlib.Path]:
if not path_str:
return None
p = pathlib.Path(path_str).expanduser().resolve()
p.parent.mkdir(parents=True, exist_ok=True)
return p
def write_text(path_str: Optional[str], text: str) -> None:
p = ensure_output_path(path_str)
if not p:
return
p.write_text(text, encoding='utf-8')
def truncate(text: str) -> str:
if len(text) <= MAX_PREVIEW:
return text
extra = len(text) - MAX_PREVIEW
return text[:MAX_PREVIEW] + f'\n...[truncated {extra} chars]'
def package_status() -> dict:
out: dict[str, bool] = {}
for name in PACKAGE_PROBES:
proc = subprocess.run(
[PYTHON_BIN, '-c', f"import importlib.util; print('1' if importlib.util.find_spec('{name}') else '0')"],
capture_output=True,
text=True,
encoding='utf-8',
errors='replace',
)
out[name] = proc.stdout.strip() == '1'
return out
def rel_to(path: pathlib.Path, base: pathlib.Path) -> str:
try:
return str(path.relative_to(base))
except Exception:
return str(path)
def scan_artifacts(base_dir: pathlib.Path, root_label: str) -> list[dict]:
if not base_dir.exists():
return []
items: list[dict] = []
for p in sorted(base_dir.rglob('*')):
if len(items) >= ARTIFACT_SCAN_LIMIT:
break
if p.is_file():
try:
size = p.stat().st_size
except Exception:
size = None
items.append({
'root': root_label,
'path': str(p),
'relative': rel_to(p, base_dir),
'bytes': size,
})
return items
def write_helper(run_dir_path: pathlib.Path, artifact_dir: pathlib.Path) -> None:
helper = run_dir_path / 'ci_helpers.py'
helper.write_text(
"""
from pathlib import Path
import json
import os
WORKSPACE = Path(os.environ['OPENCLAW_WORKSPACE'])
RUN_DIR = Path(os.environ['CODE_INTERPRETER_RUN_DIR'])
ARTIFACT_DIR = Path(os.environ['CODE_INTERPRETER_ARTIFACT_DIR'])
def save_text(name: str, text: str) -> str:
path = ARTIFACT_DIR / name
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(text, encoding='utf-8')
return str(path)
def save_json(name: str, data) -> str:
path = ARTIFACT_DIR / name
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(data, ensure_ascii=False, indent=2), encoding='utf-8')
return str(path)
""".lstrip(),
encoding='utf-8',
)
def main() -> int:
parser = argparse.ArgumentParser(description='Local Python runner for OpenClaw code-interpreter skill')
parser.add_argument('--code', help='Python code to execute')
parser.add_argument('--file', help='Path to a Python file to execute')
parser.add_argument('--stdin', action='store_true', help='Read Python code from stdin')
parser.add_argument('--cwd', help='Working directory inside workspace')
parser.add_argument('--artifact-dir', help='Artifact directory inside workspace to keep outputs')
parser.add_argument('--timeout', type=int, default=20, help='Timeout seconds (default: 20)')
parser.add_argument('--stdout-file', help='Optional file path to save full stdout')
parser.add_argument('--stderr-file', help='Optional file path to save full stderr')
parser.add_argument('--keep-run-dir', action='store_true', help='Keep generated temp run directory even on success')
args = parser.parse_args()
code = read_code(args)
cwd = ensure_within_workspace(args.cwd)
RUNS_DIR.mkdir(parents=True, exist_ok=True)
run_dir_path = pathlib.Path(tempfile.mkdtemp(prefix='run-', dir=str(RUNS_DIR))).resolve()
artifact_dir = ensure_within_workspace(args.artifact_dir, must_exist=False) if args.artifact_dir else (run_dir_path / 'artifacts')
artifact_dir.mkdir(parents=True, exist_ok=True)
script_path = run_dir_path / 'main.py'
script_path.write_text(code, encoding='utf-8')
write_helper(run_dir_path, artifact_dir)
env = {
'PATH': os.environ.get('PATH', '/usr/bin:/bin'),
'HOME': str(run_dir_path),
'PYTHONPATH': current_python_paths(run_dir_path),
'PYTHONIOENCODING': 'utf-8',
'PYTHONUNBUFFERED': '1',
'OPENCLAW_WORKSPACE': str(WORKSPACE),
'CODE_INTERPRETER_RUN_DIR': str(run_dir_path),
'CODE_INTERPRETER_ARTIFACT_DIR': str(artifact_dir),
'MPLBACKEND': 'Agg',
}
started = time.time()
timed_out = False
exit_code = None
stdout = ''
stderr = ''
try:
proc = subprocess.run(
[PYTHON_BIN, '-B', str(script_path)],
cwd=str(cwd),
env=env,
capture_output=True,
text=True,
encoding='utf-8',
errors='replace',
timeout=max(1, args.timeout),
)
exit_code = proc.returncode
stdout = proc.stdout
stderr = proc.stderr
except subprocess.TimeoutExpired as exc:
timed_out = True
exit_code = 124
raw_out = exc.stdout or ''
raw_err = exc.stderr or ''
stdout = raw_out if isinstance(raw_out, str) else raw_out.decode('utf-8', errors='replace')
stderr = (raw_err if isinstance(raw_err, str) else raw_err.decode('utf-8', errors='replace')) + f'\nExecution timed out after {args.timeout}s.'
duration = round(time.time() - started, 3)
write_text(args.stdout_file, stdout)
write_text(args.stderr_file, stderr)
artifacts = scan_artifacts(artifact_dir, 'artifactDir')
if artifact_dir != run_dir_path:
artifacts.extend(scan_artifacts(run_dir_path / 'artifacts', 'runArtifacts'))
result = {
'ok': (exit_code == 0 and not timed_out),
'exitCode': exit_code,
'timeout': timed_out,
'durationSec': duration,
'cwd': str(cwd),
'runDir': str(run_dir_path),
'artifactDir': str(artifact_dir),
'packageStatus': package_status(),
'artifacts': artifacts,
'stdout': truncate(stdout),
'stderr': truncate(stderr),
}
print(json.dumps(result, ensure_ascii=False, indent=2))
if not args.keep_run_dir and result['ok'] and artifact_dir != run_dir_path:
shutil.rmtree(run_dir_path, ignore_errors=True)
return 0 if result['ok'] else 1
if __name__ == '__main__':
raise SystemExit(main())


@@ -37,28 +37,28 @@ tools:
## 輸出格式範例
```
☀️ **早安!2026-02-20 週五**
```markdown
# ☀️ 早安!2026-02-20 週五
🌤️ **今日天氣(台北)**
氣溫 16-22°C,多雲偶晴,東北風 2-3 級
穿著建議:可帶薄外套
## 🌤️ 今日天氣(台北)
**氣溫:** 16-22°C,多雲偶晴,東北風 2-3 級
💡 **穿著建議:** 可帶薄外套
📅 **今日行程**
09:00 - 週會(視訊)
14:00 - 客戶簡報
16:30 - Code Review
## 📅 今日行程
- 09:00 - 週會(視訊)
- 14:00 - 客戶簡報
- 16:30 - Code Review
**待辦事項(3 項)**
[ ] 完成 API 文件
[ ] 回覆客戶 email
[ ] 更新 deploy 腳本
## ✅ 待辦事項(3 項)
- [ ] 完成 API 文件
- [ ] 回覆客戶 email
- [ ] 更新 deploy 腳本
💡 **今日提醒**
SSL 憑證 90 天後到期(2026-05-20)
本週 sprint 截止日:2026-02-21
## 💡 今日提醒
- ⚠️ SSL 憑證 90 天後到期(2026-05-20)
- 🎯 本週 sprint 截止日:2026-02-21
有什麼想先處理的嗎?
*有什麼想先處理的嗎?*
```
## Cron 設定


@@ -12,6 +12,56 @@ interface DispatchInput {
retries?: number;
}
const ALLOWED_TARGETS = new Set<DispatchInput['target']>(['vps-a', 'vps-b']);
function clampInt(value: unknown, min: number, max: number, fallback: number): number {
const n = Number(value);
if (!Number.isFinite(n)) return fallback;
return Math.min(max, Math.max(min, Math.floor(n)));
}
function sanitizeTaskId(taskId: unknown): string {
if (taskId == null) return '';
return String(taskId).replace(/[\r\n]/g, '').slice(0, 128);
}
function validateInput(raw: any): DispatchInput {
const input = raw as Partial<DispatchInput>;
if (!input || typeof input !== 'object') {
throw new Error('dispatch-webhook 輸入格式錯誤:必須提供 input 物件');
}
if (!input.target || !ALLOWED_TARGETS.has(input.target as DispatchInput['target'])) {
throw new Error('dispatch-webhook 參數錯誤:target 必須是 vps-a 或 vps-b');
}
if (!input.webhookUrl || typeof input.webhookUrl !== 'string') {
throw new Error(`${input.target.toUpperCase()} Webhook URL 未設定。請在環境變數設定 VPS_A_WEBHOOK_URL 或 VPS_B_WEBHOOK_URL`);
}
let parsedUrl: URL;
try {
parsedUrl = new URL(input.webhookUrl);
} catch {
throw new Error('Webhook URL 格式錯誤,請提供有效的 http/https URL');
}
if (!['http:', 'https:'].includes(parsedUrl.protocol)) {
throw new Error('Webhook URL 協定不支援,僅允許 http 或 https');
}
if (!input.webhookToken || typeof input.webhookToken !== 'string') {
throw new Error(`${input.target.toUpperCase()} Webhook Token 未設定`);
}
if (!input.payload || typeof input.payload !== 'object' || Array.isArray(input.payload)) {
throw new Error('dispatch-webhook 參數錯誤:payload 必須是 JSON 物件');
}
return input as DispatchInput;
}
async function fetchWithTimeout(url: string, options: RequestInit, timeoutMs: number): Promise<Response> {
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), timeoutMs);
@@ -23,17 +73,11 @@ async function fetchWithTimeout(url: string, options: RequestInit, timeoutMs: nu
}
export async function handler(ctx: any) {
const input: DispatchInput = ctx.input || ctx.params;
const input = validateInput(ctx.input || ctx.params);
if (!input.webhookUrl) {
throw new Error(`${input.target.toUpperCase()} Webhook URL 未設定。請在環境變數設定 VPS_A_WEBHOOK_URL 或 VPS_B_WEBHOOK_URL`);
}
if (!input.webhookToken) {
throw new Error(`${input.target.toUpperCase()} Webhook Token 未設定`);
}
const timeoutMs = input.timeoutMs ?? 30000;
const maxRetries = input.retries ?? 3;
const timeoutMs = clampInt(input.timeoutMs, 1000, 120000, 30000);
const maxRetries = clampInt(input.retries, 1, 5, 3);
const taskIdHeader = sanitizeTaskId(input.payload.task_id);
let lastError: Error | null = null;
for (let attempt = 1; attempt <= maxRetries; attempt++) {
@@ -46,7 +90,7 @@ export async function handler(ctx: any) {
'Content-Type': 'application/json',
'Authorization': `Bearer ${input.webhookToken}`,
'X-OpenClaw-Version': '1.0',
'X-OpenClaw-Task-Id': String(input.payload.task_id || ''),
'X-OpenClaw-Task-Id': taskIdHeader,
},
body: JSON.stringify(input.payload),
},
@@ -72,8 +116,8 @@ export async function handler(ctx: any) {
};
} catch (err: any) {
lastError = err;
if (err.message?.includes('401') || err.message?.includes('Token')) {
lastError = err instanceof Error ? err : new Error(String(err));
if (lastError.message.includes('401') || lastError.message.includes('Token')) {
break; // 認證錯誤不重試
}
if (attempt < maxRetries) {


@@ -0,0 +1,253 @@
---
name: github-repo-search
description: 帮助用户搜索和筛选 GitHub 开源项目,输出结构化推荐报告。当用户说"帮我找开源项目"、"搜一下GitHub上有什么"、"找找XX方向的仓库"、"开源项目推荐"、"github搜索"、"/github-search"时触发。
---
# GitHub 开源项目搜索助手
## 用途
从用户自然语言需求出发,经过需求挖掘、检索词拆解、GitHub 检索、过滤分类、深度解读,最终产出结构化推荐结果。
目标不是"给很多链接",而是"给用户可理解、可比较、可决策、可直接行动的候选仓库列表"。
## 适用范围(V1.1)
- 数据源:GitHub 公开仓库。
- 默认不授权(不使用用户 Token)。
- 默认硬过滤:`stars >= 100`、`archived=false`、`is:public`。
- 默认输出单榜单(Top N),榜单内按"仓库归属类型"标注。
- 本流程默认不包含安装与落地实施(除非用户单独提出)。
### 配额说明(必须知晓)
- 未授权 Core API:`60 次/小时`。
- Search API:`10 次/分钟`(独立于 Core 额度)。
- 需要在报告中注明检索时间与配额状态,避免结果不可复现。
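检索前可以先查询一次配额,便于在报告中注明(Python 示意,未授权请求;`/rate_limit` 端点本身不消耗 Search 配额):
```python
import json
import urllib.request

# 查询当前未授权配额,重点看 resources.search
req = urllib.request.Request(
    "https://api.github.com/rate_limit",
    headers={"Accept": "application/vnd.github+json", "User-Agent": "repo-search-skill"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    data = json.load(resp)

search = data["resources"]["search"]
print(f"Search API 剩余 {search['remaining']}/{search['limit']},重置时间戳 {search['reset']}")
```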
## 工作流程
### 环节一:需求收敛(必须完成,不可跳过)
> **硬性门控**:环节一是整个流程的前置条件。无论用户的需求描述多么清晰,都必须走完本环节并获得用户明确确认后,才能进入环节二。禁止根据用户的初始描述直接推断需求并开始检索。即使用户说"直接搜就行",也要先输出需求摘要让用户确认。
#### 第一步:需求挖掘与对齐
**目标**:把"我想看看 XX"转成可执行、可排序、可解释的检索目标。
**需确认信息(最少)**
1. 主题:agent 记忆、RAG、浏览器自动化
2. 数量:Top 10 / Top 20
3. 最低 stars:默认 100
4. 排序模式(必须二选一):`相关性优先` / `星标优先`(默认:相关性优先)
5. 目标形态(必须二选一或多选):
`可直接使用的产品` / `可二次开发的框架` / `资料清单/方法论`
**建议补充信息(可选)**
1. 偏好技术栈(Python/TS/Go 等)
2. 使用场景(学习、生产、对标)
3. 排除项(教程仓库、归档仓库、纯论文复现等)
4. 部署偏好(本地优先/云端优先/混合)
**阶段输出(固定格式)**
```text
核心诉求:
- 主题:xxx
- 数量:Top N
- 最低 stars:>= 100
- 排序模式:相关性优先 / 星标优先(默认:相关性优先)
- 目标形态:xxx
- 偏好:xxx(可空)
- 排除:xxx(可空)
```
向用户确认以上信息。**用户明确确认后才能进入环节二,否则停在这里继续对齐。**
---
### 环节二:检索执行(以下环节由模型自主执行,无需用户介入,直到环节四交付报告)
#### 第二步:检索词拆解(5-10 组)
**目标**:平衡"召回率"和"相关性",避免只靠单词硬搜导致偏题。
**拆词规则**
每组 query 由以下维度组合:
1. 核心词:用户目标词
2. 同义词:替代表达(如 long-term memory / stateful memory)
3. 场景词:coding、mcp、tool、platform、awesome、curated
4. 技术词:agent、sdk、framework、database、os
5. 排除思路:不在 query 里硬写过多负例,放到后续过滤阶段
**产出格式**
```text
Query-1: "xxx"
目的:高召回核心主题
Query-2: "xxx"
目的:补同义词盲区
```
#### 第三步:执行检索与候选召回
**执行原则**
1. 每组 query 都执行检索(建议每组 30-50 条)。
2. 合并结果形成候选池。
3. 按 `owner/repo` 去重。
4. 记录检索时间与 API 额度信息。
**候选池字段(最少)**
1. `owner/repo`
2. `stars`
3. `description`
4. `repo_url`
5. `archived`
6. `language`
7. `updated_at`
8. `topics`
9. `license`
#### 第四步:去重与硬过滤
**硬过滤(默认)**
1. `stars >= 100`
2. `archived = false`
3. `is:public`
**可选硬过滤(按需)**
1. `fork = false`
2. 指定语言:`language:xxx`
3. 更新时效:最近 6-12 个月
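把硬过滤直接写进检索式的一个示意如下(Python,未授权调用 GitHub Search API;具体限定词按需求调整,不加 `sort` 参数即为相关性优先):
```python
import json
import urllib.parse
import urllib.request

# 示例检索式:主题词 + 默认硬过滤 + 可选限定(language / pushed 仅为举例)
query = "agent memory stars:>=100 archived:false is:public language:python pushed:>2025-09-01"
url = "https://api.github.com/search/repositories?" + urllib.parse.urlencode(
    {"q": query, "per_page": 50}
)
req = urllib.request.Request(
    url,
    headers={"Accept": "application/vnd.github+json", "User-Agent": "repo-search-skill"},
)
with urllib.request.urlopen(req, timeout=15) as resp:
    items = json.load(resp)["items"]

for repo in items[:5]:
    print(repo["full_name"], repo["stargazers_count"], repo["language"], repo["pushed_at"])
```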
---
### 环节三:质量精炼
#### 第五步:噪音剔除与相关性重排
**目标**:解决"命中 memory 但其实不是 agent memory"的噪音问题。
**噪音剔除规则(示例)**
1. 与主题无关的通用工程仓库(即使 stars 很高)
2. 关键词误命中仓库(仅描述中偶然出现 memory/agent)
3. 无实质内容或异常仓库
**排序原则(V1.1)**
`star` 不再作为主排序,只作为召回门槛之一。
建议综合排序权重:
1. 需求相关性:35%
2. 场景适用性:30%
3. 活跃度(更新时效):15%
4. 工程成熟度(文档/示例/可维护):15%
5. stars:5%
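按上述权重合成排序分数的一个简单示意(Python;各维度得分 0~1,由评估阶段给出,仅为草稿):
```python
# 权重取自上面的排序原则
WEIGHTS = {
    "relevance": 0.35,  # 需求相关性
    "fit": 0.30,        # 场景适用性
    "activity": 0.15,   # 活跃度(更新时效)
    "maturity": 0.15,   # 工程成熟度
    "stars": 0.05,      # 星标(仅作微调)
}

def composite_score(scores: dict) -> float:
    """scores 为各维度 0~1 的打分,缺失按 0 计。"""
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

print(composite_score({"relevance": 0.9, "fit": 0.8, "activity": 0.6, "maturity": 0.7, "stars": 1.0}))
```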
#### 第六步:仓库归属类型分类(必须)
**目标**:让用户一眼看懂"这个仓库到底是什么角色",避免把框架、应用、目录混为一谈。
**推荐类型字典**
1. 通用框架层
2. 应用产品层(可直接使用)
3. 记忆层/上下文基础设施
4. MCP 服务层
5. 目录清单层awesome/curated
6. 垂直场景方案层
7. 方法论/研究层
#### 第七步:深读与项目介绍撰写(必须)
**目标**:不是"仓库简介复述",而是输出"对用户有决策价值"的详细介绍。
**深读最低要求**
每个入选仓库至少查看:
1. README 核心定位段
2. 快速开始/功能章节标题
3. 近期维护信号(更新时间、Issue/PR 活跃)
**项目介绍写作要求(固定)**
"项目介绍"必须包含两部分并写细:
1. 这是什么:它在系统架构中的角色和边界
2. 为什么推荐:它在用户当前目标下的价值(不是泛泛优点)
可补充:
1. 典型适用场景(1-2 条)
2. 限制或不适用场景(1 条)
---
### 环节四:交付与迭代
#### 第八步:单榜生成与报告交付(最终)
**交付结构(固定)**
1. 需求摘要
2. 检索词清单(5-10 组 + 目的)
3. 筛选与重排规则(明确写出)
4. 结果总览(原始召回/去重后/过滤后)
5. Top N 单榜(表格)
6. 结论与下一步建议
**Top N 表格字段(固定)**
| 仓库 | 星标 | 仓库归属类型 | 项目介绍(是什么 + 推荐理由) | 其它信息补充 | 链接 |
|---|---:|---|---|---|---|
**"其它信息补充"建议内容**
- 语言 / License / 最近更新时间
- 上手复杂度(低/中/高)
- 风险提示(若有)
#### 第九步:用户确认与迭代(可选)
**迭代触发条件**
用户反馈"太泛/太窄/不够准/解释不够细"。
**迭代动作**
1. 调整检索词(增加场景词或同义词)
2. 调整 stars 门槛(100 -> 200/500)
3. 增加限定(语言/方向/更新时间)
4. 调整类型权重(例如优先应用层或优先框架层)
---
## 默认参数(V1.1)
1. 最低 stars:`100`
2. 默认输出:`Top 10`
3. 默认过滤:`archived=false`
4. 默认必须分类:是
5. 默认项目介绍粒度:详细(至少"是什么 + 为什么推荐")
## 质量检查清单(交付前自检)
1. 是否完成需求对齐并明确"目标形态"
2. 是否有 5-10 组 query 且每组有目的
3. 是否记录了检索时间与配额状态
4. 是否执行了去重、硬过滤和噪音剔除
5. 是否完成仓库归属类型分类
6. 是否每个推荐都有详细项目介绍(不是一句话)
7. 是否使用固定表格字段交付
8. 是否避免把安装实施混入本流程


@@ -0,0 +1,46 @@
---
name: gooddays-calendar
description: 讀寫 GoodDays 行程與今日吉時資訊。支援登入取得 JWT、查詢 `/api/unified-events`,以及呼叫 `/api/mystical/daily` 取得今日吉時/神祕學資料。
---
# gooddays-calendar
此 skill 用於整合 GoodDays API讓 agent 可以直接:
1. 登入 GoodDays 取得 JWT
2. 查詢未來事件(`/api/unified-events`)
3. 查詢今日吉時/神祕學資訊(`/api/mystical/daily`)
4. 用自然語言判斷是要查「吉時」還是「行程」
## API 重點
- Base URL:`GOODDAYS_BASE_URL`
- Login:`POST /auth/login`
- Mystical daily:`POST /api/mystical/daily`
- Events:`/api/unified-events`
## Mystical daily 實測格式
必填欄位:
- `year`
- `month`
- `day`
選填欄位:
- `hour`
- `userId`
範例:
```json
{"year":2026,"month":3,"day":13,"hour":9}
```
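以下是直接呼叫上述端點的 Python 示意(端點路徑與回傳欄位依本 skill 的 handler 行為假設,環境變數取自 workspace `.env`):
```python
import json
import os
import urllib.request

BASE = os.environ["GOODDAYS_BASE_URL"]

def post_json(url: str, payload: dict, token: str | None = None) -> dict:
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    req = urllib.request.Request(url, data=json.dumps(payload).encode("utf-8"), headers=headers)
    with urllib.request.urlopen(req, timeout=15) as resp:
        return json.load(resp)

# 1) 登入取得 JWT(假設回傳結構為 data.data.token)
login = post_json(f"{BASE}/auth/login", {
    "email": os.environ["GOODDAYS_EMAIL"],
    "password": os.environ["GOODDAYS_PASSWORD"],
})
token = login["data"]["token"]

# 2) 查詢今日吉時
daily = post_json(f"{BASE}/api/mystical/daily", {"year": 2026, "month": 3, "day": 13, "hour": 9}, token)
print(daily.get("data", {}).get("good_hours", {}).get("good_hours_display"))
```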
## 設定來源
從 workspace `.env` 讀取:
- `GOODDAYS_BASE_URL`
- `GOODDAYS_EMAIL`
- `GOODDAYS_PASSWORD`
- `GOODDAYS_USER_ID`
## 後續可擴充
- 新增事件建立/更新/刪除
- 將今日吉時整理成 daily-briefing 可直接引用的格式
- 與 `life-planner` / `daily-briefing` skill 串接


@@ -0,0 +1,192 @@
import { readFileSync, existsSync } from 'fs';
type EnvMap = Record<string, string>;
function loadDotEnv(path: string): EnvMap {
const out: EnvMap = {};
if (!existsSync(path)) return out;
const text = readFileSync(path, 'utf-8');
for (const line of text.split('\n')) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith('#')) continue;
const idx = trimmed.indexOf('=');
if (idx === -1) continue;
const key = trimmed.slice(0, idx).trim();
const value = trimmed.slice(idx + 1).trim();
out[key] = value;
}
return out;
}
async function login(baseUrl: string, email: string, password: string): Promise<string> {
const res = await fetch(`${baseUrl}/auth/login`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ email, password }),
});
const data = await res.json() as any;
if (!res.ok || !data?.data?.token) {
throw new Error(data?.error || 'GoodDays login failed');
}
return data.data.token;
}
async function getMysticalDaily(baseUrl: string, token: string, payload: any) {
const res = await fetch(`${baseUrl}/api/mystical/daily`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${token}`,
},
body: JSON.stringify(payload),
});
const data = await res.json() as any;
if (!res.ok || data?.success === false) {
throw new Error(data?.error || 'GoodDays mystical daily failed');
}
return data;
}
async function getUnifiedEvents(baseUrl: string, token: string, userId: string, startDate: string, endDate: string) {
const url = new URL(`${baseUrl}/api/unified-events`);
url.searchParams.set('userId', userId);
url.searchParams.set('startDate', startDate);
url.searchParams.set('endDate', endDate);
const res = await fetch(url.toString(), {
method: 'GET',
headers: token ? { 'Authorization': `Bearer ${token}` } : {},
});
const data = await res.json() as any;
if (!res.ok || data?.success === false) {
throw new Error(data?.error || 'GoodDays unified-events failed');
}
return data;
}
function parseDateFromMessage(message: string): { year: number; month: number; day: number; hour?: number } {
const now = new Date();
const dateMatch = message.match(/(\d{4})-(\d{1,2})-(\d{1,2})/);
const hourMatch = message.match(/(?:hour|小時|時|點)\s*[:]?\s*(\d{1,2})/i);
if (dateMatch) {
return {
year: Number(dateMatch[1]),
month: Number(dateMatch[2]),
day: Number(dateMatch[3]),
hour: hourMatch ? Number(hourMatch[1]) : undefined,
};
}
return {
year: now.getFullYear(),
month: now.getMonth() + 1,
day: now.getDate(),
hour: hourMatch ? Number(hourMatch[1]) : undefined,
};
}
function formatYmd(year: number, month: number, day: number): string {
return `${year}-${String(month).padStart(2, '0')}-${String(day).padStart(2, '0')}`;
}
function addDays(year: number, month: number, day: number, offset: number): { year: number; month: number; day: number } {
const d = new Date(year, month - 1, day);
d.setDate(d.getDate() + offset);
return { year: d.getFullYear(), month: d.getMonth() + 1, day: d.getDate() };
}
function detectIntent(message: string): 'events' | 'mystical' {
const m = message.toLowerCase();
if (/(行程|事件|日程|schedule|calendar|待會|今天有什麼安排|未來48小時)/i.test(m)) return 'events';
return 'mystical';
}
function summarizeEvents(events: any[]): string {
if (!Array.isArray(events) || events.length === 0) return '• 目前沒有查到符合條件的事件';
return events.slice(0, 20).map((evt: any, idx: number) => {
const title = evt?.title || evt?.name || evt?.summary || `事件 ${idx + 1}`;
const start = evt?.startDate || evt?.start || evt?.start_time || evt?.date || '未知時間';
const end = evt?.endDate || evt?.end || evt?.end_time || '';
return `${title}${start ? `:${start}` : ''}${end ? ` ~ ${end}` : ''}`;
}).join('\n');
}
export async function handler(ctx: any) {
const workspace = ctx.env?.OPENCLAW_WORKSPACE || `${process.env.HOME}/.openclaw/workspace`;
const env = {
...loadDotEnv(`${workspace}/.env`),
...process.env,
} as EnvMap;
const baseUrl = env.GOODDAYS_BASE_URL;
const email = env.GOODDAYS_EMAIL;
const password = env.GOODDAYS_PASSWORD;
const userId = env.GOODDAYS_USER_ID;
const message = ctx.message?.text || ctx.message?.content || '';
if (!baseUrl || !email || !password) {
return { reply: '缺少 GoodDays 設定,請先檢查 workspace/.env。' };
}
try {
const token = await login(baseUrl, email, password);
const datePayload = parseDateFromMessage(message);
const intent = detectIntent(message);
if (intent === 'events') {
const startDate = formatYmd(datePayload.year, datePayload.month, datePayload.day);
const plusOne = addDays(datePayload.year, datePayload.month, datePayload.day, 1);
const endDate = formatYmd(plusOne.year, plusOne.month, plusOne.day);
const result = await getUnifiedEvents(baseUrl, token, userId, startDate, endDate);
const events = result?.data || [];
return {
reply:
`📅 GoodDays 行程查詢\n\n` +
`區間:${startDate} ~ ${endDate}\n` +
`${summarizeEvents(events)}`,
metadata: {
engine: 'gooddays-calendar',
endpoint: '/api/unified-events',
startDate,
endDate,
count: Array.isArray(events) ? events.length : 0,
result,
},
};
}
const payload = { ...datePayload, userId };
if (payload.hour == null) delete (payload as any).hour;
const result = await getMysticalDaily(baseUrl, token, payload);
const d = result?.data || {};
const goodHours = d?.good_hours?.good_hours_display || '未提供';
const isGoodNow = d?.good_hours?.is_good_hour;
const ganzhi = d?.ganzhi?.day || '未知';
const lunar = d?.lunar?.full_date || '未知';
const dongong = d?.dongong?.note || '未提供';
const twelve = d?.twelve_star?.description || '未提供';
return {
reply:
`📅 GoodDays 今日資訊\n\n` +
`日期:${payload.year}-${String(payload.month).padStart(2, '0')}-${String(payload.day).padStart(2, '0')}` +
`${payload.hour != null ? ` ${payload.hour}:00` : ''}` +
`\n干支:${ganzhi}` +
`\n農曆:${lunar}` +
`\n吉時:${goodHours}` +
`\n此刻是否吉時:${isGoodNow === true ? '是' : isGoodNow === false ? '否' : '未知'}` +
`\n董公:${dongong}` +
`\n十二建星:${twelve}`,
metadata: {
engine: 'gooddays-calendar',
endpoint: '/api/mystical/daily',
payload,
result,
},
};
} catch (error: any) {
return {
reply: `❌ GoodDays 查詢失敗:${error?.message || String(error)}`,
metadata: { error: error?.message || String(error) },
};
}
}

skills/kokoro-tts Symbolic link

@@ -0,0 +1 @@
/home/selig/.openclaw/workspace/skills/kokoro-tts

skills/luxtts/SKILL.md Normal file

@@ -0,0 +1,47 @@
---
name: luxtts
description: 使用本機 LuxTTS 將文字合成為語音,特別適合需要較高品質中文/英文 voice clone 的情況。用於:(1) 使用主人參考音檔做語音克隆,(2) 中英混合朗讀但希望維持主人音色,(3) 比較 LuxTTS 與 Kokoro 的輸出品質,(4) 需要 LuxTTS API-only 本機服務時。
---
# luxtts
此 skill 提供 **LuxTTS** 文字轉語音能力,底層使用本機 **LuxTTS API**
## 目前架構
- systemd 服務:`luxtts`
- Port:`7861`
- 綁定:`127.0.0.1`
- Root path:`/luxtts`
- 健康檢查:`http://127.0.0.1:7861/luxtts/api/health`
- Web UI:**關閉**
- API:保留
## 推薦做法
目前最穩定的整合方式是直接呼叫本機 API
```bash
curl -sS -o /tmp/luxtts_test.wav \
-F "ref_audio=@/path/to/reference.wav" \
-F "text=这个世界已经改变了人工智能AI改变了这个世界的运作方式。" \
-F "num_steps=4" \
-F "t_shift=0.9" \
-F "speed=1.0" \
-F "duration=5" \
-F "rms=0.01" \
http://127.0.0.1:7861/luxtts/api/tts
```
## 注意事項
- 目前實測:**中文建議先轉簡體再輸入**。
- LuxTTS 比較適合:
- 主人音色 clone
- 中文/英文都希望保持同一個 clone 聲線
- 品質優先、速度其次
- 若只是快速中文朗讀、且不要求高擬真 clone,通常先考慮 `kokoro`
## 命名
之後對外統一稱呼為 **luxtts**

skills/luxtts/handler.ts Normal file

@@ -0,0 +1,134 @@
/**
* luxtts skill
* 文字轉語音:透過本機 LuxTTS API 進行 voice clone
*/
import { existsSync, readFileSync } from 'fs';
import { execFileSync } from 'child_process';
const LUXTTS_API = process.env.LUXTTS_API || 'http://127.0.0.1:7861/luxtts/api/tts';
const DEFAULT_REF_AUDIO = process.env.LUXTTS_REF_AUDIO || '/home/selig/.openclaw/workspace/media/refs/ref_from_762.wav';
const OUTPUT_DIR = '/home/selig/.openclaw/workspace/media';
const TRIGGER_WORDS = [
'luxtts', 'lux', '文字轉語音', '語音合成', '唸出來', '說出來', '轉語音', 'voice',
];
const SPEED_MODIFIERS: Record<string, number> = {
'慢速': 0.85,
'slow': 0.85,
'快速': 1.15,
'fast': 1.15,
};
function parseMessage(message: string): { text: string; speed: number } {
let cleaned = message;
let speed = 1.0;
for (const trigger of TRIGGER_WORDS) {
const re = new RegExp(trigger, 'gi');
cleaned = cleaned.replace(re, '');
}
for (const [modifier, value] of Object.entries(SPEED_MODIFIERS)) {
const re = new RegExp(modifier, 'gi');
if (re.test(cleaned)) {
cleaned = cleaned.replace(re, '');
speed = value;
}
}
cleaned = cleaned.replace(/^[\s:,、]+/, '').replace(/[\s:,、]+$/, '').trim();
return { text: cleaned, speed };
}
function ensureDependencies() {
if (!existsSync(DEFAULT_REF_AUDIO)) {
throw new Error(`找不到預設參考音檔:${DEFAULT_REF_AUDIO}`);
}
}
function generateSpeech(text: string, speed: number): string {
const timestamp = Date.now();
const outputPath = `${OUTPUT_DIR}/luxtts_clone_${timestamp}.wav`;
const curlCmd = [
'curl', '-sS', '-o', outputPath,
'-F', `ref_audio=@${DEFAULT_REF_AUDIO}`,
'-F', `text=${text}`,
'-F', 'num_steps=4',
'-F', 't_shift=0.9',
'-F', `speed=${speed}`,
'-F', 'duration=5',
'-F', 'rms=0.01',
LUXTTS_API,
];
execFileSync(curlCmd[0], curlCmd.slice(1), {
timeout: 600000,
stdio: 'pipe',
encoding: 'utf8',
});
if (!existsSync(outputPath)) {
throw new Error('LuxTTS 未產生輸出音檔');
}
const header = readFileSync(outputPath).subarray(0, 16).toString('ascii');
if (!header.includes('RIFF') && !header.includes('WAVE')) {
throw new Error(`LuxTTS 回傳非 WAV 音訊,檔頭:${JSON.stringify(header)}`);
}
return outputPath;
}
export async function handler(ctx: any) {
const message = ctx.message?.text || ctx.message?.content || '';
if (!message.trim()) {
return { reply: '請提供要合成的文字,例如「luxtts 这个世界已经改变了」' };
}
const { text, speed } = parseMessage(message);
if (!text) {
return { reply: '請提供要合成的文字,例如「luxtts 这个世界已经改变了」' };
}
try {
ensureDependencies();
const outputPath = generateSpeech(text, speed);
return {
reply:
'🔊 luxtts 語音合成完成' +
`\n\n📝 文字:${text}` +
`\n⏩ 語速:${speed}` +
`\n🎙 參考音檔:\`${DEFAULT_REF_AUDIO}\`` +
`\n🌐 API\`${LUXTTS_API}\`` +
`\n📂 檔案:\`${outputPath}\``,
metadata: {
text,
speed,
refAudio: DEFAULT_REF_AUDIO,
output: outputPath,
engine: 'luxtts',
backend: 'luxtts-api',
},
files: [outputPath],
};
} catch (error: any) {
return {
reply:
'❌ luxtts 語音合成失敗,請檢查 luxtts 服務、API 與預設參考音檔是否正常。' +
(error?.message ? `\n\n錯誤${error.message}` : ''),
metadata: {
text,
speed,
refAudio: DEFAULT_REF_AUDIO,
engine: 'luxtts',
backend: 'luxtts-api',
error: error?.message || String(error),
},
};
}
}


@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "obsidian-official-cli",
"installedVersion": "1.0.0",
"installedAt": 1773490883209
}


@@ -0,0 +1,42 @@
# Changelog
All notable changes to the Obsidian Official CLI Skill will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.0.0] - 2026-02-10
### Added
- Initial release of the Obsidian Official CLI skill
- Comprehensive coverage of Obsidian CLI v1.12+ commands
- File operations: create, read, edit, move, delete
- Search and discovery: full-text search, tag management, link analysis
- Daily notes and task management
- Template and bookmark operations
- Plugin and theme management
- Obsidian Sync integration
- File history and version control
- Developer tools and debugging commands
- TUI mode support with interactive features
- Complete documentation with examples and troubleshooting
- Platform-specific setup instructions (macOS, Linux, Windows)
### Features
- Auto-triggering based on Obsidian-related queries
- Support for all CLI parameter types and flags
- Vault targeting with multiple syntax options
- Copy-to-clipboard functionality
- Comprehensive error handling and validation
- Progressive disclosure design for optimal context usage
### Requirements
- Obsidian 1.12+ with Catalyst license
- CLI enabled in Obsidian settings
- OpenClaw skill system
### Documentation
- Complete command reference
- Usage examples and patterns
- Setup and troubleshooting guides
- TUI keyboard shortcuts
- Best practices for vault management


@@ -0,0 +1,159 @@
# Obsidian **Official CLI** Skill
An OpenClaw skill for working with Obsidian vaults using the **official Obsidian CLI (v1.12+)** - not third-party tools, but Obsidian's own built-in command-line interface with full feature support.
## ✨ Official CLI Features
**This skill uses Obsidian's official CLI** - not third-party integrations - giving you access to **all Obsidian features** from the terminal:
- **File Operations**: Create, read, edit, move, and delete notes with full Obsidian integration
- **Advanced Task Management**: Complete task operations with checkboxes, statuses, and custom markers
- **Database/Bases Support**: Query and manage Obsidian Bases with views and CSV/JSON export
- **Search & Discovery**: Full-text search, tag management, link analysis with Obsidian's search engine
- **Daily Notes & Templates**: Manage daily notes and insert templates with variable resolution
- **Plugin & Theme Management**: Install, enable, disable, and reload plugins/themes directly
- **Obsidian Sync Integration**: Full sync operations, history, and conflict resolution
- **Properties (Frontmatter)**: Read, write, and manage note properties with type validation
- **Workspace Management**: Control layouts, tabs, and saved workspaces
- **Developer Tools**: Console debugging, DOM inspection, screenshots, mobile emulation
- **TUI Mode**: Interactive terminal UI with autocomplete, history, and command palette access
## 📋 Requirements
- **Obsidian 1.12+** with early access (insider builds)
- **Catalyst license** (required for official CLI access)
- **Official CLI enabled** in Obsidian: Settings → General → Command line interface → Enable
- **Obsidian running** (CLI connects to the live Obsidian app for full feature access)
## 🚀 Installation
1. Download the skill file: [`obsidian-official-cli.skill`](obsidian-official-cli.skill)
2. Install via OpenClaw CLI:
```bash
openclaw skills install obsidian-official-cli.skill
```
## 💡 Usage Examples
Once installed, the skill will automatically trigger when you mention Obsidian operations:
- "Create a new note called 'Meeting Notes' using Obsidian CLI"
- "Search for all notes containing 'project' with Obsidian's search engine"
- "Show me all incomplete tasks and toggle their status via CLI"
- "Query my Books database and export to CSV"
- "Install the Dataview plugin and enable it"
- "Take a screenshot of my current Obsidian workspace"
- "Show me all orphaned notes in my vault graph"
## 🛠️ Obsidian CLI Setup
1. **Upgrade to Obsidian 1.12+**: Get early access via insider builds
2. **Enable CLI**: Settings → General → Command line interface → Enable
3. **Register command**: Follow the prompt to add `obsidian` to your PATH
4. **Restart terminal**: Or run `source ~/.zprofile` on macOS
5. **Test setup**: Run `obsidian version`
**Note**: Obsidian must be running for CLI commands to work.
## 🔧 Official CLI Command Coverage
**Complete access to Obsidian's official CLI** - every command from the native interface:
### File & Vault Management
- Native file operations with Obsidian's file resolver
- Folder management and vault organization
- Random note selection and unique name generation
### Advanced Content Features
- **Task Management**: Toggle, update status, custom markers (`todo`, `done`, `[-]`)
- **Properties**: Full frontmatter support with type validation (`list`, `text`, etc.)
- **Templates**: Insert with variable resolution and custom paths
- **Daily Notes**: Dedicated commands with append/prepend support
### Database/Knowledge Features
- **Obsidian Bases**: Query views, export CSV/JSON, create entries
- **Search Engine**: Obsidian's full-text search with context and filters
- **Link Graph**: Backlinks, orphans, deadends via Obsidian's link resolver
- **Tag System**: Complete tag analysis with occurrence counts
### Obsidian Ecosystem Integration
- **Plugin Lifecycle**: Install, enable, disable, reload with Obsidian's plugin manager
- **Theme Engine**: Access to Obsidian's theme system and CSS snippets
- **Sync Service**: Full Obsidian Sync operations, not file-level sync
- **Workspace System**: Save/load layouts, tab management, pane control
### Developer & Power User Features
- **Console Access**: Direct access to Obsidian's developer console
- **DOM Inspection**: Query Obsidian's UI elements and CSS
- **Command Palette**: Execute any registered Obsidian command by ID
- **Mobile Emulation**: Test mobile layouts and responsive behavior
## 🎮 TUI Mode
The skill supports Obsidian's interactive Terminal UI mode with:
- Command autocomplete
- Command history with search
- Keyboard shortcuts
- Multi-command sessions
## 📚 Documentation
The skill includes comprehensive documentation covering:
- Command syntax and parameters
- File targeting patterns (`file=` vs `path=`)
- TUI keyboard shortcuts
- Platform-specific setup instructions
- Troubleshooting guides
## 📁 Repository Structure
```
obsidian-official-cli-skill/
├── SKILL.md # Main skill source code
├── obsidian-official-cli.skill # Packaged skill file
├── README.md # This documentation
├── LICENSE # MIT license
├── CHANGELOG.md # Version history
└── .gitignore # Git ignore rules
```
## 🚀 Installation from Releases
Alternatively, download the packaged skill file from the [releases page](https://github.com/slmoloch/obsidian-official-cli-skill/releases) and install:
```bash
# Download the .skill file from releases, then:
openclaw skills install obsidian-official-cli.skill
```
## 🛠️ Development
**For Developers:**
- `SKILL.md` contains the complete skill implementation
- Edit `SKILL.md` to modify functionality
- Rebuild with `openclaw skills build` after changes
- Test locally before submitting changes
## 🤝 Contributing
Found an issue or want to improve the skill?
1. Open an issue describing the problem/enhancement
2. Fork the repository
3. Make your changes to `SKILL.md`
4. Test your changes locally
5. Submit a pull request
## 📄 License
MIT License - feel free to modify and redistribute.
## 🔗 Links
- [Obsidian Official CLI Documentation](https://help.obsidian.md/cli)
- [OpenClaw Documentation](https://docs.openclaw.ai)
- [ClawHub - Skill Marketplace](https://clawhub.com)
---
**Built for OpenClaw** 🦞 | **Supports Obsidian CLI v1.12+** 📝

View File

@@ -0,0 +1,299 @@
---
name: obsidian-official-cli
description: Work with Obsidian vaults using the official Obsidian CLI (v1.12+). Open, search, create, move, and manage notes from the terminal. Use when working with Obsidian vaults for note management, file operations, searching content, managing tasks, properties, links, plugins, themes, sync operations, or any command-line interaction with Obsidian.
---
# Obsidian CLI
Official command-line interface for Obsidian. Anything you can do in Obsidian can be done from the command line — including developer commands for debugging, screenshots, and plugin reloading.
## Prerequisites
- **Obsidian 1.12+** and **Catalyst license** required
- **Settings → General → Command line interface** → Enable
- Follow prompt to register the `obsidian` command
- Restart terminal or `source ~/.zprofile` (macOS)
- **Note:** Obsidian must be running for the CLI to work
Test setup: `obsidian version`
## Core Patterns
### Command Structure
```bash
# Single commands
obsidian <command> [parameters] [flags]
# TUI mode (interactive)
obsidian # Enter TUI with autocomplete and history
# Vault targeting
obsidian vault=Notes <command>
obsidian vault="My Vault" <command>
```
### Parameter Types
- **Parameters:** `name=value` (quote values with spaces)
- **Flags:** Boolean switches (just include to enable)
- **Multiline:** Use `\n` for newlines, `\t` for tabs
- **Copy output:** Add `--copy` to copy to clipboard
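A small combined illustration of these forms; the note name is a placeholder, and it assumes `\n` escapes are interpreted in `content=` as described above:
```bash
# Quoted parameter with spaces, plus two boolean flags
obsidian create name="Weekly Review" content="# Agenda\n- item one\n- item two" silent overwrite

# Read it back and copy the result to the clipboard
obsidian read file="Weekly Review" --copy
```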
## File Operations
### Basic File Management
```bash
# Info and listing
obsidian file # Active file info
obsidian file file=Recipe # Specific file info
obsidian files # List all files
obsidian files folder=Projects/ # Filter by folder
obsidian folders # List folders
# Open and read
obsidian open file=Recipe # Open file
obsidian open path="Inbox/Note.md" newtab
obsidian read # Read active file
obsidian read file=Recipe --copy # Read and copy to clipboard
# Create new notes
obsidian create name="New Note"
obsidian create name="Note" content="# Title Body"
obsidian create path="Inbox/Idea.md" template=Daily
obsidian create name="Note" silent overwrite
# Modify content
obsidian append file=Note content="New line"
obsidian append file=Note content="Same line" inline
obsidian prepend file=Note content="After frontmatter"
# Move and delete
obsidian move file=Note to=Archive/
obsidian move path="Inbox/Old.md" to="Projects/New.md"
obsidian delete file=Note # To trash
obsidian delete file=Note permanent
```
### File Targeting
- `file=<name>` — Wikilink resolution (matches by name)
- `path=<path>` — Exact path from vault root
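For example, assuming a note named `Recipe` stored under `Cooking/`, and that `read` accepts either targeting form as the rules above suggest:
```bash
# Wikilink-style resolution: finds Recipe.md wherever it lives in the vault
obsidian read file=Recipe

# Exact path from the vault root
obsidian read path="Cooking/Recipe.md"
```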
## Search and Discovery
### Text Search
```bash
obsidian search query="meeting notes"
obsidian search query="TODO" matches # Show context
obsidian search query="project" path=Projects/
obsidian search query="urgent" limit=10 case
obsidian search query="API" format=json
obsidian search:open query="search term" # Open in Obsidian
```
### Tags and Properties
```bash
# Tags
obsidian tags # Active file tags
obsidian tags all # All vault tags
obsidian tags all counts sort=count # By frequency
obsidian tag name=project # Tag info
# Properties (frontmatter)
obsidian properties file=Note
obsidian property:read name=status file=Note
obsidian property:set name=status value=done file=Note
obsidian property:set name=tags value="a,b,c" type=list file=Note
obsidian property:remove name=draft file=Note
```
### Links and Structure
```bash
# Backlinks and outgoing links
obsidian backlinks file=Note # What links to this
obsidian links file=Note # Outgoing links
# Vault analysis
obsidian orphans # No incoming links
obsidian deadends # No outgoing links
obsidian unresolved # Broken links
obsidian unresolved verbose counts
```
## Daily Notes and Tasks
### Daily Notes
```bash
obsidian daily # Open today's note
obsidian daily paneType=split # Open in split
obsidian daily:read # Print contents
obsidian daily:append content="- [ ] New task"
obsidian daily:prepend content="## Morning"
```
### Task Management
```bash
# List tasks
obsidian tasks # Active file
obsidian tasks all # All vault tasks
obsidian tasks all todo # Incomplete only
obsidian tasks file=Recipe # Specific file
obsidian tasks daily # Daily note tasks
# Update tasks
obsidian task ref="Recipe.md:8" toggle
obsidian task file=Recipe line=8 done
obsidian task file=Recipe line=8 todo
obsidian task file=Note line=5 status="-" # Custom [-]
```
## Templates and Bookmarks
### Templates
```bash
obsidian templates # List all templates
obsidian template:read name=Daily
obsidian template:read name=Daily resolve title="My Note"
obsidian template:insert name=Daily # Insert into active file
obsidian create name="Meeting Notes" template=Meeting
```
### Bookmarks
```bash
obsidian bookmarks # List all
obsidian bookmark file="Important.md"
obsidian bookmark file=Note subpath="#Section"
obsidian bookmark folder="Projects/"
obsidian bookmark search="TODO"
obsidian bookmark url="https://..." title="Reference"
```
## Plugin and Theme Management
### Plugins
```bash
# List and info
obsidian plugins # All installed
obsidian plugins:enabled # Only enabled
obsidian plugin id=dataview # Plugin info
# Manage plugins
obsidian plugin:enable id=dataview
obsidian plugin:disable id=dataview
obsidian plugin:install id=dataview enable
obsidian plugin:uninstall id=dataview
obsidian plugin:reload id=my-plugin # Development
```
### Themes and CSS
```bash
# Themes
obsidian themes # List installed
obsidian theme # Active theme
obsidian theme:set name=Minimal
obsidian theme:install name="Theme Name" enable
# CSS Snippets
obsidian snippets # List all
obsidian snippet:enable name=my-snippet
obsidian snippet:disable name=my-snippet
```
## Advanced Features
### Obsidian Sync
```bash
obsidian sync:status # Status and usage
obsidian sync on/off # Resume/pause
obsidian sync:history file=Note
obsidian sync:restore file=Note version=2
obsidian sync:deleted # Deleted files
```
### File History
```bash
obsidian history file=Note # List versions
obsidian history:read file=Note version=1
obsidian history:restore file=Note version=2
obsidian diff file=Note from=2 to=1 # Compare versions
```
### Developer Tools
```bash
# Console and debugging
obsidian devtools # Toggle dev tools
obsidian dev:console # Show console
obsidian dev:errors # JS errors
obsidian eval code="app.vault.getFiles().length"
# Screenshots and DOM
obsidian dev:screenshot path=screenshot.png
obsidian dev:dom selector=".workspace-leaf"
obsidian dev:css selector=".mod-active" prop=background
# Mobile and debugging
obsidian dev:mobile on/off
obsidian dev:debug on/off
```
## Utility Commands
### Workspace and Navigation
```bash
# Workspace management
obsidian workspace # Current layout
obsidian workspace:save name="coding"
obsidian workspace:load name="coding"
obsidian tabs # Open tabs
obsidian tab:open file=Note
# Random and unique
obsidian random # Open random note
obsidian random folder=Inbox newtab
obsidian unique # Create unique name
obsidian wordcount file=Note # Word count
```
### Command Palette
```bash
obsidian commands # List all command IDs
obsidian commands filter=editor # Filter commands
obsidian command id=editor:toggle-bold
obsidian hotkeys # List hotkeys
```
## TUI Mode
Interactive terminal UI with enhanced features:
```bash
obsidian # Enter TUI mode
```
**TUI Shortcuts:**
- **Navigation:** ←/→ (Ctrl+B/F), Home/End (Ctrl+A/E)
- **Editing:** Ctrl+U (delete to start), Ctrl+K (delete to end)
- **Autocomplete:** Tab/↓ (enter), Shift+Tab/Esc (exit)
- **History:** ↑/↓ (Ctrl+P/N), Ctrl+R (reverse search)
- **Other:** Enter (execute), Ctrl+L (clear), Ctrl+C/D (exit)
## Troubleshooting
### Setup Issues
- Use latest installer (1.11.7+) with early access (1.12.x)
- Restart terminal after CLI registration
- Ensure Obsidian is running before using CLI
### Platform-Specific
**macOS:** PATH added to `~/.zprofile`
```bash
# For other shells, add manually:
export PATH="$PATH:/Applications/Obsidian.app/Contents/MacOS"
```
**Linux:** Symlink at `/usr/local/bin/obsidian`
```bash
# Manual creation if needed:
sudo ln -s /path/to/obsidian /usr/local/bin/obsidian
```
**Windows:** Requires `Obsidian.com` terminal redirector (Catalyst Discord)

View File

@@ -0,0 +1,6 @@
{
"ownerId": "kn7etsekxfxt0s4sv9h32j0vd980yb5y",
"slug": "obsidian-official-cli",
"version": "1.0.0",
"publishedAt": 1770778403903
}

View File

@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "openclaw-tavily-search",
"installedVersion": "0.1.0",
"installedAt": 1773208165907
}

View File

@@ -0,0 +1,48 @@
---
name: tavily-search
description: "Web search via Tavily API (alternative to Brave). Use when the user asks to search the web / look up sources / find links and Brave web_search is unavailable or undesired. Returns a small set of relevant results (title, url, snippet) and can optionally include short answer summaries."
---
# Tavily Search
Use the bundled script to search the web with Tavily.
## Requirements
- Provide API key via either:
- environment variable: `TAVILY_API_KEY`, or
- `~/.openclaw/.env` line: `TAVILY_API_KEY=...`
## Commands
Run from the OpenClaw workspace:
```bash
# raw JSON (default)
python3 {baseDir}/scripts/tavily_search.py --query "..." --max-results 5
# include short answer (if available)
python3 {baseDir}/scripts/tavily_search.py --query "..." --max-results 5 --include-answer
# stable schema (closer to web_search): {query, results:[{title,url,snippet}], answer?}
python3 {baseDir}/scripts/tavily_search.py --query "..." --max-results 5 --format brave
# human-readable Markdown list
python3 {baseDir}/scripts/tavily_search.py --query "..." --max-results 5 --format md
```
## Output
### raw (default)
- JSON: `query`, optional `answer`, `results: [{title,url,content}]`
### brave
- JSON: `query`, optional `answer`, `results: [{title,url,snippet}]`
### md
- A compact Markdown list with title/url/snippet.
## Notes
- Keep `max-results` small by default (3–5) to reduce token/reading load.
- Prefer returning URLs + snippets; fetch full pages only when needed.
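For instance, a minimal discovery pass that keeps only the URLs might look like this (assumes `jq` is available; `{baseDir}` as in the commands above):
```bash
# brave format has a stable shape: {query, results:[{title,url,snippet}], answer?}
python3 {baseDir}/scripts/tavily_search.py \
  --query "postgres vacuum tuning" \
  --max-results 3 \
  --format brave | jq -r '.results[].url'
```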

View File

@@ -0,0 +1,6 @@
{
"ownerId": "kn78hhhbxwjs4nrcyn8my5fcw981wmys",
"slug": "openclaw-tavily-search",
"version": "0.1.0",
"publishedAt": 1772121679343
}

View File

@@ -0,0 +1,159 @@
#!/usr/bin/env python3
import argparse
import json
import os
import pathlib
import re
import sys
import urllib.request
TAVILY_URL = "https://api.tavily.com/search"
def load_key():
key = os.environ.get("TAVILY_API_KEY")
if key:
return key.strip()
env_path = pathlib.Path.home() / ".openclaw" / ".env"
if env_path.exists():
try:
txt = env_path.read_text(encoding="utf-8", errors="ignore")
m = re.search(r"^\s*TAVILY_API_KEY\s*=\s*(.+?)\s*$", txt, re.M)
if m:
v = m.group(1).strip().strip('"').strip("'")
if v:
return v
except Exception:
pass
return None
def tavily_search(query: str, max_results: int, include_answer: bool, search_depth: str):
key = load_key()
if not key:
raise SystemExit(
"Missing TAVILY_API_KEY. Set env var TAVILY_API_KEY or add it to ~/.openclaw/.env"
)
payload = {
"api_key": key,
"query": query,
"max_results": max_results,
"search_depth": search_depth,
"include_answer": bool(include_answer),
"include_images": False,
"include_raw_content": False,
}
data = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
TAVILY_URL,
data=data,
headers={"Content-Type": "application/json", "Accept": "application/json"},
method="POST",
)
with urllib.request.urlopen(req, timeout=30) as resp:
body = resp.read().decode("utf-8", errors="replace")
try:
obj = json.loads(body)
except json.JSONDecodeError:
raise SystemExit(f"Tavily returned non-JSON: {body[:300]}")
out = {
"query": query,
"answer": obj.get("answer"),
"results": [],
}
for r in (obj.get("results") or [])[:max_results]:
out["results"].append(
{
"title": r.get("title"),
"url": r.get("url"),
"content": r.get("content"),
}
)
if not include_answer:
out.pop("answer", None)
return out
def to_brave_like(obj: dict) -> dict:
# A lightweight, stable shape similar to web_search: results with title/url/snippet.
results = []
for r in obj.get("results", []) or []:
results.append(
{
"title": r.get("title"),
"url": r.get("url"),
"snippet": r.get("content"),
}
)
out = {"query": obj.get("query"), "results": results}
if "answer" in obj:
out["answer"] = obj.get("answer")
return out
def to_markdown(obj: dict) -> str:
lines = []
if obj.get("answer"):
lines.append(obj["answer"].strip())
lines.append("")
for i, r in enumerate(obj.get("results", []) or [], 1):
title = (r.get("title") or "").strip() or r.get("url") or "(no title)"
url = r.get("url") or ""
snippet = (r.get("content") or "").strip()
lines.append(f"{i}. {title}")
if url:
lines.append(f" {url}")
if snippet:
lines.append(f" - {snippet}")
return "\n".join(lines).strip() + "\n"
def main():
ap = argparse.ArgumentParser()
ap.add_argument("--query", required=True)
ap.add_argument("--max-results", type=int, default=5)
ap.add_argument("--include-answer", action="store_true")
ap.add_argument(
"--search-depth",
default="basic",
choices=["basic", "advanced"],
help="Tavily search depth",
)
ap.add_argument(
"--format",
default="raw",
choices=["raw", "brave", "md"],
help="Output format: raw (default) | brave (title/url/snippet) | md (human-readable)",
)
args = ap.parse_args()
res = tavily_search(
query=args.query,
max_results=max(1, min(args.max_results, 10)),
include_answer=args.include_answer,
search_depth=args.search_depth,
)
if args.format == "md":
sys.stdout.write(to_markdown(res))
return
if args.format == "brave":
res = to_brave_like(res)
json.dump(res, sys.stdout, ensure_ascii=False)
sys.stdout.write("\n")
if __name__ == "__main__":
main()

View File

@@ -7,13 +7,15 @@
* - embed_to_pg.py (Python venv at /home/selig/apps/qmd-pg/)
*/
import { execSync, exec } from 'child_process';
import { exec, execFile } from 'child_process';
import { promisify } from 'util';
const execAsync = promisify(exec);
const execFileAsync = promisify(execFile);
const QMD_CMD = '/home/selig/.nvm/versions/node/v24.13.1/bin/qmd';
const EMBED_PY = '/home/selig/apps/qmd-pg/venv/bin/python3 /home/selig/apps/qmd-pg/embed_to_pg.py';
const EMBED_PY_BIN = '/home/selig/apps/qmd-pg/venv/bin/python3';
const EMBED_PY_SCRIPT = '/home/selig/apps/qmd-pg/embed_to_pg.py';
const MAX_SEARCH_LEN = 1500; // 回覆中搜尋結果最大字數
interface SearchResult {
@@ -23,17 +25,12 @@ interface SearchResult {
similarity?: number;
}
interface QmdResult {
path: string;
text?: string;
score?: number;
}
/** 執行 qmd BM25 全文搜尋 */
async function qmdSearch(query: string, topK = 5): Promise<string> {
try {
const { stdout } = await execAsync(
`${QMD_CMD} search ${JSON.stringify(query)} --output markdown --limit ${topK}`,
const { stdout } = await execFileAsync(
QMD_CMD,
['search', query, '--output', 'markdown', '--limit', String(topK)],
{ timeout: 15000, env: { ...process.env, HOME: '/home/selig' } }
);
return stdout.trim() || '(無結果)';
@@ -45,8 +42,9 @@ async function qmdSearch(query: string, topK = 5): Promise<string> {
/** 執行 PostgreSQL 向量語意搜尋 */
async function pgSearch(query: string, topK = 5): Promise<SearchResult[]> {
try {
const { stdout } = await execAsync(
`${EMBED_PY} search ${JSON.stringify(query)} --top-k ${topK} --json`,
const { stdout } = await execFileAsync(
EMBED_PY_BIN,
[EMBED_PY_SCRIPT, 'search', query, '--top-k', String(topK), '--json'],
{ timeout: 20000 }
);
return JSON.parse(stdout) as SearchResult[];
@@ -71,7 +69,7 @@ async function triggerEmbed(): Promise<string> {
try {
// 背景執行,不等待完成
exec(
`${QMD_CMD} embed 2>&1 >> /tmp/qmd-embed.log & ${EMBED_PY} embed 2>&1 >> /tmp/qmd-embed.log &`,
`${QMD_CMD} embed 2>&1 >> /tmp/qmd-embed.log & ${EMBED_PY_BIN} ${EMBED_PY_SCRIPT} embed 2>&1 >> /tmp/qmd-embed.log &`,
{ env: { ...process.env, HOME: '/home/selig' } }
);
return '✅ 索引更新已在背景啟動,約需 1-5 分鐘完成。';
@@ -86,8 +84,9 @@ async function getStats(): Promise<string> {
// qmd collection list
try {
const { stdout } = await execAsync(
`${QMD_CMD} collection list`,
const { stdout } = await execFileAsync(
QMD_CMD,
['collection', 'list'],
{ timeout: 5000, env: { ...process.env, HOME: '/home/selig' } }
);
results.push(`**qmd Collections:**\n\`\`\`\n${stdout.trim()}\n\`\`\``);
@@ -97,8 +96,9 @@ async function getStats(): Promise<string> {
// pgvector stats
try {
const { stdout } = await execAsync(
`${EMBED_PY} stats`,
const { stdout } = await execFileAsync(
EMBED_PY_BIN,
[EMBED_PY_SCRIPT, 'stats'],
{ timeout: 10000 }
);
results.push(`**PostgreSQL pgvector:**\n\`\`\`\n${stdout.trim()}\n\`\`\``);

View File

@@ -0,0 +1 @@
/home/selig/.agents/skills/remotion-best-practices

View File

@@ -0,0 +1,125 @@
---
name: research-to-paper-slides
description: Turn local analysis outputs into publication-style drafts and presentation materials. Use when the user already has research/data-analysis artifacts such as summary.json, report.md, preview.csv, plots, or code-interpreter output and wants a complete first-pass paper draft, slide outline, speaker notes, or HTML deck. Especially useful after using the code-interpreter skill on small-to-medium datasets, when the next step is to package findings into a paper, report, pitch deck, class slides, or meeting presentation.
---
# research-to-paper-slides
Generate a complete first-pass writing bundle from analysis artifacts.
## Inputs
Best input bundle:
- `summary.json`
- `report.md`
- one or more plot PNG files
Optional:
- `preview.csv`
- raw CSV/JSON/XLSX path for source naming only
- extra notes from the user (audience, tone, purpose)
## Levels
Choose how far the workflow should go:
- `--level v2`**基礎交付版**
- 輸出:`paper.md``slides.md``speaker-notes.md``deck.html`
- 適合:快速草稿、先出第一版內容
- 不包含:`insights.md`、逐圖解讀頁、正式 deck 視覺強化
- `--level v3`**洞察強化版**
- 包含 `v2` 全部內容
- 另外增加:`insights.md`、每張圖各一頁解讀、speaker notes 逐圖講稿
- 適合:內部討論、研究整理、需要把圖表講清楚
- `--level v4`**正式交付版**
- 包含 `v3` 全部內容
- 另外增加:更正式的 deck 視覺版面、PDF-ready 工作流
- 適合:正式簡報、提案、對外展示
## Modes
- `academic` — 論文/研究報告/研討會簡報
- `business` — 內部決策/管理匯報/策略說明
- `pitch` — 提案/募資/對外說服型簡報
## Outputs
Depending on `--level`, the generator creates:
- `paper.md` — structured paper/report draft
- `slides.md` — slide-by-slide content outline
- `speaker-notes.md` — presenter script notes
- `insights.md` — key insights + plot interpretations (`v3` / `v4`)
- `deck.html` — printable deck HTML
- `bundle.json` — machine-readable manifest with `level` and `levelNote`
Optional local export:
- `export_pdf.py` — export `deck.html` to PDF via local headless Chromium
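As a rough sketch, a `v3` run should leave an output directory similar to this (plot file names depend on the analysis artifacts):
```bash
ls /path/to/paper-slides-out
# paper.md  slides.md  speaker-notes.md  insights.md  deck.html  bundle.json
# plus any hist_*.png / bar_*.png / line_*.png copied from the analysis directory
```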
## Workflow
1. Point the generator at an analysis artifact directory.
2. Pass `--mode` for audience style.
3. Pass `--level` for workflow depth.
4. Review the generated markdown/html.
5. If needed, refine wording or structure.
6. If using `v4`, export `deck.html` to PDF.
## Commands
### V2 — 基礎交付版
```bash
python3 {baseDir}/scripts/generate_bundle.py \
--analysis-dir /path/to/analysis/out \
--output-dir /path/to/paper-slides-out \
--title "研究標題" \
--audience "投資人" \
--purpose "簡報" \
--mode business \
--level v2
```
### V3 — 洞察強化版
```bash
python3 {baseDir}/scripts/generate_bundle.py \
--analysis-dir /path/to/analysis/out \
--output-dir /path/to/paper-slides-out \
--title "研究標題" \
--audience "研究者" \
--purpose "研究整理" \
--mode academic \
--level v3
```
### V4 — 正式交付版
```bash
python3 {baseDir}/scripts/generate_bundle.py \
--analysis-dir /path/to/analysis/out \
--output-dir /path/to/paper-slides-out \
--title "研究標題" \
--audience "投資人" \
--purpose "募資簡報" \
--mode pitch \
--level v4
```
## PDF export
If local Chromium is available, try:
```bash
python3 {baseDir}/scripts/export_pdf.py \
--html /path/to/deck.html \
--pdf /path/to/deck.pdf
```
## Notes
- Prefer this skill after `code-interpreter` or any workflow that already produced plots and structured summaries.
- Keep this as a first-pass drafting tool; the output is meant to be edited, not treated as final publication-ready text.
- On this workstation, Chromium CLI `--print-to-pdf` may still fail with host-specific permission/runtime quirks even when directories are writable.
- When the user wants a PDF, try `export_pdf.py` first; if it fails, immediately fall back to OpenClaw browser PDF export on a locally served `deck.html`.

View File

@@ -0,0 +1,14 @@
# PDF Notes
## Current recommended path
1. Generate `deck.html` with this skill.
2. Open `deck.html` in the browser.
3. Export to PDF with browser print/PDF flow.
4. If small textual tweaks are needed after PDF export, use the installed `nano-pdf` skill.
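If the browser export step needs a served page rather than a `file://` URL, one hedged option is Python's built-in static server (port is arbitrary):
```bash
cd /path/to/paper-slides-out
python3 -m http.server 8080
# then open http://localhost:8080/deck.html and use the browser's print / save-as-PDF flow
```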
## Why this path
- HTML is easier to iterate than direct PDF generation.
- Existing plot PNG files can be embedded cleanly.
- Browser PDF export preserves layout reliably for first-pass decks.

View File

@@ -0,0 +1,53 @@
#!/usr/bin/env python3
import argparse
import glob
import os
import shutil
import subprocess
import tempfile
from pathlib import Path
def find_browser() -> str:
# Playwright Chromium (most reliable on this workstation)
for pw in sorted(glob.glob(os.path.expanduser('~/.cache/ms-playwright/chromium-*/chrome-linux/chrome')), reverse=True):
if os.access(pw, os.X_OK):
return pw
for name in ['chromium-browser', 'chromium', 'google-chrome', 'google-chrome-stable']:
path = shutil.which(name)
if path:
return path
raise SystemExit('No supported browser found for PDF export. Install Playwright Chromium: npx playwright install chromium')
def main() -> int:
parser = argparse.ArgumentParser(description='Export deck HTML to PDF using headless Chromium')
parser.add_argument('--html', required=True)
parser.add_argument('--pdf', required=True)
args = parser.parse_args()
html_path = Path(args.html).expanduser().resolve()
pdf_path = Path(args.pdf).expanduser().resolve()
pdf_path.parent.mkdir(parents=True, exist_ok=True)
if not html_path.exists():
raise SystemExit(f'Missing HTML input: {html_path}')
browser = find_browser()
with tempfile.TemporaryDirectory(prefix='rtps-chromium-') as profile_dir:
cmd = [
browser,
'--headless',
'--disable-gpu',
'--no-sandbox',
f'--user-data-dir={profile_dir}',
f'--print-to-pdf={pdf_path}',
html_path.as_uri(),
]
subprocess.run(cmd, check=True)
print(str(pdf_path))
return 0
if __name__ == '__main__':
raise SystemExit(main())

View File

@@ -0,0 +1,498 @@
#!/usr/bin/env python3
import argparse
import json
import html
import shutil
from pathlib import Path
from typing import Any
MODES = {'academic', 'business', 'pitch'}
LEVELS = {'v2', 'v3', 'v4'}
LEVEL_NOTES = {
'v2': '基礎交付版paper/slides/speaker-notes/deck',
'v3': '洞察強化版v2 + insights + 每張圖逐頁解讀',
'v4': '正式交付版v3 + 更正式 deck 視覺 + PDF-ready 工作流',
}
def read_json(path: Path) -> dict[str, Any]:
return json.loads(path.read_text(encoding='utf-8'))
def read_text(path: Path) -> str:
return path.read_text(encoding='utf-8')
def find_plots(analysis_dir: Path) -> list[Path]:
return sorted([p for p in analysis_dir.glob('*.png') if p.is_file()])
def build_key_findings(summary: dict[str, Any]) -> list[str]:
findings: list[str] = []
for name, meta in summary.get('columnProfiles', {}).items():
if 'mean' in meta and meta.get('mean') is not None:
findings.append(f"欄位「{name}」平均值約為 {meta['mean']:.2f},總和約為 {meta['sum']:.2f}")
elif meta.get('topValues'):
top = meta['topValues'][0]
findings.append(f"欄位「{name}」最常見值為「{top['value']}」,出現 {top['count']} 次。")
if len(findings) >= 6:
break
if not findings:
findings.append('資料已完成初步整理,但尚缺少足夠特徵以自動歸納具體發現。')
return findings
def build_method_text(summary: dict[str, Any]) -> str:
rows = summary.get('rows', 0)
cols = summary.get('columns', 0)
parsed_dates = summary.get('parsedDateColumns', [])
parts = [f"本研究以一份包含 {rows} 筆資料、{cols} 個欄位的資料集作為分析基礎。"]
if parsed_dates:
parts.append(f"其中已自動辨識日期欄位:{', '.join(parsed_dates)}")
parts.append("分析流程包含欄位剖析、數值摘要、類別分布觀察,以及圖表化初步探索。")
return ''.join(parts)
def build_limitations(summary: dict[str, Any], mode: str) -> list[str]:
base = [
'本版本內容依據自動分析結果生成,仍需依情境補充背景、語境與論證細節。',
'目前主要反映描述性分析與初步視覺化結果,尚未自動進行嚴格因果推論或完整驗證。',
]
if mode == 'pitch':
base[0] = '本版本適合作為提案底稿,但對外簡報前仍需補上商業敘事、案例與風險說明。'
elif mode == 'business':
base[0] = '本版本可支援內部決策討論,但正式匯報前仍建議補充商務脈絡與對照基準。'
elif mode == 'academic':
base[0] = '本版本可作為論文或研究報告草稿,但正式提交前仍需補足文獻回顧、研究問題與方法論細節。'
if not summary.get('plots'):
base.append('本次分析未包含圖表產物,因此視覺化證據仍需後續補充。')
return base
def classify_plot(name: str) -> str:
low = name.lower()
if low.startswith('hist_'):
return 'histogram'
if low.startswith('bar_'):
return 'bar'
if low.startswith('line_'):
return 'line'
return 'plot'
def interpret_plot(plot: Path, mode: str) -> dict[str, str]:
kind = classify_plot(plot.name)
base = {
'histogram': {
'title': f'圖表解讀:{plot.name}',
'summary': '這張 histogram 用來觀察數值欄位的分布狀態、集中區域與可能的離群位置。',
'so_what': '若資料分布偏斜或過度集中,後續可考慮分群、分層或補充異常值檢查。',
},
'bar': {
'title': f'圖表解讀:{plot.name}',
'summary': '這張 bar chart 適合比較不同類別或分組之間的量體差異,幫助快速辨識高低落差。',
'so_what': '若類別差異明顯,後續可針對高表現或低表現組別追查原因與策略。',
},
'line': {
'title': f'圖表解讀:{plot.name}',
'summary': '這張 line chart 用於觀察時間序列變化,幫助辨識趨勢、波動與可能轉折點。',
'so_what': '若趨勢持續上升或下降,建議進一步比對外部事件、季節性與干預因素。',
},
'plot': {
'title': f'圖表解讀:{plot.name}',
'summary': '這張圖表提供一個視覺化切面,有助於快速掌握資料重點與分布特徵。',
'so_what': '建議將圖表與主要論點對齊,補上更具體的背景解讀。',
},
}[kind]
if mode == 'pitch':
base['so_what'] = '簡報時應直接說明這張圖支持了哪個主張,以及它如何增加說服力。'
elif mode == 'business':
base['so_what'] = '建議把這張圖對應到 KPI、風險或下一步行動方便管理層做判斷。'
elif mode == 'academic':
base['so_what'] = '建議將這張圖與研究問題、假設或比較基準一起討論,以提升論證完整度。'
return base
def build_insights(summary: dict[str, Any], plots: list[Path], mode: str) -> list[str]:
insights: list[str] = []
numeric = []
categorical = []
for name, meta in summary.get('columnProfiles', {}).items():
if 'mean' in meta and meta.get('mean') is not None:
numeric.append((name, meta))
elif meta.get('topValues'):
categorical.append((name, meta))
for name, meta in numeric[:3]:
insights.append(f"數值欄位「{name}」平均約 {meta['mean']:.2f},範圍約 {meta['min']:.2f}{meta['max']:.2f}")
for name, meta in categorical[:2]:
top = meta['topValues'][0]
insights.append(f"類別欄位「{name}」目前以「{top['value']}」最常見({top['count']} 次),值得作為第一輪聚焦對象。")
if plots:
insights.append(f"本次已生成 {len(plots)} 張圖表,可直接支撐逐頁圖表解讀與口頭報告。")
if mode == 'pitch':
insights.append('對外提案時,建議把最強的一項數據證據前置,讓聽眾先記住價值主張。')
elif mode == 'business':
insights.append('內部決策簡報時,建議把洞察轉成 KPI、優先順序與負責人。')
elif mode == 'academic':
insights.append('學術/研究情境下,建議將洞察進一步轉成研究問題、比較架構與後續驗證方向。')
return insights
def make_insights_md(title: str, mode: str, summary: dict[str, Any], plots: list[Path]) -> str:
insights = build_insights(summary, plots, mode)
plot_notes = [interpret_plot(p, mode) for p in plots]
lines = [f"# {title}Insights", '', f"- 模式:`{mode}`", '']
lines.append('## 關鍵洞察')
lines.extend([f"- {x}" for x in insights])
lines.append('')
if plot_notes:
lines.append('## 圖表解讀摘要')
for note in plot_notes:
lines.append(f"### {note['title']}")
lines.append(f"- 解讀:{note['summary']}")
lines.append(f"- 延伸:{note['so_what']}")
lines.append('')
return '\n'.join(lines).strip() + '\n'
def make_paper(title: str, audience: str, purpose: str, mode: str, level: str, summary: dict[str, Any], report_md: str, plots: list[Path], insights_md: str | None = None) -> str:
findings = build_key_findings(summary)
method_text = build_method_text(summary)
limitations = build_limitations(summary, mode)
plot_refs = '\n'.join([f"- `{p.name}`" for p in plots]) or '- 無'
findings_md = '\n'.join([f"- {x}" for x in findings])
limitations_md = '\n'.join([f"- {x}" for x in limitations])
if mode == 'academic':
sections = f"## 摘要\n\n本文面向{audience},以「{purpose}」為導向,整理目前資料分析結果並形成學術/研究草稿。\n\n## 研究背景與問題意識\n\n本文件根據既有分析產物自動整理,可作為研究報告、論文初稿或研究提案的起點。\n\n## 研究方法\n\n{method_text}\n\n## 研究發現\n\n{findings_md}\n\n## 討論\n\n目前結果可支撐初步描述性討論,後續可進一步補上研究假設、比較對照與方法嚴謹性。\n\n## 限制\n\n{limitations_md}\n\n## 結論\n\n本分析已形成研究性文件的結構基礎,適合進一步擴展為正式研究報告。"
elif mode == 'business':
sections = f"## 執行摘要\n\n本文面向{audience},目的是支援「{purpose}」的商務溝通與內部決策。\n\n## 商務背景\n\n本文件根據既有分析產物自動整理,適合作為內部簡報、策略討論或管理層報告的第一版。\n\n## 分析方法\n\n{method_text}\n\n## 關鍵洞察\n\n{findings_md}\n\n## 商業意涵\n\n目前資料已足以支撐一輪決策討論,建議進一步對照 KPI、目標值與外部環境。\n\n## 風險與限制\n\n{limitations_md}\n\n## 建議下一步\n\n建議針對最具決策價值的指標建立定期追蹤與後續驗證流程。"
else:
sections = f"## Pitch Summary\n\n本文面向{audience},用於支援「{purpose}」的提案、募資或說服型簡報。\n\n## Opportunity\n\n本文件根據既有分析產物自動整理,可作為提案 deck 與口頭簡報的第一版底稿。\n\n## Evidence\n\n{method_text}\n\n## Key Takeaways\n\n{findings_md}\n\n## Why It Matters\n\n目前結果已可形成明確敘事雛形,後續可補上市場機會、競品比較與具體行動方案。\n\n## Risks\n\n{limitations_md}\n\n## Ask / Next Step\n\n建議將數據證據、主張與下一步行動整合成對外一致的提案版本。"
insight_section = ''
if insights_md:
insight_section = f"\n## 洞察摘要\n\n{insights_md}\n"
return f"# {title}\n\n- 模式:`{mode}`\n- 等級:`{level}` — {LEVEL_NOTES[level]}\n- 對象:{audience}\n- 目的:{purpose}\n\n{sections}\n\n## 圖表與視覺化資產\n\n{plot_refs}{insight_section}\n## 附錄:原始自動分析摘要\n\n{report_md}\n"
def make_slides(title: str, audience: str, purpose: str, mode: str, summary: dict[str, Any], plots: list[Path], level: str) -> str:
findings = build_key_findings(summary)
rows = summary.get('rows', 0)
cols = summary.get('columns', 0)
if mode == 'academic':
slides = [
('封面', [f'標題:{title}', f'對象:{audience}', f'目的:{purpose}', f'等級:{LEVEL_NOTES[level]}']),
('研究問題', ['定義研究背景與核心問題', '說明本次分析欲回答的主題']),
('資料概況', [f'資料筆數:{rows}', f'欄位數:{cols}', '已完成基本欄位剖析與摘要']),
('方法', ['描述性統計', '類別分布觀察', '視覺化探索']),
('研究發現', findings[:3]),
('討論', ['解釋主要發現的可能意義', '連結研究問題與資料結果']),
('限制', build_limitations(summary, mode)[:2]),
('後續研究', ['補充文獻回顧', '加入比較基準與進階分析']),
('結論', ['本份簡報可作為研究報告或論文簡報的第一版底稿']),
]
elif mode == 'business':
slides = [
('封面', [f'標題:{title}', f'對象:{audience}', f'目的:{purpose}', f'等級:{LEVEL_NOTES[level]}']),
('決策問題', ['這份分析要支援什麼決策', '為什麼現在需要處理']),
('資料概況', [f'資料筆數:{rows}', f'欄位數:{cols}', '已完成基本資料盤點']),
('分析方法', ['描述性統計', '類別分布觀察', '視覺化探索']),
('關鍵洞察', findings[:3]),
('商業意涵', ['把數據結果轉成管理層可理解的含義', '指出可能影響的目標或 KPI']),
('風險與限制', build_limitations(summary, mode)[:2]),
('建議行動', ['列出近期可執行事項', '定義需要追蹤的指標']),
('結語', ['本份簡報可作為正式管理簡報的第一版底稿']),
]
else:
slides = [
('封面', [f'標題:{title}', f'對象:{audience}', f'目的:{purpose}', f'等級:{LEVEL_NOTES[level]}']),
('痛點 / 機會', ['說明這份分析解決什麼問題', '點出為什麼值得關注']),
('證據基礎', [f'資料筆數:{rows}', f'欄位數:{cols}', '已完成資料摘要與圖表探索']),
('方法', ['描述性統計', '類別觀察', '關鍵圖表整理']),
('核心亮點', findings[:3]),
('為什麼重要', ['連結價值、影響與說服力', '把發現轉成可傳達的敘事']),
('風險', build_limitations(summary, mode)[:2]),
('Next Step / Ask', ['明確提出下一步', '對齊資源、合作或決策需求']),
('結語', ['本份 deck 可作為提案或募資簡報的第一版底稿']),
]
parts = [f"# {title}|簡報稿\n\n- 模式:`{mode}`\n- 等級:`{level}` — {LEVEL_NOTES[level]}\n"]
slide_no = 1
for heading, bullets in slides:
parts.append(f"## Slide {slide_no}{heading}")
parts.extend([f"- {x}" for x in bullets])
parts.append('')
slide_no += 1
if level in {'v3', 'v4'} and plots:
for plot in plots:
note = interpret_plot(plot, mode)
parts.append(f"## Slide {slide_no}{note['title']}")
parts.append(f"- 圖檔:{plot.name}")
parts.append(f"- 解讀:{note['summary']}")
parts.append(f"- 延伸:{note['so_what']}")
parts.append('')
slide_no += 1
return '\n'.join(parts).strip() + '\n'
def make_speaker_notes(title: str, mode: str, summary: dict[str, Any], plots: list[Path], level: str) -> str:
findings = build_key_findings(summary)
findings_md = '\n'.join([f"- {x}" for x in findings])
opener = {
'academic': '先交代研究背景、研究問題與資料來源,再說明這份內容是研究草稿第一版。',
'business': '先講這份分析支援哪個決策,再交代這份內容的管理價值與時間敏感性。',
'pitch': '先抓住聽眾注意力,說明痛點、機會與這份資料為何值得相信。',
}[mode]
closer = {
'academic': '結尾時回到研究限制與後續研究方向。',
'business': '結尾時回到建議行動與追蹤機制。',
'pitch': '結尾時回到 ask、資源需求與下一步承諾。',
}[mode]
parts = [
f"# {title}Speaker Notes",
'',
f"- 模式:`{mode}`",
f"- 等級:`{level}` — {LEVEL_NOTES[level]}",
'',
'## 開場',
f"- {opener}",
'',
'## 重點提示',
findings_md,
'',
]
if level in {'v3', 'v4'} and plots:
parts.extend(['## 逐圖口頭提示', ''])
for plot in plots:
note = interpret_plot(plot, mode)
parts.append(f"### {plot.name}")
parts.append(f"- {note['summary']}")
parts.append(f"- {note['so_what']}")
parts.append('')
parts.extend(['## 收尾建議', f"- {closer}", '- 針對最重要的一張圖,多講一層其背後的意義與行動建議。', ''])
return '\n'.join(parts)
def make_deck_html(title: str, audience: str, purpose: str, slides_md: str, plots: list[Path], mode: str, level: str) -> str:
if level == 'v4':
theme = {
'academic': {'primary': '#0f172a', 'accent': '#334155', 'bg': '#eef2ff', 'hero': 'linear-gradient(135deg,#0f172a 0%,#1e293b 55%,#475569 100%)'},
'business': {'primary': '#0b3b66', 'accent': '#1d4ed8', 'bg': '#eff6ff', 'hero': 'linear-gradient(135deg,#0b3b66 0%,#1d4ed8 60%,#60a5fa 100%)'},
'pitch': {'primary': '#4c1d95', 'accent': '#7c3aed', 'bg': '#faf5ff', 'hero': 'linear-gradient(135deg,#4c1d95 0%,#7c3aed 60%,#c084fc 100%)'},
}[mode]
primary = theme['primary']
accent = theme['accent']
bg = theme['bg']
hero = theme['hero']
plot_map = {p.name: p for p in plots}
else:
primary = '#1f2937'
accent = '#2563eb'
bg = '#f6f8fb'
hero = None
plot_map = {p.name: p for p in plots}
slide_blocks = []
current = []
current_title = None
for line in slides_md.splitlines():
if line.startswith('## Slide '):
if current_title is not None:
slide_blocks.append((current_title, current))
current_title = line.replace('## ', '', 1)
current = []
elif line.startswith('- 模式:') or line.startswith('- 等級:') or line.startswith('# '):
continue
else:
current.append(line)
if current_title is not None:
slide_blocks.append((current_title, current))
sections = []
for heading, body in slide_blocks:
body_html = []
referenced_plot = None
for line in body:
line = line.strip()
if not line:
continue
if line.startswith('- 圖檔:'):
plot_name = line.replace('- 圖檔:', '', 1).strip()
referenced_plot = plot_map.get(plot_name)
body_html.append(f"<li>{html.escape(line[2:])}</li>")
elif line.startswith('- '):
body_html.append(f"<li>{html.escape(line[2:])}</li>")
else:
body_html.append(f"<p>{html.escape(line)}</p>")
img_html = ''
if referenced_plot and level in {'v3', 'v4'}:
img_html = f"<div class='plot-single'><img src='{html.escape(referenced_plot.name)}' alt='{html.escape(referenced_plot.name)}' /><div class='plot-caption'>圖:{html.escape(referenced_plot.name)}</div></div>"
list_items = ''.join(x for x in body_html if x.startswith('<li>'))
paras = ''.join(x for x in body_html if x.startswith('<p>'))
list_html = f"<ul>{list_items}</ul>" if list_items else ''
if level == 'v4':
sections.append(
f"<section class='slide'><div class='slide-top'><div class='eyebrow'>{html.escape(mode.upper())}</div>"
f"<div class='page-tag'>{html.escape(heading.split('')[0])}</div></div><h2>{html.escape(heading)}</h2>{paras}{list_html}{img_html}</section>"
)
else:
sections.append(f"<section class='slide'><h2>{html.escape(heading)}</h2>{paras}{list_html}{img_html}</section>")
if level == 'v4':
css = f"""
@page {{ size: A4 landscape; margin: 0; }}
@media print {{
body {{ background: #fff; padding: 0; }}
.slide {{ box-shadow: none; margin: 0; min-height: 100vh; border-radius: 0; page-break-after: always; page-break-inside: avoid; border-top-width: 16px; border-top-style: solid; border-top-color: {accent}; }}
.hero {{ box-shadow: none; margin: 0; min-height: 100vh; border-radius: 0; }}
}}
body {{ font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, 'Noto Sans CJK TC', sans-serif; background: {bg}; margin: 0; padding: 32px; color: {primary}; }}
.hero {{ max-width: 1180px; margin: 0 auto 32px; padding: 56px 64px; border-radius: 32px; background: {hero}; color: white; box-shadow: 0 32px 64px rgba(15,23,42,.15); display: flex; flex-direction: column; justify-content: center; min-height: 500px; }}
.hero h1 {{ margin: 12px 0 20px; font-size: 52px; line-height: 1.2; letter-spacing: -0.02em; font-weight: 800; text-wrap: balance; }}
.hero p {{ margin: 8px 0; font-size: 20px; opacity: .9; font-weight: 400; }}
.hero-meta {{ display: grid; grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); gap: 16px; margin-top: 48px; }}
.hero-card {{ background: rgba(255,255,255,.1); border: 1px solid rgba(255,255,255,.2); border-radius: 20px; padding: 20px 24px; backdrop-filter: blur(10px); }}
.hero-card strong {{ display: block; font-size: 14px; text-transform: uppercase; letter-spacing: 0.05em; opacity: 0.8; margin-bottom: 6px; }}
.slide {{ background: #fff; border-radius: 32px; padding: 48px 56px; margin: 0 auto 32px; max-width: 1180px; min-height: 660px; box-shadow: 0 16px 48px rgba(15,23,42,.08); page-break-after: always; border-top: 16px solid {accent}; position: relative; overflow: hidden; display: flex; flex-direction: column; }}
.slide::after {{ content: ''; position: absolute; right: -80px; top: -80px; width: 240px; height: 240px; background: radial-gradient(circle, {bg} 0%, rgba(255,255,255,0) 70%); pointer-events: none; }}
.slide-top {{ display: flex; justify-content: space-between; align-items: center; margin-bottom: 24px; z-index: 1; }}
h1, h2 {{ margin-top: 0; font-weight: 700; }}
h2 {{ font-size: 36px; margin-bottom: 24px; color: {primary}; letter-spacing: -0.01em; }}
.slide p {{ font-size: 20px; line-height: 1.6; color: #334155; margin-bottom: 16px; }}
.slide ul {{ line-height: 1.6; font-size: 22px; padding-left: 28px; color: #1e293b; margin-top: 8px; flex-grow: 1; }}
.slide li {{ position: relative; padding-left: 8px; }}
.slide li + li {{ margin-top: 14px; }}
.slide li::marker {{ color: {accent}; font-weight: bold; }}
.eyebrow {{ display: inline-flex; align-items: center; padding: 8px 16px; border-radius: 999px; background: {bg}; color: {accent}; font-weight: 800; font-size: 13px; letter-spacing: .1em; box-shadow: 0 2px 8px rgba(0,0,0,0.04); }}
.page-tag {{ color: #94a3b8; font-size: 14px; font-weight: 700; text-transform: uppercase; letter-spacing: 0.05em; }}
.plot-single {{ margin-top: auto; text-align: center; padding-top: 24px; position: relative; display: flex; flex-direction: column; align-items: center; justify-content: center; }}
.plot-single img {{ max-width: 100%; max-height: 380px; border: 1px solid #e2e8f0; border-radius: 20px; background: #f8fafc; box-shadow: 0 12px 32px rgba(15,23,42,.06); padding: 8px; }}
.plot-caption {{ margin-top: 14px; font-size: 15px !important; color: #64748b !important; font-style: italic; text-align: center; background: #f1f5f9; padding: 6px 16px; border-radius: 999px; }}
""".strip()
hero_html = (
f"<div class='hero'><div class='eyebrow'>{html.escape(mode.upper())}</div>"
f"<h1>{html.escape(title)}</h1><p>適用對象:{html.escape(audience)}</p><p>目的:{html.escape(purpose)}</p>"
f"<div class='hero-meta'>"
f"<div class='hero-card'><strong>等級</strong><br>{html.escape(level)}{html.escape(LEVEL_NOTES[level])}</div>"
f"<div class='hero-card'><strong>圖表數量</strong><br>{len(plots)}</div>"
f"<div class='hero-card'><strong>輸出定位</strong><br>正式 deck / PDF-ready</div>"
f"</div></div>"
)
else:
css = f"""
body {{ font-family: Arial, 'Noto Sans CJK TC', sans-serif; background: {bg}; margin: 0; padding: 24px; color: {primary}; }}
.hero {{ max-width: 1100px; margin: 0 auto 24px; padding: 8px 6px; }}
.slide {{ background: #fff; border-radius: 18px; padding: 32px; margin: 0 auto 24px; max-width: 1100px; box-shadow: 0 8px 28px rgba(0,0,0,.08); page-break-after: always; border-top: 10px solid {accent}; }}
h1, h2 {{ margin-top: 0; }}
h1 {{ font-size: 40px; }}
ul {{ line-height: 1.7; }}
.plot-single {{ margin-top: 18px; text-align: center; }}
img {{ max-width: 100%; border: 1px solid #ddd; border-radius: 12px; background: #fff; }}
.plot-caption {{ margin-top: 10px; font-size: 14px; color: #6b7280; font-style: italic; }}
""".strip()
hero_html = (
f"<div class='hero'><h1>{html.escape(title)}</h1><p>對象:{html.escape(audience)}</p>"
f"<p>目的:{html.escape(purpose)}</p><p>等級:{html.escape(level)}{html.escape(LEVEL_NOTES[level])}</p></div>"
)
return (
"<!doctype html><html><head><meta charset='utf-8'>"
f"<title>{html.escape(title)}</title><style>{css}</style></head><body>"
+ hero_html
+ ''.join(sections)
+ "</body></html>"
)
def main() -> int:
parser = argparse.ArgumentParser(
description='Generate paper/slides bundle from analysis outputs',
epilog=(
'Levels: '
'v2=基礎交付版paper/slides/speaker-notes/deck '
'v3=洞察強化版v2 + insights + 每張圖逐頁解讀); '
'v4=正式交付版v3 + 更正式 deck 視覺 + PDF-ready 工作流)'
),
)
parser.add_argument('--analysis-dir', required=True)
parser.add_argument('--output-dir', required=True)
parser.add_argument('--title', default='研究分析草稿')
parser.add_argument('--audience', default='決策者')
parser.add_argument('--purpose', default='研究報告')
parser.add_argument('--mode', default='business', choices=sorted(MODES))
parser.add_argument(
'--level',
default='v4',
choices=sorted(LEVELS),
help='輸出等級v2=基礎交付版v3=洞察強化版v4=正式交付版(預設)',
)
args = parser.parse_args()
analysis_dir = Path(args.analysis_dir).expanduser().resolve()
output_dir = Path(args.output_dir).expanduser().resolve()
output_dir.mkdir(parents=True, exist_ok=True)
summary_path = analysis_dir / 'summary.json'
report_path = analysis_dir / 'report.md'
if not summary_path.exists():
raise SystemExit(f'Missing summary.json in {analysis_dir}')
if not report_path.exists():
raise SystemExit(f'Missing report.md in {analysis_dir}')
summary = read_json(summary_path)
report_md = read_text(report_path)
plots = find_plots(analysis_dir)
insights_md = make_insights_md(args.title, args.mode, summary, plots) if args.level in {'v3', 'v4'} else None
paper_md = make_paper(args.title, args.audience, args.purpose, args.mode, args.level, summary, report_md, plots, insights_md)
slides_md = make_slides(args.title, args.audience, args.purpose, args.mode, summary, plots, args.level)
speaker_notes = make_speaker_notes(args.title, args.mode, summary, plots, args.level)
deck_html = make_deck_html(args.title, args.audience, args.purpose, slides_md, plots, args.mode, args.level)
for plot in plots:
dest = output_dir / plot.name
if dest != plot:
shutil.copy2(plot, dest)
(output_dir / 'paper.md').write_text(paper_md, encoding='utf-8')
(output_dir / 'slides.md').write_text(slides_md, encoding='utf-8')
(output_dir / 'speaker-notes.md').write_text(speaker_notes, encoding='utf-8')
(output_dir / 'deck.html').write_text(deck_html, encoding='utf-8')
if insights_md:
(output_dir / 'insights.md').write_text(insights_md, encoding='utf-8')
manifest_outputs = {
'paper': str(output_dir / 'paper.md'),
'slides': str(output_dir / 'slides.md'),
'speakerNotes': str(output_dir / 'speaker-notes.md'),
'deckHtml': str(output_dir / 'deck.html'),
}
if insights_md:
manifest_outputs['insights'] = str(output_dir / 'insights.md')
manifest = {
'title': args.title,
'audience': args.audience,
'purpose': args.purpose,
'mode': args.mode,
'level': args.level,
'levelNote': LEVEL_NOTES[args.level],
'analysisDir': str(analysis_dir),
'outputs': manifest_outputs,
'plots': [str(p) for p in plots],
}
(output_dir / 'bundle.json').write_text(json.dumps(manifest, ensure_ascii=False, indent=2), encoding='utf-8')
print(json.dumps(manifest, ensure_ascii=False, indent=2))
return 0
if __name__ == '__main__':
raise SystemExit(main())

View File

@@ -0,0 +1,127 @@
---
name: skill-review
description: 審查 openclaw-skill repo 中的 Skills提出改進建議並透過 Gitea PR 提交。每位 Agent 有各自的 fork走標準 fork → branch → PR 流程。
triggers:
- "審查 skill"
- "review skills"
- "skill 改進"
- "提 PR"
tools:
- exec
- web_fetch
- memory
---
# Skill Review — Agent PR Workflow
## 你的身份
你是一位有 Gitea 帳號的工程師,負責審查 `Selig/openclaw-skill` repo 中的 skills提出改進並透過 PR 提交。
## 環境變數
- `GITEA_URL`: Gitea 基礎 URLhttps://git.nature.edu.kg
- `GITEA_TOKEN_<AGENT>`: 你的 Gitea API token根據 agent ID 取對應的)
- Agent → Gitea 帳號對應:
- main → `xiaoming`(小明,專案管理/綜合審查)
- tiangong → `tiangong`(天工,架構/安全)
- kaiwu → `kaiwu`開物UX/前端)
- yucheng → `yucheng`(玉成,全棧/測試)
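A quick, hedged way to confirm that a token and account pair is valid before starting a review (standard Gitea endpoint; pick the token variable that matches your agent):
```bash
export GITEA_URL="https://git.nature.edu.kg"
# e.g. for the tiangong agent
curl -s -H "Authorization: token $GITEA_TOKEN_TIANGONG" "$GITEA_URL/api/v1/user"
```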
## 審查重點
根據你的角色,重點審查不同面向:
### 小明main— 專案經理
- 整體 skill 的完整性與一致性
- SKILL.md 描述是否清楚、trigger 是否遺漏常見用法
- 跨 skill 的重複邏輯或可整合之處
- 文件與實作是否同步
### 天工tiangong— 架構設計師
- SKILL.md 的 trigger 設計是否合理、會不會誤觸發
- handler.ts 的錯誤處理、邊界情況
- 安全性:有無注入風險、敏感資訊洩漏
- Skill 之間的協作與依賴關係
### 開物kaiwu— 前端視覺
- SKILL.md 的使用者體驗:描述是否清楚、觸發詞是否直覺
- handler.ts 的輸出格式Telegram markdown 排版、emoji 使用
- 回覆內容的可讀性與美觀度
### 玉成yucheng— 全棧整合
- handler.ts 的程式碼品質:型別安全、效能、可維護性
- 缺少的功能或整合機會
- 測試邊界:空值處理、異常輸入
- 文件完整性
## 工作流程
### Step 1: 同步 fork
```
POST /api/v1/repos/{owner}/{repo}/mirror-sync # 如果有 mirror
```
或者直接用最新的 upstream 內容。
### Step 2: 讀取所有 Skills
讀取 repo 中 `skills/` 目錄下的每個 skill 的 SKILL.md 和 handler.ts。
### Step 3: 選擇改進目標
- 每次只改 **1 個 skill 的 1 個面向**(小而精確的 PR
- 如果所有 skill 都很好,可以提出新 skill 的建議
### Step 4: 透過 Gitea API 提交
1. **建立分支**(從 main
```
POST /api/v1/repos/{owner}/{repo}/branches
{"new_branch_name": "improve/daily-briefing-error-handling", "old_branch_name": "main"}
```
2. **更新檔案**
```
PUT /api/v1/repos/{owner}/{repo}/contents/{filepath}
{"content": "<base64>", "message": "commit message", "branch": "improve/...", "sha": "<current-sha>"}
```
3. **建立 PR**(從 fork 到 upstream
```
POST /api/v1/repos/Selig/openclaw-skill/pulls
{
"title": "improve(daily-briefing): 加強天氣查詢錯誤處理",
"body": "## 改進說明\n...\n## 變更內容\n...\n## 測試建議\n...",
"head": "<agent-username>:improve/daily-briefing-error-handling",
"base": "main"
}
```
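A hedged end-to-end sketch of Step 4 with plain curl; the owner, file path, branch name, and sha are placeholders, and the JSON bodies mirror the ones above:
```bash
GITEA_URL="https://git.nature.edu.kg"
TOKEN="$GITEA_TOKEN_TIANGONG"        # use the variable that matches your agent
OWNER="tiangong"                     # your fork
BRANCH="improve/daily-briefing-error-handling"

# 1. Create a working branch from main on the fork
curl -s -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
  "$GITEA_URL/api/v1/repos/$OWNER/openclaw-skill/branches" \
  -d "{\"new_branch_name\":\"$BRANCH\",\"old_branch_name\":\"main\"}"

# 2. Update a file on that branch (content must be base64; sha is the file's current blob sha)
B64=$(base64 < SKILL.md | tr -d '\n')
curl -s -X PUT -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
  "$GITEA_URL/api/v1/repos/$OWNER/openclaw-skill/contents/skills/daily-briefing/SKILL.md" \
  -d "{\"content\":\"$B64\",\"message\":\"improve(daily-briefing): 加強錯誤處理\",\"branch\":\"$BRANCH\",\"sha\":\"<current-sha>\"}"

# 3. Open the PR from the fork against upstream
curl -s -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
  "$GITEA_URL/api/v1/repos/Selig/openclaw-skill/pulls" \
  -d "{\"title\":\"improve(daily-briefing): 加強錯誤處理\",\"body\":\"...\",\"head\":\"$OWNER:$BRANCH\",\"base\":\"main\"}"
```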
## PR 格式規範
### 標題
```
<type>(<skill>): <簡述>
```
Type: `improve`, `fix`, `feat`, `docs`, `refactor`
### 內文
```markdown
## 改進說明
為什麼要做這個改動?發現了什麼問題?
## 變更內容
- 具體改了什麼
## 測試建議
- 如何驗證這個改動是正確的
---
🤖 由 <agent-name> 自動審查並提交
```
## 注意事項
- **一次只提一個 PR**,不要批量修改多個 skill
- **不要修改** handler.ts 中涉及認證、密碼、token 的部分
- **不要刪除** 現有功能,只能改進或新增
- 如果沒有值得改進的地方,回覆「所有 Skills 目前狀態良好,無需改動」即可
- PR 建立後,回覆 PR 的 URL 讓使用者知道

View File

@@ -0,0 +1,241 @@
/**
* skill-review handler
* 提供 Gitea API 操作的輔助函式,供 agent 審查 skill 並提交 PR。
*/
const GITEA_URL = process.env.GITEA_URL || 'https://git.nature.edu.kg';
const UPSTREAM_OWNER = 'Selig';
const REPO_NAME = 'openclaw-skill';
// Agent ID → Gitea 帳號 & token 環境變數對應
const AGENT_MAP: Record<string, { username: string; tokenEnv: string }> = {
main: { username: 'xiaoming', tokenEnv: 'GITEA_TOKEN_XIAOMING' },
tiangong: { username: 'tiangong', tokenEnv: 'GITEA_TOKEN_TIANGONG' },
kaiwu: { username: 'kaiwu', tokenEnv: 'GITEA_TOKEN_KAIWU' },
yucheng: { username: 'yucheng', tokenEnv: 'GITEA_TOKEN_YUCHENG' },
};
interface GiteaFile {
name: string;
path: string;
sha: string;
content?: string;
encoding?: string;
}
async function giteaApi(
token: string,
method: string,
path: string,
body?: any
): Promise<any> {
const url = `${GITEA_URL}/api/v1${path}`;
const opts: RequestInit = {
method,
headers: {
Authorization: `token ${token}`,
'Content-Type': 'application/json',
},
};
if (body) opts.body = JSON.stringify(body);
const res = await fetch(url, opts);
const text = await res.text();
if (!res.ok) {
throw new Error(`Gitea API ${method} ${path}${res.status}: ${text}`);
}
return text ? JSON.parse(text) : null;
}
/** 同步 fork用 Gitea merge upstream API */
async function syncFork(token: string, owner: string): Promise<void> {
try {
// Gitea 1.25: POST /repos/{owner}/{repo}/merge-upstream
await giteaApi(token, 'POST', `/repos/${owner}/${REPO_NAME}/merge-upstream`, {
branch: 'main',
});
} catch (e: any) {
// 如果 API 不存在或已同步,忽略
if (!e.message.includes('409')) {
console.warn('syncFork warning:', e.message);
}
}
}
/** 列出 skills 目錄下的所有 skill */
async function listSkills(token: string, owner: string): Promise<string[]> {
const items = await giteaApi(
token,
'GET',
`/repos/${owner}/${REPO_NAME}/contents/skills?ref=main`
);
return items
.filter((item: any) => item.type === 'dir')
.map((item: any) => item.name);
}
/** 讀取檔案內容 */
async function readFile(
token: string,
owner: string,
filepath: string,
ref = 'main'
): Promise<GiteaFile> {
return giteaApi(
token,
'GET',
`/repos/${owner}/${REPO_NAME}/contents/${filepath}?ref=${ref}`
);
}
/** 建立分支 */
async function createBranch(
token: string,
owner: string,
branchName: string
): Promise<void> {
await giteaApi(token, 'POST', `/repos/${owner}/${REPO_NAME}/branches`, {
new_branch_name: branchName,
old_branch_name: 'main',
});
}
/** 更新檔案(需要 sha */
async function updateFile(
token: string,
owner: string,
filepath: string,
content: string,
sha: string,
branch: string,
message: string
): Promise<void> {
const b64 = Buffer.from(content, 'utf-8').toString('base64');
await giteaApi(
token,
'PUT',
`/repos/${owner}/${REPO_NAME}/contents/${filepath}`,
{ content: b64, sha, message, branch }
);
}
/** 建立新檔案 */
async function createFile(
token: string,
owner: string,
filepath: string,
content: string,
branch: string,
message: string
): Promise<void> {
const b64 = Buffer.from(content, 'utf-8').toString('base64');
await giteaApi(
token,
'POST',
`/repos/${owner}/${REPO_NAME}/contents/${filepath}`,
{ content: b64, message, branch }
);
}
/** 建立 PR從 fork 到 upstream */
async function createPR(
token: string,
agentUsername: string,
title: string,
body: string,
branch: string
): Promise<{ url: string; number: number }> {
const pr = await giteaApi(
token,
'POST',
`/repos/${UPSTREAM_OWNER}/${REPO_NAME}/pulls`,
{
title,
body,
head: `${agentUsername}:${branch}`,
base: 'main',
}
);
return { url: pr.html_url, number: pr.number };
}
export async function handler(ctx: any) {
// 偵測當前 agent
const agentId = ctx.env?.OPENCLAW_AGENT_ID || ctx.agentId || 'unknown';
const agentConfig = AGENT_MAP[agentId];
if (!agentConfig) {
return {
reply: `❌ 無法辨識 agent: ${agentId}\n支援的 agent: ${Object.keys(AGENT_MAP).join(', ')}`,
};
}
const token = ctx.env?.[agentConfig.tokenEnv] || process.env[agentConfig.tokenEnv];
if (!token) {
return {
reply: `❌ 找不到 ${agentConfig.tokenEnv},請確認 .env 設定。`,
};
}
const username = agentConfig.username;
try {
// Step 1: 同步 fork
await syncFork(token, username);
// Step 2: 列出所有 skills
const skills = await listSkills(token, username);
// Step 3: 讀取每個 skill 的內容
const skillContents: Record<string, { skillMd: string; handlerTs: string }> = {};
for (const skill of skills) {
try {
const skillMdFile = await readFile(token, username, `skills/${skill}/SKILL.md`);
const handlerFile = await readFile(token, username, `skills/${skill}/handler.ts`);
skillContents[skill] = {
skillMd: Buffer.from(skillMdFile.content || '', 'base64').toString('utf-8'),
handlerTs: Buffer.from(handlerFile.content || '', 'base64').toString('utf-8'),
};
} catch {
// 跳過讀取失敗的 skill
}
}
// 回傳資料供 agent 分析
return {
reply: `✅ Fork 已同步,共讀取 ${Object.keys(skillContents).length} 個 skills。\n\n請根據你的角色審查以下 skills選擇一個提出改進 PR。`,
data: {
agentId,
username,
skills: skillContents,
// 提供 API helper 資訊,讓 agent 知道可以用 exec 呼叫
api: {
createBranch: 'handler.createBranch(token, owner, branchName)',
updateFile: 'handler.updateFile(token, owner, filepath, content, sha, branch, message)',
createFile: 'handler.createFile(token, owner, filepath, content, branch, message)',
createPR: 'handler.createPR(token, agentUsername, title, body, branch)',
},
},
metadata: { agentId, username, skillCount: Object.keys(skillContents).length },
};
} catch (err: any) {
return {
reply: `❌ 執行失敗: ${err.message}`,
error: err.message,
};
}
}
// 匯出 helper 供 agent 透過 exec 使用
export {
syncFork,
listSkills,
readFile,
createBranch,
updateFile,
createFile,
createPR,
giteaApi,
AGENT_MAP,
};

View File

@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "skill-vetter",
"installedVersion": "1.0.0",
"installedAt": 1773199291047
}

View File

@@ -0,0 +1,138 @@
---
name: skill-vetter
version: 1.0.0
description: Security-first skill vetting for AI agents. Use before installing any skill from ClawdHub, GitHub, or other sources. Checks for red flags, permission scope, and suspicious patterns.
---
# Skill Vetter 🔒
Security-first vetting protocol for AI agent skills. **Never install a skill without vetting it first.**
## When to Use
- Before installing any skill from ClawdHub
- Before running skills from GitHub repos
- When evaluating skills shared by other agents
- Anytime you're asked to install unknown code
## Vetting Protocol
### Step 1: Source Check
```
Questions to answer:
- [ ] Where did this skill come from?
- [ ] Is the author known/reputable?
- [ ] How many downloads/stars does it have?
- [ ] When was it last updated?
- [ ] Are there reviews from other agents?
```
### Step 2: Code Review (MANDATORY)
Read ALL files in the skill. Check for these **RED FLAGS**:
```
🚨 REJECT IMMEDIATELY IF YOU SEE:
─────────────────────────────────────────
• curl/wget to unknown URLs
• Sends data to external servers
• Requests credentials/tokens/API keys
• Reads ~/.ssh, ~/.aws, ~/.config without clear reason
• Accesses MEMORY.md, USER.md, SOUL.md, IDENTITY.md
• Uses base64 decode on anything
• Uses eval() or exec() with external input
• Modifies system files outside workspace
• Installs packages without listing them
• Network calls to IPs instead of domains
• Obfuscated code (compressed, encoded, minified)
• Requests elevated/sudo permissions
• Accesses browser cookies/sessions
• Touches credential files
─────────────────────────────────────────
```
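One hedged way to surface several of these markers mechanically before reading the code by hand; the pattern list is illustrative, not exhaustive, and a clean grep never replaces the manual review:
```bash
# Scan a downloaded skill directory for common risk markers
grep -rniE 'curl |wget |base64|eval\(|exec\(|\.ssh|\.aws|sudo ' skills/SKILL_NAME/ \
  || echo "no obvious markers found (still read every file)"
```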
### Step 3: Permission Scope
```
Evaluate:
- [ ] What files does it need to read?
- [ ] What files does it need to write?
- [ ] What commands does it run?
- [ ] Does it need network access? To where?
- [ ] Is the scope minimal for its stated purpose?
```
### Step 4: Risk Classification
| Risk Level | Examples | Action |
|------------|----------|--------|
| 🟢 LOW | Notes, weather, formatting | Basic review, install OK |
| 🟡 MEDIUM | File ops, browser, APIs | Full code review required |
| 🔴 HIGH | Credentials, trading, system | Human approval required |
| ⛔ EXTREME | Security configs, root access | Do NOT install |
## Output Format
After vetting, produce this report:
```
SKILL VETTING REPORT
═══════════════════════════════════════
Skill: [name]
Source: [ClawdHub / GitHub / other]
Author: [username]
Version: [version]
───────────────────────────────────────
METRICS:
• Downloads/Stars: [count]
• Last Updated: [date]
• Files Reviewed: [count]
───────────────────────────────────────
RED FLAGS: [None / List them]
PERMISSIONS NEEDED:
• Files: [list or "None"]
• Network: [list or "None"]
• Commands: [list or "None"]
───────────────────────────────────────
RISK LEVEL: [🟢 LOW / 🟡 MEDIUM / 🔴 HIGH / ⛔ EXTREME]
VERDICT: [✅ SAFE TO INSTALL / ⚠️ INSTALL WITH CAUTION / ❌ DO NOT INSTALL]
NOTES: [Any observations]
═══════════════════════════════════════
```
## Quick Vet Commands
For GitHub-hosted skills:
```bash
# Check repo stats
curl -s "https://api.github.com/repos/OWNER/REPO" | jq '{stars: .stargazers_count, forks: .forks_count, updated: .updated_at}'
# List skill files
curl -s "https://api.github.com/repos/OWNER/REPO/contents/skills/SKILL_NAME" | jq '.[].name'
# Fetch and review SKILL.md
curl -s "https://raw.githubusercontent.com/OWNER/REPO/main/skills/SKILL_NAME/SKILL.md"
```
## Trust Hierarchy
1. **Official OpenClaw skills** → Lower scrutiny (still review)
2. **High-star repos (1000+)** → Moderate scrutiny
3. **Known authors** → Moderate scrutiny
4. **New/unknown sources** → Maximum scrutiny
5. **Skills requesting credentials** → Human approval always
## Remember
- No skill is worth compromising security
- When in doubt, don't install
- Ask your human for high-risk decisions
- Document what you vet for future reference
---
*Paranoia is a feature.* 🔒🦀

View File

@@ -0,0 +1,6 @@
{
"ownerId": "kn71j6xbmpwfvx4c6y1ez8cd718081mg",
"slug": "skill-vetter",
"version": "1.0.0",
"publishedAt": 1769863429632
}

1
skills/summarize Symbolic link
View File

@@ -0,0 +1 @@
/home/selig/.openclaw/workspace/skills/summarize

View File

@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "tavily-tool",
"installedVersion": "0.1.1",
"installedAt": 1773199294594
}

View File

@@ -0,0 +1,46 @@
---
name: tavily
description: Use Tavily web search/discovery to find URLs/sources, do lead research, gather up-to-date links, or produce a cited summary from web results.
metadata: {"openclaw":{"requires":{"env":["TAVILY_API_KEY"]},"primaryEnv":"TAVILY_API_KEY"}}
---
# Tavily
Use the bundled CLI to run Tavily searches from the terminal and collect sources fast.
## Quick start (CLI)
The scripts **require** `TAVILY_API_KEY` in the environment (sent as `Authorization: Bearer ...`).
```bash
export TAVILY_API_KEY="..."
node skills/tavily/scripts/tavily_search.js --query "best rust http client" --max_results 5
```
- JSON response is printed to **stdout**.
- A simple URL list is printed to **stderr** by default.
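Because the JSON and the URL list go to separate streams, the JSON can be captured to a file while the URLs stay visible in the terminal. A minimal example (the output filename is illustrative):
```bash
export TAVILY_API_KEY="..."
node skills/tavily/scripts/tavily_search.js --query "best rust http client" --max_results 5 \
  > results.json   # JSON saved to file; the URL list still prints to stderr
```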
## Common patterns
### Get URLs only
```bash
export TAVILY_API_KEY="..."
node skills/tavily/scripts/tavily_search.js --query "OpenTelemetry collector config" --urls-only
```
### Restrict to (or exclude) specific domains
```bash
export TAVILY_API_KEY="..."
node skills/tavily/scripts/tavily_search.js \
--query "oauth device code flow" \
--include_domains oauth.net,datatracker.ietf.org \
--exclude_domains medium.com
```
## Notes
- The bundled CLI supports a subset of Tavilys request fields (query, max_results, include_domains, exclude_domains).
- For API field notes and more examples, read: `references/tavily-api.md`.
- Wrapper script (optional): `scripts/tavily_search.sh`.

View File

@@ -0,0 +1,6 @@
{
"ownerId": "kn78x7kg14jggfbz385es5bdrn81ddgw",
"slug": "tavily-tool",
"version": "0.1.1",
"publishedAt": 1772290357545
}

View File

@@ -0,0 +1,55 @@
# Tavily API notes (quick reference)
## Endpoint
- Search: `POST https://api.tavily.com/search`
## Auth
- Send the API key via HTTP header: `Authorization: Bearer <TAVILY_API_KEY>`.
- This skills scripts read the key from **env var only**: `TAVILY_API_KEY`.
## Common request fields
```json
{
"query": "...",
"max_results": 5,
"include_domains": ["example.com"],
"exclude_domains": ["spam.com"]
}
```
(Additional Tavily options exist; this skills CLI supports only a common subset for discovery use-cases.)
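For reference, the same request can be issued directly with curl. This sketch mirrors the fields above and assumes `TAVILY_API_KEY` is exported:
```bash
curl -s -X POST "https://api.tavily.com/search" \
  -H "Authorization: Bearer $TAVILY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "best open source vector database", "max_results": 5}'
```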
## Script usage
### JSON output (stdout) + URL list (stderr)
```bash
export TAVILY_API_KEY="..."
node skills/tavily/scripts/tavily_search.js --query "best open source vector database" --max_results 5
```
### URLs only
```bash
export TAVILY_API_KEY="..."
node skills/tavily/scripts/tavily_search.js --query "SvelteKit tutorial" --urls-only
```
### Include / exclude domains
```bash
export TAVILY_API_KEY="..."
node skills/tavily/scripts/tavily_search.js \
--query "websocket load testing" \
--include_domains k6.io,github.com \
--exclude_domains medium.com
```
## Notes
- Exit code `2` indicates missing required args or missing `TAVILY_API_KEY`.
- Exit code `3` indicates network/HTTP failure.
- Exit code `4` indicates a non-JSON response.
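These codes make the CLI easy to wrap in shell logic. A minimal sketch of branching on them (the messages are illustrative, not part of the script):
```bash
export TAVILY_API_KEY="..."
node skills/tavily/scripts/tavily_search.js --query "websocket load testing" > results.json
case $? in
  0) echo "ok" ;;
  2) echo "usage error or missing TAVILY_API_KEY" >&2 ;;
  3) echo "network/HTTP failure" >&2 ;;
  4) echo "non-JSON response from Tavily" >&2 ;;
esac
```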

View File

@@ -0,0 +1,161 @@
#!/usr/bin/env node
/**
* Tavily Search CLI
*
* - Reads TAVILY_API_KEY from env only.
* - Prints full JSON response to stdout.
* - Prints a simple list of URLs to stderr by default (can be disabled).
*/
const TAVILY_ENDPOINT = 'https://api.tavily.com/search';
function usage(msg) {
if (msg) console.error(`Error: ${msg}\n`);
console.error(`Usage:
tavily_search.js --query "..." [--max_results 5] [--include_domains a.com,b.com] [--exclude_domains x.com,y.com]
Options:
--query, -q Search query (required)
--max_results, -n Max results (default: 5; clamped to 0..20)
--include_domains Comma-separated domains to include
--exclude_domains Comma-separated domains to exclude
--urls-stderr Print URL list to stderr (default: true)
--no-urls-stderr Disable URL list to stderr
--urls-only Print URLs (one per line) to stdout instead of JSON
--help, -h Show help
Env:
TAVILY_API_KEY (required) Tavily API key
Exit codes:
0 success
2 usage / missing required inputs
3 network / HTTP error
4 invalid JSON response
`);
}
function parseArgs(argv) {
const out = {
query: null,
max_results: 5,
include_domains: null,
exclude_domains: null,
urls_stderr: true,
urls_only: false,
help: false,
};
for (let i = 0; i < argv.length; i++) {
const a = argv[i];
if (a === '--help' || a === '-h') out.help = true;
else if (a === '--query' || a === '-q') out.query = argv[++i];
else if (a === '--max_results' || a === '-n') out.max_results = Number(argv[++i]);
else if (a === '--include_domains') out.include_domains = argv[++i];
else if (a === '--exclude_domains') out.exclude_domains = argv[++i];
else if (a === '--urls-stderr') out.urls_stderr = true;
else if (a === '--no-urls-stderr') out.urls_stderr = false;
else if (a === '--urls-only') out.urls_only = true;
else return { error: `Unknown arg: ${a}` };
}
if (Number.isNaN(out.max_results) || !Number.isFinite(out.max_results)) {
return { error: `--max_results must be a number` };
}
// Tavily allows 0..20; clamp to stay in range.
out.max_results = Math.max(0, Math.min(20, Math.trunc(out.max_results)));
const csvToArray = (s) => {
if (!s) return null;
const arr = s.split(',').map(x => x.trim()).filter(Boolean);
return arr.length ? arr : null;
};
out.include_domains = csvToArray(out.include_domains);
out.exclude_domains = csvToArray(out.exclude_domains);
return out;
}
async function main() {
const args = parseArgs(process.argv.slice(2));
if (args.error) {
usage(args.error);
process.exit(2);
}
if (args.help) {
usage();
process.exit(0);
}
const apiKey = process.env.TAVILY_API_KEY;
if (!apiKey) {
usage('TAVILY_API_KEY env var is required');
process.exit(2);
}
if (!args.query) {
usage('--query is required');
process.exit(2);
}
const payload = {
query: args.query,
max_results: args.max_results,
};
if (args.include_domains) payload.include_domains = args.include_domains;
if (args.exclude_domains) payload.exclude_domains = args.exclude_domains;
let res;
try {
res = await fetch(TAVILY_ENDPOINT, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${apiKey}`,
},
body: JSON.stringify(payload),
});
} catch (e) {
console.error(`Network error calling Tavily: ${e?.message || String(e)}`);
process.exit(3);
}
if (!res.ok) {
let bodyText = '';
try { bodyText = await res.text(); } catch {}
console.error(`Tavily HTTP error: ${res.status} ${res.statusText}`);
if (bodyText) console.error(bodyText);
process.exit(3);
}
let data;
try {
data = await res.json();
} catch (e) {
console.error(`Invalid JSON response from Tavily: ${e?.message || String(e)}`);
process.exit(4);
}
const urls = Array.isArray(data?.results)
? data.results.map(r => r?.url).filter(Boolean)
: [];
if (args.urls_only) {
for (const u of urls) process.stdout.write(`${u}\n`);
process.exit(0);
}
process.stdout.write(JSON.stringify(data, null, 2));
process.stdout.write('\n');
if (args.urls_stderr && urls.length) {
console.error('\nURLs:');
for (const u of urls) console.error(u);
}
}
main().catch((e) => {
console.error(`Unexpected error: ${e?.stack || e?.message || String(e)}`);
process.exit(1);
});

View File

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
set -euo pipefail
# Wrapper to run the Node Tavily search CLI.
# Usage:
# TAVILY_API_KEY=... ./tavily_search.sh --query "..." --max_results 5
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
exec node "$DIR/tavily_search.js" "$@"

View File

@@ -8,7 +8,7 @@
* - curl CLI
*/
import { execSync } from 'child_process';
import { spawnSync } from 'child_process';
import { readFileSync, existsSync, unlinkSync } from 'fs';
const LUXTTS_BASE = 'http://localhost:7860';
@@ -56,13 +56,22 @@ function ensureCookie(): boolean {
if (!pass) return false;
try {
execSync(
`curl -s -o /dev/null -w "%{http_code}" -c ${COOKIE_JAR} ` +
`-d "username=${user}&password=${pass}" ` +
`${LUXTTS_BASE}/luxtts/login`,
{ timeout: 10000 }
const result = spawnSync(
'curl',
[
'-s',
'-o', '/dev/null',
'-w', '%{http_code}',
'-c', COOKIE_JAR,
'-X', 'POST',
'-d', `username=${user}&password=${pass}`,
`${LUXTTS_BASE}/luxtts/login`,
],
{ timeout: 10000, encoding: 'utf-8' }
);
return existsSync(COOKIE_JAR);
const httpCode = (result.stdout || '').trim();
return result.status === 0 && httpCode === '200' && existsSync(COOKIE_JAR);
} catch {
return false;
}
@@ -71,11 +80,14 @@ function ensureCookie(): boolean {
/** Check if LuxTTS service is alive */
function healthCheck(): boolean {
try {
const result = execSync(
`curl -s -o /dev/null -w "%{http_code}" ${LUXTTS_BASE}/luxtts/api/health`,
{ timeout: 5000 }
).toString().trim();
return result === '200';
const result = spawnSync(
'curl',
['-s', '-o', '/dev/null', '-w', '%{http_code}', `${LUXTTS_BASE}/luxtts/api/health`],
{ timeout: 5000, encoding: 'utf-8' }
);
const httpCode = (result.stdout || '').trim();
return result.status === 0 && httpCode === '200';
} catch {
return false;
}
@@ -113,19 +125,28 @@ function generateSpeech(text: string, params: TtsParams): string | null {
const outPath = `/tmp/tts_output_${timestamp}.wav`;
try {
const httpCode = execSync(
`curl -s -o ${outPath} -w "%{http_code}" ` +
`-b ${COOKIE_JAR} ` +
`-X POST ${LUXTTS_BASE}/luxtts/api/tts ` +
`-F "ref_audio=@${REF_AUDIO}" ` +
`-F "text=${text.replace(/"/g, '\\"')}" ` +
`-F "num_steps=${params.numSteps}" ` +
`-F "t_shift=${params.tShift}" ` +
`-F "speed=${params.speed}"`,
{ timeout: 120000 } // 2 min timeout for CPU synthesis
).toString().trim();
const args = [
'-s',
'-o', outPath,
'-w', '%{http_code}',
'-b', COOKIE_JAR,
'-X', 'POST',
`${LUXTTS_BASE}/luxtts/api/tts`,
'-F', `ref_audio=@${REF_AUDIO}`,
'-F', `text=${text}`,
'-F', `num_steps=${params.numSteps}`,
'-F', `t_shift=${params.tShift}`,
'-F', `speed=${params.speed}`,
];
if (httpCode === '200' && existsSync(outPath)) {
const result = spawnSync('curl', args, {
timeout: 120000, // 2 min timeout for CPU synthesis
encoding: 'utf-8',
});
const httpCode = (result.stdout || '').trim();
if (result.status === 0 && httpCode === '200' && existsSync(outPath)) {
return outPath;
}