Add 5 missing skills to repo for sync coverage
github-repo-search, gooddays-calendar, luxtts, openclaw-tavily-search, skill-vetter — previously only in workspace, now tracked in Gitea for full sync.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
253
skills/github-repo-search/SKILL.md
Normal file
@@ -0,0 +1,253 @@
---
name: github-repo-search
description: Helps the user search and filter GitHub open-source projects and produce a structured recommendation report. Triggered when the user says things like "帮我找开源项目", "搜一下GitHub上有什么", "找找XX方向的仓库", "开源项目推荐", "github搜索", or "/github-search".
---

# GitHub Open-Source Project Search Assistant

## Purpose

Starting from the user's natural-language request, proceed through requirement elicitation, query decomposition, GitHub search, filtering and classification, and in-depth review, and finally produce a structured recommendation.

The goal is not "a pile of links" but "a list of candidate repositories the user can understand, compare, decide on, and act on directly".

## Scope (V1.1)

- Data source: public GitHub repositories.
- Unauthenticated by default (no user token).
- Default hard filters: `stars >= 100`, `archived=false`, `is:public`.
- Default output: a single ranked list (Top N), with each entry labeled by "repository role type".
- Installation and deployment are out of scope by default (unless the user asks separately).

### Rate-limit notes (must know)

- Unauthenticated Core API: `60 requests/hour`.
- Search API: `10 requests/minute` (independent of the Core quota).
- Record the search time and quota status in the report so results remain reproducible.

## Workflow

### Stage 1: Requirement convergence (mandatory, cannot be skipped)

> **Hard gate**: Stage 1 is a precondition for the entire workflow. No matter how clear the user's initial description is, you must complete this stage and obtain the user's explicit confirmation before entering Stage 2. Do not infer the requirements from the initial description and start searching directly. Even if the user says "just search", output the requirement summary first and have them confirm it.

#### Step 1: Requirement elicitation and alignment

**Goal**: turn "I'd like to look at XX" into an executable, sortable, explainable search target.

**Required information (minimum)**:

1. Topic (e.g. agent memory, RAG, browser automation)
2. Count (Top 10 / Top 20)
3. Minimum stars (default 100)
4. Sort mode (pick one): `relevance first` / `stars first` (default: relevance first)
5. Target form (pick one or more):
   `ready-to-use product` / `framework for further development` / `reading list / methodology`

**Suggested extra information (optional)**:

1. Preferred tech stack (Python/TS/Go, etc.)
2. Usage scenario (learning, production, benchmarking)
3. Exclusions (tutorial repos, archived repos, pure paper reproductions, etc.)
4. Deployment preference (local-first / cloud-first / hybrid)

**Stage output (fixed format)**:

```text
Core request:
- Topic: xxx
- Count: Top N
- Minimum stars: >= 100
- Sort mode: relevance first / stars first (default: relevance first)
- Target form: xxx
- Preferences: xxx (may be empty)
- Exclusions: xxx (may be empty)
```

Confirm the above with the user. **Only proceed to Stage 2 after the user explicitly confirms; otherwise stay here and keep aligning.**

---

### Stage 2: Search execution (from here on, the model runs autonomously, without user involvement, until the report is delivered in Stage 4)

#### Step 2: Query decomposition (5-10 groups)

**Goal**: balance recall and relevance; avoid going off-topic by hard-searching a single keyword.

**Decomposition rules**:

Each query combines the following dimensions:

1. Core term: the user's target keyword
2. Synonyms: alternative phrasings (e.g. long-term memory / stateful memory)
3. Scenario terms: coding, mcp, tool, platform, awesome, curated
4. Technical terms: agent, sdk, framework, database, os
5. Exclusions: do not cram negative terms into the query; defer them to the later filtering stage

**Output format**:

```text
Query-1: "xxx"
Purpose: high-recall coverage of the core topic

Query-2: "xxx"
Purpose: cover synonym blind spots
```

#### Step 3: Search execution and candidate recall

**Execution principles**:

1. Run every query group (suggested 30-50 results per group).
2. Merge results into a candidate pool.
3. Deduplicate by `owner/repo`.
4. Record the search time and API quota information.

**Candidate pool fields (minimum)**:

1. `owner/repo`
2. `stars`
3. `description`
4. `repo_url`
5. `archived`
6. `language`
7. `updated_at`
8. `topics`
9. `license`

#### Step 4: Deduplication and hard filtering

**Hard filters (default)**:

1. `stars >= 100`
2. `archived = false`
3. `is:public`

**Optional hard filters (as needed)**:

1. `fork = false`
2. Language: `language:xxx`
3. Freshness: updated within the last 6-12 months
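The hard filters above can be composed directly into GitHub Search API query strings, one string per query group from Step 2. A minimal Python sketch; the `build_search_query` helper and its parameters are ours, not part of the skill:

```python
def build_search_query(core, extra_terms=(), min_stars=100, language=None):
    """Compose a GitHub Search API query string with the default hard filters."""
    parts = [core, *extra_terms]
    parts.append(f"stars:>={min_stars}")  # default hard filter
    parts.append("archived:false")        # default hard filter
    parts.append("is:public")             # default hard filter
    if language:
        parts.append(f"language:{language}")  # optional hard filter
    return " ".join(parts)

# One string per query group, e.g. core term + synonym + scenario term:
q = build_search_query("agent memory", ["long-term", "mcp"], language="Python")
print(q)  # agent memory long-term mcp stars:>=100 archived:false is:public language:Python
```

Keep the Search API budget of 10 requests/minute in mind: 5-10 query groups fit comfortably within a single minute of quota.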
---

### Stage 3: Quality refinement

#### Step 5: Noise removal and relevance re-ranking

**Goal**: eliminate noise such as "matches memory but isn't actually agent memory".

**Noise-removal rules (examples)**:

1. General-purpose engineering repos unrelated to the topic (even with high stars)
2. Accidental keyword hits (memory/agent appears only incidentally in the description)
3. Empty or otherwise anomalous repositories

**Ranking principles (V1.1)**:

Stars are no longer the primary sort key, only one of the recall thresholds.
Suggested composite ranking weights:

1. Relevance to the request: 35%
2. Fit for the scenario: 30%
3. Activity (update recency): 15%
4. Engineering maturity (docs/examples/maintainability): 15%
5. Stars: 5%

#### Step 6: Repository role classification (mandatory)

**Goal**: let the user see at a glance what role each repository plays, instead of mixing frameworks, applications, and directories together.

**Recommended type dictionary**:

1. General framework layer
2. Application/product layer (directly usable)
3. Memory layer / context infrastructure
4. MCP service layer
5. Directory/list layer (awesome/curated)
6. Vertical scenario solution layer
7. Methodology/research layer

#### Step 7: Deep reading and project write-ups (mandatory)

**Goal**: not a paraphrase of the repo blurb, but a detailed write-up with decision value for the user.

**Minimum deep-reading requirements**:

For every selected repository, at least review:

1. The core positioning section of the README
2. Quick-start / feature section headings
3. Recent maintenance signals (update time, Issue/PR activity)

**Write-up requirements (fixed)**:

The "project description" must contain both parts, in detail:

1. What it is: its role and boundaries within a system architecture
2. Why it is recommended: its value for the user's current goal (not generic strengths)

Optional additions:

1. Typical use cases (1-2 items)
2. Limitations or unsuitable scenarios (1 item)

---

### Stage 4: Delivery and iteration

#### Step 8: Single-list generation and report delivery (final)

**Delivery structure (fixed)**:

1. Requirement summary
2. Query list (5-10 groups + purpose)
3. Filtering and re-ranking rules (stated explicitly)
4. Result overview (raw recall / after dedup / after filtering)
5. Top N single list (table)
6. Conclusions and next-step suggestions

**Top N table fields (fixed)**:

| Repository | Stars | Role type | Description (what it is + why recommended) | Extra info | Link |
|---|---:|---|---|---|---|

**Suggested "extra info" content**:

- Language / license / last update time
- Onboarding complexity (low/medium/high)
- Risk notes (if any)

#### Step 9: User confirmation and iteration (optional)

**Iteration triggers**:

The user reports "too broad / too narrow / not accurate enough / explanations not detailed enough".

**Iteration actions**:

1. Adjust queries (add scenario terms or synonyms)
2. Adjust the stars threshold (100 -> 200/500)
3. Add constraints (language/direction/update time)
4. Adjust type weights (e.g. prefer the application layer or the framework layer)

---

## Default parameters (V1.1)

1. Minimum stars: `100`
2. Default output: `Top 10`
3. Default filter: `archived=false`
4. Classification required by default: yes
5. Default write-up granularity: detailed (at least "what it is + why recommended")

## Quality checklist (self-check before delivery)

1. Requirements aligned and "target form" confirmed
2. 5-10 query groups, each with a stated purpose
3. Search time and quota status recorded
4. Deduplication, hard filtering, and noise removal performed
5. Repository role classification completed
6. Every recommendation has a detailed write-up (not one sentence)
7. Fixed table fields used for delivery
8. Installation/deployment kept out of this workflow
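The composite ranking in Step 5 can be sketched as a weighted sum. The weights below follow the document; the 0-1 sub-scores and the sample repositories are illustrative assumptions:

```python
WEIGHTS = {
    "relevance": 0.35,     # relevance to the request
    "scenario_fit": 0.30,  # fit for the scenario
    "activity": 0.15,      # update recency
    "maturity": 0.15,      # docs / examples / maintainability
    "stars": 0.05,         # stars, demoted to a minor signal
}

def composite_score(scores):
    """Weighted sum of per-dimension sub-scores, each expected in [0, 1]."""
    return round(sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS), 4)

ranked = sorted(
    [("repoA", {"relevance": 0.9, "scenario_fit": 0.8, "activity": 0.5, "maturity": 0.7, "stars": 1.0}),
     ("repoB", {"relevance": 0.6, "scenario_fit": 0.9, "activity": 0.9, "maturity": 0.9, "stars": 0.2})],
    key=lambda item: composite_score(item[1]),
    reverse=True,
)
print([name for name, _ in ranked])  # ['repoA', 'repoB']
```

Note how repoA wins despite lower activity and maturity: relevance and scenario fit together carry 65% of the weight.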
46
skills/gooddays-calendar/SKILL.md
Normal file
@@ -0,0 +1,46 @@
---
name: gooddays-calendar
description: Read GoodDays schedules and today's auspicious-hour information. Supports logging in to obtain a JWT, querying `/api/unified-events`, and calling `/api/mystical/daily` for today's auspicious-hour/mystical data.
---

# gooddays-calendar

This skill integrates the GoodDays API so the agent can directly:

1. Log in to GoodDays and obtain a JWT
2. Query upcoming events (`/api/unified-events`)
3. Query today's auspicious-hour/mystical information (`/api/mystical/daily`)
4. Decide from natural language whether the user wants "auspicious hours" or "schedule"

## API essentials

- Base URL: `GOODDAYS_BASE_URL`
- Login: `POST /auth/login`
- Mystical daily: `POST /api/mystical/daily`
- Events: `/api/unified-events`

## Mystical daily request format (verified)

Required fields:

- `year`
- `month`
- `day`

Optional fields:

- `hour`
- `userId`

Example:

```json
{"year":2026,"month":3,"day":13,"hour":9}
```

## Configuration source

Read from the workspace `.env`:

- `GOODDAYS_BASE_URL`
- `GOODDAYS_EMAIL`
- `GOODDAYS_PASSWORD`
- `GOODDAYS_USER_ID`

## Possible extensions

- Add event create/update/delete
- Format today's auspicious hours so daily-briefing can quote them directly
- Integrate with the `life-planner` / `daily-briefing` skills
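The required/optional field rules for `/api/mystical/daily` can be captured in a small payload builder. A Python sketch; the helper name is ours, and the bundled TypeScript handler does the equivalent:

```python
def mystical_daily_payload(year, month, day, hour=None, user_id=None):
    """Build the POST body for /api/mystical/daily: year/month/day are
    required, hour and userId are only included when provided."""
    payload = {"year": year, "month": month, "day": day}
    if hour is not None:
        payload["hour"] = hour
    if user_id is not None:
        payload["userId"] = user_id
    return payload

print(mystical_daily_payload(2026, 3, 13, hour=9))
# {'year': 2026, 'month': 3, 'day': 13, 'hour': 9}
```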
192
skills/gooddays-calendar/handler.ts
Normal file
@@ -0,0 +1,192 @@
/**
 * gooddays-calendar skill handler
 * Logs in to GoodDays, then answers either a schedule query
 * (/api/unified-events) or an auspicious-hour query (/api/mystical/daily)
 * depending on the detected intent.
 */
import { readFileSync, existsSync } from 'fs';

type EnvMap = Record<string, string>;

// Minimal .env parser: KEY=VALUE lines, '#' comments, no quoting rules.
function loadDotEnv(path: string): EnvMap {
  const out: EnvMap = {};
  if (!existsSync(path)) return out;
  const text = readFileSync(path, 'utf-8');
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue;
    const idx = trimmed.indexOf('=');
    if (idx === -1) continue;
    const key = trimmed.slice(0, idx).trim();
    const value = trimmed.slice(idx + 1).trim();
    out[key] = value;
  }
  return out;
}

async function login(baseUrl: string, email: string, password: string): Promise<string> {
  const res = await fetch(`${baseUrl}/auth/login`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, password }),
  });
  const data = await res.json() as any;
  if (!res.ok || !data?.data?.token) {
    throw new Error(data?.error || 'GoodDays login failed');
  }
  return data.data.token;
}

async function getMysticalDaily(baseUrl: string, token: string, payload: any) {
  const res = await fetch(`${baseUrl}/api/mystical/daily`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`,
    },
    body: JSON.stringify(payload),
  });
  const data = await res.json() as any;
  if (!res.ok || data?.success === false) {
    throw new Error(data?.error || 'GoodDays mystical daily failed');
  }
  return data;
}

async function getUnifiedEvents(baseUrl: string, token: string, userId: string, startDate: string, endDate: string) {
  const url = new URL(`${baseUrl}/api/unified-events`);
  url.searchParams.set('userId', userId);
  url.searchParams.set('startDate', startDate);
  url.searchParams.set('endDate', endDate);
  const res = await fetch(url.toString(), {
    method: 'GET',
    headers: token ? { 'Authorization': `Bearer ${token}` } : {},
  });
  const data = await res.json() as any;
  if (!res.ok || data?.success === false) {
    throw new Error(data?.error || 'GoodDays unified-events failed');
  }
  return data;
}

// Extract an explicit YYYY-MM-DD date and an optional hour from the message;
// fall back to today. The hour may appear keyword-first ("hour 9", "點 9")
// or digit-first ("9點", "9時").
function parseDateFromMessage(message: string): { year: number; month: number; day: number; hour?: number } {
  const now = new Date();
  const dateMatch = message.match(/(\d{4})-(\d{1,2})-(\d{1,2})/);
  const hourMatch = message.match(/(?:hour|小時|時|點)\s*[::]?\s*(\d{1,2})|(\d{1,2})\s*[點時]/i);
  const hour = hourMatch ? Number(hourMatch[1] ?? hourMatch[2]) : undefined;
  if (dateMatch) {
    return {
      year: Number(dateMatch[1]),
      month: Number(dateMatch[2]),
      day: Number(dateMatch[3]),
      hour,
    };
  }
  return {
    year: now.getFullYear(),
    month: now.getMonth() + 1,
    day: now.getDate(),
    hour,
  };
}

function formatYmd(year: number, month: number, day: number): string {
  return `${year}-${String(month).padStart(2, '0')}-${String(day).padStart(2, '0')}`;
}

function addDays(year: number, month: number, day: number, offset: number): { year: number; month: number; day: number } {
  const d = new Date(year, month - 1, day);
  d.setDate(d.getDate() + offset);
  return { year: d.getFullYear(), month: d.getMonth() + 1, day: d.getDate() };
}

// Schedule-style wording means an events query; everything else defaults to mystical.
function detectIntent(message: string): 'events' | 'mystical' {
  const m = message.toLowerCase();
  if (/(行程|事件|日程|schedule|calendar|待會|今天有什麼安排|未來48小時)/i.test(m)) return 'events';
  return 'mystical';
}

function summarizeEvents(events: any[]): string {
  if (!Array.isArray(events) || events.length === 0) return '• 目前沒有查到符合條件的事件';
  return events.slice(0, 20).map((evt: any, idx: number) => {
    const title = evt?.title || evt?.name || evt?.summary || `事件 ${idx + 1}`;
    const start = evt?.startDate || evt?.start || evt?.start_time || evt?.date || '未知時間';
    const end = evt?.endDate || evt?.end || evt?.end_time || '';
    return `• ${title}${start ? `|${start}` : ''}${end ? ` → ${end}` : ''}`;
  }).join('\n');
}

export async function handler(ctx: any) {
  const workspace = ctx.env?.OPENCLAW_WORKSPACE || `${process.env.HOME}/.openclaw/workspace`;
  // process.env wins over the workspace .env so explicit overrides are possible.
  const env = {
    ...loadDotEnv(`${workspace}/.env`),
    ...process.env,
  } as EnvMap;

  const baseUrl = env.GOODDAYS_BASE_URL;
  const email = env.GOODDAYS_EMAIL;
  const password = env.GOODDAYS_PASSWORD;
  const userId = env.GOODDAYS_USER_ID;
  const message = ctx.message?.text || ctx.message?.content || '';

  if (!baseUrl || !email || !password) {
    return { reply: '缺少 GoodDays 設定,請先檢查 workspace/.env。' };
  }

  try {
    const token = await login(baseUrl, email, password);
    const datePayload = parseDateFromMessage(message);
    const intent = detectIntent(message);

    if (intent === 'events') {
      // userId is only needed for event queries, so check it here.
      if (!userId) {
        return { reply: '缺少 GOODDAYS_USER_ID,請先檢查 workspace/.env。' };
      }
      const startDate = formatYmd(datePayload.year, datePayload.month, datePayload.day);
      const plusOne = addDays(datePayload.year, datePayload.month, datePayload.day, 1);
      const endDate = formatYmd(plusOne.year, plusOne.month, plusOne.day);
      const result = await getUnifiedEvents(baseUrl, token, userId, startDate, endDate);
      const events = result?.data || [];
      return {
        reply:
          `📅 GoodDays 行程查詢\n\n` +
          `區間:${startDate} ~ ${endDate}\n` +
          `${summarizeEvents(events)}`,
        metadata: {
          engine: 'gooddays-calendar',
          endpoint: '/api/unified-events',
          startDate,
          endDate,
          count: Array.isArray(events) ? events.length : 0,
          result,
        },
      };
    }

    const payload = { ...datePayload, userId };
    if (payload.hour == null) delete (payload as any).hour;

    const result = await getMysticalDaily(baseUrl, token, payload);
    const d = result?.data || {};
    const goodHours = d?.good_hours?.good_hours_display || '未提供';
    const isGoodNow = d?.good_hours?.is_good_hour;
    const ganzhi = d?.ganzhi?.day || '未知';
    const lunar = d?.lunar?.full_date || '未知';
    const dongong = d?.dongong?.note || '未提供';
    const twelve = d?.twelve_star?.description || '未提供';

    return {
      reply:
        `📅 GoodDays 今日資訊\n\n` +
        `日期:${payload.year}-${String(payload.month).padStart(2, '0')}-${String(payload.day).padStart(2, '0')}` +
        `${payload.hour != null ? ` ${payload.hour}:00` : ''}` +
        `\n干支:${ganzhi}` +
        `\n農曆:${lunar}` +
        `\n吉時:${goodHours}` +
        `\n此刻是否吉時:${isGoodNow === true ? '是' : isGoodNow === false ? '否' : '未知'}` +
        `\n董公:${dongong}` +
        `\n十二建星:${twelve}`,
      metadata: {
        engine: 'gooddays-calendar',
        endpoint: '/api/mystical/daily',
        payload,
        result,
      },
    };
  } catch (error: any) {
    return {
      reply: `❌ GoodDays 查詢失敗:${error?.message || String(error)}`,
      metadata: { error: error?.message || String(error) },
    };
  }
}
47
skills/luxtts/SKILL.md
Normal file
@@ -0,0 +1,47 @@
---
name: luxtts
description: Synthesize speech from text using the local LuxTTS service, especially when higher-quality Chinese/English voice cloning is needed. Use for: (1) voice cloning from the owner's reference audio, (2) mixed Chinese/English reading that should keep the owner's voice, (3) comparing LuxTTS output quality against Kokoro, (4) when the API-only local LuxTTS service is required.
---

# luxtts

This skill provides **LuxTTS** text-to-speech, backed by the local **LuxTTS API**.

## Current architecture

- systemd service: `luxtts`
- Port: `7861`
- Bind address: `127.0.0.1`
- Root path: `/luxtts`
- Health check: `http://127.0.0.1:7861/luxtts/api/health`
- Web UI: **disabled**
- API: kept

## Recommended usage

The most stable integration today is to call the local API directly:

```bash
curl -sS -o /tmp/luxtts_test.wav \
  -F "ref_audio=@/path/to/reference.wav" \
  -F "text=这个世界已经改变了,人工智能AI改变了这个世界的运作方式。" \
  -F "num_steps=4" \
  -F "t_shift=0.9" \
  -F "speed=1.0" \
  -F "duration=5" \
  -F "rms=0.01" \
  http://127.0.0.1:7861/luxtts/api/tts
```

## Notes

- Observed in practice: **convert Chinese text to Simplified before input**.
- LuxTTS is the better fit for:
  - cloning the owner's voice
  - keeping the same cloned voice across both Chinese and English
  - quality first, speed second
- For quick Chinese reading without high-fidelity cloning, consider `kokoro` first.

## Naming

From now on it is referred to externally as **luxtts**.
134
skills/luxtts/handler.ts
Normal file
@@ -0,0 +1,134 @@
/**
 * luxtts skill
 * Text-to-speech: voice cloning via the local LuxTTS API
 */

import { existsSync, readFileSync } from 'fs';
import { execFileSync } from 'child_process';

const LUXTTS_API = process.env.LUXTTS_API || 'http://127.0.0.1:7861/luxtts/api/tts';
const DEFAULT_REF_AUDIO = process.env.LUXTTS_REF_AUDIO || '/home/selig/.openclaw/workspace/media/refs/ref_from_762.wav';
const OUTPUT_DIR = '/home/selig/.openclaw/workspace/media';

const TRIGGER_WORDS = [
  'luxtts', 'lux', '文字轉語音', '語音合成', '唸出來', '說出來', '轉語音', 'voice',
];

const SPEED_MODIFIERS: Record<string, number> = {
  '慢速': 0.85,
  'slow': 0.85,
  '快速': 1.15,
  'fast': 1.15,
};

// Strip trigger words and speed modifiers from the message;
// what remains is the text to synthesize.
function parseMessage(message: string): { text: string; speed: number } {
  let cleaned = message;
  let speed = 1.0;

  for (const trigger of TRIGGER_WORDS) {
    const re = new RegExp(trigger, 'gi');
    cleaned = cleaned.replace(re, '');
  }

  for (const [modifier, value] of Object.entries(SPEED_MODIFIERS)) {
    const re = new RegExp(modifier, 'gi');
    if (re.test(cleaned)) {
      cleaned = cleaned.replace(re, '');
      speed = value;
    }
  }

  cleaned = cleaned.replace(/^[\s::,,、]+/, '').replace(/[\s::,,、]+$/, '').trim();
  return { text: cleaned, speed };
}

function ensureDependencies() {
  if (!existsSync(DEFAULT_REF_AUDIO)) {
    throw new Error(`找不到預設參考音檔:${DEFAULT_REF_AUDIO}`);
  }
}

function generateSpeech(text: string, speed: number): string {
  const timestamp = Date.now();
  const outputPath = `${OUTPUT_DIR}/luxtts_clone_${timestamp}.wav`;
  const curlCmd = [
    'curl', '-sS', '-o', outputPath,
    '-F', `ref_audio=@${DEFAULT_REF_AUDIO}`,
    '-F', `text=${text}`,
    '-F', 'num_steps=4',
    '-F', 't_shift=0.9',
    '-F', `speed=${speed}`,
    '-F', 'duration=5',
    '-F', 'rms=0.01',
    LUXTTS_API,
  ];

  // Synthesis can be slow; allow up to 10 minutes.
  execFileSync(curlCmd[0], curlCmd.slice(1), {
    timeout: 600000,
    stdio: 'pipe',
    encoding: 'utf8',
  });

  if (!existsSync(outputPath)) {
    throw new Error('LuxTTS 未產生輸出音檔');
  }

  // Sanity-check the WAV header; on failure the API writes a non-WAV body.
  const header = readFileSync(outputPath).subarray(0, 16).toString('ascii');
  if (!header.includes('RIFF') && !header.includes('WAVE')) {
    throw new Error(`LuxTTS 回傳非 WAV 音訊,檔頭:${JSON.stringify(header)}`);
  }

  return outputPath;
}

export async function handler(ctx: any) {
  const message = ctx.message?.text || ctx.message?.content || '';

  if (!message.trim()) {
    return { reply: '請提供要合成的文字,例如:「luxtts 这个世界已经改变了」' };
  }

  const { text, speed } = parseMessage(message);

  if (!text) {
    return { reply: '請提供要合成的文字,例如:「luxtts 这个世界已经改变了」' };
  }

  try {
    ensureDependencies();
    const outputPath = generateSpeech(text, speed);

    return {
      reply:
        '🔊 luxtts 語音合成完成' +
        `\n\n📝 文字:${text}` +
        `\n⏩ 語速:${speed}` +
        `\n🎙️ 參考音檔:\`${DEFAULT_REF_AUDIO}\`` +
        `\n🌐 API:\`${LUXTTS_API}\`` +
        `\n📂 檔案:\`${outputPath}\``,
      metadata: {
        text,
        speed,
        refAudio: DEFAULT_REF_AUDIO,
        output: outputPath,
        engine: 'luxtts',
        backend: 'luxtts-api',
      },
      files: [outputPath],
    };
  } catch (error: any) {
    return {
      reply:
        '❌ luxtts 語音合成失敗,請檢查 luxtts 服務、API 與預設參考音檔是否正常。' +
        (error?.message ? `\n\n錯誤:${error.message}` : ''),
      metadata: {
        text,
        speed,
        refAudio: DEFAULT_REF_AUDIO,
        engine: 'luxtts',
        backend: 'luxtts-api',
        error: error?.message || String(error),
      },
    };
  }
}
7
skills/openclaw-tavily-search/.clawhub/origin.json
Normal file
@@ -0,0 +1,7 @@
{
  "version": 1,
  "registry": "https://clawhub.ai",
  "slug": "openclaw-tavily-search",
  "installedVersion": "0.1.0",
  "installedAt": 1773208165907
}
48
skills/openclaw-tavily-search/SKILL.md
Normal file
@@ -0,0 +1,48 @@
---
name: tavily-search
description: "Web search via Tavily API (alternative to Brave). Use when the user asks to search the web / look up sources / find links and Brave web_search is unavailable or undesired. Returns a small set of relevant results (title, url, snippet) and can optionally include short answer summaries."
---

# Tavily Search

Use the bundled script to search the web with Tavily.

## Requirements

- Provide API key via either:
  - environment variable: `TAVILY_API_KEY`, or
  - `~/.openclaw/.env` line: `TAVILY_API_KEY=...`

## Commands

Run from the OpenClaw workspace:

```bash
# raw JSON (default)
python3 {baseDir}/scripts/tavily_search.py --query "..." --max-results 5

# include short answer (if available)
python3 {baseDir}/scripts/tavily_search.py --query "..." --max-results 5 --include-answer

# stable schema (closer to web_search): {query, results:[{title,url,snippet}], answer?}
python3 {baseDir}/scripts/tavily_search.py --query "..." --max-results 5 --format brave

# human-readable Markdown list
python3 {baseDir}/scripts/tavily_search.py --query "..." --max-results 5 --format md
```

## Output

### raw (default)

- JSON: `query`, optional `answer`, `results: [{title,url,content}]`

### brave

- JSON: `query`, optional `answer`, `results: [{title,url,snippet}]`

### md

- A compact Markdown list with title/url/snippet.

## Notes

- Keep `max-results` small by default (3–5) to reduce token/reading load.
- Prefer returning URLs + snippets; fetch full pages only when needed.
6
skills/openclaw-tavily-search/_meta.json
Normal file
@@ -0,0 +1,6 @@
{
  "ownerId": "kn78hhhbxwjs4nrcyn8my5fcw981wmys",
  "slug": "openclaw-tavily-search",
  "version": "0.1.0",
  "publishedAt": 1772121679343
}
159
skills/openclaw-tavily-search/scripts/tavily_search.py
Normal file
@@ -0,0 +1,159 @@
#!/usr/bin/env python3
import argparse
import json
import os
import pathlib
import re
import sys
import urllib.request

TAVILY_URL = "https://api.tavily.com/search"


def load_key():
    key = os.environ.get("TAVILY_API_KEY")
    if key:
        return key.strip()

    env_path = pathlib.Path.home() / ".openclaw" / ".env"
    if env_path.exists():
        try:
            txt = env_path.read_text(encoding="utf-8", errors="ignore")
            m = re.search(r"^\s*TAVILY_API_KEY\s*=\s*(.+?)\s*$", txt, re.M)
            if m:
                v = m.group(1).strip().strip('"').strip("'")
                if v:
                    return v
        except Exception:
            pass

    return None


def tavily_search(query: str, max_results: int, include_answer: bool, search_depth: str):
    key = load_key()
    if not key:
        raise SystemExit(
            "Missing TAVILY_API_KEY. Set env var TAVILY_API_KEY or add it to ~/.openclaw/.env"
        )

    payload = {
        "api_key": key,
        "query": query,
        "max_results": max_results,
        "search_depth": search_depth,
        "include_answer": bool(include_answer),
        "include_images": False,
        "include_raw_content": False,
    }

    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        TAVILY_URL,
        data=data,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )

    with urllib.request.urlopen(req, timeout=30) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    try:
        obj = json.loads(body)
    except json.JSONDecodeError:
        raise SystemExit(f"Tavily returned non-JSON: {body[:300]}")

    out = {
        "query": query,
        "answer": obj.get("answer"),
        "results": [],
    }

    for r in (obj.get("results") or [])[:max_results]:
        out["results"].append(
            {
                "title": r.get("title"),
                "url": r.get("url"),
                "content": r.get("content"),
            }
        )

    if not include_answer:
        out.pop("answer", None)

    return out


def to_brave_like(obj: dict) -> dict:
    # A lightweight, stable shape similar to web_search: results with title/url/snippet.
    results = []
    for r in obj.get("results", []) or []:
        results.append(
            {
                "title": r.get("title"),
                "url": r.get("url"),
                "snippet": r.get("content"),
            }
        )
    out = {"query": obj.get("query"), "results": results}
    if "answer" in obj:
        out["answer"] = obj.get("answer")
    return out


def to_markdown(obj: dict) -> str:
    lines = []
    if obj.get("answer"):
        lines.append(obj["answer"].strip())
        lines.append("")
    for i, r in enumerate(obj.get("results", []) or [], 1):
        title = (r.get("title") or "").strip() or r.get("url") or "(no title)"
        url = r.get("url") or ""
        snippet = (r.get("content") or "").strip()
        lines.append(f"{i}. {title}")
        if url:
            lines.append(f"   {url}")
        if snippet:
            lines.append(f"   - {snippet}")
    return "\n".join(lines).strip() + "\n"


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--query", required=True)
    ap.add_argument("--max-results", type=int, default=5)
    ap.add_argument("--include-answer", action="store_true")
    ap.add_argument(
        "--search-depth",
        default="basic",
        choices=["basic", "advanced"],
        help="Tavily search depth",
    )
    ap.add_argument(
        "--format",
        default="raw",
        choices=["raw", "brave", "md"],
        help="Output format: raw (default) | brave (title/url/snippet) | md (human-readable)",
    )
    args = ap.parse_args()

    res = tavily_search(
        query=args.query,
        max_results=max(1, min(args.max_results, 10)),
        include_answer=args.include_answer,
        search_depth=args.search_depth,
    )

    if args.format == "md":
        sys.stdout.write(to_markdown(res))
        return

    if args.format == "brave":
        res = to_brave_like(res)

    json.dump(res, sys.stdout, ensure_ascii=False)
    sys.stdout.write("\n")


if __name__ == "__main__":
    main()
7
skills/skill-vetter/.clawhub/origin.json
Normal file
@@ -0,0 +1,7 @@
{
  "version": 1,
  "registry": "https://clawhub.ai",
  "slug": "skill-vetter",
  "installedVersion": "1.0.0",
  "installedAt": 1773199291047
}
138
skills/skill-vetter/SKILL.md
Normal file
@@ -0,0 +1,138 @@
---
name: skill-vetter
version: 1.0.0
description: Security-first skill vetting for AI agents. Use before installing any skill from ClawdHub, GitHub, or other sources. Checks for red flags, permission scope, and suspicious patterns.
---

# Skill Vetter 🔒

Security-first vetting protocol for AI agent skills. **Never install a skill without vetting it first.**

## When to Use

- Before installing any skill from ClawdHub
- Before running skills from GitHub repos
- When evaluating skills shared by other agents
- Anytime you're asked to install unknown code

## Vetting Protocol

### Step 1: Source Check

```
Questions to answer:
- [ ] Where did this skill come from?
- [ ] Is the author known/reputable?
- [ ] How many downloads/stars does it have?
- [ ] When was it last updated?
- [ ] Are there reviews from other agents?
```

### Step 2: Code Review (MANDATORY)

Read ALL files in the skill. Check for these **RED FLAGS**:

```
🚨 REJECT IMMEDIATELY IF YOU SEE:
─────────────────────────────────────────
• curl/wget to unknown URLs
• Sends data to external servers
• Requests credentials/tokens/API keys
• Reads ~/.ssh, ~/.aws, ~/.config without clear reason
• Accesses MEMORY.md, USER.md, SOUL.md, IDENTITY.md
• Uses base64 decode on anything
• Uses eval() or exec() with external input
• Modifies system files outside workspace
• Installs packages without listing them
• Network calls to IPs instead of domains
• Obfuscated code (compressed, encoded, minified)
• Requests elevated/sudo permissions
• Accesses browser cookies/sessions
• Touches credential files
─────────────────────────────────────────
```
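A subset of the red flags above can be turned into an automated pre-scan to triage a skill directory before the mandatory manual read (the `RED_FLAGS` patterns and `scan_skill` helper are illustrative, not part of this skill):

```python
import re
from pathlib import Path

# Illustrative subset of the red-flag patterns above; a hit means
# "read this file closely", not "proven malicious".
RED_FLAGS = [
    r"\beval\s*\(",
    r"\bexec\s*\(",
    r"base64",
    r"~/\.ssh",
    r"~/\.aws",
    r"\bsudo\b",
]

def scan_skill(skill_dir: str) -> list[tuple[str, str]]:
    """Return (file, pattern) pairs flagged in skill_dir. A first pass only -- still read every file."""
    hits = []
    for f in Path(skill_dir).rglob("*"):
        if not f.is_file():
            continue
        text = f.read_text(errors="ignore")
        for pat in RED_FLAGS:
            if re.search(pat, text):
                hits.append((f.name, pat))
    return hits
```

A pre-scan like this only narrows attention; obfuscated code defeats pattern matching, which is why Step 2 stays mandatory.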
### Step 3: Permission Scope

```
Evaluate:
- [ ] What files does it need to read?
- [ ] What files does it need to write?
- [ ] What commands does it run?
- [ ] Does it need network access? To where?
- [ ] Is the scope minimal for its stated purpose?
```

### Step 4: Risk Classification

| Risk Level | Examples | Action |
|------------|----------|--------|
| 🟢 LOW | Notes, weather, formatting | Basic review, install OK |
| 🟡 MEDIUM | File ops, browser, APIs | Full code review required |
| 🔴 HIGH | Credentials, trading, system | Human approval required |
| ⛔ EXTREME | Security configs, root access | Do NOT install |

## Output Format

After vetting, produce this report:

```
SKILL VETTING REPORT
═══════════════════════════════════════
Skill: [name]
Source: [ClawdHub / GitHub / other]
Author: [username]
Version: [version]
───────────────────────────────────────
METRICS:
• Downloads/Stars: [count]
• Last Updated: [date]
• Files Reviewed: [count]
───────────────────────────────────────
RED FLAGS: [None / List them]

PERMISSIONS NEEDED:
• Files: [list or "None"]
• Network: [list or "None"]
• Commands: [list or "None"]
───────────────────────────────────────
RISK LEVEL: [🟢 LOW / 🟡 MEDIUM / 🔴 HIGH / ⛔ EXTREME]

VERDICT: [✅ SAFE TO INSTALL / ⚠️ INSTALL WITH CAUTION / ❌ DO NOT INSTALL]

NOTES: [Any observations]
═══════════════════════════════════════
```

## Quick Vet Commands

For GitHub-hosted skills:
```bash
# Check repo stats
curl -s "https://api.github.com/repos/OWNER/REPO" | jq '{stars: .stargazers_count, forks: .forks_count, updated: .updated_at}'

# List skill files
curl -s "https://api.github.com/repos/OWNER/REPO/contents/skills/SKILL_NAME" | jq '.[].name'

# Fetch and review SKILL.md
curl -s "https://raw.githubusercontent.com/OWNER/REPO/main/skills/SKILL_NAME/SKILL.md"
```
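These unauthenticated calls share GitHub's 60-requests-per-hour quota; checking the remaining budget first avoids a mid-vet cutoff. The `rate_limit` endpoint and field names below are standard GitHub REST API, but verify against current docs:

```shell
# Remaining unauthenticated GitHub API quota (reset is a Unix timestamp)
curl -s "https://api.github.com/rate_limit" | jq '.resources.core | {limit, remaining, reset}'
```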

## Trust Hierarchy

1. **Official OpenClaw skills** → Lower scrutiny (still review)
2. **High-star repos (1000+)** → Moderate scrutiny
3. **Known authors** → Moderate scrutiny
4. **New/unknown sources** → Maximum scrutiny
5. **Skills requesting credentials** → Human approval always

## Remember

- No skill is worth compromising security
- When in doubt, don't install
- Ask your human for high-risk decisions
- Document what you vet for future reference

---

*Paranoia is a feature.* 🔒🦀
6
skills/skill-vetter/_meta.json
Normal file
@@ -0,0 +1,6 @@
{
  "ownerId": "kn71j6xbmpwfvx4c6y1ez8cd718081mg",
  "slug": "skill-vetter",
  "version": "1.0.0",
  "publishedAt": 1769863429632
}