---
name: code-interpreter
description: Local Python code execution for calculations, tabular data inspection, CSV/JSON processing, simple plotting, text transformation, quick experiments, and reproducible analysis inside the OpenClaw workspace. Use when the user wants ChatGPT-style code interpreter behavior locally: run Python, analyze files, compute exact answers, transform data, inspect tables, or generate output files/artifacts. Prefer this for low-risk local analysis; do not use it for untrusted code, secrets handling, privileged actions, or network-dependent tasks.
---
# Code Interpreter
Run local Python code through the bundled runner.
## Safety boundary
This is **local execution**, not a hardened container. Treat it as a convenience tool for trusted, low-risk tasks.
Always:
- Keep work inside the OpenClaw workspace when possible.
- Prefer reading/writing files under the current task directory or an explicit artifact directory.
- Keep timeouts short by default.
- Avoid network access unless the user explicitly asks and the task truly needs it.
- Do not execute untrusted code copied from the web or other people.
- Do not expose secrets, tokens, SSH keys, browser cookies, or system files to the script.
Do not use this skill for:
- system administration
- package installation loops
- long-running servers
- privileged operations
- destructive file changes outside the workspace
- executing arbitrary third-party code verbatim
## Runner
Run from the OpenClaw workspace:
```bash
python3 {baseDir}/scripts/run_code.py --code 'print(2 + 2)'
```
Or pass a script file:
```bash
python3 {baseDir}/scripts/run_code.py --file path/to/script.py
```
Or pipe code via stdin:
```bash
cat my_script.py | python3 {baseDir}/scripts/run_code.py --stdin
```
## Useful options
```bash
# set timeout seconds (default 20)
python3 {baseDir}/scripts/run_code.py --code '...' --timeout 10
# run from a specific working directory inside workspace
python3 {baseDir}/scripts/run_code.py --file script.py --cwd /home/selig/.openclaw/workspace/project
# keep outputs in a known artifact directory inside workspace
python3 {baseDir}/scripts/run_code.py --file script.py --artifact-dir /home/selig/.openclaw/workspace/.tmp/my-analysis
# save full stdout / stderr
python3 {baseDir}/scripts/run_code.py --code '...' --stdout-file out.txt --stderr-file err.txt
```
## Built-in environment
The runner uses the dedicated interpreter at:
- `/home/selig/.openclaw/workspace/.venv-code-interpreter/bin/python` (use the venv path directly; do not resolve the symlink to system Python)
This keeps plotting/data-analysis dependencies stable without touching the system Python.
The runner exposes these variables to the script:
- `OPENCLAW_WORKSPACE`
- `CODE_INTERPRETER_RUN_DIR`
- `CODE_INTERPRETER_ARTIFACT_DIR`
It also writes a helper file in the run directory:
```python
from ci_helpers import save_text, save_json
```
Use those helpers to save artifacts into `CODE_INTERPRETER_ARTIFACT_DIR`.
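A minimal sketch of using the helpers. The `save_text(name, text)` / `save_json(name, obj)` signatures shown are assumptions — check the generated `ci_helpers` file in the run directory; the `ImportError` fallback exists only so the sketch also runs outside the runner:

```python
import json
import os
import tempfile

# Assumed signatures: save_text(name, text) / save_json(name, obj), each
# returning the written path -- verify against the actual ci_helpers file.
try:
    from ci_helpers import save_text, save_json  # present inside runner runs
except ImportError:
    # Hypothetical stand-ins for running outside the runner.
    ARTIFACT_DIR = os.environ.get("CODE_INTERPRETER_ARTIFACT_DIR", tempfile.mkdtemp())

    def save_text(name, text):
        path = os.path.join(ARTIFACT_DIR, name)
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)
        return path

    def save_json(name, obj):
        return save_text(name, json.dumps(obj, indent=2))

summary_path = save_json("summary.json", {"rows": 3, "cols": 2})
notes_path = save_text("notes.txt", "analysis complete\n")
```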
## V4 automatic data analysis
For automatic profiling/report generation from a local data file, use:
- `scripts/analyze_data.py`
- Reference: `references/v4-usage.md`
This flow is ideal when the user wants a fast "analyze this CSV/JSON/Excel and give me a report + plots" result.
## Output
The runner prints compact JSON:
```json
{
  "ok": true,
  "exitCode": 0,
  "timeout": false,
  "runDir": "...",
  "artifactDir": "...",
  "packageStatus": {"pandas": true, "numpy": true, "matplotlib": false},
  "artifacts": [{"path": "...", "bytes": 123}],
  "stdout": "...",
  "stderr": "..."
}
```
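When scripting around the runner, parse this JSON and check `ok` and `exitCode` before trusting `stdout`. A sketch, using an illustrative hard-coded result:

```python
import json

# Illustrative runner output; in practice this would be captured from the
# runner's stdout (e.g. via subprocess).
raw = '''{"ok": true, "exitCode": 0, "timeout": false,
"runDir": "/tmp/run", "artifactDir": "/tmp/run/artifacts",
"packageStatus": {"pandas": true, "numpy": true, "matplotlib": false},
"artifacts": [{"path": "/tmp/run/artifacts/out.csv", "bytes": 123}],
"stdout": "done\\n", "stderr": ""}'''

result = json.loads(raw)
if not result["ok"] or result["exitCode"] != 0:
    raise RuntimeError(f"run failed: {result['stderr']}")

# Collect generated files so their exact paths can be reported back.
artifact_paths = [a["path"] for a in result["artifacts"]]
```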
## Workflow
1. Decide whether the task is a good fit for local trusted execution.
2. Write the smallest script that solves the problem.
3. Use `--artifact-dir` when the user may want generated files preserved.
4. Run with a short timeout.
5. Inspect `stdout`, `stderr`, and `artifacts`.
6. If producing files, mention their exact paths in the reply.
## Patterns
### Exact calculation
Use a one-liner with `--code`.
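For instance, exact rational arithmetic is the kind of computation worth delegating to code rather than estimating:

```python
from fractions import Fraction

# Partial harmonic sum, kept exact instead of accumulating float error.
total = sum(Fraction(1, n) for n in range(1, 5))  # 1 + 1/2 + 1/3 + 1/4
print(total)  # -> 25/12
```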
### File analysis
Read input files from workspace, then write summaries/derived files back to `artifactDir`.
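A stdlib-only sketch of that pattern (the inline CSV is hypothetical — a real run would read an input file from the workspace):

```python
import csv
import io
import json
import os
import tempfile

# Hypothetical input data; in a real run, read a file from the workspace and
# write results into CODE_INTERPRETER_ARTIFACT_DIR.
csv_text = "name,value\na,3\nb,7\n"
artifact_dir = os.environ.get("CODE_INTERPRETER_ARTIFACT_DIR", tempfile.mkdtemp())

rows = list(csv.DictReader(io.StringIO(csv_text)))
summary = {"row_count": len(rows), "value_total": sum(int(r["value"]) for r in rows)}

# Derived file goes back into the artifact directory.
summary_file = os.path.join(artifact_dir, "summary.json")
with open(summary_file, "w", encoding="utf-8") as f:
    json.dump(summary, f, indent=2)
```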
### Automatic report bundle
When the user wants a quick profiling pass, run `scripts/analyze_data.py` against the file and return the generated `summary.json`, `report.md`, `preview.csv`, and any PNG plots.
### Table inspection
Prefer pandas when available; otherwise fall back to the stdlib `csv`/`json` modules.
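One way to sketch that fallback:

```python
import csv
import io

def inspect_table(csv_text, max_rows=5):
    """Return (columns, preview_rows), preferring pandas when installed."""
    try:
        import pandas as pd  # packageStatus in the runner output tells you
        df = pd.read_csv(io.StringIO(csv_text))
        return [str(c) for c in df.columns], df.head(max_rows).values.tolist()
    except ImportError:
        # Stdlib fallback: header row plus a bounded preview.
        rows = list(csv.reader(io.StringIO(csv_text)))
        return rows[0], rows[1:1 + max_rows]

columns, preview = inspect_table("city,pop\nTaipei,2600000\nTainan,1800000\n")
```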
### Plotting
If `matplotlib` is available, write PNG files to `artifactDir`. For charts with Chinese text, force a CJK font: the bundled default is Google Noto Sans CJK TC under `assets/fonts/` when present, with system fonts as fallback. Apply the chosen font not only via rcParams but also directly to titles, axis labels, tick labels, and legend text through FontProperties; this reliably avoids tofu/garbled Chinese and suppresses missing-glyph warnings. If plotting is unavailable, continue with tabular/text output.
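A sketch of that strategy. The Noto file name under `assets/fonts/` is an assumption — adjust it to the actual bundled file; when matplotlib is missing, the code falls through to text output:

```python
import os
import tempfile

# Candidate font files; the bundled path is hypothetical.
FONT_CANDIDATES = [
    "assets/fonts/NotoSansCJKtc-Regular.otf",
    "/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc",
]
font_path = next((p for p in FONT_CANDIDATES if os.path.exists(p)), None)
out_png = os.path.join(tempfile.mkdtemp(), "trend.png")

try:
    import matplotlib
    matplotlib.use("Agg")  # headless, matching the runner's MPLBACKEND=Agg
    import matplotlib.pyplot as plt
    from matplotlib.font_manager import FontProperties

    font_kw = {"fontproperties": FontProperties(fname=font_path)} if font_path else {}
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [2, 4, 1])
    # Apply the font directly to each text element, not only via rcParams.
    ax.set_title("銷售趨勢", **font_kw)
    ax.set_xlabel("月份", **font_kw)
    ax.set_ylabel("數量", **font_kw)
    if font_kw:
        for label in ax.get_xticklabels() + ax.get_yticklabels():
            label.set_fontproperties(font_kw["fontproperties"])
    fig.savefig(out_png)
    plotted = True
except ImportError:
    plotted = False  # no matplotlib: continue with tabular/text output
```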
### Reusable logic
Write a small `.py` file in the current task area, run with `--file`, then keep it if it may be reused.
## Notes
- The runner launches `python3 -B` with a minimal environment.
- It creates an isolated temp run directory under `workspace/.tmp/code-interpreter-runs/`.
- `stdout` / `stderr` are truncated in the JSON preview if very large; save to files when needed.
- `MPLBACKEND=Agg` is set so headless plotting works when matplotlib is installed.
- If a task needs stronger isolation than this local runner provides, do not force it—use a real sandbox/container approach instead.