Meta Footer
Appends a stats footer to every bot reply: model, thinking level, token usage, context window, cache hit rate, and compaction count.
Community code plugin. Review compatibility and verification before installing.
claw-meta-footer · runtime id claw-meta-footer
Install
openclaw plugins install clawhub:claw-meta-footer
Latest Release
Version 1.0.7
Compatibility
{
"builtWithOpenClawVersion": "2026.3.24",
"pluginApiRange": ">=1.0.0"
}
Capabilities
{
"bundledSkills": [],
"capabilityTags": [
"executes-code"
],
"channels": [],
"commandNames": [],
"configSchema": true,
"configUiHints": false,
"executesCode": true,
"hooks": [],
"httpRouteCount": 0,
"materializesDependencies": false,
"providers": [],
"runtimeId": "claw-meta-footer",
"serviceNames": [],
"setupEntry": false,
"toolNames": []
}
Security Scan
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description (append a stats footer) align with the code and SKILL.md. The plugin captures llm_output and message_sending events, resolves context window and thinking level, and appends a footer — all consistent with the stated purpose.
Instruction Scope
The SKILL.md and code instruct disabling Telegram streaming and reading local OpenClaw configuration and sessions. The plugin synchronously reads ~/.openclaw/agents/<agentId>/sessions/sessions.json to obtain compactionCount and reasoningLevel, and uses api.config. This is within scope for producing the footer but has privacy implications (it appends internal provider/model/token metrics into outgoing chat messages).
Install Mechanism
No install spec; this is an instruction-only plugin packaged with a single index.js and metadata. Nothing is downloaded from external URLs and no post-install scripts are present in the manifest.
Credentials
The plugin requests no environment variables or external credentials. It accesses local OpenClaw config and session files (via homedir) which is proportionate to resolving context window and session metadata; no unrelated secrets or keys are requested.
Persistence & Privilege
The manifest's always flag is false, and the plugin registers normal runtime hooks (llm_output and message_sending). It does not modify other plugins or system-wide configs, and it keeps per-turn stats in memory with a short TTL (5 minutes).
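An in-memory per-turn stats store with a 5-minute TTL, as described here, could look something like this; the names and key format are illustrative, not the plugin's actual identifiers:

```javascript
// Illustrative sketch of a per-turn stats cache with a 5-minute TTL.
const TTL_MS = 5 * 60 * 1000;
const statsByChat = new Map(); // key: `${channel}:${chatId}` -> { stats, expiresAt }

function putStats(channel, chatId, stats) {
  statsByChat.set(`${channel}:${chatId}`, {
    stats,
    expiresAt: Date.now() + TTL_MS,
  });
}

function getStats(channel, chatId) {
  const key = `${channel}:${chatId}`;
  const entry = statsByChat.get(key);
  if (!entry) return null;
  if (Date.now() > entry.expiresAt) {
    statsByChat.delete(key); // lazily evict expired entries
    return null;
  }
  return entry.stats;
}
```

Lazy eviction on read keeps the implementation small; stale entries for inactive chats simply expire the next time they are looked up.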
Assessment
This plugin appears to do what it says, but consider these practical risks before installing: (1) it appends model/provider and token-usage details to outgoing Telegram messages, which can leak internal model/provider choices and usage stats to chat participants (especially in group chats); (2) it reads your local OpenClaw sessions.json (under ~/.openclaw) to obtain compactionCount and reasoningLevel; only those two fields reach the footer, but the entire sessions file is still read locally; (3) it logs info via api.logger (check your logging configuration if logs are shared). If you accept those privacy trade-offs, the plugin is coherent. If not, avoid enabling it in public or group channels, or inspect and modify the code to remove fields you don't want exposed (for example, model/provider or token counts).
Verification
{
"hasProvenance": false,
"scanStatus": "clean",
"scope": "artifact-only",
"sourceCommit": "69ecee3d908e5e12fbd6ab4dc4d71b909a80cdcd",
"sourceRepo": "lqqk7/claw-meta-footer",
"sourceTag": "main",
"summary": "Validated package structure and linked the release to source metadata.",
"tier": "source-linked"
}
Tags
{
"latest": "1.0.7"
}
claw-meta-footer
An OpenClaw plugin that appends a stats footer to every bot reply, giving you at-a-glance visibility into model usage, token consumption, and session state — right inside your Telegram chat.
What It Shows
Every bot reply gets a footer like this:
`───────────────`
🤖 Model: `claude-sonnet-4-6`
🧠 Think: high
🔢 In: 12.3k Out: 0.8k
📊 Context: 13.1k / 200k (6.6%)
💾 Cache: 11.9k hit (88.4%)
🔁 Compact: 2
| Field | Description |
|---|---|
| Model | The model ID that generated the reply |
| Think | Thinking/reasoning level (off, low, medium, high, xhigh, adaptive) |
| In / Out | Input and output token counts for this turn |
| Context | Tokens currently in context vs. the model's context window limit |
| Cache | Cache-read token count and hit rate for this turn |
| Compact | Number of context compactions that have occurred in this session |
Cache and Compact lines only appear when there is data to show.
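The compact token counts in the example above (12.3k, 0.8k, 200k) suggest formatting helpers roughly like the following; the function names are illustrative, not the plugin's actual code:

```javascript
// Illustrative: render a token count as "823" or "12.3k", matching the
// footer style shown above ("200.0k" is collapsed to "200k").
function formatTokens(n) {
  if (n < 1000) return String(n);
  return (n / 1000).toFixed(1).replace(/\.0$/, "") + "k";
}

// Illustrative: context usage line in the style "13.1k / 200k" plus the
// usage percentage.
function contextLine(used, windowSize) {
  const pct = ((used / windowSize) * 100).toFixed(1);
  return `${formatTokens(used)} / ${formatTokens(windowSize)} (${pct}%)`;
}
```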
Requirements
- OpenClaw >= 1.0.0
- Telegram channel with streaming disabled: the plugin hooks into message_sending, which only fires when streaming is off
Installation
openclaw plugins install clawhub:claw-meta-footer
Configuration
1. Disable streaming on Telegram
In your openclaw.json, add "streaming": "off" to your Telegram channel config:
{
"channels": {
"telegram": {
"streaming": "off"
}
}
}
Without this, the plugin won't fire — streamed messages bypass the message_sending hook entirely.
2. Plugin options (optional)
{
"plugins": {
"claw-meta-footer": {
"enabled": true,
"skipSubagent": true
}
}
}
| Option | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | true | Toggle the footer on/off |
| skipSubagent | boolean | true | Hide footer on subagent replies (recommended, since subagents can be noisy) |
How It Works
- llm_output hook: captures token usage (input, output, cacheRead, cacheWrite), the model ID, and the provider from each LLM response, keyed by channel + chat ID
- message_sending hook: before the reply is sent, retrieves the cached stats, reads thinkingLevel and compactionCount from the session file, resolves the context window size, builds the footer, and appends it to the message content
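Wiring those two hooks together might look roughly like this. Only the hook names come from the listing; the registration API shape and event fields are assumptions for illustration:

```javascript
// Illustrative, self-contained sketch of the two-hook flow; the api.on
// registration shape and event fields are assumptions, not the actual
// OpenClaw plugin API.
const pending = new Map(); // `${channel}:${chatId}` -> per-turn stats

function register(api) {
  api.on("llm_output", (e) => {
    // Cache this turn's stats, keyed by channel + chat ID.
    pending.set(`${e.channel}:${e.chatId}`, {
      model: e.model,
      input: e.usage.input,
      output: e.usage.output,
    });
  });

  api.on("message_sending", (e) => {
    const s = pending.get(`${e.channel}:${e.chatId}`);
    if (!s) return; // streamed replies never reach this hook
    e.content +=
      `\n───────────────\n🤖 Model: \`${s.model}\`\n🔢 In: ${s.input} Out: ${s.output}`;
  });
}
```

The key point is the handoff: llm_output only records, and message_sending only reads and appends, so the footer is built exactly once per outgoing reply.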
Context window sizes are resolved via a priority chain:
1. User-configured contextWindow in openclaw.json models
2. Composite provider/model lookup (mirrors OpenClaw's internal overrides)
3. Plain model ID lookup
4. Prefix-matching fallback
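That priority chain could be sketched as follows; the lookup table contents and function name are illustrative assumptions, not the plugin's actual data:

```javascript
// Illustrative fallback chain for resolving a model's context window size.
const KNOWN_WINDOWS = {
  "anthropic/claude-sonnet-4-6": 200000, // composite provider/model key
  "claude-sonnet-4-6": 200000,           // plain model ID key
};

function resolveContextWindow(provider, modelId, userConfig = {}) {
  // 1. User-configured contextWindow wins.
  const cfg = userConfig.models?.[modelId];
  if (cfg?.contextWindow) return cfg.contextWindow;
  // 2. Composite provider/model lookup.
  const composite = KNOWN_WINDOWS[`${provider}/${modelId}`];
  if (composite) return composite;
  // 3. Plain model ID lookup.
  if (KNOWN_WINDOWS[modelId]) return KNOWN_WINDOWS[modelId];
  // 4. Prefix-matching fallback (e.g. dated model variants).
  const prefixKey = Object.keys(KNOWN_WINDOWS).find((k) => modelId.startsWith(k));
  return prefixKey ? KNOWN_WINDOWS[prefixKey] : undefined;
}
```

Each step only runs if the previous one found nothing, so a user-configured value always overrides the built-in tables.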
License
MIT
