Code Plugin · Executes code · source-linked

Memoria

Human-like persistent memory for AI agents. 21 cognitive layers, knowledge graph, procedural learning, continuous capture, bring your own LLM (Ollama, LM Studio, or API), 100% local-first on SQLite, zero cloud cost.

Community code plugin. Review compatibility and verification before install.
memoria-plugin · runtime id memoria
Install
openclaw plugins install clawhub:memoria-plugin
Latest Release
Version 3.25.1
Compatibility
{
  "builtWithOpenClawVersion": "3.25.1",
  "pluginApiRange": ">=1"
}
Capabilities
{
  "bundledSkills": [],
  "capabilityTags": [
    "executes-code"
  ],
  "channels": [],
  "commandNames": [],
  "configSchema": true,
  "configUiHints": false,
  "executesCode": true,
  "hooks": [],
  "httpRouteCount": 0,
  "materializesDependencies": false,
  "providers": [],
  "runtimeId": "memoria",
  "serviceNames": [],
  "setupEntry": false,
  "toolNames": []
}
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description (persistent memory, 21 layers, local-first, BYO LLM) matches the repository contents: many modules implementing recall/capture/embeddings/graph/procedural memory and provider adapters for Ollama/OpenAI/Anthropic. Optional API keys are declared in SKILL.md for fallback remote providers — proportional for a provider-agnostic memory system.
Instruction Scope
SKILL.md explicitly documents what Memoria reads (workspace files such as USER.md, projects/*, and conversation content) and writes (memoria.db and optional .md summaries), and lists the OpenClaw hooks it uses. This behavior is consistent with a memory plugin, but it means Memoria will continuously capture conversation content and workspace files (sensitive data). The docs claim 'no files outside the OpenClaw workspace', which appears accurate in the code and docs. Users should be aware that the continuous capture hooks (message_received, llm_output, agent_end) can store large amounts of conversation data.
Install Mechanism
Registry shows no automated install spec, but INSTALL.md recommends both 'openclaw plugins install' and a curl|bash installer (curl raw.githubusercontent.com/.../install.sh | bash). The repo includes install.sh and configure.sh which will auto-pull models (ollama pull), clone/setup, and auto-modify openclaw.json and detect/migrate existing DBs. Curl-pipe-to-bash is a high-risk convenience pattern; although the script is in the repo (so can be reviewed), the docs encourage users to run it — exercise caution and inspect install.sh/configure.sh before running.
Credentials
No required environment variables in the registry. SKILL.md declares optional env vars (OPENAI_API_KEY, OPENROUTER_API_KEY, OPENCLAW_WORKSPACE) which are reasonable for an LLM-agnostic plugin. The number and naming of env vars are proportional to the documented fallback provider support; there are no unrelated secrets requested.
Persistence & Privilege
always:false (normal). The install scripts will auto-configure openclaw.json (add plugin entry) and can migrate existing memory DBs (cortex.db → memoria.db) and write markdown sync files into the workspace/memory folder. Those behaviors are consistent with a memory plugin but constitute persistent changes to the user's OpenClaw environment — back up your workspace before installing and prefer the managed 'openclaw plugins install' path if available.
Assessment
What to check before installing and enabling Memoria:

  • Prefer the registry/plugin installer: use 'openclaw plugins install memoria-plugin' when possible instead of piping a remote script. The repo provides an install.sh; if you must run it, read install.sh and configure.sh locally first.
  • Back up your OpenClaw workspace (memoria.db, cortex.db, facts.json) before installation or migration; the installer can migrate existing DBs and will modify openclaw.json.
  • Review the openclaw.json changes the installer makes; it auto-adds the plugin entry and can change plugin config defaults.
  • If privacy is a priority, restrict fallbacks to local-only providers (Ollama/LM Studio) and do not set remote API keys. SKILL.md warns that remote providers will send conversation data off-machine.
  • Inspect configure.sh/install.sh for unexpected network endpoints or additional commands before executing. Although install uses GitHub raw URLs (common), curl|bash is still a risk if you don't inspect the script.
  • Be aware that the continuous capture hooks (message_received, llm_output, after_tool_call, agent_end) will collect conversation content and workspace files (USER.md, projects/*). If you have sensitive files in the workspace, either move them or disable auto-capture until you confirm the config.
  • If you need higher assurance, review the TypeScript source (index.ts and the providers) and run the plugin in an isolated/test workspace first.

Overall: the package appears consistent with its stated purpose, but exercise standard caution around the install path and the obvious privacy implications of a continuous local memory system.
Verification
{
  "hasProvenance": false,
  "scanStatus": "clean",
  "scope": "artifact-only",
  "sourceCommit": "ae242ccba2cb0f54f69b260d9075b28cc742606d",
  "sourceRepo": "Primo-Studio/openclaw-memoria",
  "sourceTag": "ae242ccba2cb0f54f69b260d9075b28cc742606d",
  "summary": "Validated package structure and linked the release to source metadata.",
  "tier": "source-linked"
}
Tags
{
  "latest": "3.25.1",
  "memory": "3.4.1",
  "plugin": "3.4.1"
}

🧠 Memoria — Persistent Memory for OpenClaw

Memory that thinks like you do. Your AI assistant remembers what matters, forgets what doesn't, and gets better over time.

SQLite-backed · Fully local · Zero cloud · 21 memory layers · Human-like architecture


✨ What's New in v3.22.3

🔄 Continuous Learning — Layer 21 (v3.22.0)

Memoria no longer waits for end-of-session to learn. New real-time capture via message_received + llm_output hooks:

  • 3 extraction modes: periodic (every N turns), urgent (on user frustration/error), self-error (on assistant self-admission)
  • Cross-layer integration: extracted facts go through the full pipeline (selective dedup → embed → graph → topics → observations → clusters → sync)
  • Smart dedup with agent_end: avoids double LLM calls when continuous already captured
  • 6 bugs fixed across 3 audit rounds (v3.22.0 → v3.22.1 → v3.22.3)
  • Node 24.x compatibility — fixed CONTINUOUS_ENABLED TDZ crash on gateways with embedded Node 24.x
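
The three extraction modes amount to a trigger decision over the rolling buffer. A minimal sketch of that decision, with invented names (`shouldExtract`, `BufferState`, `PERIODIC_EVERY_N_TURNS`) rather than Memoria's actual code:

```typescript
// Sketch of Layer 21's trigger selection. The real logic lives in
// index.ts and its hooks; names and the default interval are assumptions.
type TriggerMode = "periodic" | "urgent" | "self-error" | null;

interface BufferState {
  turnsSinceExtraction: number;    // turns buffered since the last extraction
  userFrustrated: boolean;         // frustration/error detected in message_received
  assistantAdmittedError: boolean; // self-admission detected in llm_output
}

const PERIODIC_EVERY_N_TURNS = 10; // assumed default

function shouldExtract(state: BufferState): TriggerMode {
  if (state.userFrustrated) return "urgent";            // extract immediately
  if (state.assistantAdmittedError) return "self-error";
  if (state.turnsSinceExtraction >= PERIODIC_EVERY_N_TURNS) return "periodic";
  return null; // keep buffering
}
```

Whichever mode fires, the extracted facts then flow through the same pipeline as end-of-session capture.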

🔍 Deep Audit — 10+6 bugs found & fixed (v3.21.0–v3.22.3)

Full code audit revealed critical alignment issues:

  • Hebbian learning was 100% dead — wrong column names since creation
  • Proactive revision never triggered — searched for obsolete lifecycle state
  • storeFact() lost 6 columns on INSERT
  • Concurrent extraction risk and buffer never cleared in continuous learning
  • All 21 layers now properly aligned with the actual database schema

🧩 Behavioral Patterns (v3.19.0)

Detects repeated similar facts and consolidates them into patterns.

🔗 Cross-Layer Connections (v3.20.0)

  • Feedback → Lifecycle: facts recalled 5+ times with positive usefulness → auto-promoted to "settled"
  • Hebbian → Topics: strong entity relations auto-organize topics into parent/child hierarchy
  • Lifecycle → Patterns: confirmed patterns (5+ occurrences) → settled (never forgotten)
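
The feedback → lifecycle rule can be sketched as a small promotion check. Function and field names here are invented for illustration; the real logic lives in feedback.ts and lifecycle.ts:

```typescript
// Illustrative-only: promote a fact recalled 5+ times with positive
// usefulness from "fresh" to "settled".
type LifecycleState = "fresh" | "settled" | "dormant";

interface FactStats {
  state: LifecycleState;
  recallCount: number;
  usefulness: number; // assumed signed score; positive = helpful
}

function promoteIfEarned(fact: FactStats): LifecycleState {
  if (fact.state === "fresh" && fact.recallCount >= 5 && fact.usefulness > 0) {
    return "settled"; // settled facts are reviewed rather than forgotten
  }
  return fact.state; // everything else keeps its current state
}
```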

✨ Core Features

  • 21 memory layers — from text search to procedural memory, knowledge graphs, behavioral patterns, continuous learning, and cross-layer connections
  • Semantic vs Episodic — durable knowledge decays slowly, dated events fade (like human memory)
  • Lifecycle management — facts evolve: fresh → settled → dormant (not "mature/archived")
  • Observations — living syntheses that evolve as new evidence appears
  • Fact Clusters — entity-grouped summaries with tracked membership
  • Procedural memory — captures "how to do things" with steps, gotchas, quality scores, and failure reasons
  • Behavioral patterns — detects repeated preferences and consolidates them
  • Adaptive recall — injects 2-12 facts based on context load
  • Hot Tier — frequently accessed facts (5+ recalls) always recalled
  • Feedback loop — usefulness/recall_count/used_count tracked per fact
  • Cross-layer connections — feedback→lifecycle, hebbian→topics, lifecycle→patterns
  • Hebbian reinforcement — knowledge graph relations strengthen on co-occurrence, decay when unused
  • Proactive revision — settled facts get LLM review and potential update
  • Identity-aware — parses SOUL.md/USER.md to prioritize relevant facts
  • Expertise specialization — topic access frequency boosts recall
  • Provider-agnostic — Ollama, LM Studio, OpenAI, OpenRouter, Anthropic
  • Fallback chain — if primary LLM fails, next one takes over; if all fail, facts still stored
  • Zero config — smart defaults, 60-second setup
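
The semantic-vs-episodic distinction above can be pictured as two decay curves with very different half-lives. The half-life values below are invented for illustration; the actual rates are defined in scoring.ts and documented in docs/ARCHITECTURE.md:

```typescript
// Sketch: durable (semantic) knowledge decays slowly, dated (episodic)
// events fade quickly. Half-lives are assumed, not Memoria's real values.
type FactKind = "semantic" | "episodic";

function decayScore(kind: FactKind, ageDays: number): number {
  // Exponential decay; a much longer half-life for semantic facts.
  const halfLifeDays = kind === "semantic" ? 365 : 30;
  return Math.pow(0.5, ageDays / halfLifeDays);
}
```

Under these assumptions a 30-day-old episodic event has lost half its weight, while a semantic fact of the same age is still near full strength.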

🚀 Quick Install

As Plugin (recommended)

openclaw plugins install memoria-plugin

From Source (review code first)

cd ~/.openclaw/extensions
git clone https://github.com/Primo-Studio/openclaw-memoria.git memoria
cd memoria && npm install

💡 Configure after install via bash ~/.openclaw/extensions/memoria/configure.sh

Minimal manual config

Add to openclaw.json:

{
  "plugins": {
    "allow": ["memoria"],
    "entries": {
      "memoria": { "enabled": true }
    }
  }
}

Smart defaults: Ollama + gemma3:4b + nomic-embed-text-v2-moe (local, 0€).

See INSTALL.md for advanced options.


🏗️ Architecture — 21 Layers

┌──────────────────────────────────────────────────────┐
│                   MEMORIA v3.22.3                    │
├──────────────────────────────────────────────────────┤
│                                                      │
│  RECALL (before each response):                      │
│  Budget → Observations → Hybrid Search (FTS5+embed)  │
│  → Knowledge Graph → Topics → Context Tree           │
│  → Lifecycle mult × Expertise boost × Cluster penalty│
│  → Format + Inject                                   │
│                                                      │
│  CAPTURE (after each conversation):                  │
│  LLM Extract → Selective (dedup/contradiction)       │
│  → Embed → Graph → Hebbian → Topics → Observations   │
│  → Clusters → Patterns → Cross-layer connections     │
│  → Sync .md → Auto-regen                             │
│                                                      │
│  CONTINUOUS (real-time, during conversation):        │
│  message_received + llm_output → rolling buffer      │
│  → periodic/urgent/self-error triggers               │
│  → LLM Extract → same pipeline as CAPTURE            │
│                                                      │
│  LEARNING (background):                              │
│  Feedback (usefulness/recall/used) → Lifecycle       │
│  Hebbian (co-occurrence) → Topic hierarchy           │
│  Pattern detection → Consolidation                   │
│  Proactive revision → Fact evolution                 │
│                                                      │
├──────────────────────────────────────────────────────┤
│  SQLite (FTS5 + vectors) · No cloud required         │
└──────────────────────────────────────────────────────┘
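
The multiplier chain in the RECALL stage (lifecycle mult × expertise boost × cluster penalty) composes as a simple product over the hybrid-search relevance. This is a sketch with invented factor names; the actual formula is in scoring.ts:

```typescript
// How the RECALL multipliers might compose — illustrative names only.
interface RecallFactors {
  baseRelevance: number;  // hybrid FTS5 + embedding score, 0..1
  lifecycleMult: number;  // e.g. settled > fresh > dormant
  expertiseBoost: number; // topic-access-frequency boost, >= 1
  clusterPenalty: number; // down-weight near-duplicate cluster members, <= 1
}

function recallScore(f: RecallFactors): number {
  return f.baseRelevance * f.lifecycleMult * f.expertiseBoost * f.clusterPenalty;
}
```

The top-scoring facts are then formatted and injected up to the adaptive budget.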

Layer Map

| # | Layer | File | LLM? | Purpose |
|---|-------|------|------|---------|
| 1 | SQLite Core + FTS5 | db.ts | | Storage, full-text search |
| 2 | Temporal Scoring | scoring.ts | | Decay, hot tier (5+ accesses) |
| 3 | Selective Memory | selective.ts | | Dedup, contradiction check |
| 4 | Embeddings + Hybrid | embeddings.ts | | Cosine similarity (embed only) |
| 5 | Knowledge Graph | graph.ts | | Entity/relation extraction |
| 6 | Context Tree | context-tree.ts | | Hierarchical organization |
| 7 | Adaptive Budget | budget.ts | | Auto-adjust facts injected |
| 8 | Emergent Topics | topics.ts | | Keyword extraction, topic naming |
| 9 | Observations | observations.ts | | Living syntheses from evidence |
| 10 | Fact Clusters | fact-clusters.ts | | Entity-grouped summaries |
| 11 | .md Sync + Regen | sync.ts, md-regen.ts | | Write facts to workspace files |
| 12 | Fallback Chain | fallback.ts | all | Ollama → OpenAI → LM Studio |
| 13 | Procedural Memory | procedural.ts | | How-to steps, quality, gotchas |
| 14 | Lifecycle | lifecycle.ts | | fresh → settled → dormant |
| 15 | Feedback Loop | feedback.ts | | usefulness, recall_count, used_count |
| 16 | Hebbian | hebbian.ts | | Strengthen co-occurring relations |
| 17 | Identity Parser | identity-parser.ts | | Parse SOUL.md/USER.md |
| 18 | Expertise | expertise.ts | | Topic access → recall boost |
| 19 | Proactive Revision | revision.ts | | Revise settled facts via LLM |
| 20 | Behavioral Patterns | patterns.ts | | Detect + consolidate repetitions |
| 21 | Continuous Learning | index.ts (hooks) | | Real-time capture during conversation |

9 layers use an LLM via the Fallback Chain; the other 12 are purely algorithmic.
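
Layer 4's hybrid retrieval can be sketched as blending a keyword (FTS5) score with embedding cosine similarity. The equal 0.5/0.5 weighting below is an assumption, not Memoria's documented formula:

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Blend a normalized FTS5 score with semantic similarity (weights assumed).
function hybridScore(ftsScore: number, queryVec: number[], factVec: number[]): number {
  return 0.5 * ftsScore + 0.5 * cosine(queryVec, factVec);
}
```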

For scoring formulas, decay rates, and detailed pipeline descriptions, see docs/ARCHITECTURE.md.
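
Layer 16's Hebbian reinforcement (relations strengthen on co-occurrence and decay when unused) can be illustrated with a toy weight update. The learning and decay rates here are invented, not the values used in hebbian.ts:

```typescript
// Toy Hebbian-style weight update for a knowledge-graph relation.
// Constants are assumptions for illustration only.
function reinforce(weight: number, learningRate = 0.1): number {
  // Move the weight toward 1 each time two entities co-occur,
  // so it saturates instead of growing without bound.
  return weight + learningRate * (1 - weight);
}

function decay(weight: number, decayRate = 0.02): number {
  // Weaken a relation slightly for each interval it goes unused.
  return weight * (1 - decayRate);
}
```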


⚙️ Configuration

{
  "autoRecall": true,
  "autoCapture": true,
  "recallLimit": 12,
  "captureMaxFacts": 8,
  "syncMd": true,

  "llm": {
    "provider": "ollama",
    "model": "gemma3:4b"
  },

  "embed": {
    "provider": "ollama",
    "model": "nomic-embed-text-v2-moe",
    "dimensions": 768
  },

  "fallback": [
    { "provider": "ollama", "model": "gemma3:4b" },
    { "provider": "lmstudio", "model": "auto" }
  ]
}
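
The fallback array above is tried in order: if the primary LLM fails, the next provider takes over, and if all fail, facts are still stored without LLM enrichment. A minimal sketch of that loop, with invented names (`callWithFallback`, `LlmCall`):

```typescript
// Sketch of the fallback chain: try each provider in order; return null
// if every provider fails so the caller can still store raw facts.
type LlmCall = (prompt: string) => Promise<string>;

async function callWithFallback(
  chain: { name: string; call: LlmCall }[],
  prompt: string
): Promise<string | null> {
  for (const provider of chain) {
    try {
      return await provider.call(prompt);
    } catch {
      // Provider unreachable or errored — fall through to the next one.
    }
  }
  return null; // all providers failed; skip extraction, keep the data
}
```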

Supported Providers

| Provider | LLM | Embeddings | Cost |
|----------|-----|------------|------|
| Ollama | ✅ | ✅ | Free (local) |
| LM Studio | ✅ | ✅ | Free (local) |
| OpenAI | ✅ | ✅ | ~$0.50/month |
| OpenRouter | ✅ | — | Varies |
| Anthropic | ✅ | — | Varies |

📊 Benchmarks

Tested on LongMemEval-S (30 questions, 5 categories):

| Version | Accuracy | Retrieval | Key improvement |
|---------|----------|-----------|-----------------|
| v3.2.0 | 73% | 50% | Contradiction supersession + procedural |
| v3.3.0 | 75% | 43% | Query expansion + topic recall |
| v3.4.0 | 82% | 50% | Fact Clusters (multi-session +75%) |
| v3.5.0 | 82%+ | 50% | Feedback loop + cross-layer cascade |

v3.14–3.21 benchmarks pending (Sol benchmark planned).

Detailed methodology and scripts in benchmarks/.


🗺️ Roadmap

| Version | Feature | Status |
|---------|---------|--------|
| v3.0–3.5 | Core layers (1-12): FTS5, scoring, selective, graph, topics, observations, clusters, feedback, cascade | ✅ Done |
| v3.6–3.7 | Identity-aware, lifecycle, hebbian, expertise, procedural | ✅ Done |
| v3.8–3.12 | Procedural quality (reflection, alternatives, gotchas), capture quality, error detection | ✅ Done |
| v3.14–3.17 | Smarter extraction, cluster-aware recall, security/packaging fixes | ✅ Done |
| v3.18 | Cluster members table, topic hierarchy with parent inference | ✅ Done |
| v3.19 | Behavioral pattern detection (Layer 20) | ✅ Done |
| v3.20 | Cross-layer connections (feedback→lifecycle, hebbian→topics, lifecycle→patterns) | ✅ Done |
| v3.21 | Deep audit: 10 bugs fixed, full type alignment, all 20 layers validated | ✅ Done |
| v3.22 | Layer 21: Continuous Learning (real-time capture) + 6 more bug fixes | ✅ Done |
| v3.23+ | Image memory, interest profiles, LCM bridge, Sol/Luna benchmarks | 🔜 Next |

📄 License

Apache License 2.0 — see LICENSE.

Copyright 2026 Primo-Studio by Neto Pompeu.