Date: 2026-03-04
Version: dev (built from source, commit HEAD)
Platform: darwin/arm64 (Apple Silicon), Go 1.26.0
Evaluator: Claude Opus 4.6 + Brian Morin
MuninnDB is a legitimate, well-engineered project with real substance behind most of its claims. The cognitive science primitives are mathematically grounded, the code is clean, and the test coverage is excellent. However, the marketing oversells several aspects, and the out-of-box experience falls short of the "just works" promise.
Verdict: 7/10 — Real product, inflated marketing.
VERDICT: MISLEADING
- go.mod lists 22 direct dependencies including CockroachDB Pebble (storage engine), ONNX Runtime, gRPC, Prometheus, msgpack, crypto libs
- What they mean: "no external services to run" (no Redis, Kafka, Postgres)
- What you read: "zero dependencies" — which is flatly false
- The single-binary claim IS true. No runtime infrastructure needed.
- Marketing grade: D — technically defensible but deceptive phrasing
VERDICT: MOSTLY TRUE, WITH CAVEATS
- Without embeddings (FTS only): 0.07-0.26ms — absurdly fast
- With local ONNX embeddings: avg 15.3ms, min 14.9ms, max 20.8ms over 50 runs
- Average is under 20ms. But 1 out of 50 runs exceeded 20ms.
- This is on 7-12 engrams. At scale (thousands), unknown.
- Marketing grade: B — holds up on small datasets, needs qualification
VERDICT: IMPLEMENTED BUT HEBBIAN DOESN'T FIRE IN PRACTICE
- Ebbinghaus decay: REAL. `EbbinghausWithFloor()` implements the exact formula. Proven in `mechanics_proof_test.go` with mathematical checkpoint verification.
- Bayesian confidence: REAL. `BayesianUpdate()` with Laplace smoothing. Tests verify worked examples.
- Hebbian learning: CODE IS REAL, BUT DOESN'T WORK OUT OF THE BOX.
- Worker goroutine starts and goes dormant immediately
- After 20+ co-activations and waiting past the 60s cycle, `processed: 0`
- Unit tests pass. Integration tests pass. The running server's Hebbian worker never processes anything.
- This is a serious gap between code and actual behavior.
- Marketing grade: C — the math is real, two of three primitives work, Hebbian is dead in practice
VERDICT: PARTIALLY TRUE (VIA VECTOR SIMILARITY, NOT HEBBIAN)
- The graph traversal endpoint reveals auto-associations — but these come from HNSW vector nearest-neighbor indexing, not Hebbian co-activation
- The auto-association system (autoassoc package) creates edges based on embedding similarity at write time
- Hebbian was supposed to strengthen these over time. It doesn't fire.
- Marketing grade: C+ — associations exist, but the "strengthen with use" part doesn't work
VERDICT: TRUE, BUT REQUIRES BUILD FLAG
- `bge-small-en-v1.5` ONNX model, 384-dim, in-process inference
- Works perfectly once you build with `-tags localassets`
- Default `go build` produces a binary WITH NO EMBEDDINGS (semantic_similarity: 0 everywhere)
- The distributed binary presumably includes assets. Building from source does not unless you run `make fetch-assets` first.
- When it works: semantic similarity is legit. "ancient warriors defending a mountain pass" correctly matched Thermopylae with no keyword overlap.
- Marketing grade: B+ — excellent feature, bad default build experience
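Putting the build caveats together, a from-source build that actually includes the embedder looks like this. Both commands are taken from the review's notes; the Makefile target and build tag are as reported, not independently re-verified:

```shell
# Fetch the bge-small-en-v1.5 ONNX assets first, or the resulting
# binary will silently report semantic_similarity: 0 everywhere.
make fetch-assets

# Build with the localassets tag so the model ships inside the binary.
go build -tags localassets
```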
VERDICT: TRUE
- Exactly 35 tool definitions in `internal/mcp/tools.go`
- Each has InputSchema, descriptions, and handler implementations
- Handler code totals ~40K LOC — these are not stubs
- Marketing grade: A — accurate and substantial
VERDICT: TRUE
- 38MB without local assets, 105MB with
- No external processes needed
- Starts cleanly, shuts down cleanly
- Built-in web UI, REST, gRPC, MCP, MBP all on separate ports
- Marketing grade: A
VERDICT: NOT TESTED — would require a custom MBP client. The protocol exists in code.
VERDICT: UNVERIFIABLE — this is a benchmark claim with no public benchmark suite. Their test suite includes synthetic workflow tests that may validate this, but we can't independently confirm.
| Metric | Value | Rating |
|---|---|---|
| Go files | 544 | Substantial |
| Test files | 269 (49.6%) | Excellent |
| Test-to-code ratio | 1.5:1 (95K test LOC / 64K prod LOC) | Exceptional |
| TODO/FIXME/HACK | 2 TODOs, 0 hacks | Clean |
| Package layout | 26 internal packages, clean separation | Well-architected |
| Test suite pass rate | 39/40 packages pass | Good (1 failure is test isolation, not code bug) |
- `mechanics_proof_test.go` (48K LOC) — mathematical proof verification for cognitive formulas
- Log-space Hebbian math to prevent float overflow
- Laplace smoothing on Bayesian to prevent 0/1 extremes
- Auto-retroactive embedding (adds embeddings to existing data when embedder comes online)
- Proper worker dormancy with adaptive scaling
- WAL + Pebble storage with atomic operations
- Hebbian worker never activates in practice despite correct code
- `access_count` stays at 0 even after repeated access via `/api/activate`
- Batch create returned 0 created with no error — silent failure
- Evolve creates a new engram and soft-deletes the original (surprising behavior — more like "fork" than "evolve")
- `concept` field is empty on all engrams (requires LLM enrichment plugin)
- No telemetry: No analytics, beacons, phone-home, or tracking code
- No obfuscation: Clean, readable Go
- BSL 1.1 license: Free for small orgs, becomes Apache 2.0 in 2030
- Provisional patent: Filed Feb 2026 — standard startup IP protection
- Auth model: Reasonable — vault-level API keys, admin sessions, cluster tokens
- OpenAPI spec: Proper 3.0.3 spec served at `/api/openapi.yaml`
- Semantic recall works well — vector search + FTS fusion produces relevant results even with no keyword overlap
- Graph traversal — real, working, automatically generated from embeddings
- Architecture — clean package separation, proper Go patterns, excellent test coverage
- Evolve/restore — memory versioning with soft-delete and 7-day recovery
- Multi-protocol — REST, gRPC, MBP, MCP all running simultaneously
- Web UI — ships with a functional dashboard
- Sub-20ms activation — genuinely fast on small datasets
- Auto-retroactive embedding — smart behavior for adding capabilities after the fact
- Hebbian learning doesn't fire — the core "memories strengthen with use" claim doesn't work in practice
- "Zero dependencies" — 22 direct Go dependencies. "Zero runtime services" would be accurate.
- Default build has no embedder — semantic similarity is 0 without `-tags localassets` or external API keys
- Batch create silently returns empty — no error, no created IDs, just null
- access_count never increments — activation reads don't update access counters
- concept field always empty — requires LLM enrichment plugin (not mentioned in quickstart)
- Latency at scale unknown — 20ms claim only tested with <15 engrams
| Marketing Claim | Reality | Gap |
|---|---|---|
| "cognitive database" | Math is real, execution is partial | Medium |
| "memories evolve on their own" | Hebbian dormant, Bayesian works, decay works | Large |
| "zero dependencies" | 22 Go deps, 0 runtime services | Medium |
| "under 20ms" | avg 15ms, sometimes >20ms, small dataset only | Small |
| "35 MCP tools" | Exactly 35, non-trivial | None |
| "single binary" | True, clean | None |
| "Recall@10 +21%" | Unverifiable | Unknown |
| "just works" | Needs build flags, no embeddings OOTB | Large |
For evaluation/experimentation: Worth trying. The core recall engine is genuinely good. The semantic search + FTS fusion is well-implemented. Graph traversal is useful.
For production use: Not yet. The Hebbian worker bug means the "memories that learn" USP doesn't deliver. The silent failures on batch operations and the access-count non-updates suggest the product hasn't been battle-tested outside its test suite.
For Claude Code MCP integration: The 35-tool MCP server is real and could be useful as an AI memory layer, but you'd get roughly equivalent results from a vector DB + metadata store without the cognitive overhead.
Bottom line: Good engineering, real science, incomplete execution. The 0.2.6 version number is honest — this is early-stage software with big ambitions. The marketing just needs to catch up with what actually works vs. what's implemented but dormant.