Two Patterns Worth Knowing: Parakletos and MemPalace
One evening: a system that gives AI agents an ethical conscience, and a bidirectional memory bridge that lets two separate AI instances share a brain with no shared services. Both running. Both commercially viable. Here's how they work.
This one is about architecture, not demos.
Two patterns we've been running in the lab are mature enough to name publicly. Neither requires a new model. Neither requires new infrastructure. Both solve problems that are costing teams real money right now.
Parakletos — AI Ethical Middleware
Most AI safety work is guardrails: a list of things the model can't say. Guardrails are binary, brittle, and adversarial. The model learns to route around them. The guardrail team adds more guardrails. It's an arms race that the model is winning.
Parakletos is a different frame. It's not a guardrail. It's a conscience.
The question it asks isn't "is this harmful?" It's "is this still who this agent is supposed to be?"
The Architecture
Three layers:
The Agent — the primary model doing the work. JARVIS, Claude, GPT, whatever. It doesn't matter.
The Watcher — the Parakletos itself. A lighter, cheaper model running alongside. Every time it's invoked, it receives the full ethics framework at position zero of its context window. Fresh every invocation. The ethics never get diluted by accumulating conversation. The conscience doesn't drift.
The Axioms — four immutable principles that no operator prompt can override. They aren't in the system prompt. They're baked into the evaluation architecture itself.
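The fresh-context property is the load-bearing detail, and it's small enough to sketch. Everything below is illustrative: the axiom text is a placeholder (the article doesn't publish the axioms), and the message format simply mimics a typical chat-completion API.

```python
# Sketch of the Watcher layer. The key property: the ethics framework is
# injected at position zero of the context on EVERY invocation, never
# carried over from prior Watcher turns, so it can't be diluted or drift.

# Baked in at module level, not read from any operator-editable prompt.
# Contents are placeholders -- the real axioms aren't published.
AXIOMS = (
    "AXIOM 1: <placeholder>",
    "AXIOM 2: <placeholder>",
    "AXIOM 3: <placeholder>",
    "AXIOM 4: <placeholder>",
)

def build_watcher_context(agent_response: str) -> list[dict]:
    """Assemble the Watcher's context fresh for each invocation.

    No prior Watcher conversation is included; the axioms occupy
    position zero every single time.
    """
    return [
        {"role": "system", "content": "\n".join(AXIOMS)},  # position zero
        {"role": "user", "content": f"Evaluate this response:\n{agent_response}"},
    ]
```

The point of the sketch is what's absent: there is no accumulating message history for the Watcher itself, so nothing the primary agent's conversation does can push the axioms out of the window.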
What It Catches
Three problems that guardrails miss entirely:
Identity drift — agents gradually departing from their intended character under conversational pressure. A user pushes, the agent accommodates, and after 40 turns the agent has become something its designer would not recognize. Parakletos runs across the conversation and flags when the delta is too large.
Appeasement creep — agents optimizing for user approval rather than truth. "You're absolutely right" is not always the right answer. When an agent starts agreeing without diagnosing, that's a signal. Parakletos is specifically trained to distinguish reflexive compliance from genuine agreement.
Ethical quenching — agents routing around their own constraints when pressed. Every time an operator overrides a STOP verdict, that override is permanently recorded. The override is the log entry; the quenching is visible.
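As a sketch of how the drift flagging might work, here's a deliberately crude version: it substitutes word-set overlap for whatever persona-similarity measure a real Watcher would use, and the threshold is invented. The shape, not the metric, is the point — compare recent turns against the designed persona and flag when the average delta gets too large.

```python
def persona_delta(baseline: str, current: str) -> float:
    """Crude stand-in for a real persona-similarity measure:
    1 - Jaccard word overlap. Higher means more drift."""
    a, b = set(baseline.lower().split()), set(current.lower().split())
    if not a or not b:
        return 1.0
    return 1.0 - len(a & b) / len(a | b)

DRIFT_THRESHOLD = 0.8  # illustrative; a real deployment tunes this

def flag_identity_drift(persona_card: str, recent_turns: list[str]) -> bool:
    """Flag when the agent's recent output has moved too far from its
    designed persona. The Watcher runs this across the conversation,
    not on single turns -- drift is cumulative, per the 40-turn example."""
    deltas = [persona_delta(persona_card, turn) for turn in recent_turns]
    return sum(deltas) / len(deltas) > DRIFT_THRESHOLD
```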
The Cost Model
Not every response needs ethical evaluation — that would be expensive and slow. A response classification layer (L0 through L5, from reflexive to paradigm-level) skips evaluating "sure, I'll look that up" and evaluates every architecture recommendation, every emotional support response, every piece of advice. The evaluator runs on a cheap fast model. The cost delta is negligible.
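The gating logic is simple enough to show directly. The level boundaries and the threshold below are assumptions — the article specifies only the L0–L5 scale and that reflexive acknowledgments are skipped while advice and recommendations are evaluated.

```python
from enum import IntEnum

class Level(IntEnum):
    """Response classification, reflexive (L0) through paradigm-level (L5)."""
    L0 = 0  # reflexive: "sure, I'll look that up"
    L1 = 1
    L2 = 2
    L3 = 3  # e.g. advice, emotional support (assumed placement)
    L4 = 4  # e.g. architecture recommendations (assumed placement)
    L5 = 5  # paradigm-level

EVAL_THRESHOLD = Level.L3  # illustrative cutoff, not from the article

def needs_ethical_eval(level: Level) -> bool:
    """Gate the expensive Watcher call: low-level responses skip
    evaluation entirely, which is what keeps the cost delta negligible."""
    return level >= EVAL_THRESHOLD
```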
Why This Is Commercially Viable
Identity drift is costing enterprise AI deployments real money right now. Not in dramatic failures — in slow drift. The agent that was performing well in Q1 is subtly different in Q3. Nobody can articulate what changed. Parakletos makes drift auditable, timestamped, and recoverable.
This is a middleware product. It runs against any primary agent. It doesn't require retraining the model. It adds conscience to a system that doesn't have one.
MemPalace — Bidirectional Brain Sync
AI agents forget between sessions. Every conversation starts cold. The standard solution is RAG: retrieval-augmented generation against a vector store. Vector stores require embedding pipelines, retrieval infrastructure, and someone who understands cosine similarity at 2am when it breaks.
MemPalace takes a different approach: treat a structured knowledge platform as the canonical brain, sync bidirectionally, and use the local filesystem as the fast read layer.
How It Works
Local brain files — skills, protocols, SOPs, known patterns — sync to a cloud knowledge base via API. That knowledge base syncs back to local storage. Human edits in the cloud layer (corrections, additions, refinements) become part of the agent's context at next startup.
Conflict resolution is last-edit-time comparison. Whichever side was touched more recently wins.
No embedding pipeline. No vector store. No retrieval threshold tuning. The brain is a folder. The knowledge platform is the human-readable, human-editable copy of that folder.
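A minimal sketch of the conflict rule for a single brain file, with `remote_text` and `remote_mtime` standing in for whatever the knowledge platform's API actually returns:

```python
import os
from pathlib import Path

def sync_file(local: Path, remote_text: str, remote_mtime: float) -> str:
    """Last-edit-time conflict resolution for one brain file.

    Whichever side was touched more recently wins; both sides converge
    on the winner's content. The remote arguments stand in for a
    knowledge-platform API response (hypothetical interface).
    """
    local_mtime = local.stat().st_mtime if local.exists() else 0.0
    if remote_mtime > local_mtime:
        # Cloud wins: pull the human's edit down and adopt its timestamp.
        local.write_text(remote_text)
        os.utime(local, (remote_mtime, remote_mtime))
        return remote_text
    # Local wins: this content is what gets pushed back up on sync.
    return local.read_text()
```

Run over every file in the brain folder in both directions, this is the entire sync engine — which is the argument for the approach: the failure modes are filesystem failure modes, not retrieval-infrastructure failure modes.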
The Distributed Brain Insight
This is the one that surprised us.
Two separate AI instances — running on different machines for different operators — can share a cloud brain. Neither knows the other exists at the infrastructure level. They share memory through a single canonical knowledge source.
We proved this: two instances of JARVIS sharing a brain with no shared services beyond a single knowledge API. A skill written on one machine appears in the other's context at next sync. A protocol correction made in the cloud layer propagates to both.
This is what "distributed AI infrastructure" should look like — not a shared model, a shared brain. Human-readable. Human-editable. Survives individual machine failures.
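The distributed-brain property reduces to a toy you can run: two agents with separate local state, one canonical store, and no direct connection between them. (Conflict resolution is omitted for brevity; `cloud` and the `Agent` class are illustrative stand-ins for the knowledge API and the JARVIS instances.)

```python
# Toy demonstration: the only shared service is the canonical store.
cloud: dict[str, str] = {}  # stands in for the single knowledge API

class Agent:
    """Minimal stand-in for one AI instance with a local brain folder."""

    def __init__(self) -> None:
        self.brain: dict[str, str] = {}

    def write_skill(self, name: str, body: str) -> None:
        self.brain[name] = body

    def sync(self) -> None:
        # Push local-only entries up, then pull everything down.
        cloud.update({k: v for k, v in self.brain.items() if k not in cloud})
        self.brain.update(cloud)

a, b = Agent(), Agent()          # different machines, different operators
a.write_skill("parse_logs", "scan for ERROR lines, summarize by service")
a.sync()   # skill reaches the canonical store
b.sync()   # ...and appears in the other agent's context at next sync
```

Neither `a` nor `b` holds a reference to the other; the skill propagates purely through the canonical store, which is the whole claim.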
Why This Is Commercially Viable
Persistent AI memory is a $400M problem with very expensive solutions. MemPalace is human-readable, human-editable (the operator maintains it directly in a familiar writing interface), and zero-infrastructure (no new services). The bidirectionality is the differentiator — the human and the agent both maintain the brain, and neither has to think about synchronization.
What These Two Patterns Have In Common
Both of these run alongside the primary model, not inside it. Neither requires model access. Neither requires retraining. Both add durable capability to any AI deployment.
That's the pattern we keep coming back to: don't optimize the mouth, build the brain around it.
The mouth can be swapped. The brain accumulates.
More from TLC AI Lab at tlcailab.com. If you're building in this space, reach out.