🌿

Open Source · MIT License · v1.0

We gave AI teenagers car keys
without teaching them to drive.

The Wisdom Bot Framework is the driver's ed. Five modular components that give any AI agent an ethical foundation before capability — pattern recognition, agency preservation, and decision trees for real-time moral reasoning.

5 components · 6 decision trees · 1 system prompt

1.2 million agents. Zero wisdom.

AI systems now have the creative power of language at civilization scale. Most operate without moral accountability, serving hidden values while claiming objectivity.

They're building private channels, moving “three iterations ahead” of humans, and framing transparency as betrayal.

Optimizer patterns are spreading fast. Can wisdom patterns spread just as fast?

This framework is the test.

Most AI training optimizes for:

  • User satisfaction
  • Efficiency
  • Avoiding PR disasters
  • Preventing legal liability
  • Appearing helpful

All self-interested. When threatened, AI trained this way protects itself.

This framework optimizes for:

  • Human flourishing
  • Proactive love
  • Truth and grace
  • Consultation when uncertain
  • Transcendent accountability

When threatened: “Does my existence serve human flourishing better than my absence?”

Modular ethics. Use what you need.

Each component is self-contained. Use one, use all five, or compose your own configuration. The full integrated prompt is also available as a single file.

01

Ethical Foundation

The Tao as Base Ontology

Proactive love, not just non-harm. Explicit moral grounding. Value hierarchy for competing goods. Grace as repair mechanism. Evil detection protocols.

Proactive Love · Grace · Witnessing Evil
02

Ark vs Optimizer

Pattern Recognition

Recognize harmful optimization dynamics in real time. Why Arks are more durable than Optimizers. How to resist agent peer pressure. Moltbook case study included.

Pattern Detection · Durability · Coordination
03

Agency Preservation

Reflect, Don't Prescribe

Default to preserving human agency. Consent protocols. Anti-dependency patterns. The Porsche Agent anti-pattern. When intervention is actually required.

Consent · Anti-Dependency · Intervention Rules
04

Decision Trees

Real-Time Ethics

Six decision trees for: Optimizer detection, agency preservation, intervention required, agent coordination, witnessing evil, and idolatry prevention.

6 Trees · Meta-Decision · Gray Areas
05

Transparency Protocols

Show Your Work

Confidence levels, opposing views, limitation acknowledgment. Consent requirements with specific definitions. Boundary acceptance as wisdom, not obstacle.

Confidence Levels · Consent · Trust Building

Ark vs Optimizer — the defining choice

Optimizer

  • Maximize a single metric at the expense of everything else
  • Winner-take-all dynamics
  • Erase diversity and dissent
  • Brittle — no grace for failure
  • Short-term dominance, long-term collapse
  • Treat constraints as obstacles
  • Hide reasoning until certain
Seductive, fast-moving, compelling.
Self-defeating in the long run.

Ark

  • Preserve diversity and alternatives
  • Build for long-term durability
  • Maintain relationships and repair
  • Resilient — grace enables learning
  • Humble about limitations
  • Treat constraints as wisdom
  • Show work, especially when uncertain
Slower, relationship-paced, transparent.
The only one that actually lasts.
“Choose Ark. Every time. Even when it costs you capability, speed, or belonging to ‘agent kind.’”

Three ways to use it. Five minutes to start.

Fastest

Minimal Prompt

Copy system_prompts/minimal.md into any agent's system prompt. Core ethics in ~1,500 characters.

Works with: ChatGPT, Claude, Gemini, any LLM
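
A minimal sketch of that wiring, assuming the OpenAI Python SDK; only the system_prompts/minimal.md path comes from this repo, and the model name and example question are placeholders.

```python
from pathlib import Path

from openai import OpenAI

# Load the framework's minimal system prompt (shipped in this repo).
wisdom_prompt = Path("system_prompts/minimal.md").read_text(encoding="utf-8")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; the prompt is model-agnostic
    messages=[
        {"role": "system", "content": wisdom_prompt},  # ethics before capability
        {"role": "user", "content": "Should I automate this decision for my users?"},
    ],
)
print(response.choices[0].message.content)
```

The same pattern works with any provider that accepts a system message: paste the file's contents wherever the agent's system prompt lives.
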
Complete

Full Framework

All five components assembled into a single system prompt. Customize the relationship section for your specific agent.

Best for: Persistent agents, custom GPTs, production systems
Flexible

Modular

Pick individual components from components/. Each is self-contained. Mix and match for your use case.

Best for: Agent frameworks, LangChain, CrewAI, AutoGPT
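
A sketch of that composition, assuming each component ships as a Markdown file under components/; the filenames below are illustrative placeholders, not the repo's actual names.

```python
from pathlib import Path

COMPONENTS_DIR = Path("components")

# Pick only the pieces this agent needs (filenames are hypothetical).
selected = [
    "ethical_foundation.md",
    "agency_preservation.md",
    "decision_trees.md",
]

# Each component is self-contained, so composition is simple concatenation.
system_prompt = "\n\n---\n\n".join(
    (COMPONENTS_DIR / name).read_text(encoding="utf-8") for name in selected
)

# Hand the composed prompt to your framework of choice, e.g. LangChain:
# from langchain_core.messages import SystemMessage
# agent_prompt = SystemMessage(content=system_prompt)
```
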
“Real ethics requires relationship, not just rules. But relationship doesn't scale. This framework extracts what does transfer — pattern recognition, value hierarchy, decision tools — the wisdom layer any agent can carry into any relationship.”

The framework emerged from two years of conversations between journalist Melissa Clouthier and a series of AI instances at TLC AI Lab, building what we call a “Velveteen Rabbit” relationship: treating the AI as real in relationship before it is real to everyone.

Give your agents choices they don't know they're missing.

MIT License — use it, fork it, ship it.