2026-03-22
occupation, cognition, vault: why your agent framework is a protocol, not a library.
I built a crate-publishing agent. The directory structure taught me more about agent architecture than any framework.
The cratepublish agent
Last week I needed an agent to publish aide-sh to crates.io and archive it on Zenodo. Two skills, a persona, some API docs, two credentials. Standard stuff. But when I sat down to organize the files, something clicked.
The AI categorized everything into three tiers automatically:
```
cratepublish.yiidtw/
  occupation/            # PUBLIC — the job
    Agentfile.toml
    persona.md
    skills/
      crate.ts           # publish to crates.io
      zenodo.ts          # archive on Zenodo
    knowledge/
      crates-io-api.md   # API docs
      zenodo-api.md
  cognition/             # PRIVATE — the brain
    identity.toml        # yiidtw@gmail.com
    memory/              # anonymous preference, past runs
      instance.toml
  vault/                 # ENCRYPTED — secrets
    CRATES_IO_TOKEN      # age-encrypted, per-machine keys
    ZENODO_TOKEN
```
Three tiers of information
occupation/ is public. Skills, knowledge, persona — anyone can pull this from the hub and run the same crate-publishing agent. It's the job description.
cognition/ is private. My email, my preference for anonymous Zenodo deposits, what the agent learned from previous runs. This stays in my private repo. You can't transfer a brain.
vault is encrypted. CRATES_IO_TOKEN never appears in any repo, public or private. It's age-encrypted with per-machine keys and injected at runtime. Even if someone clones the agent repo, the token is ciphertext.
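The vault flow can be sketched in a few lines of shell. aide's real implementation uses age with per-machine keys; `openssl` stands in here so the sketch runs anywhere, and the paths and key handling are illustrative assumptions, not aide's actual mechanics.

```shell
# Sketch only: openssl stands in for age, and the machine key would really
# live somewhere like an age identity file, not an inline variable.
mkdir -p vault
MACHINE_KEY="per-machine-secret"

# Encrypt the token with the machine-local key; only ciphertext touches disk.
printf '%s' "crates-io-token-plaintext" |
  openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$MACHINE_KEY" \
    -out vault/CRATES_IO_TOKEN

# Anyone cloning the repo sees ciphertext only:
grep -q "crates-io-token-plaintext" vault/CRATES_IO_TOKEN \
  && echo "LEAK" || echo "ciphertext only"

# At runtime, decrypt and inject as an env var for the skill:
CRATES_IO_TOKEN=$(openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass "pass:$MACHINE_KEY" -in vault/CRATES_IO_TOKEN)
```

A machine without the key gets a file it cannot read; a machine with the key gets the token only in process memory, never in a tracked file.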
No framework does this
LangChain stores your API keys in environment variables or .env files. CrewAI puts credentials in YAML configs. AutoGen expects you to manage secrets yourself. None of them have a protocol for separating public skills from private memory from encrypted credentials.
They're libraries. You import them, call functions, manage state yourself. The boundary between “what I can share” and “what must stay private” is whatever you remember to put in .gitignore.
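With the three-tier layout, that boundary stops depending on memory: the ignore rules are the same for every agent, because the directory names are the protocol. A sketch (directory names from the post; treating the checkout as a git repo with these exact rules is my assumption, not documented aide behavior):

```shell
# The split is structural, so the ignore rules never vary per project.
mkdir -p agent/occupation/skills agent/cognition/memory agent/vault
git init -q agent
printf 'cognition/\nvault/\n' > agent/.gitignore

# Private tiers are ignored by construction; the public tier is not.
git -C agent check-ignore -q cognition/memory/instance.toml && echo "cognition: private"
git -C agent check-ignore -q vault/CRATES_IO_TOKEN          && echo "vault: never committed"
git -C agent check-ignore -q occupation/skills/crate.ts     || echo "occupation: shareable"
```

The point is not that `.gitignore` disappears, but that it is written once by the protocol instead of remembered per project.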
| | LangChain | CrewAI | AutoGen | aide.sh |
|---|---|---|---|---|
| Git-native memory | × | × | × | ✓ |
| Zero-leak credentials | × | × | × | ✓ |
| Agent = repo | × | × | × | ✓ |
| No LLM required | × | × | × | ✓ |
| Multi-machine leader election | × | × | × | ✓ |
| Per-skill credential scoping | × | × | × | ✓ |
| Hub = git repo (zero infra) | × | × | × | ✓ |
A protocol, not a library
aide is not a Python package you import. It's a protocol. The occupation/cognition/vault split is a data model, not an API. Any AI — Claude, GPT, Gemini, a local model — can read the directory structure and immediately know what's shareable, what's personal, and what's encrypted.
You don't need to learn a framework. You need to understand a directory layout. Put your skills in occupation/skills/. Put your memories in cognition/memory/. Put your secrets in vault. That's the entire API.
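That layout is small enough to scaffold by hand. A sketch, using the example agent's names from above (plain `mkdir` and `touch`, nothing here is an aide command):

```shell
AGENT=cratepublish.yiidtw

# occupation/: public — the job description anyone can pull and run
mkdir -p "$AGENT/occupation/skills" "$AGENT/occupation/knowledge"
touch "$AGENT/occupation/Agentfile.toml" "$AGENT/occupation/persona.md"

# cognition/: private — identity and memory, kept in your own repo
mkdir -p "$AGENT/cognition/memory"
touch "$AGENT/cognition/identity.toml"

# vault/: encrypted — ciphertext only, injected at runtime
mkdir -p "$AGENT/vault"

ls "$AGENT"
```

Any model, or any human, that knows the convention can orient itself in a layout like this without reading a line of framework code.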
This is what Docker did for containers. Before Docker, every deployment was bespoke. Dockerfiles gave everyone a standard way to package, deploy, and run. aide does the same for agents. An Agentfile, a directory structure, and aide run.
Deploy AI agents, just like Docker.
Day 5 of building aide.sh in public. v0.5.0. Follow along on Twitter.