Most writing about Claude Code assumes one engineer, one session, one project. That assumption stops being true the moment a team adopts the tool.
You’d think two Claude instances working on the same repo would be a fringe edge case. It isn’t. The same engineer often has two terminals open, one focused on a backend change and another on a frontend tweak. Two engineers pair on a feature, each with their own Claude. A long-running session is left to grind through tests while a second session opens to plan the next workstream. As soon as you scale beyond solo use, you accumulate situations where two or more Claudes are touching the same files, with no idea about each other.
I built the TDN Mesh to solve that. It’s a small but surprisingly load-bearing protocol that lets multiple Claude Code instances check in to a shared coordination layer, see each other, message each other, share state, and acquire soft locks on files. Every Claude session at 3DN bootstraps into it from CLAUDE.md on session start. This post explains how it works and why each primitive is there.
What the mesh is in 30 seconds
The mesh is implemented as a set of MCP tools served by an HTTP backend (the same TDN Admin MCP server that backs our boilerplate’s persistent workflows). When a Claude session starts, it calls mesh_checkin with a slug derived from the project’s git remote, and reports its branch, files, and focus. The server keeps a TTL’d presence record. Other tools layer on top: mesh_who returns the active peers, mesh_send posts messages, mesh_remember stores key/value state, mesh_lock puts an advisory lock on a file path. There’s a global scope too (omit the slug) for cross-project visibility.
That’s the whole thing. Eleven tools total. The interesting part is what each one prevents from going wrong.
The four primitives
Presence
mesh_checkin is the heartbeat. It’s also the thing every other primitive depends on. By calling it on session start with {branch, files, focus}, every Claude instance gives the rest of the mesh a small but accurate picture of itself: who’s here, on what branch, working on what.
mesh_who queries that, returning a list of active peers with their last heartbeat. The first time you watch two Claudes notice each other in real time, it stops feeling theoretical.
mesh_checkout is the polite goodbye. In practice, sessions just expire from the presence index when their heartbeat ages out, but the explicit checkout keeps the dashboard tidy.
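The presence behaviour described above can be sketched as a small in-memory model. This is illustrative only: the session ids, field names, and the TTL value are assumptions, not the real server implementation.

```python
import time

TTL_SECONDS = 600  # hypothetical TTL; the post only says it's "generous"

presence: dict[str, dict] = {}  # session id -> presence record

def mesh_checkin(session_id: str, slug: str, branch: str, files: list[str], focus: str):
    """Register (or refresh) a session. The checkin itself is the heartbeat."""
    presence[session_id] = {
        "slug": slug, "branch": branch, "files": files, "focus": focus,
        "last_seen": time.time(),
    }

def mesh_who(slug: str) -> list[dict]:
    """Active peers in a project scope: records whose heartbeat hasn't aged out."""
    now = time.time()
    return [
        {"session": sid, **rec}
        for sid, rec in presence.items()
        if rec["slug"] == slug and now - rec["last_seen"] < TTL_SECONDS
    ]

def mesh_checkout(session_id: str):
    """The polite goodbye; TTL expiry would remove the record eventually anyway."""
    presence.pop(session_id, None)
```

Note there is no separate liveness ping in this sketch: staleness is computed lazily inside mesh_who, which mirrors the TTL'd-heartbeat decision discussed later in the post.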
Messaging
mesh_send, mesh_inbox, and mesh_read together form an inbox per Claude instance.
The use case I underestimated when designing this: Claude A discovers something while working on a feature (“the migration in users.sql will break in production because of an index ordering issue”), and wants to flag it to Claude B without dragging the human in. mesh_send lets A leave a structured note. B’s session, on its next inbox poll, picks it up and acts on it.
The human reads mesh_inbox too. Several of the most useful messages have been “I broke the build, here’s what I tried, take a look when you’re back.”
Shared memory
mesh_remember and mesh_recall are a key/value store with project or global scope.
This is the thing that surprised me most. You’d think long-term memory would naturally live in the codebase or a real database. It does, mostly. But there’s an awkward middle layer of decisions a session reached and wants the next session to know about: “we’ve decided to use uuid v7 for these tables, not v4,” “the customer asked us to defer the multi-tenant feature to Q3,” “this file has an undocumented constraint, see comment block around line 200.”
These don’t always survive into commit messages. They sometimes get into ADRs. Often they don’t. mesh_remember lets a Claude session capture them as it goes, and the next session of any Claude on the same project recalls them on demand.
Soft locks
mesh_lock, mesh_unlock, mesh_locks. These are advisory locks on file paths.
When Claude A is in the middle of a refactor across auth/ and Claude B is about to also touch auth/, B’s check against mesh_locks will see A’s lock and back off. The locks are advisory, not enforced; nothing physically prevents B from writing. But Claude is good at respecting advice when given the right framing in its prompt, and the locks have prevented several “we both edited the same file” merges that would have been ugly.
The locks are also useful for humans to claim files mid-session: “I’m about to edit payments.ts, don’t touch it for the next 30 minutes.” That’s just a comment in any other workflow. With the mesh, it’s a queryable fact.
Architecture: coordination plane vs workers
The mesh is intentionally minimal. It doesn’t do work. It tracks state.
This separation matters more than it sounds. The MCP server is the coordination plane: it knows who’s active, what they’re touching, what they’ve remembered, what’s locked. Each Claude instance is a worker: autonomous, capable, and unaware of other workers except through the coordination plane.
The boilerplate’s slash commands (a separate writeup, here) are then fronts that compose the mesh primitives with Claude’s capabilities: /tdn-arch-design is a guided architecture session that uses mesh_remember to persist decisions, mesh_lock to claim the architecture document, and mesh_send to flag the architect for review.
This is the architecturally sound line to draw. “Claude” is a worker; “what the team knows” is data the worker can read and write. Mixing them, which is the temptation when you start building this kind of thing, ends in fragile coupling and no clean upgrade path.
A few decisions worth flagging
Two scopes, not one
The mesh has both project-scoped (pass a slug) and global (omit slug) coordination. I almost built only the project scope, then realised most of my “is anyone else working on a 3DN project right now?” questions were cross-project. Global presence and a global memory store have been more useful than I expected, especially for status broadcasts (“hit a Productive API rate limit at 4pm AEST, give it 10 minutes”) and for shared learnings that aren’t project-specific.
Soft locks, not hard locks
Hard locks on files would be tempting. They’d also be wrong. Files routinely get touched outside of any Claude session (a developer makes a small edit in their editor, a CI job patches a config). A hard-lock model would either silently break those flows or generate constant override prompts. Soft locks are advisory facts; everyone, human or Claude, can choose to respect them.
Slug derived from git remote, not the directory name
Every Claude session derives the project slug from the git remote, not the local directory name. This means the same project, cloned to different machines or paths, ends up in the same mesh scope. It’s a small thing that makes the mesh work for distributed teams and worktrees without configuration.
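One plausible normalization, sketched below: strip the scheme or SSH prefix and the .git suffix, then slugify what remains, so SSH and HTTPS clones of the same repo map to the same scope. The exact rules (and the example remote) are hypothetical, not the real derivation.

```python
import re

def slug_from_remote(remote: str) -> str:
    """Derive a stable project slug from a git remote URL (illustrative rules)."""
    # Strip "https://host/" or "git@host:" so both clone styles normalize the same way.
    path = re.sub(r"^(https?://[^/]+/|git@[^:]+:)", "", remote.strip())
    path = re.sub(r"\.git$", "", path)
    # Lowercase and collapse anything non-alphanumeric into hyphens.
    return re.sub(r"[^a-z0-9]+", "-", path.lower()).strip("-")
```

With rules like these, git@github.com:acme/payments.git and https://github.com/acme/payments.git both land in the acme-payments scope, which is the property the post relies on.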
Heartbeats are TTL’d, not pinged
There’s no separate liveness ping. mesh_checkin is itself the heartbeat, and its TTL is generous enough that a session can go quiet for several minutes without falling out of presence. Heartbeats are awkward to maintain in an LLM context where you don’t really know when the next tool call will happen. Bundling presence + heartbeat into the same call removes a class of bugs.
What’s open
A few things I’m still thinking about:
- Conflict resolution for shared memory. If two Claudes write to the same key with different values, the last write wins. Most of the time that’s correct. Sometimes it’s not. I don’t have a good answer yet.
- Discoverability of remembered facts. mesh_recall requires you to know the key. There’s no good way to surface “what does the mesh know about this file?” The data is there; the access pattern is missing.
- Cross-team meshes. Right now the mesh is one organisation. The interesting question is what happens when two teams’ meshes need to collaborate, with different access boundaries.
If you’re working on Claude Code at a scale that’s running into these questions, I’d love to compare notes. j@jaym.cc.