Parallel coding agents with tmux and Markdown specs

(schipper.ai)

68 points | by schipperai 5 hours ago

10 comments

  • ramoz 30 minutes ago
    I did a sort of bell-curve loop with this type of workflow over the summer.

    - Base Claude Code (released)

    - Extensive, self-orchestrated, local specs & documentation; ie waterfall for many features/longer term project goals (summer)

    - Base Claude Code (today)

    Claude Code is getting better at orchestrating its own subagents for divide-and-conquer type work.

    My problem with these extensive self-orchestrated multi-agent / spec modes is the drift and rot across all the changes and integrated parts of an application, which a lot of the time end up in merge conflicts. Aside from taxing my own decision-making cognitive space, it's also a lot to orchestrate and review in general. I spent a ton of time enforcing that Claude use the system I put in place, including documentation updates and continuous logging of work.

    I feel extremely productive with a single Claude Code per project. Maybe for minor features I'll launch Claude Code on the web so that it can operate in an isolated space, knock them out, and create a PR.

    I will plan and annotate extensively for large features, but not for many features or broad project specs all at the same time. Annotation and better planning UX, I think, are going to be increasingly important. The only augmentation of Claude Code I use is a hook for plan-mode review: https://github.com/backnotprop/plannotator

    • schipperai 4 minutes ago
      The merge conflicts and cognitive load are indeed two big struggles with my setup. Going back to a single Claude instance, however, would mean I’m waiting for things to happen most of the time. What do you do while Claude is busy?
  • gas9S9zw3P9c 2 hours ago
    I'd love to see what is being achieved by these massive parallel agent approaches. If it's so much more productive, where is all the great software that's being built with it? What is the OP building?

    Most of what I'm seeing is AI influencers promoting their shovels.

    • fhd2 34 minutes ago
      Even if somebody shows you what they've built with it, you're none the wiser. All you'll know is that it seemingly works well enough for a greenfield project.

      The jury is still very far out on how agentic development affects mid/long term speed and quality. Those feedback cycles are measured in years, not weeks. If we bother to measure at all.

      People in our field generally don't do what they know works, because by and large, nobody really knows, beyond personal experiences, and I guess a critical mass doesn't even really care. We do what we believe works. Programming is a pop culture.

    • ecliptik 1 hour ago
      It's for personal use, and I wouldn't call it great software, but I used Claude Code Teams in parallel to create a Fluxbox-compatible window compositor for Wayland [1].

      Overall effort was a few days of agentic vibe-coding over a period of about 3 weeks. It would have been faster, but the parallel agents burn through tokens extremely quickly and hit Max plan limits in under an hour.

      1. https://github.com/ecliptik/fluxland

    • conception 2 hours ago
      People are building software for themselves.
      • jvanderbot 1 hour ago
        Correct. I've started recording what I've built (here https://jodavaho.io/posts/dev-what-have-i-wrought.html ), and it's 90% for myself.

        The long tail of deployable software always strikes at some point, and monetization is not the first thing I think of when I look at my personal backlog.

        I also am a tmux+claude enjoyer, highly recommended.

        • digitalbase 12 minutes ago
          tmux too.

          Trying workmux with Claude. Really cool combo.

      • hinkley 1 hour ago
        I’ve known too many developers and seen their half-assed definition of Done-Done.

        I actually had a manager once who would say Done-Done-Done. He’s clearly seen some shit too.

    • haolez 2 hours ago
      The influencers generate noise, but the progress is still there. The real productivity gains will start showing up at market scale eventually.
    • linsomniac 1 hour ago
      In my view, these agent teams have really only become mainstream in the last ~3 weeks since Claude Code released them. Before that they were out there but were much more niche, like in Factory or Ralphie Wiggum.

      There is a component of this that keeps a lot of the software built with these tools underground: there are a lot of very vocal people who are quick with downvotes and criticisms of things built with AI tooling, criticisms that wouldn't have been applied to the same result (or even a poorer one) if it had been produced by a human.

      This is largely why I haven't released one of the tools I've built for internal use: an easy status dashboard for operations people.

      Things I've done with agent teams:

      - Added a first-class ZFS backend to Ganeti

      - Rebuilt our "icebreaker" app that we use internally (largely to add special effects and make it more fun)

      - Built a "filesystem Swiss Army knife" for Ansible

      - Converted a Lambda function that does image manipulation and watermarking from Pillow to pyvips, and also had it build versions in Go, Rust, and Zig for comparison's sake

      - Built tooling for regenerating our cache of watermarked images using new branding

      - Had it connect to a pair of MS SQL test servers and identify why log shipping was broken between them

      - Built an Ansible playbook to deploy a new AWS account

      - Made a web app that does simple video poker (a demo for the local users group; someone there was asking how to get started with AI)

      - Had it brainstorm and build 3 versions of a crossword-themed daily puzzle (just to see what it'd come up with; my wife and I are enjoying TiledWords and I wanted to see what AI would produce)

      Those are the most memorable things I've used the agent teams to build in the last 3 weeks. Many of those things are internal tools or just toys, as another reply said. Some of those are publicly released or in progress for release. Most of these are in addition to my normal work, rather than as a part of it.

      • schipperai 33 minutes ago
        Further, my POV is that coding agents crossed a chasm only last December, with the Opus 4.5 release. Only since then have these kinds of agent-team setups actually worked. It’s early days for agent orchestration.
    • schipperai 1 hour ago
      I work for Snowflake and the code I'm building is internal. I'm exploring open sourcing my main project which I built with this system. I'd love to share it one day!
    • verdverm 2 hours ago
      There are dozens and dozens of these submitted to Show HN, though increasingly without the title prefix now. This one doesn't seem any more interesting than the others.
      • schipperai 1 hour ago
        I picked up a number of things from others sharing their setups. While I agree some aspects of these are repetitive (like using md files for planning), I do find useful things here and there.
    • calvinmorrison 59 minutes ago
      I built an Erlang-based chat server implementing a JMAP extension; Claude wrote the RFC and then wrote the server for it.
      • mrorigo 54 minutes ago
        Erlang FTW. I remember the days at the ol' lab!
        • calvinmorrison 51 minutes ago
          I have no use for it at my work (I wish I did), so I did this project for fun instead.
    • calvinmorrison 59 minutes ago
      I wrote a cash-flow-tracking finance app in Qt6 using Claude and have been using it since Jan 1 to replace my old spreadsheets!

      https://git.ceux.org/cashflow.git/

    • karel-3d 31 minutes ago
      Look at Show HN. Half of it is vibe-coded now.
  • aceelric 57 minutes ago
    I’ve been experimenting with a similar pattern, but wrapping it in a “factory mode” abstraction (we’re building this at CAS [1]): you define the spec once, after careful planning with a supervisor agent, then let it go and spin up parallel workers against it automatically. It handles task decomposition + orchestration so you’re not manually juggling tmux panes.

    [1] https://cas.dev

    • schipperai 49 minutes ago
      Do parallel workers execute on the same spec? How do you ensure they don't clash with each other?
      • aceelric 38 minutes ago
        The supervisor handles this. If it sees that workers can collide, it spawns them in worktrees while it handles the merging and cherry-picking.
        • schipperai 29 minutes ago
          Do you find the merging agent reliable? I had a few bad merges in the past that make me nervous about just letting agents take care of it.
          • aceelric 16 minutes ago
            Opus 4.6 is great at this compared to other models
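    The worktree flow described in this subthread can be sketched roughly as follows. The repo path, branch names, the two "workers", and their file contents are all illustrative assumptions; a real supervisor agent would drive equivalent git commands itself.

    ```shell
    # Sketch: isolate two workers in git worktrees, then merge their work back.
    repo=$(mktemp -d)
    git init -q "$repo"
    cd "$repo"
    git -c user.email=bot@example.com -c user.name=bot commit -q --allow-empty -m init

    # One worktree (isolated checkout) per worker, each on its own branch
    git worktree add -q "$repo-auth" -b agent/auth
    git worktree add -q "$repo-ui"   -b agent/ui

    # Each "worker" commits independently in its own checkout
    (cd "$repo-auth" && echo login >auth.txt && git add auth.txt &&
      git -c user.email=bot@example.com -c user.name=bot commit -q -m "auth: login")
    (cd "$repo-ui" && echo navbar >ui.txt && git add ui.txt &&
      git -c user.email=bot@example.com -c user.name=bot commit -q -m "ui: navbar")

    # The supervisor merges both branches back; cherry-pick is the alternative
    # when only some commits from a worker branch should land
    git merge -q agent/auth
    git -c user.email=bot@example.com -c user.name=bot merge -q --no-edit agent/ui
    ```

    When the workers touch the same files, the final merge is where conflicts surface, and that merge is the step the supervisor (or a human) has to review.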
  • CloakHQ 1 hour ago
    We ran something similar for a browser automation project - multiple agents working on different modules in parallel with shared markdown specs. The bottleneck wasn't the agents, it was keeping their context from drifting. Each tmux pane has its own session state, so you end up with agents that "know" different versions of reality by the second hour.

    The spec file helps, but we found we also needed a short shared "ground truth" file the agents could read before taking any action - basically a live snapshot of what's actually done vs what the spec says. Without it, two agents would sometimes solve the same problem in incompatible ways.

    Has anyone found a clean way to sync context across parallel sessions without just dumping everything into one massive file?

    • briantakita 1 minute ago
      I've been building agent-doc [1] to solve exactly this. Each parallel Claude Code session gets its own markdown document as the interface (e.g., tasks/plan.md, tasks/auth.md). The agent reads/writes to the document, and a snapshot-based diff system means each submit only processes what changed — comments are stripped, so you can annotate without triggering responses.

      The routing layer uses tmux: `agent-doc claim`, `route`, `focus`, `layout` commands manage which pane owns which document, scoped to tmux windows. A JetBrains plugin lets you submit from the IDE with a hotkey — it finds the right pane and sends the skill command.

      For context sync across agents, the key insight was: don't sync. Each agent owns one document with its own conversation history. The orchestration doc (plan.md) references feature docs but doesn't duplicate their content. When an agent finishes a feature, its key decisions get extracted into SPEC.md. The documents ARE the shared context — any agent can read any document.

      It's been working well for running 4-6 parallel sessions across corky (email client), agent-doc itself, and a JetBrains plugin — all from one tmux window with window-scoped routing.

      [1] https://github.com/btakita/agent-doc

    • schipperai 38 minutes ago
      I avoid this with one spec = one agent, using worktrees if there is a chance of code clashing. Not ideal for parallelism, though.
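    A minimal sketch of that "one spec = one agent, one worktree" wiring: the spec paths, branch naming, and the `claude` prompt below are assumptions for illustration, not the commenter's actual setup. The function only prints the git/tmux commands, which keeps the sketch runnable anywhere.

    ```shell
    # Sketch: emit the git/tmux commands that give each markdown spec its own
    # git worktree and tmux window running an agent. All names are illustrative.
    launch_cmds() {
      for spec in "$@"; do
        name=$(basename "$spec" .md)
        # isolated checkout per spec, so parallel agents cannot clash on files
        echo "git worktree add ../wt-$name -b agent/$name"
        # one tmux window per spec, with the agent started inside that worktree
        echo "tmux new-window -n $name -c ../wt-$name 'claude \"implement $spec\"'"
      done
    }

    launch_cmds specs/auth.md specs/ui.md
    ```

    Piping the output to `sh` from inside a real repo (with tmux running) would execute it; printing first also gives you a chance to review what will be spawned.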
  • sluongng 1 hour ago
    Yeah, the 8-agent limit aligns well with my conversations with folks at the leading labs.

    https://open.substack.com/pub/sluongng/p/stages-of-coding-ag...

    I think we need much different tooling to go beyond a 1 human : 10 agents ratio. And much, much different tooling to achieve a higher ratio than that.

    • schipperai 53 minutes ago
      A few experiments, like Gas Town, the compiler from Anthropic, or the browser from Cursor, managed to reach the Rocket stage, though in their reports the jagged intelligence of the LLMs was eerily apparent. Do you think we also need better models?
  • philipp-gayret 20 minutes ago
    Is there a place where people like you go to share ideas around these new ways of working, other than HN? I'm very curious how they will develop. In my system, I use voice memos to capture thoughts, and they become more or less what you have as feature designs. I notice I have a lot of ideas throughout the day (Claude chews through them some time later, and when they are worked out I review its plans in Notion; I use Notion because I can upload memos into it from my phone, so it's more or less what you call the index). But ideas I can only capture as they come, otherwise they are lost, and I don't want to spend time typing them out.
    • schipperai 12 minutes ago
      I have only seen similar posts on HN or X. I’d be curious if there are more.
  • nferraz 2 hours ago
    I liked the way you bootstrap the agent from a single markdown file.
    • schipperai 1 hour ago
      I built so much muscle memory with the original system that it made sense to apply it to other projects. This was the simplest way to achieve that.
  • hinkley 1 hour ago
    These setups pretty much require the top tier subscription, right?
    • 0x457 24 minutes ago
      Even on Claude Max x1, if you run 2 agents with Opus in parallel you're going to hit limits. You can balance the model per use case, though, but I wouldn't expect it to work on any $20 plan even if you use Kimi Code.
    • schipperai 56 minutes ago
      That's a yes from my side.
      • etyhhgfff 5 minutes ago
        Is one $200 plan sufficient to run 8x Claude Code with Opus 4.6? Or what else do you need in terms of subscriptions?