How ZeroContentPipeline turns Telegram ideas into published ZeroLabs posts

We now have a dedicated Codex workflow that can turn a Telegram prompt into a researched, reviewed, and publishable ZeroLabs draft inside one content-only workspace.

Last updated: 2026-03-28 · Tested against Codex CLI v0.117.0 and the live ZeroLabs MCP on 2026-03-28

The useful part is not the bot. The useful part is the boundary.

We now run a dedicated Codex path for content work only. Telegram ideas come in through @content, get routed by ZeroRelay, and land inside ZeroContentPipeline, a standalone repo that exists purely to handle research, briefing, drafting, review, visuals planning, and publish decisions for ZeroLabs.

That matters because the content workflow no longer competes with general VPS maintenance, app fixes, or random host-level tasks. It has one workspace, one target, and one job.

Why split content work into its own repo?

The old pattern was too easy to muddy because content tasks ran alongside general VPS work. A content task could start in a good place, then drift into infra chatter, ad hoc notes, or half-finished publish logic. That is manageable for one post. It does not scale well when you want repeatable quality.

A dedicated repo fixes that in a simple way:

  1. the prompts, templates, and validators live together
  2. the run logs and artifacts stay with the job
  3. the publishing target stays explicit
  4. the review gate is part of the system, not a memory test

If you have read How Claude published directly to Labs via MCP, this is the same principle applied one level higher. The publish connection is still there, but the surrounding workflow is now first-class too.

What actually happens after a Telegram idea lands?

The current flow is straightforward.

A message tagged @content reaches ZeroRelay. ZeroRelay routes it to a dedicated Codex bridge. That bridge runs inside /home/claude/ZeroContentPipeline, not inside the broader VPS workspace. From there, Codex can turn the idea into a brief, a draft, a review pass, a visual plan, and a publish action when the gates pass.

Inside the repo, the working contract is explicit:

  • brief.md captures topic, angle, intent, claims to verify, and internal link targets
  • draft.md holds the article itself
  • review-report.json records style, SEO, facts, links, and blockers
  • visual-manifest.json records whether the post should stay text-only, use Mermaid, or wait for proof assets
  • publish-result.json records the outcome against ZeroLabs

That keeps the pipeline closer to the structure we described in How to build AI review agents for your content pipeline, but with a single content workspace instead of scattered scripts.
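A hard review gate over those artifacts can be expressed in a few lines. The field names below (`style`, `seo`, `facts`, `links`, `blockers`) mirror the review-report.json contract described above, but the exact schema and the `gate_passes` helper are assumptions for illustration.

```python
# Hypothetical gate check over review-report.json. The schema is assumed
# from the contract above: four named checks plus a list of blockers.
import json
from pathlib import Path

def gate_passes(report: dict) -> bool:
    """A draft may publish only when every check passed and nothing blocks."""
    checks = ("style", "seo", "facts", "links")
    return all(report.get(c) == "pass" for c in checks) and not report.get("blockers")

def load_and_check(path: str = "review-report.json") -> bool:
    # Read the artifact the review pass left behind and apply the gate.
    return gate_passes(json.loads(Path(path).read_text()))
```

Because the gate reads a persisted artifact rather than in-session memory, a failed check blocks publishing even if the agent that wrote the draft has long since exited.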

Why does the ZeroLabs MCP target matter here?

Because it keeps the handoff clean.

The content repo is not the site. ZeroLabs is still the site. The pipeline workspace creates and audits the artifacts, then publishes through the live ZeroLabs MCP endpoint when the post is ready. That means content operations stay separate from rendering, routing, and site runtime logic.

This pattern fits the broader idea behind the Model Context Protocol, where tools and systems expose a consistent interface instead of forcing every agent to learn bespoke integration code each time.
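At the wire level, an MCP publish is just a JSON-RPC 2.0 `tools/call` request, which is the consistent interface the protocol standardizes. The request shape below follows the MCP specification; the tool name `publish_post` and its arguments are assumptions, since the real ZeroLabs MCP endpoint defines its own tools.

```python
# Sketch of an MCP publish call at the JSON-RPC level. "tools/call" and the
# jsonrpc/id/params envelope come from the MCP spec; "publish_post" and its
# argument names are hypothetical placeholders for the ZeroLabs tool.
import json

def build_publish_request(title: str, body: str, request_id: int = 1) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "publish_post",  # hypothetical tool name
            "arguments": {"title": title, "body": body},
        },
    })
```

This is why the handoff stays clean: the pipeline only needs to produce one well-formed request, and the site never has to expose its rendering or runtime internals to the content workspace.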

For the Codex side specifically, the important detail is that we authenticated the VPS install with ChatGPT device auth instead of dropping API keys into the workflow; OpenAI's Codex sign-in flow supports this ChatGPT-backed usage (OpenAI help).

What is good about this setup, and what still needs work?

The good part is clarity.

  • Codex has one content-only workspace
  • Telegram can feed ideas into it quickly
  • the pipeline artifacts are persistent and inspectable
  • ZeroLabs remains the publish target, not the editing environment

The part that still needs more work is polish around automation depth.

Right now the system is strong on research, drafting, gating, and publish mechanics. The next layer is richer asset automation, especially around proof screenshots, Mermaid export, and tighter recurring audit loops. We already made that direction clear in You don't need an AI agent: the goal is not more moving parts for their own sake. The goal is fewer fragile handoffs.

When should you use a setup like this?

Use it when content is operational, not ornamental.

If publishing is part of your product motion, your documentation engine, or your search strategy, the workflow around the post matters as much as the words inside it. You need a system that can keep research, QA, visuals, and publish state in sync.

If you only publish occasionally, this might be too much structure. If content is becoming a repeatable channel, it is usually worth the boundary.


If you want to build the same kind of stack, start with one dedicated content workspace, one publishing target, and one hard review gate. That will get you further than adding five more agents without a workflow.