Pipe prompts through Claude on a schedule,
wired into your MCPs.
PipedAI is a managed runner for natural-language pipelines. Give it a prompt, an MCP endpoint, and a schedule — it runs Claude on your worker, and your MCP does the actual work. Sticky workers, encrypted tokens, full run history.
Bring your own machine + Claude Max — no Anthropic API key required.
Three pieces. No glue code.
You author triggers in the dashboard, register worker machines once, and PipedAI does the orchestration. Your MCP is where the actual work happens.
Write a trigger
Name + prompt + cron expression + MCP URL + service token. The token is encrypted at rest under a per-environment data key and only handed to a worker via the poll endpoint.
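Put together, a trigger is just those five fields. A minimal sketch (field names and values are illustrative, not the actual API schema):

```json
{
  "name": "nightly-digest",
  "prompt": "Summarize yesterday's open issues and post a digest.",
  "cron": "0 6 * * *",
  "mcpUrl": "https://mcp.example.com/sse",
  "serviceToken": "<stored encrypted; never returned by the API>"
}
```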
Register a worker
One npm install on any machine you control. The worker registers, polls for assigned runs, and invokes claude -p with a per-run MCP config. Sticky assignment — same worker every time.
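The per-run MCP config the worker writes before invoking Claude might look like the following — the shape follows Claude Code's `--mcp-config` file format, while the server name, URL, and header value are placeholders:

```json
{
  "mcpServers": {
    "pipeline": {
      "type": "sse",
      "url": "https://mcp.example.com/sse",
      "headers": {
        "Authorization": "Bearer <service token, decrypted for this run only>"
      }
    }
  }
}
```

The worker then runs something like `claude -p "<trigger prompt>" --mcp-config /path/to/run-mcp.json`, so Claude can call the MCP's tools for the duration of that run.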
Let it run
On schedule (or on demand), the worker picks up the run, lets Claude call your MCP, and reports the outcome. Full run history with stdout, stderr, token usage, and fault attribution.
Built for MCP-first pipelines.
Anthropic Routines is too limited for serious pipeline work. Generic job runners (Trigger.dev, Inngest, Temporal) want you to assemble the Claude+MCP plumbing yourself. PipedAI is the opinionated middle ground.
MCP-first by design
Triggers point at MCP endpoints with service tokens. The MCP is where the work happens — PipedAI is just orchestration + ledger. Swap MCPs without rewriting pipelines.
Bring your own machine
Run pipelines on hardware you control with your own Claude Max subscription. No per-token markup, no data leaves your infrastructure. Self-registers via one CLI command.
Encrypted at rest
MCP service tokens are encrypted under a per-environment data key, wrapped by a master key on Railway. Plaintext is only handed to a worker on poll, never logged or exposed in any API response.
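This is standard envelope encryption. A minimal sketch in Node.js, assuming AES-256-GCM and an `iv | tag | ciphertext` layout — the function names and blob format are illustrative, not PipedAI's actual schema:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Encrypt with AES-256-GCM; pack iv, auth tag, and ciphertext into one blob.
function encrypt(key: Buffer, plaintext: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

// Reverse: split the blob back into iv (12 B), tag (16 B), ciphertext.
function decrypt(key: Buffer, blob: Buffer): Buffer {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ct = blob.subarray(28);
  const d = createDecipheriv("aes-256-gcm", key, iv);
  d.setAuthTag(tag);
  return Buffer.concat([d.update(ct), d.final()]);
}

const masterKey = randomBytes(32);                  // held by the platform only
const dataKey = randomBytes(32);                    // per-environment data key
const wrappedDataKey = encrypt(masterKey, dataKey); // stored with the environment
const storedToken = encrypt(dataKey, Buffer.from("mcp-service-token")); // stored on the trigger

// On poll: unwrap the data key, then decrypt the token for the worker.
const tokenPlain = decrypt(decrypt(masterKey, wrappedDataKey), storedToken);
console.log(tokenPlain.toString());
```

The point of the wrapping layer: rotating the master key only means re-wrapping small data keys, never re-encrypting every stored token.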
Full run history
Every firing produces a Run record with stdout, stderr, token usage, exit code, fault class. Filter by status and date range, expand any row to see the full output. Audit log for every mutation.
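A single Run record, sketched (field names and values are illustrative):

```json
{
  "status": "succeeded",
  "exitCode": 0,
  "tokenUsage": { "input": 12840, "output": 2210 },
  "faultClass": null,
  "stdout": "...",
  "stderr": ""
}
```

On failure, `faultClass` distinguishes infra faults (worker offline, MCP unreachable) from pipeline faults (nonzero exit, Claude error), so you know which side to debug.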
Sticky assignment
A trigger always runs on the same worker until reassigned. Predictable for debugging, easy to reason about MCP token scoping, simple to attribute cost.
Multi-tenant from day one
Workspaces, environments, role-based access (viewer / operator / admin / owner), API keys with prefix-and-last-4 masking. Built for teams, not just solo operators.
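Prefix-and-last-4 masking in sketch form — the key format (`pai_live_…`) and function are invented for illustration; the full key is shown once at creation and never again:

```typescript
// Hypothetical display masking: keep the identifying prefix and the
// last 4 characters, elide the secret middle.
function maskApiKey(key: string): string {
  return `${key.slice(0, 8)}…${key.slice(-4)}`;
}

console.log(maskApiKey("pai_live_0123456789abcdef")); // "pai_live…cdef"
```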
Pick the model that fits your fleet.
Two tiers, no surprises. Both include the full dashboard, full run history, and unlimited workers + triggers. Billing details are confirmed during signup.
Run pipelines on your own hardware with Claude Max.
- Unlimited workers, triggers, runs
- Bring your own Claude Max subscription
- No per-token markup
- Full dashboard + run history
- Same encryption + multi-tenancy
- Best for: cost-sensitive teams already on Claude Max
Hands-off managed runner with metered token billing.
- Marolence runs the worker fleet
- Per-token metered billing (pass-through pricing)
- Full dashboard + run history
- Auto-retry on infra faults
- 99.5%+ heartbeat coverage SLA
- Best for: teams that want zero ops on the runner side
Billing UI is being finalized. New signups today get free access to the BYO tier; managed-worker invoicing rolls out shortly.
Retrieval and execution, finally separate.
InformedAI is the retrieval layer (RAG, embeddings, knowledge bases). PipedAI is the execution layer (scheduled, autonomous action via MCP). Most teams build both into one app. We split them — same auth, same Marolence-stack conventions, independently swappable.
Ready to schedule your first pipeline?
Sign up, register a worker on your laptop, and fire your first run inside five minutes. No credit card required.
Start free