Orchestrate AI Agents with Declarative YAML

Build production-ready multi-agent systems in minutes. Define everything in version-controlled YAML. No spaghetti code. Use AI to build your workflows.

$ npx agent-orcha init my-project
✓ Project initialized with example agents and workflows
$ cd my-project
$ npx agent-orcha start
Server running on http://localhost:3000

Get Started in Three Simple Steps

1. Initialize

One command to scaffold your project with example agents, workflows, and configurations. Start with working examples.

npx agent-orcha init my-project

2. Configure

Edit YAML files to define agents, workflows, and vector stores. Version-controlled and human-readable. No code required.

agents/
  researcher.agent.yaml
workflows/
  research-paper.workflow.yaml

3. Start

Launch the production-ready API server with streaming support, logging, and CORS. Access via REST API or web UI.

npx agent-orcha start
# Server ready at :3000

Why Agent Orcha?

Declarative AI

Define agents, workflows, and infrastructure in clear, version-controlled YAML files. Track changes in Git, collaborate with your team.

Model Agnostic

Seamlessly swap between OpenAI, Gemini, Anthropic, or local LLMs (Ollama, LM Studio) without rewriting logic. Zero vendor lock-in.
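
The agent YAML shown below references its LLM only by name (llm: name: default), so swapping providers comes down to editing one config. A hypothetical sketch of such a config file; the file layout and field names are assumptions, as only the name reference appears on this page:

```yaml
# Hypothetical llms/default.llm.yaml -- schema assumed, not documented here.
name: default
provider: openai          # swap to anthropic, gemini, or ollama
model: gpt-4o
# provider: ollama        # local alternative: same agents, no code changes
# model: llama3
```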

Universal Tooling

Leverage the Model Context Protocol (MCP) to connect agents to any external service, API, or database instantly. Extensible by design.

RAG Native

Built-in vector store integration (Chroma, Memory) makes semantic search and knowledge retrieval a first-class citizen.
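
A vector store could be declared in the same YAML style; this sketch is hypothetical, as only the store types (Chroma, Memory) and the vector:knowledge tool reference appear on this page:

```yaml
# Hypothetical stores/knowledge.vector.yaml -- field names are assumptions.
name: knowledge
type: chroma              # or "memory" for the in-process store
collection: research-notes
```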

Forever Open Source

No pricing, no commercial licenses, no vendor lock-in. MIT licensed and always will be. Built by developers, for developers, with complete transparency.

AI-First Design

Built so AI can build AI agents. YAML is perfect for LLMs to read and write. Use Claude, ChatGPT, or any AI to generate, modify, and maintain your agent configurations effortlessly.

Declarative Configuration Meets Powerful APIs

Define Your Agent

Create agents with simple YAML configuration. Reference LLM configs, define tools, set prompts.

YAML
name: researcher
description: Researches topics using web fetch and vector search
version: "1.0.0"

llm:
  name: default
  temperature: 0.5

prompt:
  system: |
    You are a thorough researcher.
    Use available tools to gather information
    before responding.
  inputVariables:
    - topic
    - context

tools:
  - mcp:fetch
  - vector:knowledge
  - function:custom-tool

output:
  format: text
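
The project scaffold also names a research-paper.workflow.yaml; a workflow might chain agents like this. This is a hypothetical sketch, with step fields and the second writer agent modeled on the agent schema above rather than taken from documentation:

```yaml
# Hypothetical workflows/research-paper.workflow.yaml -- schema assumed.
name: research-paper
description: Research a topic, then draft a paper from the findings
version: "1.0.0"

steps:
  - id: research
    agent: researcher            # the agent defined above
    input:
      topic: "{{ input.topic }}"
      context: "{{ input.context }}"
  - id: draft
    agent: writer                # hypothetical second agent
    input:
      notes: "{{ steps.research.output }}"
```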

Invoke with REST API

Simple HTTP endpoints for agent invocation, workflow execution, and vector search.

Bash
curl -X POST \
  http://localhost:3000/api/agents/researcher/invoke \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "topic": "machine learning trends",
      "context": "2024 overview"
    }
  }'
JSON Response
{
  "output": "Comprehensive research results...",
  "metadata": {
    "tokensUsed": 1523,
    "toolCalls": ["fetch", "vector_search"],
    "duration": 2341
  }
}
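
On the client side, the response shape above is easy to consume. A Python sketch that parses the sample response and summarizes its metadata; the field names come from the example above, while the assumption that duration is in milliseconds is ours:

```python
import json

def summarize_invoke_response(raw: str) -> str:
    """Summarize an agent invoke response using the fields shown above."""
    resp = json.loads(raw)
    meta = resp["metadata"]
    tools = ", ".join(meta["toolCalls"])
    # "ms" is an assumption about the duration unit, not documented here.
    return (f"{meta['tokensUsed']} tokens, {meta['duration']} ms, "
            f"tools: {tools}")

# The sample response from the docs above.
sample = '''{
  "output": "Comprehensive research results...",
  "metadata": {
    "tokensUsed": 1523,
    "toolCalls": ["fetch", "vector_search"],
    "duration": 2341
  }
}'''

print(summarize_invoke_response(sample))
# → 1523 tokens, 2341 ms, tools: fetch, vector_search
```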

Complete AI Orchestration Stack

YAML Configs: Agents, Workflows, Vector Stores, Functions

Orchestrator: Agent Executor, Workflow Engine, Tool Registry, LLM Factory

LLM Providers: OpenAI, Anthropic, Gemini, Local (Ollama)

Integrations: MCP Servers, Vector Stores, Custom Functions, REST API

Start Building AI Agents Today

Join developers building the next generation of AI systems with declarative orchestration.