MCP Tool Integration
Agent Orchestrator connects natively to MCP Gateway Pro. Every MCP server in your gateway — built-in services like RAG Engine and Media Storage, plus any server you've connected — is available to your workflow agents as tools.
How It Works
When a workflow step runs, the orchestrator automatically:
- Fetches tool schemas from your gateway for the servers listed in the agent's Tools: field
- Registers them with the LLM as callable tools
- Routes tool calls from the LLM back through the gateway to the correct MCP server
- Returns results to the LLM so it can incorporate them into its response
This happens transparently within each agent step. The agent can call tools multiple times in a multi-turn conversation loop before producing its final output. No additional configuration or credentials are needed — the orchestrator uses the same gateway connection and tenant identity as your API key.
The LLM decides when and how to call tools. The orchestrator handles routing and result serialization.
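The plumbing described above can be sketched in a few lines. This is an illustrative simulation, not the orchestrator's actual API: the gateway client, schema shape, and alias-qualified tool names are assumptions made for the example.

```python
# Hypothetical sketch of the per-step tool plumbing: fetch schemas for the
# servers in an agent's Tools: field, then route LLM tool calls back to the
# right MCP server. All names and shapes here are illustrative.

def fetch_tool_schemas(gateway, aliases):
    """Collect tool schemas for the servers listed in an agent's Tools: field."""
    schemas = []
    for alias in aliases:
        for tool in gateway[alias]:
            # Namespace each tool by its server alias so calls can be routed back.
            schemas.append({"name": f"{alias}.{tool['name']}",
                            "description": tool["description"]})
    return schemas

def route_tool_call(gateway_call, qualified_name, arguments):
    """Route an LLM tool call back to the correct MCP server via the gateway."""
    alias, tool_name = qualified_name.split(".", 1)
    return gateway_call(alias, tool_name, arguments)

# Toy gateway with one server, standing in for MCP Gateway Pro.
gateway = {"rag-engine": [{"name": "knowledge_query",
                           "description": "Semantic search over the knowledge base"}]}

schemas = fetch_tool_schemas(gateway, ["rag-engine"])
result = route_tool_call(lambda a, t, args: {"server": a, "tool": t, "args": args},
                         "rag-engine.knowledge_query", {"query": "onboarding docs"})
```

The alias prefix is one plausible way to disambiguate tools when an agent lists several servers; the real orchestrator may use a different scheme.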
Adding Tools to Agents
In your workflow markdown, add a Tools: line to any agent definition. List the MCP server aliases the agent should have access to:
```markdown
## Agents

### Researcher
Expert at finding and synthesizing information from internal documents.
Model: Sonnet
Tools: rag-engine

### Developer
Reviews code and creates GitHub issues for problems found.
Model: Sonnet
Tools: github

### Coordinator
Manages project tasks and keeps the team informed.
Model: Haiku
Tools: jira, slack
```
Agents without a Tools: line run as pure LLM agents — they reason over text inputs without external tool access.
Available Tools
Built-in Services
These are always available to every AppXen tenant. No additional setup required.
- rag-engine (Built-in): Semantic search over your knowledge base. Agents can query documents you've ingested via the RAG Engine.
  Tools: knowledge_query, knowledge_ingest, knowledge_list_sources, knowledge_delete_source, knowledge_get_chunk
- media-storage (Built-in): Upload, manage, and retrieve files and media. Agents can store and access documents, images, and other assets.
  Tools: upload_media, list_media, get_media, delete_media, get_media_url

User-Configured Servers
Any MCP server you've added through the MCP Gateway dashboard is automatically available to your workflows. Use the server alias you chose when connecting it.
- github: Repos & issues
- tavily: Web search
- neon: Postgres DB
- supabase: Backend
- zapier: Automations
- slack: Messaging
- jira: Issue tracking

The alias you use in Tools: must match the alias shown in your gateway dashboard. If you connected GitHub as my-github, use Tools: my-github.
Web Search (Tavily)
By default, agents use LLM knowledge only and cannot access the web. To give agents web access, add the Tavily server from the MCP Gallery in the dashboard. Tavily is a search API built for AI agents — it returns concise, AI-optimized text summaries rather than raw HTML, making results ideal for agent workflows.
Setup requires a free API key from tavily.com (free tier includes 1,000 searches/month). Once connected, add Tools: tavily to any agent definition:
```markdown
### Web Researcher
Searches the web for current information and synthesizes findings.
Model: Sonnet
Tools: tavily

### Full-Stack Researcher
Searches both the web and internal knowledge base.
Model: Sonnet
Tools: tavily, rag-engine
```

Extensions
Extensions like AWS CloudWatch are also available as tool aliases once enabled in your gateway. Enable them from the dashboard, provide your credentials, and reference the alias in your workflow.
Example Workflows
Knowledge Base Q&A
Search your ingested documents and produce a sourced answer:
```markdown
# Knowledge Base Q&A

## Inputs
- question: The question to answer (string, required)

## Agents

### Researcher
Expert at finding relevant information in internal documents.
Always cites sources and indicates confidence level.
Model: Sonnet
Tools: rag-engine

## Steps

### Answer
Search the knowledge base for information related to: {inputs.question}
Use the knowledge_query tool to find relevant documents, then synthesize
a comprehensive answer with source citations.

## Output
steps.answer.output
```

Research & File Workflow
Search documents, write a report, then upload it to media storage:
```markdown
# Research & Store Report

## Inputs
- topic: Research topic (string, required)

## Agents

### Researcher
Searches internal knowledge and synthesizes findings.
Model: Sonnet
Tools: rag-engine

### Writer
Writes polished reports and stores them for future reference.
Model: Sonnet
Tools: media-storage

## Steps

### Research
Search the knowledge base for everything related to {inputs.topic}.
Compile key findings with citations.

### Write & Store
Using the research: {steps.research.output}
Write a formal report on {inputs.topic}, then upload it to media
storage so the team can access it later.
```

Multi-Tool Agent
A single agent can use multiple tool servers. This example searches docs and creates a GitHub issue:
```markdown
# Documentation Gap Finder

## Inputs
- area: Product area to audit (string, required)

## Agents

### Auditor
Reviews documentation coverage and files issues for gaps found.
Thorough and systematic. Always creates actionable issue descriptions.
Model: Sonnet
Tools: rag-engine, github

## Steps

### Audit
Search the knowledge base for documentation covering {inputs.area}.
Identify any gaps, outdated content, or missing topics.
For each gap found, create a GitHub issue with a clear description
and suggested content outline.
```

How Agents Use Tools
When a step runs, the orchestrator enters a multi-turn conversation loop with the LLM:
- The agent receives the step prompt plus tool schemas
- The LLM decides to either respond with text or call a tool
- If a tool is called, the orchestrator routes it to the gateway and returns the result
- The LLM sees the tool result and may call more tools or produce its final answer
- This continues until the LLM responds with text (no more tool calls) or hits the turn limit
The default turn limit is 10 turns per step (configurable up to 50). Each turn is one LLM request-response cycle, including any tool calls within it. A simple step that calls one tool and responds uses 2 turns.
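The loop above can be sketched as follows. The turn accounting and turn limit come from this page; the LLM stub, message shapes, and function names are assumptions made for illustration.

```python
# Minimal sketch of the multi-turn conversation loop, with a stubbed LLM.
# One turn = one LLM request-response cycle, matching the accounting above.

def run_step(llm, call_tool, prompt, turn_limit=10):
    messages = [{"role": "user", "content": prompt}]
    turns = 0
    while turns < turn_limit:
        turns += 1                       # one LLM request-response cycle
        reply = llm(messages)
        if reply.get("tool_call") is None:
            return reply["text"], turns  # final text answer ends the loop
        # Route the tool call through the gateway and feed the result back.
        result = call_tool(reply["tool_call"])
        messages.append({"role": "tool", "content": result})
    return None, turns                   # turn limit hit without a final answer

# Stub LLM: calls one tool on the first turn, then answers with text.
def stub_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "knowledge_query", "args": {"q": "x"}}}
    return {"tool_call": None, "text": "done"}

answer, turns = run_step(stub_llm, lambda call: "3 documents found", "Audit the docs")
```

Running the stub confirms the accounting in the text: one tool call plus a final response consumes 2 turns.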
Error Handling
Tool call failures are handled gracefully:
- Tool not found — if an MCP server alias isn't configured in your gateway, the agent runs without those tools and receives an error message it can adapt to
- Tool call fails — errors are returned to the LLM as tool results with an error status, letting the agent retry or adjust its approach
- Gateway unreachable — the agent falls back to LLM-only mode and reports that tools were unavailable
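All three failure modes share one pattern: the failure is converted into a structured tool result the LLM can react to, rather than aborting the step. A minimal sketch, assuming a hypothetical result shape (the orchestrator's actual wire format may differ):

```python
# Sketch of the error-handling behavior above. The {"status": ...} result
# shape and function names are illustrative assumptions.

def safe_tool_call(call_tool, name, args, known_tools):
    if name not in known_tools:
        # Tool not found: tell the LLM so it can adapt instead of failing hard.
        return {"status": "error", "error": f"tool '{name}' is not configured"}
    try:
        return {"status": "ok", "result": call_tool(name, args)}
    except ConnectionError:
        # Gateway unreachable: report it so the agent can fall back to LLM-only mode.
        return {"status": "error", "error": "gateway unreachable; tools unavailable"}
    except Exception as exc:
        # Tool call failed: surface the error so the agent can retry or adjust.
        return {"status": "error", "error": str(exc)}

ok = safe_tool_call(lambda n, a: "result", "knowledge_query", {}, {"knowledge_query"})
missing = safe_tool_call(lambda n, a: "result", "web_search", {}, {"knowledge_query"})
```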
With Tools vs Without
| | Without Tools | With Tools |
|---|---|---|
| Data access | LLM knowledge only | Real-time data from MCP servers |
| Actions | Text generation only | Create issues, send messages, upload files |
| Turns | Single turn (prompt → response) | Multi-turn (prompt → tool call → result → response) |
| Latency | ~5-30s per step | ~10-60s per step (includes tool round-trips) |
What's Next
- Workflow DSL Reference — full syntax for agents, steps, dependencies, and patterns.
- Connecting MCP Servers — add GitHub, Neon, Supabase, and other servers to your gateway.
- RAG Engine Getting Started — ingest documents so your agents can search them.
- API Reference — REST API for programmatic workflow management.