# Composite tools and workflows
Composite tools let you define multi-step workflows that run across multiple backend MCP servers, with support for parallel execution, conditional logic, approval gates, and error handling.
## Overview
A composite tool combines multiple backend tool calls into a single workflow. When a client calls a composite tool, vMCP orchestrates the execution across backend MCP servers, handling dependencies and collecting results.
### Key capabilities
- Parallel execution: Independent steps run concurrently; dependent steps wait for their prerequisites
- Template expansion: Dynamic arguments using step outputs
- Elicitation: Request user input mid-workflow (approval gates, choices)
- Error handling: Configurable abort, continue, or retry behavior
- Timeouts: Workflow and per-step timeout configuration
Elicitation (user prompts during workflow execution) is defined in the CRD but has not been extensively tested. Test thoroughly in non-production environments first.
## Configuration location
Composite tools are defined in the `VirtualMCPServer` resource under `spec.config.compositeTools`:
```yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: VirtualMCPServer
metadata:
  name: my-vmcp
spec:
  incomingAuth:
    type: anonymous
  config:
    groupRef: my-tools
    # ... other configuration ...
    compositeTools:
      - name: my_workflow
        description: A multi-step workflow
        parameters:
          # Input parameters (JSON Schema)
        steps:
          # Workflow steps
```
For complex, reusable workflows, you can also reference external `VirtualMCPCompositeToolDefinition` resources using `spec.config.compositeToolRefs`.
## Simple example
Here's a composite tool that searches arXiv for papers on a topic and reads the top result:
```yaml
spec:
  config:
    compositeTools:
      - name: research_topic
        description: Search arXiv for papers and read the top result
        parameters:
          type: object
          properties:
            query:
              type: string
              description: Research topic to search for
          required:
            - query
        steps:
          # Step 1: Search arXiv for papers matching the query
          - id: search
            tool: arxiv.search_papers
            arguments:
              query: '{{.params.query}}'
              max_results: 1
          # Step 2: Download the paper (required before reading)
          # Note: fromJson is needed when the MCP server returns JSON as text
          # rather than structured content. This is common for servers that
          # don't fully support MCP's structuredContent field.
          - id: download
            tool: arxiv.download_paper
            arguments:
              paper_id: '{{(index (fromJson .steps.search.output.text).papers 0).id}}'
            dependsOn: [search]
          # Step 3: Read the downloaded paper content
          - id: read
            tool: arxiv.read_paper
            arguments:
              paper_id: '{{(index (fromJson .steps.search.output.text).papers 0).id}}'
            dependsOn: [download]
```
What's happening:

- Parameters: Define the workflow inputs (`query` for the research topic)
- Step 1 (search): Calls `arxiv.search_papers` with the query from parameters using template syntax `{{.params.query}}`
- Step 2 (download): Waits for search (`dependsOn: [search]`), then downloads the paper. The `fromJson` function parses the JSON text returned by the server, and `index` accesses the first paper's ID.
- Step 3 (read): Waits for download, then reads the paper content.
When a client calls this composite tool, vMCP executes all three steps in sequence and returns the paper content.
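To the client, `research_topic` looks like any other tool exposed by the vMCP server: a single entry with a name, description, and input schema, presumably derived from the `parameters` block above. The following is a rough sketch in YAML for readability, not a literal tool-list response:

```yaml
# Rough illustration of how the composite tool appears to MCP clients
# (illustrative only; the orchestration across backends is invisible to the caller)
name: research_topic
description: Search arXiv for papers and read the top result
inputSchema:
  type: object
  properties:
    query:
      type: string
      description: Research topic to search for
  required:
    - query
```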
### Structured content vs JSON text
MCP servers can return data in two ways:
- Structured content: Data is in `structuredContent` and can be accessed directly: `{{.steps.stepid.output.field}}`
- JSON text: Data is returned as a JSON string in the `text` field and requires parsing: `{{(fromJson .steps.stepid.output.text).field}}`
The `arxiv-mcp-server` in this example returns JSON text, so we use `fromJson`. Check your backend's response format to determine which approach to use.
## Use cases
### Incident investigation
Gather data from multiple monitoring systems in parallel:
```yaml
spec:
  config:
    compositeTools:
      - name: investigate_incident
        description: Gather incident data from multiple sources in parallel
        parameters:
          type: object
          properties:
            incident_id:
              type: string
          required:
            - incident_id
        steps:
          # These steps run in parallel (no dependencies)
          - id: get_logs
            tool: logging.search_logs
            arguments:
              query: 'incident_id={{.params.incident_id}}'
              timerange: '1h'
          - id: get_metrics
            tool: monitoring.get_metrics
            arguments:
              filter: 'error_rate'
              timerange: '1h'
          - id: get_alerts
            tool: pagerduty.list_alerts
            arguments:
              incident: '{{.params.incident_id}}'
          # This step waits for all parallel steps to complete
          - id: create_summary
            tool: docs.create_document
            arguments:
              title: 'Incident {{.params.incident_id}} Summary'
              content: 'Logs: {{.steps.get_logs.output.results}}'
            dependsOn: [get_logs, get_metrics, get_alerts]
```
### Deployment with approval
Human-in-the-loop workflow for production deployments:
```yaml
spec:
  config:
    compositeTools:
      - name: deploy_with_approval
        description: Deploy to production with human approval gate
        parameters:
          type: object
          properties:
            pr_number:
              type: string
            environment:
              type: string
              default: production
          required:
            - pr_number
        steps:
          - id: get_pr_details
            tool: github.get_pull_request
            arguments:
              pr: '{{.params.pr_number}}'
          - id: approval
            type: elicitation
            message: 'Deploy PR #{{.params.pr_number}} to {{.params.environment}}?'
            schema:
              type: object
              properties:
                approved:
                  type: boolean
            timeout: '10m'
            dependsOn: [get_pr_details]
          - id: deploy
            tool: deploy.trigger_deployment
            arguments:
              ref: '{{.steps.get_pr_details.output.head_sha}}'
              environment: '{{.params.environment}}'
            condition: '{{.steps.approval.content.approved}}'
            dependsOn: [approval]
```
### Cross-system data aggregation
Collect and correlate data from multiple backend MCP servers:
```yaml
spec:
  config:
    compositeTools:
      - name: security_scan_report
        description: Run security scans and create consolidated report
        parameters:
          type: object
          properties:
            repo:
              type: string
          required:
            - repo
        steps:
          - id: vulnerability_scan
            tool: osv.scan_dependencies
            arguments:
              repository: '{{.params.repo}}'
          - id: secret_scan
            tool: gitleaks.scan_repo
            arguments:
              repository: '{{.params.repo}}'
          - id: create_issue
            tool: github.create_issue
            arguments:
              repo: '{{.params.repo}}'
              title: 'Security Scan Results'
              body: 'Found {{.steps.vulnerability_scan.output.count}} vulnerabilities'
            dependsOn: [vulnerability_scan, secret_scan]
            onError:
              action: continue
```
## Workflow definition
### Parameters
Define input parameters using JSON Schema format:
```yaml
spec:
  config:
    compositeTools:
      - name: <TOOL_NAME>
        parameters:
          type: object
          properties:
            required_param:
              type: string
            optional_param:
              type: integer
              default: 10
          required:
            - required_param
```
### Steps
Each step can be a tool call or an elicitation:
```yaml
spec:
  config:
    compositeTools:
      - name: <TOOL_NAME>
        steps:
          - id: step_name # Unique identifier
            tool: backend.tool # Tool to call
            arguments: # Arguments with template expansion
              arg1: '{{.params.input}}'
            dependsOn: [other_step] # Dependencies (this step waits for other_step)
            condition: '{{.steps.check.output.approved}}' # Optional condition
            timeout: '30s' # Step timeout
            onError:
              action: abort # abort | continue | retry
```
### Elicitation (user prompts)
Request input from users during workflow execution:
```yaml
spec:
  config:
    compositeTools:
      - name: <TOOL_NAME>
        steps:
          - id: approval
            type: elicitation
            message: 'Proceed with deployment?'
            schema:
              type: object
              properties:
                confirm: { type: boolean }
            timeout: '5m'
```
### Error handling
Configure behavior when steps fail:
| Action | Description |
|---|---|
| `abort` | Stop workflow immediately |
| `continue` | Log error, proceed to next step |
| `retry` | Retry with exponential backoff |
```yaml
spec:
  config:
    compositeTools:
      - name: <TOOL_NAME>
        steps:
          - id: <STEP_ID>
            # ... other step config (tool, arguments, etc.)
            onError:
              action: retry
              retryCount: 3
```
## Template syntax
Access workflow context in arguments:
| Template | Description |
|---|---|
| `{{.params.name}}` | Input parameter |
| `{{.steps.id.output}}` | Step output (map) |
| `{{.steps.id.output.text}}` | Text content from step output |
| `{{.steps.id.content}}` | Elicitation response content |
| `{{.steps.id.action}}` | Elicitation action (accept/decline/cancel) |
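For example, building on the approval step from the deployment workflow above, a follow-up step could record both the user's decision and a field from the elicitation response. The `audit.log_event` tool name below is hypothetical; substitute a tool from your own backends:

```yaml
steps:
  - id: record_decision
    tool: audit.log_event # hypothetical backend tool
    arguments:
      decision: '{{.steps.approval.action}}' # accept, decline, or cancel
      approved: '{{.steps.approval.content.approved}}' # field defined in the elicitation schema
    dependsOn: [approval]
```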
### Template functions
The following functions are available for use in templates:
| Function | Description | Example |
|---|---|---|
| `fromJson` | Parse a JSON string into a value | `{{(fromJson .steps.s1.output.text).field}}` |
| `json` | Encode a value as a JSON string | `{{json .steps.s1.output}}` |
| `quote` | Quote a string value | `{{quote .params.name}}` |
| `index` | Access array elements by index | `{{index .steps.s1.output.items 0}}` |
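These functions can be combined within a single step's arguments. In the sketch below, `report.create` is a hypothetical tool, and `s1`, `items`, and `field` reuse the placeholders from the table:

```yaml
steps:
  - id: summarize
    tool: report.create # hypothetical backend tool
    arguments:
      # Pass a parameter as a quoted string
      title: '{{quote .params.name}}'
      # Serialize a previous step's full output map to a JSON string
      raw: '{{json .steps.s1.output}}'
      # Parse JSON text from a previous step and take the first array element
      first_item: '{{index (fromJson .steps.s1.output.text).items 0}}'
    dependsOn: [s1]
```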
### Accessing step outputs
When an MCP server returns structured content, you can access output fields directly:
```yaml
# Direct access when server supports structuredContent
result: '{{.steps.fetch.output.data}}'
items: '{{index .steps.search.output.results 0}}'
```
This is the simplest approach and works when the backend MCP server populates the `structuredContent` field in its response.
### Working with JSON text responses
Some MCP servers return structured data as JSON text rather than using MCP's `structuredContent` field. When this happens, use `fromJson` to parse it:
```yaml
# Parse JSON text and access a nested field
paper_id: '{{(index (fromJson .steps.search.output.text).papers 0).id}}'
```
This pattern:
- Gets the text output: `.steps.search.output.text`
- Parses it as JSON: `fromJson ...`
- Accesses the `papers` array and gets the first element: `index ... 0`
- Gets the `id` field: `.id`
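As a concrete illustration, suppose the search step returned the hypothetical, abbreviated JSON text shown in the comment below; the download step's argument would then expand to the first paper's ID:

```yaml
# Hypothetical value of .steps.search.output.text (abbreviated):
#   {"papers": [{"id": "2301.12345", "title": "..."}]}
# The download step's argument expands to the first paper's ID:
paper_id: '{{(index (fromJson .steps.search.output.text).papers 0).id}}' # -> 2301.12345
```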
How to tell which approach to use: Call the backend tool directly and inspect the response. If `structuredContent` contains your data fields, use direct access. If `structuredContent` only has a `text` field containing JSON, use `fromJson`.
## Complete example
A `VirtualMCPServer` with an inline composite tool using the `arxiv-mcp-server`:
```yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: VirtualMCPServer
metadata:
  name: research-vmcp
  namespace: toolhive-system
spec:
  incomingAuth:
    type: anonymous
  config:
    groupRef: research-tools
    aggregation:
      conflictResolution: prefix
      conflictResolutionConfig:
        prefixFormat: '{workload}_'
    compositeTools:
      - name: research_topic
        description: Search arXiv for papers and read the top result
        parameters:
          type: object
          properties:
            query:
              type: string
              description: Research topic to search for
          required:
            - query
        steps:
          - id: search
            tool: arxiv.search_papers
            arguments:
              query: '{{.params.query}}'
              max_results: 1
          - id: download
            tool: arxiv.download_paper
            arguments:
              paper_id: '{{(index (fromJson .steps.search.output.text).papers 0).id}}'
            dependsOn: [search]
          - id: read
            tool: arxiv.read_paper
            arguments:
              paper_id: '{{(index (fromJson .steps.search.output.text).papers 0).id}}'
            dependsOn: [download]
        timeout: '5m'
```
Note: The example above assumes you have:

- An `MCPGroup` named `research-tools`.
- An `arxiv-mcp-server` deployed as an `MCPServer` or `MCPRemoteProxy` resource that references the `research-tools` group.

For a complete example of configuring MCP groups and backend servers, see the quickstart and tool aggregation guides.

For complex, reusable workflows, create `VirtualMCPCompositeToolDefinition` resources and reference them with `spec.config.compositeToolRefs`:
```yaml
spec:
  config:
    groupRef: my-tools
    compositeToolRefs:
      - name: my-reusable-workflow
      - name: another-workflow
```
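The referenced resources carry the same workflow fields as the inline form (description, parameters, steps). The sketch below is an assumption based on that inline structure, not the authoritative schema; the `apiVersion` and exact field layout may differ, so check the ToolHive CRD reference for the definitive definition:

```yaml
# Hypothetical sketch only: verify field names against the ToolHive CRD reference
apiVersion: toolhive.stacklok.dev/v1alpha1 # assumed to match the VirtualMCPServer API group
kind: VirtualMCPCompositeToolDefinition
metadata:
  name: my-reusable-workflow # referenced from compositeToolRefs
spec:
  description: A workflow shared across multiple VirtualMCPServer resources
  parameters:
    type: object
    properties:
      input:
        type: string
    required:
      - input
  steps:
    - id: step_one
      tool: backend.tool
      arguments:
        arg1: '{{.params.input}}'
```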