Welcome to the Maestro CLI! This guide will help you get started with managing vector databases and their resources using the Maestro command-line interface.
- Installation
- Quick Start
- Configuration
- Basic Commands
- Vector Database Management
- Collection Management
- Document Management
- Agent Management
- Workflow Management
- Tool Management
- Custom Resource Management
- Mermaid Diagram Generation
- Validation
- Environment Variables
- Examples
- Troubleshooting
## Installation

Prerequisites:

- Go 1.21 or later
- Access to a vector database (Milvus or Weaviate)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd maestro-cli
  ```

- Build the CLI:

  ```bash
  ./build.sh
  ```

- Verify installation:

  ```bash
  ./maestro --version
  ```

- Set up your MCP server connection (optional):

  ```bash
  export MAESTRO_MCP_SERVER_URI="http://localhost:8030/mcp"
  ```

## Quick Start

- Validate a configuration file:

  ```bash
  ./maestro validate config.yaml
  ```

- List available vector databases:

  ```bash
  ./maestro vectordb list
  ```

- Create a new vector database:

  ```bash
  ./maestro vectordb create config.yaml
  ```

## Configuration

The Maestro CLI uses YAML configuration files that follow a specific schema. The schema is automatically downloaded from the maestro-knowledge repository when needed.
Example configuration file (config.yaml):
```yaml
apiVersion: maestro/v1alpha1
kind: VectorDatabase
metadata:
  name: my-vector-db
  labels:
    app: my-app
spec:
  type: milvus  # or weaviate
  uri: localhost:19530
  collection_name: my_collection
  embedding: text-embedding-3-small
  mode: local  # or remote
```

| Field | Type | Required | Description |
|---|---|---|---|
| `apiVersion` | string | Yes | Must be `maestro/v1alpha1` |
| `kind` | string | Yes | Must be `VectorDatabase` |
| `metadata.name` | string | Yes | Unique name for the vector database |
| `metadata.labels` | object | No | Optional labels for the configuration |
| `spec.type` | string | Yes | Type of vector database (`milvus` or `weaviate`) |
| `spec.uri` | string | Yes | Connection URI (host:port for local, full URL for remote) |
| `spec.collection_name` | string | Yes | Name of the collection to use |
| `spec.embedding` | string | Yes | Embedding model to use |
| `spec.mode` | string | Yes | Deployment mode (`local` or `remote`) |
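To make the required-field rules in the table above concrete, here is a minimal sketch in Python that checks a configuration (already parsed into a dict, for example with PyYAML) against those rules. The function names are illustrative only; the Maestro CLI performs its own schema-based validation.

```python
# Illustrative only: checks the required fields from the table above on a
# config already parsed into a dict. Not part of the Maestro CLI.

REQUIRED = {
    "apiVersion": lambda v: v == "maestro/v1alpha1",
    "kind": lambda v: v == "VectorDatabase",
    "metadata.name": lambda v: isinstance(v, str) and v,
    "spec.type": lambda v: v in ("milvus", "weaviate"),
    "spec.uri": lambda v: isinstance(v, str) and v,
    "spec.collection_name": lambda v: isinstance(v, str) and v,
    "spec.embedding": lambda v: isinstance(v, str) and v,
    "spec.mode": lambda v: v in ("local", "remote"),
}

def lookup(config, dotted):
    """Follow a dotted path like 'spec.type' through nested dicts."""
    node = config
    for key in dotted.split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

def validation_errors(config):
    """Return human-readable errors for missing or invalid required fields."""
    return [f"invalid or missing field: {path}"
            for path, ok in REQUIRED.items()
            if not ok(lookup(config, path))]

config = {
    "apiVersion": "maestro/v1alpha1",
    "kind": "VectorDatabase",
    "metadata": {"name": "my-vector-db"},
    "spec": {"type": "milvus", "uri": "localhost:19530",
             "collection_name": "my_collection",
             "embedding": "text-embedding-3-small", "mode": "local"},
}
print(validation_errors(config))  # []
```

Note that `metadata.labels` is deliberately absent from `REQUIRED`, matching its "No" in the table.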
## Basic Commands

```bash
# Show help
./maestro --help

# Show version
./maestro --version

# Show help for a specific command
./maestro vectordb --help
```

Global flags:

- `--mcp-server-uri string`: MCP server URI (overrides the `MAESTRO_MCP_SERVER_URI` environment variable)
- `--verbose`: Enable verbose output
- `--silent`: Suppress output (except errors)
- `--dry-run`: Show what would be done without executing
## Vector Database Management

```bash
# List all vector databases
./maestro vectordb list

# List with verbose output
./maestro vectordb list --verbose

# Dry run (show what would be listed)
./maestro vectordb list --dry-run

# Create from configuration file
./maestro vectordb create config.yaml

# Create with verbose output
./maestro vectordb create config.yaml --verbose

# Dry run (show what would be created)
./maestro vectordb create config.yaml --dry-run

# Delete a vector database
./maestro vectordb delete my-vector-db

# Delete with verbose output
./maestro vectordb delete my-vector-db --verbose

# Dry run (show what would be deleted)
./maestro vectordb delete my-vector-db --dry-run
```

## Collection Management
```bash
# List collections in a vector database
./maestro collection list my-vector-db

# List with verbose output
./maestro collection list my-vector-db --verbose

# Create a collection
./maestro collection create my-vector-db my-collection

# Create with verbose output
./maestro collection create my-vector-db my-collection --verbose

# Delete a collection
./maestro collection delete my-vector-db my-collection

# Delete with verbose output
./maestro collection delete my-vector-db my-collection --verbose
```

## Document Management
```bash
# List documents in a collection
./maestro document list my-vector-db my-collection

# List with verbose output
./maestro document list my-vector-db my-collection --verbose

# Write documents to a collection
./maestro document write my-vector-db my-collection data.json

# Write with verbose output
./maestro document write my-vector-db my-collection data.json --verbose

# Delete a document
./maestro document delete my-vector-db my-collection doc-id

# Delete with verbose output
./maestro document delete my-vector-db my-collection doc-id --verbose
```

## Agent Management

The Maestro CLI provides commands for creating and serving AI agents.
Agents are defined using YAML configuration files that follow a specific schema:
```yaml
apiVersion: maestro/v1alpha1
kind: Agent
metadata:
  name: my-agent
  labels:
    app: my-app
spec:
  framework: fastapi  # Agent framework (fastapi, etc.)
  description: "My AI agent"
  model: gpt-4  # LLM model to use
  tools:
    - name: tool-name
      description: "Tool description"
```
```bash
# Create agents from YAML configuration
./maestro agent create agent-config.yaml

# Create with verbose output
./maestro agent create agent-config.yaml --verbose

# Test without creating (dry run)
./maestro agent create agent-config.yaml --dry-run
```

```bash
# Serve an agent from YAML configuration
./maestro agent serve agent-config.yaml

# Serve with custom port
./maestro agent serve agent-config.yaml --port=8080

# Serve a specific agent from a multi-agent YAML file
./maestro agent serve agent-config.yaml --agent-name=my-agent

# Test without serving (dry run)
./maestro agent serve agent-config.yaml --dry-run
```

## Workflow Management

Workflows allow you to orchestrate multiple agents to work together on complex tasks.
Workflows are defined using YAML configuration files:
```yaml
apiVersion: maestro/v1alpha1
kind: Workflow
metadata:
  name: my-workflow
  labels:
    app: my-app
spec:
  template:
    prompt: "Initial workflow prompt"
    agents:
      - agent-1
      - agent-2
    steps:
      - name: step-1
        agent: agent-1
        input: "{{ .prompt }}"
      - name: step-2
        agent: agent-2
        input: "Process the output from step-1: {{ .step-1.output }}"
    exception:
      agent: exception-handler-agent
```
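The `{{ .prompt }}` and `{{ .step-1.output }}` placeholders use Go template syntax: each step's input can reference the initial prompt or the output of any earlier step. As an illustration of that chaining (a toy substitution, not the CLI's actual template engine), a rough Python sketch:

```python
import re

# Toy sketch of how step inputs reference the initial prompt and earlier
# step outputs, as in the workflow YAML above. The real CLI uses Go
# template syntax; this only mimics the chaining idea.

def render(template, context):
    """Replace {{ .name }} / {{ .step.output }} placeholders from context."""
    def sub(match):
        path = match.group(1)            # e.g. "prompt" or "step-1.output"
        node = context
        for key in path.split("."):
            node = node[key]
        return node
    return re.sub(r"\{\{\s*\.([\w.-]+)\s*\}\}", sub, template)

context = {
    "prompt": "Initial workflow prompt",
    "step-1": {"output": "result of step-1"},
}
print(render("{{ .prompt }}", context))
print(render("Process the output from step-1: {{ .step-1.output }}", context))
```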
```bash
# Run a workflow with agents
./maestro workflow run agent-config.yaml workflow-config.yaml

# Run with interactive prompt
./maestro workflow run agent-config.yaml workflow-config.yaml --prompt

# Test without running (dry run)
./maestro workflow run agent-config.yaml workflow-config.yaml --dry-run
```

```bash
# Serve a workflow with agents
./maestro workflow serve agent-config.yaml workflow-config.yaml

# Serve with custom port
./maestro workflow serve agent-config.yaml workflow-config.yaml --port=8080

# Test without serving (dry run)
./maestro workflow serve agent-config.yaml workflow-config.yaml --dry-run
```

```bash
# Deploy a workflow
./maestro workflow deploy agent-config.yaml workflow-config.yaml

# Deploy to Kubernetes
./maestro workflow deploy agent-config.yaml workflow-config.yaml --kubernetes

# Deploy with Docker
./maestro workflow deploy agent-config.yaml workflow-config.yaml --docker

# Test without deploying (dry run)
./maestro workflow deploy agent-config.yaml workflow-config.yaml --dry-run
```

## Tool Management

The Maestro CLI provides commands for creating tools for agents.
```bash
# Create tool from YAML
./maestro tool create tool-config.yaml

# Test without creating (dry run)
./maestro tool create tool-config.yaml --dry-run
```

The command automatically:

- Sets the API version to `maestro.ai4quantum.com/v1alpha1`
- Sanitizes resource names for Kubernetes compatibility
- Processes workflow-specific fields for proper deployment

An example tool defined in YAML format:
```yaml
apiVersion: maestro/v1alpha1
kind: MCPTool
metadata:
  name: fetch
  namespace: default
spec:
  image: ghcr.io/stackloklabs/gofetch/server:latest
  transport: streamable-http
```

The syntax of the tool definition is defined in the JSON schema. The schema is the same as the ToolHive CRD definition except for `apiVersion` and `kind`. Maestro deploys MCP servers for the defined tools. The available tools are listed by the ToolHive `thv list` command.

- apiVersion: version of the tool definition format. This must be `maestro/v1alpha1` for now.
- kind: type of object. `MCPTool` for a tool definition.
- metadata:
  - name: name of the tool
  - labels: array of key/value pairs. This is optional and can be used to associate any information with this tool.
- spec:
  - image: the container image for the MCP server. The image location appears in the `thv registry info [server] [flags]` output.
  - transport: the transport method for the MCP server (`stdio`, `streamable-http`, or `sse`).

The full schema is documented in the ToolHive Docs.
## Custom Resource Management

The Maestro CLI provides commands for creating Kubernetes custom resources for agents and workflows.

```bash
# Create Kubernetes custom resources from YAML
./maestro customresource create resource-config.yaml

# Test without creating (dry run)
./maestro customresource create resource-config.yaml --dry-run
```

The command automatically:

- Sets the API version to `maestro.ai4quantum.com/v1alpha1`
- Sanitizes resource names for Kubernetes compatibility
- Processes workflow-specific fields for proper deployment
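Kubernetes resource names must be valid RFC 1123 subdomain names (lowercase alphanumerics and hyphens, starting and ending with an alphanumeric). The CLI's exact sanitization logic is not shown here; the following Python sketch only illustrates the kind of normalization that "sanitizes resource names for Kubernetes compatibility" implies:

```python
import re

# Illustrative sketch: normalize a name toward RFC 1123 rules (lowercase
# alphanumerics and '-', must start/end alphanumeric, max 253 chars).
# This is NOT the Maestro CLI's actual algorithm.

def sanitize_k8s_name(name, max_len=253):
    name = name.lower()
    name = re.sub(r"[^a-z0-9-]+", "-", name)   # collapse invalid runs to '-'
    name = name.strip("-")                     # must start/end alphanumeric
    return name[:max_len].rstrip("-")

print(sanitize_k8s_name("My Workflow (v2)"))   # my-workflow-v2
```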
Agent Custom Resource:

```yaml
kind: Agent
metadata:
  name: my-agent
spec:
  framework: fastapi
  description: "My AI agent"
  model: gpt-4
  tools:
    - name: tool-name
      description: "Tool description"
```

Workflow Custom Resource:

```yaml
kind: Workflow
metadata:
  name: my-workflow
  labels:
    app: my-app
spec:
  template:
    agents:
      - agent-1
      - agent-2
    steps:
      - name: step-1
        agent: agent-1
      - name: parallel-step
        parallel:
          - agent-1
          - agent-2
```

## Mermaid Diagram Generation

The Maestro CLI provides commands for generating Mermaid diagrams from workflow definitions.
```bash
# Generate a sequence diagram from a workflow
./maestro mermaid workflow-config.yaml --sequenceDiagram

# Generate a top-down flowchart from a workflow
./maestro mermaid workflow-config.yaml --flowchart-td

# Generate a left-right flowchart from a workflow
./maestro mermaid workflow-config.yaml --flowchart-lr
```

- Sequence Diagram: Shows the interaction between agents in a workflow as a sequence of messages
- Flowchart TD: Shows the workflow steps as a top-down flowchart
- Flowchart LR: Shows the workflow steps as a left-right flowchart
Sequence Diagram:

```mermaid
sequenceDiagram
    participant User
    participant System
    User->>System: Request
    System->>User: Response
```

Flowchart:

```mermaid
flowchart TD
    A[Start] --> B[Process]
    B --> C[End]
```
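As a sketch of the transformation involved (an assumption for illustration, not the CLI's actual code or output format), this Python snippet turns an ordered list of workflow steps into Mermaid `sequenceDiagram` text of the kind shown above:

```python
# Illustrative sketch: build Mermaid sequence-diagram text from an ordered
# list of (step_name, agent_name) pairs. The Maestro CLI's real output
# format may differ.

def to_sequence_diagram(steps):
    agents = []
    for _, agent in steps:
        if agent not in agents:          # preserve first-seen order
            agents.append(agent)
    lines = ["sequenceDiagram"]
    lines += [f"    participant {a}" for a in agents]
    prev = None
    for name, agent in steps:
        if prev is not None:
            lines.append(f"    {prev}->>{agent}: output of previous step")
        lines.append(f"    Note over {agent}: {name}")
        prev = agent
    return "\n".join(lines)

print(to_sequence_diagram([("research", "research-agent"),
                           ("write", "writing-agent")]))
```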
## Validation

```bash
# Validate a configuration file
./maestro validate config.yaml

# Validate with verbose output
./maestro validate config.yaml --verbose

# Validate with a custom schema
./maestro validate config.yaml schema.json

# Dry run validation
./maestro validate config.yaml --dry-run
```

The validation command automatically downloads the latest schema from the maestro-knowledge repository if no local schema is found.
## Environment Variables

| Variable | Description | Default |
|---|---|---|
| `MAESTRO_MCP_SERVER_URI` | MCP server URI for communication | `http://localhost:8030/mcp` |
| `MAESTRO_TEST_MODE` | Enable test mode (for testing) | `false` |
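The `--mcp-server-uri` flag overrides `MAESTRO_MCP_SERVER_URI`, which in turn overrides the built-in default. A small Python sketch of that precedence (illustrative; `resolve_mcp_uri` is not part of the CLI):

```python
import os

# Illustrative sketch of the documented precedence for the MCP server URI:
# --mcp-server-uri flag > MAESTRO_MCP_SERVER_URI env var > built-in default.

DEFAULT_MCP_URI = "http://localhost:8030/mcp"

def resolve_mcp_uri(flag_value=None, env=None):
    env = os.environ if env is None else env
    if flag_value:                        # explicit flag wins
        return flag_value
    return env.get("MAESTRO_MCP_SERVER_URI", DEFAULT_MCP_URI)

print(resolve_mcp_uri(env={}))                               # the default
print(resolve_mcp_uri("http://example:9000/mcp", env={}))    # the flag wins
```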
## Examples

### Setting Up a Milvus Vector Database

- Create a configuration file:

  ```yaml
  # milvus-config.yaml
  apiVersion: maestro/v1alpha1
  kind: VectorDatabase
  metadata:
    name: my-milvus-db
  spec:
    type: milvus
    uri: localhost:19530
    collection_name: documents
    embedding: text-embedding-3-small
    mode: local
  ```

- Validate the configuration:

  ```bash
  ./maestro validate milvus-config.yaml
  ```

- Create the vector database:

  ```bash
  ./maestro vectordb create milvus-config.yaml
  ```

- List to verify:

  ```bash
  ./maestro vectordb list
  ```

### Setting Up a Weaviate Vector Database

- Create a configuration file:
  ```yaml
  # weaviate-config.yaml
  apiVersion: maestro/v1alpha1
  kind: VectorDatabase
  metadata:
    name: my-weaviate-db
  spec:
    type: weaviate
    uri: http://localhost:8080
    collection_name: documents
    embedding: text-embedding-3-small
    mode: local
  ```

- Validate and create:

  ```bash
  ./maestro validate weaviate-config.yaml
  ./maestro vectordb create weaviate-config.yaml
  ```

### Working with Collections and Documents
```bash
# List collections
./maestro collection list my-vector-db

# Create a collection
./maestro collection create my-vector-db my-documents

# List documents
./maestro document list my-vector-db my-documents

# Write documents (assuming you have a data.json file)
./maestro document write my-vector-db my-documents data.json
```

### Multi-Agent Workflow

- Create an agent configuration file:
  ```yaml
  # agent-config.yaml
  apiVersion: maestro/v1alpha1
  kind: Agent
  metadata:
    name: research-agent
  spec:
    framework: fastapi
    description: "Research assistant agent"
    model: gpt-4
    tools:
      - name: search
        description: "Search for information"
      - name: summarize
        description: "Summarize content"
  ---
  apiVersion: maestro/v1alpha1
  kind: Agent
  metadata:
    name: writing-agent
  spec:
    framework: fastapi
    description: "Content writing agent"
    model: gpt-4
    tools:
      - name: write
        description: "Write content"
  ```

- Create a workflow configuration file:
  ```yaml
  # workflow-config.yaml
  apiVersion: maestro/v1alpha1
  kind: Workflow
  metadata:
    name: research-workflow
  spec:
    template:
      prompt: "Research quantum computing"
      steps:
        - name: research
          agent: research-agent
          input: "{{ .prompt }}"
        - name: write
          agent: writing-agent
          input: "Write an article based on this research: {{ .research.output }}"
  ```

- Run the workflow:
  ```bash
  # Run the workflow
  ./maestro workflow run agent-config.yaml workflow-config.yaml

  # Run with interactive prompt
  ./maestro workflow run agent-config.yaml workflow-config.yaml --prompt
  ```

- Generate a diagram of the workflow:

  ```bash
  # Generate a sequence diagram
  ./maestro mermaid workflow-config.yaml --sequenceDiagram
  ```

- Deploy to Kubernetes:

  ```bash
  # Create Kubernetes custom resources
  ./maestro customresource create agent-config.yaml
  ./maestro customresource create workflow-config.yaml

  # Or deploy the workflow directly
  ./maestro workflow deploy agent-config.yaml workflow-config.yaml --kubernetes
  ```

## Troubleshooting
Common issues:

- Schema download fails:
  - The CLI automatically tries to download the schema from the maestro-knowledge repository
  - If the download fails, ensure you have internet connectivity
  - You can provide a custom schema file: `./maestro validate config.yaml custom-schema.json`

- MCP server connection issues:
  - Check that your MCP server is running
  - Verify the URI: `./maestro --mcp-server-uri http://your-server:port/mcp`

- Vector database connection issues:
  - Ensure your vector database (Milvus/Weaviate) is running
  - Check the URI in your configuration file
  - Verify network connectivity

- Permission issues:
  - Ensure the binary has execute permissions: `chmod +x maestro`
  - Check file permissions for configuration files

Getting help:

- Use `./maestro --help` for general help
- Use `./maestro <command> --help` for command-specific help
- Check the logs with the `--verbose` flag for detailed information
- Use `--dry-run` to see what would be executed without making changes
Enable verbose output to see detailed information about what the CLI is doing:

```bash
./maestro <command> --verbose
```

This will show:

- Schema download attempts
- MCP server communication
- Detailed error messages
- Step-by-step execution information

For more information, see the main README.md or run `./maestro --help`.