
cipherstash/llm-cli-tools


llm-cli-tools

A suite of CLI tools designed for LLM agents to interact with SaaS APIs. Output is JSON by default; pass --human for human-readable output.

Tools

| Binary | Service | API |
| --- | --- | --- |
| llm-cli | Dispatcher | Execs llm-cli-&lt;subcommand&gt; from $PATH |
| llm-cli-linear | Linear | GraphQL |
| llm-cli-discourse | Discourse | REST |
| llm-cli-slack | Slack | REST |

Prerequisites

  • A Rust toolchain with cargo (install.sh runs cargo install)
  • The 1Password CLI, op, for credential retrieval (see Authentication)

Install

./install.sh

This discovers all binary crates in the workspace and runs cargo install --path for each.
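The discovery step could be approximated as follows — an illustrative sketch, not the actual script: parse the JSON that `cargo metadata --format-version 1` emits and keep each workspace crate with a binary target.

```python
import os

def binary_crate_dirs(metadata: dict) -> list[str]:
    """Given parsed `cargo metadata --format-version 1` JSON, return the
    directories of workspace crates that have a binary target."""
    members = set(metadata["workspace_members"])
    dirs = []
    for pkg in metadata["packages"]:
        # `kind` is a list per target, e.g. ["bin"] or ["lib"].
        if pkg["id"] in members and any("bin" in t["kind"] for t in pkg["targets"]):
            dirs.append(os.path.dirname(pkg["manifest_path"]))
    return dirs

# install.sh would then run `cargo install --path <dir>` for each directory.
```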

Configuration

Quick setup

Run the interactive setup wizard to generate your config file:

llm-cli init

This detects which llm-cli-* tools are installed, provides instructions for creating API keys, and prompts for the required configuration fields.

Manual setup

All tools read from ~/.config/llm-cli/config.toml (or $XDG_CONFIG_HOME/llm-cli/config.toml).

[linear]
op_item_id = "your-1password-item-id"

[discourse.my-forum]
base_url = "https://forum.example.com"
op_item_id = "your-1password-item-id"
api_username = "your-username"

[slack]
op_item_id = "your-1password-item-id"

Authentication

API keys are never stored in config files. Instead, each tool retrieves credentials from 1Password at call time using the op CLI.

Under the hood, each invocation runs:

op item get <op_item_id> --field <op_field> --reveal

Setup

  1. Install the 1Password CLI — follow the getting started guide
  2. Create an API key for the service (Linear, Discourse, or Slack)
  3. Store it in 1Password — create an item (e.g. type "API Credential" or "Login") and paste the key into a field named credential
  4. Find the item ID — open the item in 1Password.app and copy the ID from the URL bar (it looks like a1b2c3d4e5f6g7h8), or run:
    op item list | grep "Linear"
  5. Add the item ID to your config as op_item_id

Config fields

  • op_item_id (required) — the 1Password item ID containing your API key
  • op_field (optional, default: "credential") — the field name within the 1Password item to read the key from. Set this if you stored the key in a different field.
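For example, to read the Linear key from a field named api_key instead of the default credential (the item ID below is a placeholder):

```toml
[linear]
op_item_id = "a1b2c3d4e5f6g7h8"
op_field = "api_key"
```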

The 1Password desktop app must be running and unlocked for op to work. If you use 1Password in the browser only, you'll need to enable CLI integration.

Usage

# Dispatcher
llm-cli linear issues list
llm-cli discourse posts latest
llm-cli slack messages read --channel general

# Direct invocation
llm-cli-linear issues list --limit 10 --mine --team ENG
llm-cli-linear issues list --priority 1 --label bug
llm-cli-linear issues list --cursor <next_cursor>
llm-cli-linear issues get --id PROJ-123
llm-cli-linear issues create --title "Bug" --team ENG
llm-cli-linear issues create --input issue.json
llm-cli-linear issues close --id PROJ-123

llm-cli-discourse posts latest --page 2
llm-cli-discourse posts get --id 42
llm-cli-discourse posts create --title "Topic" --category general --raw "Body"
llm-cli-discourse posts create --input topic.json
llm-cli-discourse comments create --topic-id 42 --raw "Reply"

llm-cli-slack messages send --channel general --text "hello"
llm-cli-slack messages send --input message.json
llm-cli-slack messages read --channel general --oldest 1711900000 --latest 1711990000
llm-cli-slack messages read --channel general --cursor <next_cursor>
llm-cli-slack messages dm --user U12345 --text "hey"
llm-cli-slack messages mentions
llm-cli-slack summary --channel general
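Because every command emits the JSON envelope described under "JSON output format" below, an agent-side wrapper needs little more than subprocess plus json. A minimal sketch (the runner parameter is ours, so the function can be exercised without the binaries installed):

```python
import json
import subprocess

def call_tool(argv, runner=subprocess.run):
    """Invoke an llm-cli binary and parse its JSON envelope.

    Errors are structured JSON on stdout (not stderr) with a non-zero
    exit code, so stdout is parsed regardless of exit status.
    """
    proc = runner(argv, capture_output=True, text=True)
    envelope = json.loads(proc.stdout)
    if not envelope.get("success"):
        raise RuntimeError(envelope["error"]["message"])
    return envelope["data"]

# Example: call_tool(["llm-cli", "linear", "issues", "list"])
```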

Shell completions

llm-cli completions generates completions for the dispatcher and all installed llm-cli-* subcommands in a single script. One file gives you tab-completion for everything.

Bash

llm-cli completions --shell bash > ~/.local/share/bash-completion/completions/llm-cli

Zsh

# Ensure completions directory exists and is in fpath.
# Add to ~/.zshrc if not already present:
#   fpath=(~/.zfunc $fpath)
#   autoload -Uz compinit && compinit
mkdir -p ~/.zfunc
llm-cli completions --shell zsh > ~/.zfunc/_llm-cli

Fish

llm-cli completions --shell fish > ~/.config/fish/completions/llm-cli.fish

Re-run after installing new subcommands to pick up their completions.

Common flags

  • --human — human-readable output instead of JSON
  • --debug — log HTTP requests/responses to stderr
  • --debug=pretty — pretty-print JSON bodies and GraphQL queries
  • --debug=curl — print reproducible curl commands (secrets redacted by default)
  • --debug=dangerous_no_redact — show secrets in debug output
  • --debug=curl,dangerous_no_redact — curl commands with secrets exposed
  • --debug=pretty,curl — pretty + curl, secrets redacted

JSON output format

Success

{
  "success": true,
  "data": { ... }
}

List commands include a pagination object when more results are available:

{
  "success": true,
  "data": { ... },
  "pagination": {
    "has_more": true,
    "next_cursor": "WyIyMDI2LTA0LTAxIl0"
  }
}
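The pagination object makes it straightforward to drain a list command by feeding next_cursor back through the --cursor flag shown in the usage examples. A sketch of the loop, with the actual tool invocation abstracted behind a callable so only the pagination logic is shown:

```python
def collect_all(fetch_page):
    """Drain a paginated list command.

    `fetch_page(cursor)` should run the list command (adding
    `--cursor <cursor>` when cursor is not None) and return the
    parsed JSON envelope.
    """
    pages, cursor = [], None
    while True:
        envelope = fetch_page(cursor)
        pages.append(envelope["data"])
        pagination = envelope.get("pagination")
        if not pagination or not pagination.get("has_more"):
            return pages
        cursor = pagination["next_cursor"]
```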

Errors

Errors are output as structured JSON to stdout (not stderr) with a non-zero exit code:

{
  "success": false,
  "error": {
    "code": "CONFIG_NOT_FOUND",
    "message": "Config file not found at ~/.config/llm-cli/config.toml",
    "suggestion": "Create a config file with..."
  }
}

In --human mode, errors go to stderr as plain text.

Exit codes

| Code | Meaning |
| --- | --- |
| 0 | Success |
| 1 | Unknown/general error |
| 2 | Configuration error (missing config file, bad TOML, missing section) |
| 3 | Authentication error (1Password CLI missing, credential retrieval failed) |
| 4 | API error (HTTP failure, bad response) |
| 5 | Invalid CLI input (bad debug mode) |
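An agent can branch on these codes before parsing anything, e.g. to decide whether a retry, a config fix, or re-authentication is the right recovery. One way to sketch that mapping (the category names here are ours, not part of the CLI):

```python
EXIT_CODES = {
    0: "success",
    1: "general",
    2: "config",   # missing config file, bad TOML, missing section
    3: "auth",     # 1Password CLI missing, credential retrieval failed
    4: "api",      # HTTP failure, bad response
    5: "usage",    # invalid CLI input
}

def classify(returncode: int) -> str:
    """Map an llm-cli exit code to a coarse error category."""
    return EXIT_CODES.get(returncode, "general")

def should_retry(returncode: int) -> bool:
    # Only API-level failures are plausibly transient; config, auth,
    # and usage errors need a different fix, not a retry.
    return classify(returncode) == "api"
```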

JSON input

Create commands accept --input <file> for structured JSON input instead of individual flags. Use --input - to read from stdin:

echo '{"title": "Bug", "team": "ENG"}' | llm-cli-linear issues create --input -
llm-cli-slack messages send --input message.json

Automated discovery

Each API tool has a schema subcommand that outputs a JSON description of available commands and arguments:

llm-cli-linear schema
llm-cli-discourse schema
llm-cli-slack schema
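An agent can turn that schema output into its own tool registry. The schema's exact shape is not documented here, so the snippet below assumes, purely for illustration, a hypothetical {"commands": [{"name": ..., "args": [...]}]} layout:

```python
def flatten_commands(schema: dict, binary: str) -> list[str]:
    """List invocable command lines from a (hypothetical) schema document."""
    lines = []
    for cmd in schema.get("commands", []):
        args = " ".join(f"--{a}" for a in cmd.get("args", []))
        lines.append(f"{binary} {cmd['name']} {args}".strip())
    return lines
```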

Resilience

All API crates retry once with a 1-second backoff on transient HTTP errors (429 rate limits, 5xx server errors). Slack respects the Retry-After header when present. Destructive operations (delete) are not retried.
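The policy is simple enough to restate as a sketch; the crates themselves are Rust, so this Python is illustrative only (the injectable sleep exists just to make the sketch testable):

```python
import time

TRANSIENT = {429} | set(range(500, 600))

def with_retry(request, sleep=time.sleep):
    """Retry once, with a 1-second backoff, on transient HTTP errors.

    `request()` returns (status, retry_after_seconds_or_None).
    """
    status, retry_after = request()
    if status in TRANSIENT:
        # Honor Retry-After when the server supplies it (as Slack does),
        # otherwise back off for the default 1 second.
        sleep(retry_after if retry_after is not None else 1)
        status, retry_after = request()
    return status
```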

Design principles

See PRINCIPLES.md for the CLI design philosophy. These tools are agent-first: JSON output, structured errors with suggestions, named flags, no interactive prompts.

Project structure

packages/
  llm-cli/           # Dispatcher (std only, no deps)
  llm-cli-linear/    # Linear GraphQL client
  llm-cli-discourse/ # Discourse REST client
  llm-cli-slack/     # Slack REST client
docs/
  plans/             # Design documents

