The AI-connectable video editor.
UltimateSlice is a Final Cut Pro–inspired non-linear video editor built with Rust and GTK4, powered by GStreamer for media playback and export.
A built-in MCP server enables AI collaboration.
- GTK4 application scaffold with dark theme styling
- Media library import with duration probing (video/audio/image)
- Source monitor with playback, scrubber, in/out marks, and timecode
- Timeline with multi-track rows, zoom/pan, clip selection, trim, move, razor
- Undo/Redo command history
- Inspector panel for selected clip properties
- MP4/H.264 export via GStreamer pipeline
- FCPXML 1.10–1.14 import + 1.14 export
- Optional MCP server (`--mcp`) for JSON-RPC control
UltimateSlice uses open-source crates and runtime libraries, including:
- `gtk4-rs` / `gdk4` / `gio` / `glib` / `pango` — GTK/Pango runtime libraries — LGPL-2.1-or-later
- `gstreamer-rs` + GStreamer — LGPL-2.1-or-later
- `quick-xml` — MIT
- `serde` / `serde_json` — MIT OR Apache-2.0
- `uuid` — MIT OR Apache-2.0
- `anyhow` / `thiserror` / `log` / `env_logger` — MIT OR Apache-2.0
- `rustfft` — MIT OR Apache-2.0
- `fuzzy-matcher` — MIT
- `whisper-rs` — Unlicense
- `ort` (ONNX Runtime) / `ndarray` — MIT OR Apache-2.0
- `tokenizers` (Hugging Face) — Apache-2.0
- `hound` (WAV I/O) — Apache-2.0
- `resvg` / `usvg` / `tiny-skia` — MIT OR Apache-2.0
- `tempfile` — MIT OR Apache-2.0
- FFmpeg (tooling/runtime) — LGPL-2.1-or-later (Flatpak build enables GPL options)
- x264 (Flatpak build dependency) — GPL-2.0-or-later
UltimateSlice can also use the following AI models, installed by the user at runtime (not bundled with the binary):
- MODNet — photographic portrait matting (background removal). Apache-2.0.
- SAM 3 — Segment Anything Model v3 (Meta AI) for mask generation from box prompts. Apache-2.0 (model weights from wkentaro/sam3-onnx ONNX exports).
- Whisper (GGML) — speech-to-text for subtitle generation. MIT.
- MusicGen-small — text-to-music generation (Meta AI). CC-BY-NC-4.0 (research/non-commercial use; check the model card before commercial use).
- RIFE — Real-time Intermediate Flow Estimation for AI slow-motion frame interpolation. MIT (model weights from third-party ONNX exports of the upstream RIFE project).
AI inference (SAM, MODNet, RIFE, MusicGen, and optionally Whisper STT) can
run on GPU via ONNX Runtime execution providers. The default build is
CPU-only; opt in to GPU with one of the ai-webgpu / ai-cuda / ai-rocm /
ai-openvino Cargo features. Cross-vendor WebGPU is the recommended
starting point (one feature flag, prebuilt binaries, works on Intel Arc /
AMD / NVIDIA). Native EPs (CUDA / ROCm / OpenVINO) deliver higher peak
throughput but CUDA needs the NVIDIA toolkit and ROCm/OpenVINO require
source-building ONNX Runtime 1.24.2 via scripts/build_onnxruntime.sh.
See docs/gpu/README.md for the decision tree and
per-vendor setup guides.
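For example, opting in to a GPU execution provider at build time looks like this (a sketch using the feature names listed above; the `--release` flag is optional, and the native EPs have the extra prerequisites noted above):

```bash
# Cross-vendor WebGPU EP (recommended starting point)
cargo build --release --features ai-webgpu

# Native EPs for higher peak throughput
cargo build --release --features ai-cuda      # needs the NVIDIA toolkit
cargo build --release --features ai-rocm      # needs source-built ONNX Runtime
cargo build --release --features ai-openvino  # needs source-built ONNX Runtime
```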
For exact versions and full dependency tree, see Cargo.toml, Cargo.lock,
io.github.kmwallio.ultimateslice.yml, cargo-sources.json, and
onnxruntime-sources.json.
See ROADMAP.md for upcoming features like thumbnails, audio waveforms, multi-track editing, transitions, and a program monitor.
See docs/ARCHITECTURE.md for the full layout and design notes. Highlights:
- `src/app.rs` – GTK application setup and CSS loading
- `src/model/` – core data model (`Project`, `Track`, `Clip`, `MediaItem`)
- `src/media/` – playback, thumbnails, and export
- `src/ui/` – GTK widgets (timeline, inspector, media browser, preview)
- `src/fcpxml/` – FCPXML parser/writer
- Rust (edition 2021, via `rustup`)
- GTK4 development libraries
- GStreamer + plugins for playback and export
- `ffmpeg` on `$PATH` (for export)
Linux (Ubuntu/Debian):
The default feature set builds whisper-rs from source, so install the compiler toolchain, pkg-config, and cmake alongside the media libraries:
```bash
sudo apt install \
  build-essential \
  cmake \
  pkg-config \
  libgtk-4-dev \
  libgstreamer1.0-dev \
  libgstreamer-plugins-base1.0-dev \
  gstreamer1.0-plugins-good \
  gstreamer1.0-plugins-bad \
  gstreamer1.0-libav \
  ffmpeg
```

macOS (Homebrew):
Install the Xcode Command Line Tools first:
```bash
xcode-select --install
```

Then install the Homebrew dependencies (the default speech-to-text feature needs `cmake` during `cargo build`):

```bash
brew install cmake gtk4 gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-libav ffmpeg
```

Then add to your shell profile so cargo can locate the libraries:

```bash
export PKG_CONFIG_PATH="$(brew --prefix)/lib/pkgconfig:$(brew --prefix)/share/pkgconfig"
```

Three tiers targeting up to 4K source media, each with suggested UltimateSlice preference settings.
| Component | Spec |
|---|---|
| CPU | Dual-core, 2.0 GHz+ |
| RAM | 4 GB |
| GPU | Integrated graphics (Intel HD / AMD APU), 512 MB shared VRAM |
| Storage | HDD (SSD recommended for proxy cache) |
Settings: Proxy Mode → Half or Quarter, Preview Quality → Quarter, Hardware Acceleration → Off, GSK Renderer → Cairo, Playback Priority → Smooth
| Component | Spec |
|---|---|
| CPU | Quad-core, 3.0 GHz+ |
| RAM | 8 GB |
| GPU | Integrated or discrete with VA-API support, 2 GB VRAM |
| Storage | SSD |
Settings: Proxy Mode → Off, Preview Quality → Half or Auto, Hardware Acceleration → On, GSK Renderer → Auto (OpenGL), Playback Priority → Balanced
| Component | Spec |
|---|---|
| CPU | 6+ cores, 3.5 GHz+ |
| RAM | 16 GB+ |
| GPU | Discrete GPU with VA-API, 4 GB+ VRAM |
| Storage | NVMe SSD |
Settings: Proxy Mode → Off, Preview Quality → Full, Hardware Acceleration → On, GSK Renderer → Vulkan, Playback Priority → Accurate, Real-time Preview → On
Notes:
- VA-API hardware decoding supports H.264, H.265/HEVC, VP9, and AV1.
- Export uses FFmpeg (CPU-based) — more cores = faster exports.
- Flatpak includes `--device=dri` for GPU access; native installs need VA-API drivers.
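On a native install, one way to confirm the VA-API driver is working is the standard `vainfo` utility from libva-utils (a system tool, not part of UltimateSlice); it prints the active driver and its supported codec profiles:

```bash
# Prints the VA-API driver in use and its decode/encode profiles;
# H.264/HEVC/VP9/AV1 decode entries indicate hardware decoding is available.
vainfo
```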
```bash
# from the project root
cargo build
cargo run
```

To run with the MCP server enabled:

```bash
cargo run -- --mcp
```

To open a project file at startup from program arguments:

```bash
cargo run -- /path/to/project.uspxml
```

When using the MCP socket transport (running instance), you can use the Python bridge client:
```bash
python3 tools/mcp_socket_client.py
```

Optional socket override:

```bash
python3 tools/mcp_socket_client.py --socket /tmp/ultimateslice-mcp.sock
```
The client reads JSON-RPC lines from stdin and writes responses to stdout.
See docs/user/python-mcp.md for complete command examples.
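As a sketch of the newline-delimited JSON-RPC framing the bridge client reads on stdin (`tools/list` is the standard MCP discovery method; consult docs/user/python-mcp.md for the methods UltimateSlice actually exposes):

```python
import json

def jsonrpc_line(method, params=None, req_id=1):
    """Build one newline-delimited JSON-RPC 2.0 request line."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# One request per line on stdin; the client writes one response line per request.
line = jsonrpc_line("tools/list")
print(line, end="")
```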
After building, run install.sh to install the binary, desktop entry, icons, MIME type,
and AppStream metainfo to standard XDG locations:
```bash
# Install to /usr/local (default)
sudo ./install.sh

# Install to /usr (distro-style)
sudo ./install.sh --system

# User-level install (no sudo needed)
./install.sh --prefix=$HOME/.local

# Remove all installed files
sudo ./install.sh --uninstall
```

The script automatically runs `cargo build --release` if the binary is not yet built.
Run `./install.sh --help` for full usage.
A Flatpak manifest is provided at io.github.kmwallio.ultimateslice.yml.
Routine builds can use the checked-in cargo-sources.json and
onnxruntime-sources.json source mirrors as-is. The first Flatpak build takes
longer because ONNX Runtime is compiled from source inside the sandbox.
```bash
flatpak-builder --user --install --force-clean flatpak-build io.github.kmwallio.ultimateslice.yml
flatpak run io.github.kmwallio.ultimateslice
```

If `Cargo.lock` changes, regenerate the Rust vendored-source manifest before rebuilding:

```bash
python3 flatpak-cargo-generator.py Cargo.lock -o cargo-sources.json
```

`onnxruntime-sources.json` only needs to be refreshed when the pinned ONNX Runtime version or its mirrored CPU/shared-lib `cmake/deps.txt` inputs change.
- GTK4 callbacks cannot unwind panics. Avoid `RefCell` double-borrows in UI callbacks.
- The project shares a single GStreamer `playbin` for source and timeline playback.
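A minimal sketch of the borrow-scoping discipline this implies (generic Rust, not UltimateSlice's actual callback code): keep every `RefCell` borrow inside its own scope, and prefer `try_borrow_mut` in callbacks so a re-entrant call degrades gracefully instead of panicking across GTK's FFI boundary.

```rust
use std::cell::RefCell;
use std::rc::Rc;

/// Push a value through shared UI state without risking a double-borrow
/// panic: try_borrow_mut returns Err if the cell is already borrowed.
fn push_scoped(state: &Rc<RefCell<Vec<i32>>>, value: i32) -> bool {
    match state.try_borrow_mut() {
        Ok(mut v) => {
            v.push(value);
            true
        } // mutable borrow dropped here, before any further dispatch
        Err(_) => false, // already borrowed: skip rather than abort
    }
}

fn main() {
    let state = Rc::new(RefCell::new(Vec::new()));
    push_scoped(&state, 1);

    // While a shared borrow is live, a second (mutable) borrow fails
    // cleanly instead of panicking:
    let guard = state.borrow();
    assert!(!push_scoped(&state, 2));
    drop(guard);

    push_scoped(&state, 2);
    println!("{:?}", state.borrow()); // [1, 2]
}
```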
This project is licensed under the GNU General Public License v3.0.

