A privacy-focused, open-source AI chat interface designed for local, private role-play and character interaction. This application leverages your local hardware to run large language models (LLMs) for immersive character experiences.
Unlike many existing solutions, this project prioritizes a fully offline and local experience. No subscriptions, no purchases, and it's completely open source.
Important: You'll need a mid-range gaming GPU with 8GB VRAM or more!
- Role-Play Focused: Optimized for engaging and immersive character interactions.
- Character Management: Create and manage character templates, supporting imports from popular character card formats (SillyTavern, Backyard AI).
- User Templates: Define and customize multiple user profiles for diverse interactions.
- Broad GPU Support: Designed to utilize various GPUs via accelerated frameworks (CUDA, HIP, Vulkan).
- Open Source: Full transparency with an open codebase for inspection and modification.
- Privacy-Centric:
- No Data Collection: Your data stays on your machine.
- Encrypted Data: Sensitive information (chats, characters, users) is encrypted locally.
- Advanced Model Features: Support for Qwen3 Think Mode and Jinja templating.
Windows 10+ or Linux x64. GPU drivers are required. Tested and validated on:
- Windows 11 (with GPU drivers)
- Ubuntu 24.04 (with GPU drivers)
- Arch Linux (with GPU drivers)
- JavaFX: the GUI library.
- AtlantaFX: modern styling for JavaFX.
- RenEngine: a simplified framework for JavaFX.
- llama.cpp: the backend server that runs the models.
As of version 1.0.4, vision is supported in chats. When you send an image, it is added to the chat window, where you can collapse and expand it as needed. The model can only process one image at a time and will prioritize the most recently sent one.
Vulkan works out of the box: no installation is needed beyond basic graphics drivers. As of version 1.0.5, Vulkan is the default backend.
Q: What's the difference between the backends?
A: CUDA and HIP typically provide slightly better performance, but you will have to install the necessary drivers. Vulkan works for most GPUs without any third-party installations. CUDA requires an NVIDIA card; HIP requires an AMD card.
CUDA is difficult to install correctly: you must use the CUDA version that your llama.cpp build supports.
There are two ways to download and install the application. All downloads are located in releases.
- Executable: Download either the `.exe` or `.msi` installer. Launch the file to install the program. You can optionally create a start-menu and desktop shortcut.
- Zip: Download the `.zip` file and extract it. Use `run.bat` or `run_console.bat` to launch the app.
Only the GGUF format is supported. Please refer to Converting Models for instructions on converting models to GGUF.
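As a hedged sketch of that conversion (the script and tool names below come from upstream llama.cpp and may differ in your checkout; the model path is a placeholder), converting a Hugging Face model typically looks like this:

```shell
# Sketch only: assumes a llama.cpp checkout with its Python requirements
# installed, and a downloaded Hugging Face model directory (placeholder path).
MODEL_DIR=/path/to/hf-model
OUT="${MODEL_DIR##*/}-f16.gguf"   # derives e.g. hf-model-f16.gguf
# Guarded so the snippet is safe to run even where the tools are absent.
if [ -f convert_hf_to_gguf.py ]; then
  python convert_hf_to_gguf.py "$MODEL_DIR" --outfile "$OUT"
fi
# llama-quantize ships with llama.cpp release binaries.
if command -v ./llama-quantize >/dev/null 2>&1; then
  ./llama-quantize "$OUT" "${OUT%-f16.gguf}-Q4_K_M.gguf" Q4_K_M
fi
```

Quantized variants such as Q4_K_M trade a little quality for much lower VRAM use, which matters on the 8GB cards mentioned above.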
Configure the model location in the Models tab of the application. Changing it from the default is highly recommended, but not required.
If you place a model inside the directory, you will have to refresh the page (click the Models tab again).
Make sure you have a model set as the default; this is a hard requirement.
Below are instructions on how to test and compile source changes. They are written for IntelliJ but may work with other IDEs.
Ensure you have the latest release downloaded and extracted into %appdata%. Extracting the zip creates a parent folder called chat-app.
Navigate to App.java. In IntelliJ, a green arrow will appear next to the class declaration and the main() function. Click either arrow.
A configuration dialog may open; if so, set the JDK to 25. The app should then start.
Make changes to the source and test them by running the App.main() entry point.
TIP: Run configurations are pinned to the right; just press the green arrow at the top right or use Shift+F10.
Ensure you have RenEngine compiled following its instructions.
In the project directory, run `mvnw clean install`. The build produces two jar files:
- character-chat-app.jar: the executable jar.
- character-chat-app-1.0-SNAPSHOT.jar: compiled classes only.
Download the latest release; it contains the necessary backend and SDK files. Then replace the release jar file with your compiled executable jar.
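A sketch of that replacement step, assuming the release jar sits directly under `%appdata%/chat-app` (the exact destination inside the release layout is an assumption; adjust it to wherever your extracted release put the jar):

```shell
# Copy the freshly built executable jar over the release one.
# APP_DIR is an assumed location; on Windows, %appdata% is exposed as $APPDATA.
APP_DIR="${APPDATA:-$HOME/.chat-app}/chat-app"
mkdir -p "$APP_DIR"
# Guarded so the snippet is a no-op when the build has not been run yet.
if [ -f target/character-chat-app.jar ]; then
  cp target/character-chat-app.jar "$APP_DIR/character-chat-app.jar"
fi
```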
If the app does not launch check the following:
- Ensure the C++ redistributables are installed. They are required for both llama.cpp and RenEngine.
- Ensure your java compiler is set to JDK 25. Sometimes your Windows Java and IDE Java are two different things.
- Ensure you installed the latest release.
- Invalidate caches and restart.
Your application paths should be the following. Anything marked with * is required.
- %appdata%/chat-app/*
- %appdata%/chat-app/backend/*
- %appdata%/chat-app/characters/
- %appdata%/chat-app/icons/*
- %appdata%/chat-app/jdk/*
- %appdata%/chat-app/jdk/windows/*
- %appdata%/chat-app/logs/
- %appdata%/chat-app/models/*
- %appdata%/chat-app/users/
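To sanity-check that the required entries (those marked *) exist, here is a small POSIX-shell sketch; it assumes `%appdata%` is available as `$APPDATA` (with a hypothetical fallback path for non-Windows shells):

```shell
# Report any required chat-app directory that is missing.
APP_DIR="${APPDATA:-$HOME/.chat-app}/chat-app"
for d in backend icons jdk jdk/windows models; do
  if [ ! -d "$APP_DIR/$d" ]; then
    echo "missing: $APP_DIR/$d"
  fi
done
```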