Aura 🔮

A stunning, fully local conversational voice AI interface.
Powered by your own local Language Models (via LM Studio) and high-fidelity speech processing (via MLX Audio).


✨ Features

  • 100% Local Pipeline: Say goodbye to cloud dependencies. Aura runs entirely on your machine, ensuring complete privacy and zero API costs.
  • Intelligent Voice Activity Detection (VAD): Speak naturally. The system automatically detects when you stop talking and processes your turn.
  • Barge-in Support: Interrupt the AI at any time. If the AI is speaking and you start talking, it immediately stops and listens to you.
  • Dynamic Settings UI: Configure your AI pipeline directly from the browser. Choose your active MLX Speech-to-Text (e.g., Qwen3 ASR) and Text-to-Speech models dynamically.
  • Universal LLM Compatibility: Point Aura to any OpenAI-compatible API endpoint (like LM Studio, Ollama, or vLLM) on your local network. Proxied securely to bypass browser CORS restrictions.
  • Stunning UI: A dark, minimalist, and responsive Next.js interface featuring a dynamic "Orb" that visualizes listening, processing, and speaking states.

🚀 Architecture

Aura orchestrates a seamless three-step pipeline:

  1. Listen & Transcribe: Captures pristine WAV audio from your microphone and sends it to your local MLX Audio server for Speech-to-Text inference.
  2. Think: The transcript is passed to your configured local LLM (e.g., LM Studio on port 1234) for answer generation.
  3. Speak: The resulting text is passed back down to MLX Audio for Text-to-Speech generation, and immediately played back through your speakers.
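Step 1's "pristine WAV" capture comes down to converting the browser's Float32 microphone samples into 16-bit PCM before wrapping them in a WAV container. A minimal sketch of that conversion, assuming Web Audio style input (the function name is illustrative, not Aura's actual code):

```typescript
// Convert Web Audio Float32 samples (range [-1, 1]) into the signed
// 16-bit PCM format used inside a standard WAV payload.
// Illustrative sketch only; Aura's real capture code may differ.
function floatTo16BitPCM(samples: Float32Array): Int16Array {
  const pcm = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    // Clamp out-of-range samples, then scale to the signed 16-bit range.
    const s = Math.max(-1, Math.min(1, samples[i]));
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return pcm;
}
```

The asymmetric scaling (32768 for negative values, 32767 for positive) keeps both extremes representable without integer overflow.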

🛠️ Installation & Setup

1. Prerequisites

You need two local API servers running to power Aura:

  • Speech Server (MLX Audio): Start your MLX Audio server on port 8000. Make sure you have downloaded an ASR model and a TTS model (e.g., Qwen3 8-bit).
  • LLM Server (LM Studio / Ollama): Start your preferred local LLM server with its OpenAI-compatible endpoint enabled (usually http://127.0.0.1:1234/v1/chat/completions).
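The "Think" step of the pipeline is a standard chat-completions request against that OpenAI-compatible endpoint. A minimal sketch of the request body it would send (the model name and system prompt are placeholders, not Aura's actual values):

```typescript
// Shape of messages in an OpenAI-compatible chat-completions request,
// as accepted by LM Studio, Ollama, or vLLM.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the JSON body for POSTing to /v1/chat/completions.
// Model name and system prompt below are illustrative placeholders.
function buildChatRequest(transcript: string, history: ChatMessage[] = []) {
  return {
    model: "local-model", // many local servers simply use the loaded model
    messages: [
      { role: "system", content: "You are a helpful voice assistant." },
      ...history,
      { role: "user", content: transcript },
    ] as ChatMessage[],
    stream: false,
  };
}
```

The transcript from the ASR step slots in as the final user message, with any prior turns carried along as history.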

2. Clone & Install

git clone https://github.com/junainfinity/aura.git
cd aura
npm install

3. Run the App

npm run dev

The application will start on http://localhost:3086.

⚙️ Configuration

  1. Open http://localhost:3086 in your browser.
  2. Click the Gear icon in the bottom right corner to open the Settings pane.
  3. Aura will automatically fetch the loaded models from your MLX server. Select your desired ASR and TTS models.
  4. Paste the URL to your local LLM API endpoint.
  5. Hit Save changes and start talking to the Orb!
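Under the hood, the Settings pane only needs to persist three values. A hypothetical sketch of that shape, plus a basic sanity check for the pasted endpoint URL (the field names are illustrative, not Aura's actual schema):

```typescript
// Hypothetical settings shape; field names are illustrative only.
interface AuraSettings {
  asrModel: string;    // selected MLX Speech-to-Text model
  ttsModel: string;    // selected Text-to-Speech model
  llmEndpoint: string; // OpenAI-compatible chat-completions URL
}

// Sanity-check the endpoint before saving: it must parse as an
// http(s) URL, e.g. http://127.0.0.1:1234/v1/chat/completions.
function isValidEndpoint(url: string): boolean {
  try {
    const u = new URL(url);
    return u.protocol === "http:" || u.protocol === "https:";
  } catch {
    return false;
  }
}
```

Validating before save gives immediate feedback in the UI instead of a failed request on the first spoken turn.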

Built with Next.js, React, Tailwind CSS, Framer Motion, and local AI magic.
