Most audio libraries expose samples as raw numeric buffers. In Python,
audio is typically represented as a NumPy array whose dtype is
explicit, but whose meaning is not: sample rate, channel layout,
amplitude range, memory interleaving, and PCM versus floating-point
semantics are tracked externally, if at all. In Rust, the situation is
reversed but not resolved. Libraries provide fast and safe low-level
primitives, yet users are still responsible for managing raw buffers,
writing ad hoc conversion code, and manually preserving invariants
across crates.
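As a minimal illustration of that gap, in plain Rust and without any particular crate: the buffer type records none of the audio semantics, so every invariant has to travel alongside it by convention.

fn main() {
    // A raw interleaved stereo buffer: nothing in the type records the sample
    // rate, channel count, interleaving order, or PCM vs. floating-point semantics.
    let samples: Vec<f32> = vec![0.0; 44_100 * 2];

    // The semantics live in separate variables that must be kept in sync by hand.
    let sample_rate: u32 = 44_100;
    let channels: usize = 2;

    // Every consumer has to be told (or guess) the conventions: frames are
    // interleaved as [L, R, L, R, ...], amplitudes stay within [-1.0, 1.0],
    // and any resampling or channel change must update the metadata manually.
    let n_frames = samples.len() / channels;
    assert_eq!(n_frames, sample_rate as usize); // "one second of audio", by convention only
}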
AudioSamples is designed to close this gap by providing a strongly typed audio representation that makes audio semantics explicit and enforces them by construction. Sample format, numeric domain, channel structure, and layout are encoded in the type system, and all operations preserve or explicitly update these invariants.
The result is an API that supports both exploratory workflows and reliable system-level use, without requiring users to remember hidden conventions or reimplement common audio logic.
AudioSamples is the core data and processing layer of a broader family of audio crates. It defines the canonical audio object and the operations that act upon it.
Other crates that build on this foundation:
- audio_samples_io for decoding and encoding audio containers into typed audio objects
- audio_samples_playback for device-level output
- audio_samples_python for Python bindings, enabling AudioSamples to act as a type-safe backend for Python workflows
- html_view for lightweight visualisation and inspection, generating self-contained HTML outputs suitable for analysis and reporting
NOTE: The crate is still a work in progress, so some features, particularly plotting and serialization, are not yet complete.
cargo add audio_samples

See the Features section below for more details.
This example generates a sine wave in a target sample format, converts it to floating-point samples, and mixes it with a second signal.
use audio_samples::{
AudioProcessing, AudioTypeConversion, cosine_wave, operations::types::NormalizationMethod,
sine_wave,
};
use std::time::Duration;
fn main() {
let sample_rate = 44_100;
let duration = Duration::from_secs_f64(1.0);
let frequency = 440.0;
let amplitude = 0.5;
// Generate a sine wave with i16 output samples.
// The waveform is computed in f32 and converted into i16.
let pcm_sine = sine_wave::<i16, f32>(frequency, duration, sample_rate, amplitude);
// Convert to floating-point representation
let float_sine = pcm_sine.to_format::<f32>();
// Generate a second signal directly as floating-point samples
let cosine = cosine_wave::<f32, f32>(frequency / 2.0, duration, sample_rate, amplitude);
// Mix the two signals
let mixed = (float_sine + cosine).normalize(-1.0, 1.0, NormalizationMethod::MinMax);
}

AudioSamples supports spectral and time–frequency transforms via the
AudioTransforms trait, enabled by the spectral-analysis feature.
These operations produce standard frequency-domain and
time–frequency representations used in audio analysis and research.
Enable the feature:
cargo add audio_samples --features spectral-analysis
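The methods of the AudioTransforms trait are not reproduced here; the sketch below only generates a signal using the API from the quick start and marks, as a commented-out hypothetical call, where a time–frequency transform would go. The method name and parameters in that comment are assumptions for illustration, not confirmed signatures; see the AudioTransforms documentation on docs.rs for the real API.

use audio_samples::sine_wave;
use std::time::Duration;

fn main() {
    // Grounded in the quick-start example: one second of a 440 Hz tone as f32 samples.
    let signal = sine_wave::<f32, f32>(440.0, Duration::from_secs_f64(1.0), 44_100, 0.5);

    // Hypothetical use of the AudioTransforms trait (names assumed, check docs.rs):
    // use audio_samples::AudioTransforms;
    // let spectrogram = signal.stft(1024, 512);
    let _ = signal;
}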
Available feature flags:

- statistics
- processing
- editing
- channels
- fft
- resampling
- serialization
- plotting
- spectral-analysis
- beat-detection (requires spectral-analysis)
- static-plots (PNG output)
- parallel-processing
- simd (nightly only)
- mkl
- fixed-size-audio
- formatting
- random-generation
- utilities-full
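Multiple features can be enabled in one step with cargo add; for example, to enable beat detection along with the spectral-analysis feature it requires:

cargo add audio_samples --features spectral-analysis,beat-detection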
Full API documentation is available at https://docs.rs/audio_samples.
A range of examples is included in the repository.
Additional demos include:
- DTMF encoder and decoder
- Basic synthesis examples
- Audio inspection utilities
These additional demos live in their own repositories because they depend on both audio_samples and audio_samples_io.
audio_samples_io is the I/O extension of the audio_samples crate, providing audio file I/O utilities and helpers.
Device-level playback built on AudioSamples.
Python bindings exposing AudioSamples, AudioIO and AudioPlayback.
A lightweight, cross-platform HTML viewer for Rust.
html_view provides a minimal, ergonomic API for rendering HTML content in a native window, similar in spirit to matplotlib.pyplot.show(): it is aimed at visualisation rather than UI development.
A zero-heap, no_std friendly, const-first implementation of the standard DTMF (Dual-Tone Multi-Frequency) keypad used in telephony systems.
This crate provides compile-time safe mappings between keypad keys and their canonical low/high frequencies, along with runtime helpers for practical audio processing.
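The dtmf crate's own API is not shown here, but as an illustration of how a key's canonical low/high frequency pair becomes audio, the following sketch uses only the audio_samples generators from the quick-start example above. The 697 Hz / 1209 Hz pair is the standard DTMF encoding of the key '1'; everything else is grounded in that earlier example.

use audio_samples::{
    AudioProcessing, AudioTypeConversion, operations::types::NormalizationMethod, sine_wave,
};
use std::time::Duration;

fn main() {
    // Trait imports above are copied from the quick-start example.
    let sample_rate = 44_100;
    let duration = Duration::from_secs_f64(0.2); // a short tone burst

    // Standard DTMF frequencies for the key '1': low group 697 Hz, high group 1209 Hz.
    let low = sine_wave::<f32, f32>(697.0, duration, sample_rate, 0.5);
    let high = sine_wave::<f32, f32>(1209.0, duration, sample_rate, 0.5);

    // Mix the pair and keep the result within [-1.0, 1.0], as in the quick start.
    let digit_one = (low + high).normalize(-1.0, 1.0, NormalizationMethod::MinMax);

    // digit_one could then be played via audio_samples_playback or encoded
    // via audio_samples_io.
    let _ = digit_one;
}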
i24 provides a 24-bit signed integer type for Rust, filling the gap between i16 and i32. This type is particularly useful in audio processing, certain embedded systems, and other scenarios where 24-bit precision is required but 32 bits would be excessive.
MIT License
Contributions are welcome. Please submit a pull request and see CONTRIBUTING.md for guidance.
