Source code for the JEPA models used in "Self-Supervised Representation Learning with a JEPA Framework for Multi-Instrument Music Transcription" (WASPAA 2025).
This repository builds on top of the original i-JEPA codebase, which can be found here.
- Pretrain a JEPA model on unlabeled data:

  python main_baseline.py --config configs/mir_jepa_pretrain.yaml --devices cuda:0

- Finetune a JEPA model on labeled data:

  python main_finetune_baseline.py --config configs/mir_jepa_finetune.yaml --devices cuda:0

- Train the transcriber probe on frozen JEPA features:

  python main_transcriber.py --config configs/mir_jepa_transcriber.yaml --devices cuda:0
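The three entry scripts above share the same `--config`/`--devices` command-line interface. A minimal sketch of how that interface might be parsed with `argparse` (the flag names come from the commands above; the parsing code itself is an assumption for illustration, not the repository's actual implementation):

```python
import argparse

def parse_cli(argv=None):
    # Shared CLI shape of the entry scripts (sketch, not the repo's code).
    parser = argparse.ArgumentParser(description="JEPA training entry point")
    parser.add_argument("--config", required=True,
                        help="Path to a YAML config, e.g. configs/mir_jepa_pretrain.yaml")
    parser.add_argument("--devices", nargs="+", default=["cuda:0"],
                        help="One or more device identifiers, e.g. cuda:0")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_cli(["--config", "configs/mir_jepa_pretrain.yaml",
                      "--devices", "cuda:0"])
    print(args.config, args.devices)
```

Each script would then load the YAML file named by `--config` and dispatch training to the listed devices.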