# Build
How to install the toolchain, run Rast in dev mode, run the test suites, drive the bench binaries, and ship a release build.
## Prerequisites
- Rust — stable toolchain (whatever the latest is at the time of writing). Install via rustup.
- Node 20 or newer (for Vite, Svelte 5, and the Tauri CLI).
- Python 3.11+ if you want to run the reference Python tests/benches in `tests/` and `tools/`. Two venvs live side by side; see First-time setup.
- FFmpeg on PATH. Used for decoding any input format to 44100 Hz stereo f64 PCM at `src-tauri/src/import.rs:78` and for FLAC encoding when the cache is written.
- yt-dlp on PATH (optional — needed only for YouTube import). Either the standalone binary or `python -m yt_dlp` is acceptable; detection logic at `src-tauri/src/import.rs:283`.
- Tauri prerequisites — on Windows, WebView2 (preinstalled on Windows 11) and the MSVC build tools. On macOS, Xcode CLT. On Linux, the Tauri Linux deps.
- CUDA (optional) — Demucs is heavy; a CUDA-capable GPU helps. The `ort` runtime falls back to CPU automatically. Spleeter is light enough to run comfortably on CPU.
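The FFmpeg decode step can be sketched as a command line. The exact flags below are an assumption for illustration (the real invocation lives in `src-tauri/src/import.rs:78`); they are the standard ffmpeg options for producing 44100 Hz stereo f64 raw PCM on stdout:

```python
# Illustrative only: the real invocation is in src-tauri/src/import.rs:78.
# Standard ffmpeg flags for 44100 Hz stereo raw f64 PCM streamed to stdout.
def ffmpeg_decode_cmd(path: str) -> list[str]:
    return [
        "ffmpeg", "-i", path,
        "-f", "f64le",    # raw 64-bit little-endian float samples
        "-ac", "2",       # two channels (stereo)
        "-ar", "44100",   # resample to 44.1 kHz
        "pipe:1",         # stream raw PCM to stdout
    ]
```

Feeding that command to `subprocess.run(..., capture_output=True)` would yield the raw interleaved PCM bytes on stdout.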
## First-time setup
```sh
npm install
```

This pulls the Tauri CLI, Vite, Svelte, Vitest, VitePress, and the SoundTouch audio worklet.
The two Python venvs are referenced in CLAUDE.md and used for the reference Python pipeline:
- `.venv-spleeter/` — Spleeter, CREMA, librosa, madmom. Used by `tests/`, `tools/bench_*.py`, `tools/compare_*.py`. Python: `.venv-spleeter/Scripts/python.exe` on Windows.
- `.venv-onnx/` — ONNX + maturin. Used when comparing Rust output against a Python `onnxruntime` reference, and for any PyO3 wheel builds via `maturin develop`.
These are kept separate because Spleeter pulls in TensorFlow 2.x with a Python 3.11 ceiling, while the maturin/onnxruntime path is happy on a newer Python. Don't fall through to system Python — the model tooling is version-sensitive.
On Windows, set `PYTHONIOENCODING=utf-8` when running anything that prints Greek song titles (otherwise the default mbcs/cp1252 encoding turns non-Latin characters into `?`).
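A quick way to see why the override matters: cp1252 simply cannot represent Greek letters, so a lossy encode degrades them to `?` (the title below is illustrative):

```python
# cp1252 has no Greek letters, so a lossy console encoding replaces
# each one with "?"; utf-8 round-trips the same string losslessly.
title = "Μινόρε"  # illustrative Greek title
assert title.encode("cp1252", errors="replace") == b"??????"
assert title.encode("utf-8").decode("utf-8") == title
```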
The ONNX model files live in `pretrained_models/onnx/` and are tracked via Git LFS. Confirm they're populated after clone:

```sh
git lfs pull
ls pretrained_models/onnx/
# beat_this_int8.onnx, crema_chord.onnx, demucs_htdemucs.onnx,
# spleeter_vocals.onnx, spleeter_accompaniment.onnx,
# btc_chord.onnx (optional), basic_pitch_nmp.onnx (optional)
```

The model resolver lives at `src-tauri/src/jobs.rs:36`. Resolution order: the `RAST_MODELS_DIR` env var, then `~/Rast/models/`, then a walk up from the executable / cwd looking for `pretrained_models/onnx/`. Beat and CREMA are mandatory; everything else is optional and gates feature availability via `BackendAvailability` (`src-tauri/src/jobs.rs:121`).
## Running the desktop app
```sh
npm run tauri dev
```

Launches the Rust backend with Tauri's HMR-aware dev server. Equivalent to `tauri dev` against `src-tauri/tauri.conf.json`. The first build is slow (Cargo + ONNX deps); subsequent runs are fast.
For a frontend-only iteration loop without the Rust backend (useful when you're styling cards and don't need to invoke any Tauri command):
```sh
npm run dev
```

Vite serves at http://localhost:1420. Tauri commands will all reject (no backend), so this only makes sense for visual changes. Production scripts are in `package.json:6`.
## Frontend test/typecheck
```sh
npm run check        # svelte-check --tsconfig ./tsconfig.json
npm run test         # vitest run (src/tests/**/*.test.ts)
npm run test:watch   # vitest in watch mode
```

Vitest covers the pure-TS music logic in `src/lib/music/` (chord ops, beat snapping, harmonic function classification, simplification) and a few stores. See `src/tests/`.
## Rust workflow
All cargo invocations need `--manifest-path rust/Cargo.toml` because the workspace is one level down:

```sh
cargo check --manifest-path rust/Cargo.toml
cargo test --manifest-path rust/Cargo.toml
cargo clippy --manifest-path rust/Cargo.toml
cargo build --manifest-path rust/Cargo.toml --release
```

Workspace-level clippy lints in `rust/Cargo.toml:9` allow `needless_range_loop`, `too_many_arguments`, `type_complexity`, and `new_ret_no_self` — the DSP modules use explicit indexing and have legitimately wide signatures.
Unit tests live next to the code they test. Notable: `rust/rast-theory/src/scales.rs:177` (scale-table sanity), `rust/rast-theory/src/chords.rs:198` (label normalisation, transposition, practice-quality bucketing), `rust/rast-theory/src/enharmonic.rs:182` (per-scale spelling for every (root, scale) pair).
## Bench binaries
Two CLI binaries live in the workspace for end-to-end and key-only profiling.
### bench-pipeline
Runs the full pipeline (decode → separate → beats || chords || notes → key → chroma similarity) against a single audio file and prints per-stage timings. Source at `rust/bench-pipeline/src/main.rs:26`.
```sh
cargo build --manifest-path rust/Cargo.toml --release -p bench-pipeline
./rust/target/release/bench-pipeline \
  --file <audio> \
  --backend spleeter \
  --chord-backend crema \
  --models-dir pretrained_models \
  --dump
```

Flags:

- `--backend demucs|spleeter` — separation backend. Default `demucs`.
- `--chord-backend crema|btc` — chord inference. Default `crema`. BTC requires `btc_chord.onnx`.
- `--models-dir <dir>` — accepts either `pretrained_models` or `pretrained_models/onnx` directly. Falls back to `RAST_MODELS_DIR`, then a directory walk.
- `--dump` — write a JSON dump of the result alongside the timing summary.

The first positional argument is treated as the audio path if `--file` is omitted.
### bench-key-from-chords
Re-runs only `detect_key_refined` (`rust/rast-analysis/src/key_detection.rs:1395`) over a precomputed chord JSON plus the original audio. Useful when iterating on the disambiguation rules without paying for separation each time. Source at `rust/bench-key-from-chords/src/main.rs:24`.
```sh
cargo build --manifest-path rust/Cargo.toml --release -p bench-key-from-chords
./rust/target/release/bench-key-from-chords \
  --audio-file <audio.flac> \
  --chords-json <chords.json> \
  [--beats-json <beats.json>]
```

The chords JSON is an array of `{ "start": <sec>, "end": <sec>, "chord": "<label>" }`. When `--beats-json` is supplied, chords are re-snapped to that beat grid; otherwise the input timing is taken as-is.
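A minimal chords JSON in that shape can be produced like so (the labels and timings are illustrative):

```python
import json

# Minimal input in the documented shape:
# an array of { "start": <sec>, "end": <sec>, "chord": "<label>" }.
chords = [
    {"start": 0.0,  "end": 1.92, "chord": "Am"},
    {"start": 1.92, "end": 3.84, "chord": "F"},
    {"start": 3.84, "end": 5.76, "chord": "G"},
]
with open("chords.json", "w", encoding="utf-8") as f:
    json.dump(chords, f, indent=2)
```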
## Logging
The logging stack is `tracing` + `tracing-subscriber` + `tracing-appender` (non-blocking file writer). Initialised in `src-tauri/src/lib.rs:32`.
- Filter — the `RAST_LOG` env var, falling back to `RUST_LOG`, falling back to `info,ort=warn,ort::logging=warn`. The `ort=warn` clause silences ONNX Runtime's "Bias is not present" chatter.
- File — `~/Rast/logs/rast.log`. Single per-run file; truncated on startup so each launch starts fresh. Path resolved via `rust/rast-core/src/paths.rs:26`.
- Frontend → backend bridge — Svelte's `logger.ts` calls the `log_from_frontend` Tauri command (`src-tauri/src/log_bridge.rs:7`), so frontend events land in the same file.
To make ONNX itself chatty for a session (e.g. when a model is loading slowly):
```sh
RAST_LOG=info,ort=info npm run tauri dev
```

To see only Rast's own analysis tracing without ONNX:

```sh
RAST_LOG=rast_analysis=debug,rast_core=debug,ort=warn npm run tauri dev
```

## Building a release
```sh
npm run tauri build
```

This runs `vite build` (output to `dist/`), then `cargo build --release` for the Tauri shell, then bundles installers for the host OS into `src-tauri/target/release/bundle/`. Window config (1100×750, frameless, dark theme) and bundle metadata are in `src-tauri/tauri.conf.json`.
The bundled binary will look up models in:

- `RAST_MODELS_DIR` if set,
- `~/Rast/models/`,
- a walk up from the install dir for `pretrained_models/onnx/`.

For a self-contained installer, copy `pretrained_models/` next to the binary or pre-populate `~/Rast/models/`.
## Common debugging patterns
"Where did that error go?" Tail the log file. On Windows: `Get-Content $env:USERPROFILE\Rast\logs\rast.log -Wait`. On Unix: `tail -f ~/Rast/logs/rast.log`. The frontend sends its log lines through `log_from_frontend`, so you don't need to also watch the WebView devtools.

"Why is reprocess refusing?" Check the song's status column. Anything in `importing|separating|analyzing` is treated as in-flight. Startup sweeps these to `error` (`src-tauri/src/lib.rs:92`), but a still-running job will still be locked by `ActiveJobs`. The valid statuses are `importing|separating|analyzing|ready|error` (`rust/rast-db/src/lib.rs:20`).
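The gate boils down to a status check. A sketch of the rule as described, not the actual backend code:

```python
# Illustrative sketch of the reprocess gate; the real check lives in the
# Rust backend. Statuses are those listed in rust/rast-db/src/lib.rs:20.
IN_FLIGHT = {"importing", "separating", "analyzing"}
VALID = IN_FLIGHT | {"ready", "error"}

def can_reprocess(status: str) -> bool:
    assert status in VALID, f"unknown status: {status}"
    return status not in IN_FLIGHT
```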
"Verbose ONNX logs." Set `RAST_LOG=info,ort=info` (or `ort=debug`). The default filter silences `ort` at warn level.

"Reset a single song's cache." Delete `~/Rast/cache/<hash>/`. The next reprocess with scope = "stems" will rebuild stems and `beat_chroma.bin`. Note: `original.flac` is required — if you remove it, you must re-import the song. The reprocess command refuses without it (`src-tauri/src/reprocess.rs:264`).

"Wipe everything." Delete `~/Rast/`. The next launch starts from a clean DB and empty cache. Note: a one-time legacy migration in `rust/rast-core/src/paths.rs:36` will rename `~/Usak/` to `~/Rast/` if the old dir exists — only relevant if you're upgrading from before the rebrand.

"List installed backends." Run the dev app and the import dialog will hide unavailable backends; under the hood that's the `get_backend_availability` command (`src-tauri/src/reprocess.rs:521`), which probes the ONNX file paths from `build_model_paths` (`src-tauri/src/jobs.rs:81`).

"My yt-dlp install isn't being detected." `YtDlp::detect` (`src-tauri/src/import.rs:283`) probes `yt-dlp --version` first, then `python -m yt_dlp --version`. Make sure either is on PATH for the shell that launched the app — Tauri inherits PATH from its launching environment.
## Documentation site
This documentation is built with VitePress. All Markdown lives under `docs/`; configuration in `docs/.vitepress/config.ts`.
```sh
npm run docs:dev       # local dev server with HMR (default http://localhost:5173)
npm run docs:build     # static build into docs/.vitepress/dist/
npm run docs:preview   # serve the built output for a final check
```

The dev server resolves links and renders pages incrementally. The build is strict — it fails on dead internal links, so any `[text](/guide/foo)` that does not resolve to an existing page will block the build. Use site-absolute paths (`/guide/playback`) rather than relative paths so the same link works from any nesting depth.
Static assets (screenshots, diagrams) go in `docs/public/`; reference them from Markdown with a leading slash (`/screenshots/foo.svg`). The build copies `public/` straight into the output root.

The `dist/` and `cache/` folders inside `docs/.vitepress/` are gitignored.
## Reference Python tests
Run these with the Spleeter venv (they document the reference behaviour, not the live Rust pipeline):
```sh
.venv-spleeter/Scripts/pytest.exe tests/ -x -q
.venv-spleeter/Scripts/pytest.exe tests/test_key_detection.py -v  # single file
PYTHONIOENCODING=utf-8 .venv-spleeter/Scripts/python.exe \
    tools/bench_crema_vs_btc.py --songs 1
```

Slow model-inference tests are excluded by default; opt in with `-m slow`.