VibeTranscribe.org
Updated: May 2026 · 9 min read

Whisper models for Vibe Transcribe

Vibe uses Whisper models of different sizes. Bigger models are usually more accurate but slower, and they need more RAM and disk space. Pick the model that matches your audio difficulty and your hardware.

Simple recommendations

  • Most people: Small — great everyday balance.
  • Strong PC + GPU: Large v3 Turbo — fast and still very capable.
  • Maximum accuracy priority: Large v3 — plan time and hardware headroom.
  • Older hardware: Tiny or Base — expect more mistakes on hard audio.
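The recommendations above can be sketched as a simple decision rule. This is an illustrative helper only; the function name, RAM thresholds, and model identifiers are assumptions for the example, not part of Vibe's actual API.

```python
# Hypothetical chooser mirroring the bullet list above.
# Thresholds and model names are assumptions, not Vibe's API.
def recommend_model(ram_gb: float, has_gpu: bool, max_accuracy: bool = False) -> str:
    if max_accuracy and has_gpu:
        return "large-v3"        # maximum accuracy: plan time and headroom
    if has_gpu and ram_gb >= 6:
        return "large-v3-turbo"  # strong PC + GPU: fast and capable
    if ram_gb >= 2:
        return "small"           # most people: great everyday balance
    return "tiny"                # older hardware: expect more mistakes

print(recommend_model(ram_gb=16, has_gpu=False))  # → small
```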

Comparison table

Model          | Size   | RAM    | WER (English) | CPU speed     | GPU speed
Tiny           | 75 MB  | ~1 GB  | ~15%          | 32× realtime  | 100×+
Base           | 142 MB | ~1 GB  | ~10%          | 16× realtime  | 60×+
Small          | 466 MB | ~2 GB  | ~5–8%         | 6× realtime   | 25×+
Medium         | 1.5 GB | ~5 GB  | ~3–5%         | 2× realtime   | 10×+
Large v2       | 2.9 GB | ~10 GB | ~2.7%         | 0.5× realtime | 3×+
Large v3       | 2.9 GB | ~10 GB | ~2.4%         | 0.5× realtime | 3×+
Large v3 Turbo | 1.6 GB | ~6 GB  | ~3%           | 1× realtime   | 8×+

WER means word error rate on clean English speech; lower is better. "× realtime" is a rough measure of throughput; real results depend on your file, settings, and hardware.
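The "× realtime" figures translate directly into wall-clock estimates: divide the audio length by the speed factor. A quick sketch of that arithmetic, using the table's CPU figures (illustrative only; real speed varies by file and hardware):

```python
# Estimate wall-clock transcription time from a "× realtime" speed factor.
def transcribe_minutes(audio_minutes: float, speed_x: float) -> float:
    return audio_minutes / speed_x

# A 60-minute file on Small at ~6× realtime (CPU):
print(transcribe_minutes(60, 6))    # → 10.0 minutes
# The same file on Large v3 at ~0.5× realtime (CPU):
print(transcribe_minutes(60, 0.5))  # → 120.0 minutes
```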

Where models are stored

  • Windows: %APPDATA%\com.thewh1teagle.vibe\models
  • macOS: ~/Library/Application Support/com.thewh1teagle.vibe/models
  • Linux: ~/.config/com.thewh1teagle.vibe/models
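For scripting, the three platform paths above can be resolved in one place. This is a sketch, not an official Vibe utility; the folder name `com.thewh1teagle.vibe` is taken from the paths listed above.

```python
import os
import platform
from pathlib import Path

# Resolve Vibe's model folder per the platform paths listed above.
# Unofficial sketch; assumes the default location (it can be changed in Settings).
def vibe_models_dir() -> Path:
    system = platform.system()
    if system == "Windows":
        return Path(os.environ["APPDATA"]) / "com.thewh1teagle.vibe" / "models"
    if system == "Darwin":  # macOS
        return (Path.home() / "Library" / "Application Support"
                / "com.thewh1teagle.vibe" / "models")
    return Path.home() / ".config" / "com.thewh1teagle.vibe" / "models"

# List downloaded model files, if the folder exists:
models = vibe_models_dir()
if models.is_dir():
    for f in sorted(models.iterdir()):
        print(f.name)
```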


Frequently asked questions

Which Whisper model should I use in Vibe Transcribe?

Start with the Small model (466MB) for everyday use — it's the best balance of speed and accuracy. Upgrade to Medium if you regularly transcribe non-English audio or interviews with multiple speakers. Only use Large if maximum accuracy is critical and you have a GPU.

What is the difference between Whisper tiny, base, small, medium, and large?

The models range from Tiny (39M parameters, 75MB, fastest but least accurate) to Large (1550M parameters, 2.9GB, slowest but most accurate). Each larger model improves accuracy by approximately 30–40% over the previous size while using 2–6x more RAM and processing time.

Is there a Whisper Large v3 model in Vibe?

Yes. Vibe supports Whisper Large v3 (the latest official OpenAI release) and Large v3 Turbo — a distilled version that's 6x faster than Large v3 with only a small accuracy trade-off. Large v3 Turbo is the best choice for users who want near-Large accuracy with Medium-level speed.

Can I use multiple Whisper models in Vibe at the same time?

You can download multiple models and switch between them, but only one model is loaded into memory at a time. Switching models requires a brief reload. Each model is stored separately in your Vibe data folder.
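The single-model-in-memory behavior described above amounts to a one-slot cache: switching names drops the old weights and loads the new ones. A minimal illustrative sketch (the class and the string stand-in for weights are invented for this example, not Vibe's actual code):

```python
# Illustrative one-slot model holder: only one model in memory at a time,
# and switching triggers a reload. Not Vibe's real implementation.
class SingleModelSlot:
    def __init__(self):
        self.name = None
        self.model = None

    def switch(self, name: str):
        if name == self.name:
            return self.model               # already loaded, no reload
        self.model = f"<weights for {name}>"  # stand-in for a real load
        self.name = name                    # previous model is dropped here
        return self.model

slot = SingleModelSlot()
slot.switch("small")
slot.switch("medium")  # brief reload: small is released, medium loads
print(slot.name)       # → medium
```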

Where does Vibe store downloaded Whisper models?

Windows: %APPDATA%\com.thewh1teagle.vibe\models. macOS: ~/Library/Application Support/com.thewh1teagle.vibe/models. Linux: ~/.config/com.thewh1teagle.vibe/models. You can also change the model storage path in Settings.