Blog · 30 March 2026 · 11 min read

Dictating code on a Mac: a practical guide for Cursor, VS Code, and JetBrains

Dictating code is finally a real workflow in 2026. Here is the setup that actually works for Cursor, VS Code and JetBrains on Apple Silicon — including the custom-vocabulary trick that fixes 90% of the friction.

TL;DR: Dictating code on a Mac is finally a real workflow in 2026. The pieces you need: a low-latency dictation app (under 200ms), a custom dictionary loaded with your jargon, and a few small habits around when to use voice and when not to. This guide covers Cursor, VS Code and JetBrains setups, plus the four common mistakes we see from devs trying voice for the first time.

The latency threshold that makes voice usable for code

Voice dictation works for prose at any latency under about 600ms. It works for code at much tighter tolerances — under 200ms in our experience. The reason is that code is denser per word, you backtrack more often, and the tab-complete loop in modern IDEs runs at human-perceivable speed. If your dictation tool stalls mid-line, you fall out of flow with Cursor’s autocomplete in a way you don’t with email.

We measure latency carefully because of this. Yapper sits at 142ms median on Apple Silicon (see our latency benchmark). That’s the only reason any of the workflows below feel like an upgrade.

What dictating code actually looks like

It’s not what you’d expect. Almost nobody dictates raw code letter-by-letter. The useful workflow is:

  • Comments and docstrings. This is where roughly 80% of voice-coding time goes. Talking through a function as you write it produces dramatically better comments than typing them.
  • Commit messages and PR descriptions. Long-form context flows more naturally in voice than on the keyboard.
  • AI prompts inside the editor. Cursor’s chat panel, VS Code’s Copilot Chat, JetBrains AI Assistant — all benefit massively from voice. You spend less time composing the prompt and more time shaping the answer.
  • Variable names you’ve already settled on. With a good custom dictionary, things like useAuthContext or handleStripeWebhook can be dictated cleanly.

What you don’t do: dictate raw syntax. for (let i = 0; i < arr.length; i++) is faster to type than to speak.

Setup: Cursor

  1. Install Yapper for macOS. Grant microphone access on first launch.
  2. Open Cursor, then in Yapper’s settings add Cursor to the “always-on” app list. This switches the cleanup prompt to a code-aware variant that preserves indentation, camelCase, and code-fence behaviour.
  3. Build your custom dictionary. Run git ls-files | grep -E '\.tsx?$' in your repo, scan for variable and function names you use repeatedly, and add them to Yapper’s dictionary. 30 terms is enough for most projects to halve the error rate.
  4. For Cursor’s chat panel, just hold the dictation key and talk. The cleanup pass leaves prompts as-is.
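The dictionary-mining step can be sketched as a single pipeline that pulls the most frequent camelCase identifiers out of your TypeScript files as dictionary candidates. The regex and the top-30 cutoff here are illustrative choices, not anything Yapper ships:

```shell
# Run inside your repo: list the 30 most frequent camelCase identifiers
# in .ts/.tsx files as candidates for the custom dictionary.
git ls-files | grep -E '\.tsx?$' \
  | xargs cat 2>/dev/null \
  | grep -oE '[a-z]+[A-Z][A-Za-z0-9]*' \
  | sort | uniq -c | sort -rn | head -30
```

Skim the output and add only the names you actually say out loud; a counter variable you never speak doesn’t need an entry.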

Setup: VS Code

Identical to Cursor, with one extra step: install the “Yapper VS Code companion” extension if you want cursor-position-aware injection (without it, Yapper falls back to Accessibility-API injection, which works fine for 95% of cases).

For Copilot Chat: voice is dramatically better than typing. The average dev composes a Copilot prompt at ~20 wpm and dictates at ~150 wpm. That ~7.5x speed-up applies to the slowest part of using AI in the editor.
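The back-of-envelope math on those figures, for the skeptical (the wpm numbers are the rough averages quoted above, not a benchmark):

```shell
# Prompt-composition speed-up implied by ~150 wpm dictated vs ~20 wpm typed.
awk 'BEGIN { printf "%.1fx speed-up\n", 150 / 20 }'
# → 7.5x speed-up
```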

Setup: JetBrains (IntelliJ, WebStorm, RubyMine, GoLand…)

JetBrains IDEs accept Accessibility-API text injection out of the box, so no extension is needed. Add the IDE to Yapper’s always-on list to get the code-aware cleanup prompt. The AI Assistant chat field works the same as Cursor and VS Code.

One quirk: JetBrains’ live templates absorb the Tab key, so if you’ve mapped a chord to dictation, keep Tab out of it.

Custom vocabulary: the single highest-leverage trick

The default Whisper model has never seen useUserPreferences. Out of the box it will transcribe it as “use user preferences” or sometimes worse. Add it to the dictionary, with the canonical casing, and Yapper will preserve it.

Good dictionary entries:

  • Project nouns: Yapper, StripeWebhook, AuthContext.
  • Common abbreviations you say out loud: API, JWT, UUID.
  • People’s names if they show up in commits or PRs.
  • Library names you reach for often: tRPC, Drizzle, Tauri.
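A dictionary also goes stale as code gets renamed. One way to keep it honest, as a sketch: flag entries that no longer appear verbatim in the repo. This assumes your terms live in a terms.txt with one term per line and your source sits under src/; adjust both paths to taste:

```shell
# Flag dictionary terms that no longer appear verbatim (exact casing)
# anywhere under src/, so stale entries can be pruned.
while read -r term; do
  grep -rqF -- "$term" src/ || echo "unused: $term"
done < terms.txt
```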

Voice commands worth learning

  • “new line” — inserts a literal newline.
  • “triple backtick” — opens a code fence in chat.
  • “snake case X” — converts on the fly.
  • “camel case X” — same.
  • “all caps” — flips the next phrase to upper case.
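To make the case commands concrete, here is roughly the text each one produces, mimicked in plain shell. This only illustrates the output; Yapper’s own transform is not shown here:

```shell
# What "snake case stripe webhook handler" should land as:
echo "Stripe Webhook Handler" | tr 'A-Z' 'a-z' | tr ' ' '_'
# → stripe_webhook_handler

# What "camel case stripe webhook handler" should land as:
echo "stripe webhook handler" \
  | awk '{ out=$1; for (i=2; i<=NF; i++) out = out toupper(substr($i,1,1)) substr($i,2); print out }'
# → stripeWebhookHandler
```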

Four common mistakes

Trying to dictate syntax

Dictate the intent, then let the editor or the AI assistant produce the syntax. Voice → comment → tab-complete is the loop.

Not loading a custom dictionary

First-week users who skip the dictionary often conclude voice coding doesn’t work. It does — but only after you spend ten minutes adding the 30-50 terms specific to your project.

Using a slow tool

At 600ms+ latency, voice coding is more frustrating than helpful. At 150ms it disappears into the background. The gap between those two numbers is the difference between adoption and abandonment.

Trying to use it for everything

Voice is great for comments, prompts, and prose. It is not great for changing a single character, navigating a file, or editing existing code. Use the right tool for the job; the keyboard isn’t going anywhere.

The 30-day workflow that actually sticks

  1. Week 1. Use voice only for comments and Cursor chat prompts. Don’t try to dictate code.
  2. Week 2. Build out your custom dictionary as you notice errors. Add a term every time you have to fix one.
  3. Week 3. Move commit messages and PR descriptions to voice.
  4. Week 4. Try voice for variable-naming inside tab-complete loops. By now your dictionary is rich enough that most names land correctly.

After that, the keyboard stops being your default and becomes one of two equally valid input modes. Most heavy users settle around 70% voice, 30% keyboard.

Frequently asked questions

Does voice coding work in vim/Emacs?

Yes, but the cleanup pass is unhelpful in modal editors. Turn it off in Yapper’s per-app settings for your terminal. Use voice only in insert mode.

What about pair programming over Zoom?

Yapper’s Both Sides mode can capture the call audio alongside your dictation, which makes a searchable record of pairing sessions. Useful for post-mortems and onboarding.

How accurate is it on technical jargon?

Out of the box, Whisper-v3-turbo gets ~92% on technical English (worse than the 96% it manages on conversational English). With a loaded custom dictionary, that climbs to 97-98%. See our accuracy benchmark.

Is there a free way to try this?

Yes — Yapper’s free tier gives you 2,500 words on signup. Download for macOS.


Want to try the fastest dictation tool we measured? Download Yapper for macOS — 2,500 free words, no card. Or read the next post: /blog.

Calculate your savings.

Plug in how many words you write a day and how much your hour is worth. We’ll show you what Yapper pays back. Assumes typing at 45 wpm vs yapping at 220 wpm. Your mileage will, charmingly, vary.

At the default inputs, the calculator shows $16,130/yr saved, 215 hours back, and 730k words shipped.
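The hours figure follows directly from the stated wpm assumptions and the 730k-word total. The dollar figure additionally implies an hourly rate the widget doesn’t print; roughly $75/hr reproduces it, used below purely as an assumption:

```shell
# Reproduce the calculator's default outputs from its stated assumptions:
# 730k words/yr, typed at 45 wpm vs dictated at 220 wpm, valued at an
# assumed ~$75/hr (inferred, not stated on the page).
awk 'BEGIN {
  words = 730000
  saved_hours = (words / 45 - words / 220) / 60
  printf "%.0f hours, $%.0f/yr\n", saved_hours, saved_hours * 75
}'
# → 215 hours, $16130/yr
```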
