Local LLM Thesaurus

It’s always more fun to work on something other than what I should explicitly be doing in the moment, so ideas and small projects naturally arise from procrastination. I was having trouble returning to my NaNoWriMo work after my sisters visited last weekend, and I took fifteen minutes to learn how to locally run an LLM.

Ninety percent of my LLM use is word refinement. While writing, I’ll get a word stuck in my head: the wrong word for the exact feeling I’d like to describe. So I tell some LLM (often Claude) to provide several synonyms with varying connotations. This doesn’t rely on up-to-date knowledge or internet access, so a nimble, offline, local LLM fits the task perfectly.

Somewhat ironically, I used an LLM to help me sort out what to do. It turns out this is a well-trod path. Here were my steps on my MacBook Air.

  1. Use Homebrew to install Ollama.
  2. Install my chosen model. I opted for mistral, so in a new terminal window I ran ollama run mistral. Once the model finishes downloading the first time, you can exit the interactive session.
  3. Run the Ollama server with ollama serve in another terminal window, and leave it running.
  4. Install the app Enchanted from the Mac App Store. It’s a free project designed to provide a modern front-end to your local LLM instance. This just worked for me without any setup. It automatically detected my local Ollama instance.
  5. In Enchanted, I created a “Completion,” a shortcut I can trigger with a few keystrokes. I select a word, and the completion appends that word to a preset query: “Give me some synonyms for this word with varying connotations: text inserted here”.
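The steps above condense into a few shell commands. This is a sketch assuming Homebrew on macOS; the model name (mistral) and the selected word are just examples, and the network-dependent setup commands are shown for reference rather than meant to be pasted blindly:

```shell
# One-time setup (requires network; shown commented for reference):
# brew install ollama      # step 1: install Ollama via Homebrew
# ollama pull mistral      # step 2: download the model without opening a chat
# ollama serve             # step 3: start the local server; leave it running

# Step 5's "Completion" boils down to a prompt template. Given a selected word:
word="wistful"   # hypothetical selected word
prompt="Give me some synonyms for this word with varying connotations: ${word}"
echo "$prompt"

# With the server running, the same prompt could be sent non-interactively:
# ollama run mistral "$prompt"
```

Enchanted talks to that same local server, which by default listens on localhost:11434, which is how it auto-detects a running Ollama instance.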

That’s all it took. I had a local model running in fewer than fifteen minutes. I don’t need to pay for anything, and it perfectly fits what I need most of the time.
