8/12/2025

Ditching the Cloud: Your Guide to Setting Up a Local LLM in the Cursor Code Editor

Hey there! So, you're a fan of the Cursor code editor, right? It's pretty amazing how it's changed the game with its AI-first approach to coding. But, if you're like me, maybe you've hit your token limit on the pro plan one too many times, or perhaps you're just not thrilled about your code being sent to a third-party server. Whatever the reason, you're probably wondering if you can get all the awesome AI-powered features of Cursor, but with a large language model (LLM) that runs right on your own machine.
Well, I'm here to tell you that you absolutely can, & it's a game-changer. In this guide, I'm going to walk you through everything you need to know to set up & use a local LLM in Cursor. We'll cover why you'd even want to do this, what you'll need to get started, & the step-by-step process for two of the most popular tools for the job: Ollama & LM Studio.

So, Why Bother with a Local LLM?

I get it, setting up a local LLM might sound a little complicated, especially when Cursor's built-in models are so easy to use. But trust me, there are some pretty compelling reasons to take the plunge:
  • Privacy, Privacy, Privacy: This is a big one. When you use a local LLM, all of your code stays on your machine. Nothing gets sent to the cloud. For those of us working on sensitive or proprietary projects, this is a HUGE deal.
  • No More Token Limits: Say goodbye to those annoying "you've reached your limit" messages. With a local LLM, you can use it as much as you want, whenever you want, without worrying about running out of tokens or racking up a hefty bill.
  • Offline Coding Power: Ever been on a flight or in a coffee shop with spotty Wi-Fi & wished you could still use your AI assistant? With a local LLM, you can. You'll still need an internet connection for the initial setup with ngrok, which we'll get to later, but for the most part, you can code from anywhere.
  • Customization & Control: The world of open-source LLMs is exploding, with new & improved models coming out all the time. Running a local LLM gives you the freedom to experiment with different models & find the one that works best for your specific needs. You're in complete control of your AI coding assistant.

What You'll Need to Get Started

Alright, so you're convinced. What do you need to make this happen? The good news is, you don't need to be a super-genius to get this working. Here are the three main things you'll need:
  1. Cursor: Well, this one's a no-brainer. You'll need the Cursor code editor, which you can download from their website.
  2. A Local LLM Server: This is the software that will run the LLM on your computer. We'll be focusing on two of the most popular options: Ollama & LM Studio. Both are great choices, so it really comes down to personal preference.
  3. Ngrok: This is a clever little tool that creates a secure tunnel from a public URL to your local machine. Cursor doesn't play nicely with localhost addresses, so we need ngrok to expose our local LLM server to an external URL that Cursor can connect to.
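To give you a feel for how these pieces fit together, here's a sketch of the tunnel step (this assumes Ollama's default API port of 11434; LM Studio's local server defaults to 1234 instead):

```shell
# Ollama serves its API on localhost:11434 by default (LM Studio uses 1234).
OLLAMA_PORT=11434

# Open a public tunnel to the local server. The https URL ngrok prints is
# what you'll later paste into Cursor's API settings; note it changes on
# every restart unless you reserve a domain with ngrok.
if command -v ngrok >/dev/null 2>&1; then
  ngrok http "$OLLAMA_PORT"
else
  echo "ngrok not installed yet -- grab it from ngrok.com first"
fi
```

Don't worry about running this yet; we'll walk through each piece in order below.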

Setting Up Your Local LLM with Ollama

Ollama is a fantastic, no-fuss tool for running LLMs locally. It's super easy to install & use, making it a great option for beginners. Here's how to get it set up with Cursor:

Step 1: Install Ollama

First things first, head over to the Ollama website & download the installer for your operating system. The installation is pretty straightforward; just follow the on-screen instructions. Once it's installed, you should see a little llama icon in your system tray, which means it's running.
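If you want to double-check the install from the terminal, a quick sanity check looks like this (the curl line hits Ollama's root route, which replies with a short status string when the server is up):

```shell
if command -v ollama >/dev/null 2>&1; then
  # The CLI reports a version once the installer finishes.
  ollama --version
  # The background server answers on localhost:11434; its root route
  # replies with "Ollama is running" when the tray app is up.
  curl -s http://localhost:11434 || echo "server not started yet"
else
  echo "ollama not on PATH yet -- re-run the installer or restart your shell"
fi
```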

Step 2: Download a Model

Now for the fun part: choosing an LLM! There are a ton of great open-source models out there, but for coding, you'll want one that's been specifically trained for that purpose. Some of the best ones right now are from the DeepSeek, Llama, & Qwen families. DeepSeek Coder V2, Llama 3.1, & Qwen 2.5 Coder are all excellent choices.
To download a model, open up your terminal & type:
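For example (the exact model tag here is an assumption based on the Ollama library; `ollama pull` plus any model name listed there works the same way):

```shell
# Pull a coding-tuned model from the Ollama library. "qwen2.5-coder" is one
# example tag; the DeepSeek Coder & Llama models are pulled the same way.
MODEL="qwen2.5-coder"
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MODEL"
  ollama list   # confirm the download landed locally
else
  echo "install Ollama first, then run: ollama pull $MODEL"
fi
```

Heads up: these models are several gigabytes, so the first pull can take a while depending on your connection.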

Copyright © Arsturn 2025