8/11/2025

Running Your Own AI: The Easiest Way to Deploy Ollama & OpenWebUI with Docker

You've probably heard all the buzz about local large language models (LLMs). The idea of running powerful AI models like Llama 3 or Phi-3 on your own machine, completely offline, is pretty incredible. It gives you privacy, control, & the ability to experiment without racking up huge cloud bills. But honestly, getting started can feel a bit daunting, especially if you're not a command-line wizard.
That's where the magic of Docker comes in.
If you've been looking for a straightforward, no-fuss way to jump into the world of local LLMs, you're in the right place. In this guide, I'm going to walk you through what is, in my opinion, the absolute easiest method to get a powerful AI chatbot up & running in minutes. We're talking about using Docker to deploy two amazing open-source tools: Ollama & OpenWebUI.
Think of Ollama as the engine. It’s an open-source tool that makes it ridiculously simple to run various open LLMs on your computer. OpenWebUI, on the other hand, is the beautiful, user-friendly dashboard that lets you chat with those models, much like you would with ChatGPT. It's a fantastic combination, & with Docker, we can get them working together seamlessly.
So, grab a coffee, fire up your terminal, & let's get this thing built.

First Things First: What You'll Need

Before we dive in, let's make sure you have the basics covered. It's a short list, I promise.
  • Docker Desktop: This is the core of our setup. Docker is a platform that lets you run applications in isolated environments called containers. This means we don't have to worry about complex installation steps or conflicting dependencies. Just make sure you have Docker installed & running on your Mac, Windows, or Linux machine. Docker Desktop handily includes Docker Compose, which is what we'll be using.
  • A Bit of Command-Line Know-How: You don't need to be a terminal guru, but you should be comfortable opening a terminal or command prompt & running a few basic commands. We'll walk through them step-by-step.
  • Decent Hardware (Optional but Recommended): While you can run smaller models on a variety of machines, if you want to play with the bigger, more powerful LLMs, having a computer with a good amount of RAM (16GB or more is a great start) & a dedicated NVIDIA GPU will make a HUGE difference in performance. Don't worry if you don't have a beastly machine; you can still run some of the smaller, yet surprisingly capable, models. (A quick way to sanity-check all of this is shown right after this list.)
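If you want that quick sanity check, these commands will confirm that Docker & Compose are installed. The nvidia-smi check only applies if you have an NVIDIA card with its drivers installed:

    docker --version          # confirms the Docker engine is installed
    docker compose version    # confirms the Compose plugin that ships with Docker Desktop
    nvidia-smi                # optional: shows your NVIDIA GPU & driver version, if you have one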

The Power Couple: Ollama & OpenWebUI

Let's quickly break down why these two tools are such a perfect match.
Ollama is the workhorse. It's a lightweight, open-source tool that handles all the heavy lifting of running the LLMs. It provides a simple API that other applications can use to interact with the models. This is where OpenWebUI comes in.
OpenWebUI is a feature-rich, self-hosted web interface that connects to Ollama. It gives you a polished, ChatGPT-like experience for your local models. You can easily switch between different models, create new chat sessions, & even customize the behavior of the AI with system prompts. It was initially built just for Ollama but has since expanded to support other LLM runners as well.
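To give you a sense of just how simple that API is, here's what a raw request to Ollama looks like. This is just an illustrative example assuming Ollama's default port (11434) & that you've already pulled a model called llama3; substitute whichever model you're actually running:

    curl http://localhost:11434/api/generate -d '{
      "model": "llama3",
      "prompt": "Why is the sky blue?",
      "stream": false
    }'

OpenWebUI talks to this same API behind the scenes, so you'll never need to touch curl yourself. It's just helpful to know what's happening under the hood.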

The Easiest Path: Using Docker Compose

The most straightforward way to get Ollama & OpenWebUI running together is with Docker Compose. Compose is a tool for defining & running multi-container Docker applications. With a single, simple configuration file, we can tell Docker to spin up both Ollama & OpenWebUI, connect them, & handle all the networking for us. It's pretty cool.
Here's how we do it:
Step 1: Create a Project Directory & the docker-compose.yml File
First, open up your terminal & create a new folder for our project. Let's call it local-ai.
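Something like this will do it:

    mkdir local-ai
    cd local-ai

Inside that folder, create a file named docker-compose.yml. Here's a minimal sketch of what it can contain. The images (ollama/ollama & ghcr.io/open-webui/open-webui) are the official ones; the volume names & the host port 3000 are just my picks, so adjust them to taste:

    services:
      ollama:
        image: ollama/ollama
        container_name: ollama
        volumes:
          - ollama:/root/.ollama            # persists downloaded models across restarts
        ports:
          - "11434:11434"                   # Ollama's API port
        # NVIDIA GPU owners can add a deploy.resources.reservations.devices
        # block here to pass the GPU through to the container.

      open-webui:
        image: ghcr.io/open-webui/open-webui:main
        container_name: open-webui
        environment:
          - OLLAMA_BASE_URL=http://ollama:11434   # points the UI at the ollama service
        volumes:
          - open-webui:/app/backend/data    # persists your chats & settings
        ports:
          - "3000:8080"                     # the web UI will live at http://localhost:3000
        depends_on:
          - ollama
        restart: unless-stopped

    volumes:
      ollama:
      open-webui:

Once the file is saved, run docker compose up -d from the same folder to pull both images & start the containers. You can then download a model with docker exec -it ollama ollama pull llama3 & start chatting at http://localhost:3000.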
