8/11/2025

Build Your Own Command-Line AI Agent? Yeah, You Can Totally Do That.

Here's the thing about AI right now: it feels like it's either locked away in some big tech company's cloud, costing you money per query, or it's so complicated to set up that you need a PhD in computer science just to get it running. But what if I told you that you could have a powerful AI, like the ones you've been hearing about, running on your own machine, completely offline, & that you can talk to it right from your command line?
Turns out, you absolutely can. & it’s not even that hard. We're going to dive into how you can build your own command-line AI agent using a couple of amazing tools: Ollama & OpenAI's open-weight GPT-OSS model. This isn't just a gimmick; this is about taking back control, ensuring your privacy, & building something that's genuinely yours.

So, What’s the Big Deal with Local AI?

Before we get our hands dirty, let's talk about why you'd even want to do this. First off, privacy. When you use a commercial AI service, your data is being sent to their servers. For personal stuff, that might be fine, but for sensitive work information or just for peace of mind, running an AI locally means your data never leaves your machine. That’s HUGE.
Second, it's free. Once you have the model downloaded, you can use it as much as you want without worrying about API credits or subscription fees. This is a game-changer for developers, writers, & anyone who wants to experiment with AI without breaking the bank.
& finally, it's just plain cool. There's something incredibly empowering about having this technology at your fingertips, ready to be customized & bent to your will. You're not just a user; you're a builder.

The Tools of the Trade: Ollama & GPT-OSS

To make this happen, we're going to be using two key pieces of technology:
  • Ollama: Think of Ollama as a manager for large language models (LLMs) on your computer. It’s a super lightweight & easy-to-use tool that handles all the complicated stuff of running these massive models. It downloads them, manages them, & gives you a simple way to interact with them, either through the command line or an API. Honestly, it’s what makes this whole thing so accessible.
  • GPT-OSS: You’ve probably heard of GPT-4, OpenAI’s flagship model. Well, OpenAI recently released a family of open-weight GPT models, under the name GPT-OSS, that you can download & run yourself. This is a big deal because it gives us access to a very powerful & capable model that's designed for reasoning & agent-like tasks. We’ll be using this as the brains of our AI agent. There are two main versions: a 20-billion parameter one (gpt-oss-20b) that runs well on higher-end consumer hardware, & a massive 120-billion parameter one (gpt-oss-120b) for more serious setups. For most of us, the 20b version is the sweet spot (you'll see how we pull & talk to it through Ollama in the quick example right after this list).
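To make that a little more concrete, here's roughly what working with these two looks like once Ollama is installed (we'll walk through the setup step by step below). One assumption: gpt-oss:20b is the tag Ollama's model library uses for the 20b model at the time of writing, so double-check it on ollama.com if the commands complain.

    # Download the 20b GPT-OSS model (it's a multi-gigabyte download, so be patient)
    ollama pull gpt-oss:20b

    # Chat with it interactively, right in your terminal
    ollama run gpt-oss:20b

    # Or talk to the local REST API that Ollama serves on port 11434
    curl http://localhost:11434/api/generate -d '{
      "model": "gpt-oss:20b",
      "prompt": "In one sentence, what is a command-line AI agent?",
      "stream": false
    }'

The nice part about that last call: the API is just a local HTTP endpoint on your own machine, no API key & no usage meter.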
Alright, enough talk. Let's start building.

Step 1: Getting Ollama Up & Running

First things first, we need to install Ollama. The team behind it has made this part incredibly simple.
  1. Head over to the Ollama website: Just go to ollama.com & you'll see download links for macOS, Windows, & Linux. Grab the one for your operating system.
  2. Install it: The installation is a standard, straightforward process. On a Mac, you’ll drag the app to your Applications folder. On Windows, you'll run the installer. For Linux, you can use a simple curl command.
  3. Verify the installation: Once it's installed, open up your terminal (or Command Prompt on Windows) & type:
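    ollama --version

If the install worked, that should print the version of Ollama you just set up (--version is the standard Ollama CLI flag for this; ollama list, which shows the models you've pulled so far, is another quick sanity check).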
