8/12/2025

Unraveling the Mystery: Why Your MCP Server Memory Usage Keeps Growing & How to Fix It

Hey there, if you're running a Model Context Protocol (MCP) server, you've probably seen it. That slow, creeping rise in memory usage that has you scratching your head. One minute everything's fine, the next you're getting alerts about high memory consumption, & you're left wondering if your server's about to fall over. Honestly, it's a super common issue, but a frustrating one for sure. The good news is, you're not alone, & there are some pretty clear reasons why this happens &—more importantly—how to fix it.
MCP servers are becoming the backbone of how we connect our AI models to things like databases, APIs, & other external tools. They're the bridges that make AI truly useful in the real world. But because they're dealing with a constant flow of requests, often from AI models themselves, they have some unique performance challenges. That constant growth in memory? It's usually a sign that something's not quite right under the hood.
In this deep dive, we're going to get to the bottom of why your MCP server's memory is acting like a runaway train. We'll look at the usual suspects, from sneaky memory leaks to resource-hungry processes, & I'll give you some practical, real-world advice on how to troubleshoot & fix these problems for good.

The Usual Suspects: Common Causes of Growing Memory Usage

First things first, let's talk about why this is happening. It's rarely just one thing, but here are some of the most common culprits I've seen.

1. The Sneaky Memory Leak

This is the classic, & often the most frustrating, cause of rising memory usage. A memory leak happens when an application requests memory to do something, but then forgets to release it when it's done. Over time, these little unreleased bits of memory add up, leading to a steady increase in your server's RAM consumption until it eventually runs out of memory & crashes. It's like a dripping faucet, but for your server's resources.
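To make that concrete, here's a tiny Python sketch of the kind of leak pattern that bites long-running servers: a module-level cache that grows on every request & is never trimmed. The names are made up, it's just the shape of the problem that matters.

    # A dict that lives for the lifetime of the server process.
    _request_cache = {}

    def handle_request(request_id: str, payload: bytes) -> int:
        # Every request adds an entry, but nothing ever removes one.
        # After thousands of requests, this dict alone can hold hundreds of MB.
        _request_cache[request_id] = payload
        return len(payload)

The usual fix is either to delete entries once you're done with them, or to cap the structure with something like an LRU cache or a time-to-live.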
In the world of MCP servers, memory leaks can be particularly nasty. A GitHub issue I saw recently highlighted a pretty common scenario: a client application was creating a new connection for every single tool call to the MCP server. This resulted in a "monotonic memory usage growth" because the server was holding onto resources for each of those connections, never letting them go. The fix, in that case, was to refactor the client to reuse a single connection & session for multiple tool calls. It's a simple change, but it makes a HUGE difference.
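For the curious, here's roughly what that pattern looks like with the Python MCP SDK over the stdio transport. It's a sketch, not the code from that issue: the server command & tool name are placeholders, & the exact SDK calls can vary between versions. The point is that one connection & one session get reused for every call.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        params = StdioServerParameters(command="my-mcp-server")  # placeholder command
        # Open the connection & session ONCE...
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # ...then reuse the same session for every tool call,
                # instead of reconnecting inside this loop.
                for item in ["a", "b", "c"]:
                    await session.call_tool("process_item", {"item": item})  # placeholder tool

    asyncio.run(main())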
Another real-world example I came across was in a component called mcp-superassistant-proxy. The problem there was that sessions were being stored in a map but were never cleaned up. So, over time, as more & more sessions were created, the memory usage just kept climbing. The solution involved adding an automatic cleanup mechanism for stale sessions, which is a great best practice to keep in mind.

2. Resource-Intensive Applications & Processes

Sometimes, the problem isn't a leak, but just an application or process that's naturally a memory hog. This is especially true for MCP servers that are doing heavy lifting, like connecting to large databases, performing complex data analytics, or running a lot of concurrent jobs. For example, if your MCP server is frequently running long, complex SQL queries, the database might be caching a lot of data in memory to speed things up, which can lead to high memory usage.
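One easy way to keep that in check on the server side is to stream or batch big result sets instead of pulling everything into memory at once. Here's a small sketch using Python's built-in sqlite3 module; any DB-API driver works the same way, & the table name is made up.

    import sqlite3

    def stream_rows(db_path: str, batch_size: int = 1000):
        # fetchall() would materialize the whole result set in RAM;
        # fetchmany() keeps memory bounded to roughly one batch at a time.
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.execute("SELECT id, payload FROM events")  # made-up table
            while True:
                rows = cur.fetchmany(batch_size)
                if not rows:
                    break
                for row in rows:
                    yield row
        finally:
            conn.close()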
In the Ciena MCP world, there are a couple of specific processes that have been known to cause issues. One is a process called goferd. It turns out, this process isn't even used by MCP, but it can sometimes consume a ton of memory. The fix is pretty straightforward: just stop & disable it. Another one to watch out for is gosftp, which in some versions of MCP could have memory spikes that lead to the server running out of memory. These issues are often patched in later software versions, which is a good reminder to keep your MCP software up-to-date.

3. The Docker & WSL2 Conundrum

If you're running your MCP server in a Docker container on Windows, you might have noticed a process called vmmemWSL eating up a lot of your RAM. This is because Docker Desktop on Windows typically uses the Windows Subsystem for Linux 2 (WSL2) as its backend. By default, WSL2 will dynamically grab as much memory as it can, which can be a problem when WSL2 holds onto a huge chunk of your RAM even though you're not running that many containers.
The solution here is to create or edit a .wslconfig file in your user directory (C:\Users\YourUsername\.wslconfig) & set some limits. You can specify how much memory & how many processors WSL2 is allowed to use. For example:
