Docker MCP: Making it much easier to connect things...
Okay, one last one about MCP - I promise!
Now that we all know about MCP (Model Context Protocol) for connecting LLMs to services, how can we make it easier to maintain and scale?
Using Docker, you can remove a lot of the plumbing, not to mention the complexity of scaling and, of course, security. However, my previous article about the cost implications of MCPs is still valid - although there is light at the end of the tunnel, given the Anthropic article I shared in my last post.
🐳 MCP via Docker Gateway
When you run MCP servers inside Docker and expose them through a Docker MCP gateway, you’re essentially creating a hub that aggregates multiple MCP servers behind one endpoint. This has some clear benefits:
Deployment & isolation: Docker makes it easy to spin up, isolate, and manage many MCP servers without dependency conflicts.
Scalability: You can add/remove MCP servers quickly, and the gateway abstracts away the networking.
Consistency: Every server runs in a predictable environment, which is great for reproducibility.
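To make that concrete, here's a minimal sketch using the official MCP Python SDK: connect once to the gateway and list everything it aggregates. The URL, port, and SSE transport are my assumptions - point this at however you've actually exposed the gateway.

```python
# Minimal sketch: list every tool a Docker MCP gateway aggregates.
# The endpoint below is a hypothetical placeholder, not a gateway default.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

GATEWAY_URL = "http://localhost:8080/sse"  # assumed gateway endpoint


async def main() -> None:
    async with sse_client(GATEWAY_URL) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # One endpoint, many servers: the gateway merges every
            # enabled MCP server's tools into a single list.
            result = await session.list_tools()
            for tool in result.tools:
                print(f"{tool.name}: {(tool.description or '')[:60]}")


asyncio.run(main())
```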
⚡ Efficiency Implications
However, the token efficiency problem described in the article doesn’t magically disappear just because you’re using Docker:
Direct tool calls (inefficient): If your agent still loads all tool definitions from the Docker MCP gateway into its context window, you’ll hit the same 150k+ token overhead. Docker just makes the servers easier to manage, but the model still sees the full set of tools downstream.
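You can measure that bloat yourself. The sketch below reuses a gateway session like the one above, serializes every tool definition, and applies a rough 4-characters-per-token heuristic - the heuristic is a crude assumption of mine, not a real tokenizer:

```python
# Rough sketch: estimate the context cost of loading *all* tool
# definitions up front.
import json


async def estimate_tool_overhead(session) -> int:
    result = await session.list_tools()
    total_chars = 0
    for tool in result.tools:
        # A full definition = name + description + JSON input schema --
        # exactly what a "load everything" agent injects into the prompt.
        definition = {
            "name": tool.name,
            "description": tool.description,
            "inputSchema": tool.inputSchema,
        }
        total_chars += len(json.dumps(definition))
    approx_tokens = total_chars // 4  # ~4 chars/token: crude assumption
    print(f"{len(result.tools)} tools ≈ {approx_tokens:,} tokens of context")
    return approx_tokens
```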
Code execution (efficient): If you adopt the code execution approach (agents writing code to call MCP tools on demand - see the sketch after this list), then Docker helps because:
The gateway centralizes access, so your agent can discover tools dynamically (e.g., by querying the gateway or filesystem-like structure).
You only load the definitions you need, reducing context usage to ~2k tokens.
Docker doesn’t add overhead here — it simply provides a clean way to host and scale the MCP servers.
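In code, the on-demand pattern looks roughly like this: discover names cheaply, load one full definition, and call it through the gateway. The tool name and arguments in the usage comment are hypothetical placeholders.

```python
# Sketch of the on-demand pattern: only the chosen tool's definition
# ever needs to reach the model's context.
import json


async def call_one_tool(session, wanted: str, args: dict):
    result = await session.list_tools()

    # Step 1: cheap discovery -- names only, a handful of tokens per tool.
    print("Available:", [t.name for t in result.tools])

    # Step 2: load the single full definition the agent decided it needs.
    tool = next(t for t in result.tools if t.name == wanted)
    print("Loaded schema:", json.dumps(tool.inputSchema)[:80], "...")

    # Step 3: invoke through the gateway; the other definitions never
    # entered the context window.
    return await session.call_tool(wanted, arguments=args)

# Usage (hypothetical tool):
# await call_one_tool(session, "search_issues", {"query": "is:open"})
```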
🧩 Bottom Line
Docker MCP gateway = orchestration convenience. It makes connecting many MCP servers easier.
Efficiency gains = only if you use code execution. If you stick with direct tool calls, Docker won’t solve the token bloat — it just aggregates the problem.
Think of Docker as the plumbing, while code execution is the efficiency hack. Together, they’re powerful: Docker handles scale, code execution handles cost.

