
Ollama port 11434 conflict — find what's holding it, fix it

Ollama defaults to port 11434. When something else is on that port — often a previous Ollama process, Docker container, or another LLM server — startup fails. Here's how to find the squatter and reclaim the port.

Ollama · Docker · LM Studio · macOS · Linux · Windows
By Fredoline Eruo · Last verified 2026-05-08

Diagnostic order — most likely first

#1

Previous Ollama instance still running

Diagnose

Linux/Mac: `lsof -i :11434` lists an `ollama` process. Windows: `netstat -ano | findstr 11434` shows ollama.exe PID.

Fix

Kill it: Linux/Mac `pkill ollama`, Windows `taskkill /PID <pid> /F`. Then restart with `ollama serve`.
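A minimal end-to-end sequence for Linux/macOS (the Windows path is the `netstat`/`taskkill` pair above), assuming the listener really is a leftover `ollama` process:

```bash
# Confirm what owns the port (expect an `ollama` process in the output)
lsof -i :11434

# Stop any stray Ollama processes, then start a single clean one
pkill ollama
ollama serve
```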

#2

Ollama installed both as system service AND user-launched

Diagnose

`systemctl status ollama` shows running. You also ran `ollama serve` manually. Both fight for the port.

Fix

Pick one. If you prefer launching manually, stop and disable the service: `sudo systemctl stop ollama && sudo systemctl disable ollama`. If you prefer the service, don't run `ollama serve` directly; use `sudo systemctl start ollama`.
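Both options as a sketch, assuming the systemd unit installed on Linux is named `ollama`:

```bash
# Option A: keep the systemd service and never run `ollama serve` by hand
sudo systemctl enable --now ollama   # start now and at every boot
systemctl status ollama              # confirm it is the process holding :11434

# Option B: prefer manual launches; stop and disable the service first
sudo systemctl disable --now ollama
ollama serve
```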

#3

Docker container also exposing 11434

Diagnose

`docker ps` shows a container with `0.0.0.0:11434->11434/tcp`. Conflict on the host.

Fix

Either stop the container (`docker stop <name>`), or remap to a different host port: `-p 11435:11434`. Ollama clients then connect to `:11435`.
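For example (the container name `ollama-alt` and the official `ollama/ollama` image are illustrative; substitute whatever `docker ps` actually shows):

```bash
# Find any container publishing host port 11434
docker ps --filter "publish=11434"

# Option A: stop the conflicting container
docker stop <container-name>

# Option B: run the container on a different host port instead
docker run -d --name ollama-alt -p 11435:11434 ollama/ollama

# Clients then talk to the remapped port
curl http://localhost:11435/api/tags
```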

#4

Another LLM server (LM Studio, llama.cpp) bound to 11434

Diagnose

`lsof` / `netstat` shows the process. LM Studio and some llama.cpp servers can be configured to imitate the Ollama API on the same port.

Fix

Move one of the two to a different port. To move Ollama: `OLLAMA_HOST=0.0.0.0:11435 ollama serve` overrides the default bind.
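If moving Ollama is the easier path, a minimal sketch (port 11435 is arbitrary; the loopback bind is a deliberately conservative choice, keep `0.0.0.0` only if you want LAN exposure):

```bash
# Start Ollama on an alternate port, bound to loopback only
OLLAMA_HOST=127.0.0.1:11435 ollama serve

# The CLI honors the same variable when acting as a client
OLLAMA_HOST=127.0.0.1:11435 ollama list
```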

#5

Firewall / corporate antivirus blocking the bind

Diagnose

The process starts and then exits, or it stays running but clients get connection refused even though the logs show the bind succeeded.

Fix

Add Ollama to firewall exceptions (Windows Defender, corporate firewall). On macOS: System Settings → Network → Firewall → allow ollama.app.
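On Windows the exception can also be added from the command line; the path below assumes the installer's default location under `%LOCALAPPDATA%`, so adjust it if your install lives elsewhere:

```
:: Run from an elevated prompt: allow ollama.exe through Windows Defender Firewall
netsh advfirewall firewall add rule name="Ollama" dir=in action=allow ^
  program="%LOCALAPPDATA%\Programs\Ollama\ollama.exe" enable=yes
```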

Frequently asked questions

Can I run multiple Ollama instances on different ports?

Yes. `OLLAMA_HOST=0.0.0.0:11435 ollama serve` runs a second instance. Useful for testing different model libraries side by side. Note that both instances read the same model directory by default; set `OLLAMA_MODELS` per instance if you want separate caches.
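A sketch of two side-by-side instances; the second port, the model directory, and the model name are examples only:

```bash
# Second instance on 11435 with its own model directory
OLLAMA_HOST=127.0.0.1:11435 OLLAMA_MODELS=$HOME/ollama-alt/models ollama serve &

# Pull and run against the second instance by pointing the client at it
OLLAMA_HOST=127.0.0.1:11435 ollama pull llama3.2
OLLAMA_HOST=127.0.0.1:11435 ollama run llama3.2
```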

Why does Ollama default to 11434 specifically?

Convention from the project's first release. No technical reason. Override with `OLLAMA_HOST` env var if it conflicts with your stack.

Should I expose Ollama beyond localhost?

Only if you understand the security implications. Ollama has no authentication. Exposing on `0.0.0.0` opens your model serving endpoint to the network. For LAN access, prefer a reverse proxy (Nginx, Caddy) with auth in front.
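As a rough example with Caddy (listening port, username, and file layout are placeholders; the directive is `basic_auth` in current Caddy releases, `basicauth` in older ones, and Nginx's `auth_basic` achieves the same thing):

```bash
# Generate a bcrypt hash for the proxy password (interactive prompt)
caddy hash-password

# Minimal Caddyfile: require basic auth, then proxy to the local Ollama
cat > Caddyfile <<'EOF'
:8080 {
    basic_auth {
        ollama-user <paste-bcrypt-hash-here>
    }
    reverse_proxy localhost:11434
}
EOF

caddy run --config Caddyfile
```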

Related troubleshooting

When the fix is hardware

A surprising fraction of troubleshooting tickets resolve to: this card doesn't have enough VRAM for what you're asking it to do. If you're hitting OOM after every reasonable fix, or your GPU genuinely can't fit the model you need, it's upgrade time: