You spin up two containers. A web app and a database. You try to connect the app to the database using the container name as the hostname. It doesn’t work: the name won’t resolve. You try the container’s IP address — it works. But the IP changes every time the container restarts.
This is the most common Docker networking confusion, and it happens because of a limitation of the default bridge network that nobody mentions until you hit it.
Docker has six network modes. Most people use one (the default bridge) and don’t realize it’s the worst one for multi-container applications.
The Default Bridge — Why It’s a Trap
When Docker installs, it creates a network called bridge backed by a Linux bridge device called docker0. Every container started without a --network flag connects to this default bridge.
docker run -d --name web nginx
docker run -d --name db postgres:16
Both containers are on the default bridge. They can reach each other by IP:
docker inspect web | grep IPAddress
# "IPAddress": "172.17.0.2"
docker exec db ping 172.17.0.2
# Works
But:
docker exec db ping web
# ping: bad address 'web'
The default bridge does not provide DNS resolution between containers. This is the single most confusing thing about Docker networking, and it catches everyone.
The reason is historical. The default bridge predates Docker’s embedded DNS server. For backward compatibility, Docker kept it without DNS. Every user-created bridge network gets DNS automatically.
Never use the default bridge for multi-container applications. Always create a custom bridge.
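For completeness: the historical workaround on the default bridge was the legacy --link flag, which writes the target container’s address into the consuming container’s /etc/hosts. It still works but is deprecated; a custom bridge is the better fix.

```shell
# Deprecated workaround: --link makes `db` resolvable from `web`
# on the default bridge via an /etc/hosts entry.
docker run -d --name db postgres:16
docker run -d --name web --link db nginx
docker exec web cat /etc/hosts   # contains an entry for db
```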
Custom Bridge Networks — The Right Way
docker network create mynet
docker run -d --name web --network mynet nginx
docker run -d --name db --network mynet postgres:16
Now:
docker exec web ping db
# PING db (172.20.0.3): 56 data bytes — works!
docker exec db ping web
# Works too
Custom bridge networks provide:
- DNS resolution — containers find each other by name
- Better isolation — containers on different networks can’t communicate
- Connect/disconnect on the fly —
docker network connect mynet existing-container
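The connect/disconnect commands work on running containers without a restart. A sketch, assuming the web container from above is still running:

```shell
# Create a second network and attach the running container to it.
docker network create backend
docker network connect backend web
# web now has an interface (and DNS) on both mynet and backend.
docker network disconnect backend web
docker network rm backend
```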
Docker Compose does this automatically. Every docker-compose.yml creates a custom bridge for the project. All services can reach each other by service name:
services:
  web:
    image: nginx
    ports:
      - "80:80"
  api:
    image: myapp
    environment:
      DATABASE_URL: postgresql://user:pass@db:5432/mydb
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: pass
Notice db in the connection string. Compose’s network makes db resolvable from any service in the same file. The db service has no ports mapping — it’s only accessible from other containers on the network, not from the host or outside. That’s a security advantage.
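Compose can push the isolation further by declaring multiple networks. A sketch (service and network names are illustrative): api sits on both networks, so it can reach db, while web and db cannot see each other at all:

```yaml
services:
  web:
    image: nginx
    networks: [frontend]
  api:
    image: myapp
    networks: [frontend, backend]
  db:
    image: postgres:16
    networks: [backend]

networks:
  frontend:
  backend:
```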
Host Mode — When You Need Raw Performance
docker run -d --network host nginx
Host mode removes network isolation entirely. The container shares the host’s network stack — same IP, same interfaces, no NAT.
Nginx in the container now listens on port 80 of the host directly. No port mapping needed. No NAT overhead.
When to use host mode:
- Network monitoring tools that need access to all host interfaces
- Applications with many dynamic ports where mapping each one is impractical
- Maximum network performance (eliminates NAT/iptables processing)
- Applications that need to bind to the host’s specific network interface
When NOT to use host mode:
- Multiple containers needing the same port — they’ll conflict
- When you need network isolation between containers
- On macOS or Windows — Docker runs inside a VM there, so “host” means the VM’s network stack, not your machine’s
For most web applications, the performance difference between bridge and host is negligible — under 1% latency difference. You’d need to be pushing millions of packets per second (network appliances, high-frequency trading) for host mode to matter.
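The port-conflict limitation is easy to demonstrate, assuming a Linux host with nothing else on port 80:

```shell
docker run -d --network host --name web1 nginx
# The second container starts, but nginx inside it exits immediately
# because port 80 on the shared host stack is already taken:
docker run -d --network host --name web2 nginx
docker logs web2   # nginx reports the address is already in use
```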
Overlay Networks — Multi-Host Communication
If you have containers running on different machines that need to communicate, overlay networks handle it:
docker swarm init
docker network create --driver overlay --attachable production
docker service create --name api --network production --replicas 3 myapp
Overlay networks:
- Span multiple Docker hosts
- Tunnel traffic between hosts using VXLAN (the data plane is unencrypted by default; pass --opt encrypted when creating the network to encrypt it)
- Provide DNS-based service discovery across hosts
- Required for Docker Swarm services
Each container gets an IP on the overlay network and can reach other containers by name, regardless of which physical host they’re running on.
For single-host deployments (most development environments and many production setups), bridge networks are simpler and sufficient. Use overlay only when you genuinely have multi-host requirements.
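Because the network above was created with --attachable, plain docker run containers — not just Swarm services — can join it from any node in the swarm. A sketch, with hypothetical image names:

```shell
# On any node that has joined the swarm:
docker run -d --name worker --network production myapp
# Swarm's DNS resolves service names across hosts:
docker exec worker ping -c 1 api
```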
None and Macvlan — The Specialty Modes
None mode — complete network isolation:
docker run --network none myapp
No network interfaces except loopback. Use for containers that process data from volumes but should never make network connections. Security-sensitive batch jobs.
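You can verify the isolation directly; the container sees only the loopback interface:

```shell
docker run --rm --network none busybox ip addr
# Only `lo` is listed. Any outbound connection fails immediately:
docker run --rm --network none busybox wget -T 2 -q http://example.com
```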
Macvlan — containers appear as physical devices on your network:
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 macnet
Containers get IPs from your physical network’s subnet. Other devices on the network see them as separate machines. Useful for legacy applications that expect to be on the physical network, or for containers that need to be directly accessible without port mapping.
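You can also pin a container to a fixed address from that subnet (the address and image name here are examples):

```shell
docker run -d --name legacy-app --network macnet \
  --ip 192.168.1.50 myapp
```

One caveat worth knowing: by default the Docker host itself cannot reach macvlan containers over the parent interface (a kernel restriction on macvlan), even though other machines on the LAN can. The usual workaround is a macvlan sub-interface on the host.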
Debugging Docker Networking
When containers can’t communicate, work through this sequence:
Check which network the containers are on:
docker inspect mycontainer | grep -A 20 Networks
If they’re on different networks, they can’t communicate directly.
Check DNS resolution:
docker exec mycontainer nslookup other-container
If this fails, you’re probably on the default bridge; create a custom network. (If the image lacks nslookup, try getent hosts other-container instead.)
Check connectivity:
docker exec mycontainer ping other-container
docker exec mycontainer curl -s http://other-container:80/
Use netshoot for advanced debugging:
docker run --rm -it --network mynet nicolaka/netshoot
This image has every network debugging tool: curl, ping, dig, nslookup, tcpdump, netstat, ss, iperf, and more.
Check iptables rules Docker created:
sudo iptables -L DOCKER -n -v
Docker manages iptables rules for port mapping and inter-network isolation. Corrupted rules can cause mysterious connectivity issues. Restarting Docker often fixes them.
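Rather than editing Docker’s own chains, put custom rules in the DOCKER-USER chain, which Docker creates for exactly this purpose and leaves alone. For example, to block a source range from reaching any published container port (the range here is illustrative):

```shell
sudo iptables -I DOCKER-USER -s 203.0.113.0/24 -j DROP
```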
Check port mapping:
docker port mycontainer
Shows which host ports map to which container ports. If empty and you expected a mapping, you forgot -p when starting the container.
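For reference, a mapped container produces output along these lines:

```shell
docker run -d --name demo -p 8080:80 nginx
docker port demo
# 80/tcp -> 0.0.0.0:8080
```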
The Networking Decision Tree
- Single container, no communication needed → default bridge is fine
- Multiple containers on one host → custom bridge network
- Need maximum network performance → host mode (Linux only)
- Containers across multiple hosts → overlay network
- Need containers on the physical network → macvlan
- Need complete isolation → none
For 90% of deployments, a custom bridge network is the right answer. Docker Compose creates one automatically. Start there, and only reach for other modes when you have a specific reason.