Docker Networking Explained — Bridge vs Host vs Overlay and When to Use Each

By banditz

Monday, April 20, 2026 • 6 min read

Diagram showing Docker bridge host and overlay network modes

You spin up two containers. A web app and a database. You try to connect the app to the database using the container name as the hostname. It doesn't work: the name won't resolve. You try the container's IP address — it works. But the IP changes every time the container restarts.

This is the most common Docker networking confusion, and it happens because of the default bridge network’s limitation that nobody tells you about until you hit it.

Docker has six network modes. Most people use one (the default bridge) and don’t realize it’s the worst one for multi-container applications.

The Default Bridge — Why It’s a Trap

When Docker installs, it creates a network called bridge backed by a Linux bridge device called docker0. Every container started without a --network flag connects to this default bridge.

docker run -d --name web nginx

docker run -d --name db postgres:16

Both containers are on the default bridge. They can reach each other by IP:

docker inspect web | grep IPAddress

# "IPAddress": "172.17.0.2"

docker exec db ping 172.17.0.2

# Works

But:

docker exec db ping web

# ping: bad address 'web'

(Slim images often omit ping; the postgres:16 image may not ship it. If so, getent hosts web demonstrates the same failure: the name simply does not resolve.)

The default bridge does not provide DNS resolution between containers. This is the single most confusing thing about Docker networking, and it catches everyone.

The reason is historical. The default bridge predates Docker’s embedded DNS server. For backward compatibility, Docker kept it without DNS. Every user-created bridge network gets DNS automatically.

Never use the default bridge for multi-container applications. Always create a custom bridge.

Custom Bridge Networks — The Right Way

docker network create mynet

docker run -d --name web --network mynet nginx

docker run -d --name db --network mynet postgres:16

Now:

docker exec web ping db

# PING db (172.20.0.3): 56 data bytes — works!

docker exec db ping web

# Works too

Custom bridge networks provide:

  • DNS resolution — containers find each other by name
  • Better isolation — containers on different networks can’t communicate
  • Connect/disconnect on the fly — docker network connect mynet existing-container attaches a running container
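The on-the-fly attach works without restarting anything; a minimal sketch, assuming the web container from above is running (the backend network name is illustrative):

```shell
# Attach an already-running container to a second network
docker network create backend
docker network connect backend web       # web now has an interface on both networks
docker network disconnect backend web    # detach again; no restart needed
```

This is useful when you want one container (say, a reverse proxy) to straddle two otherwise-isolated networks.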

Docker Compose does this automatically. Every docker-compose.yml creates a custom bridge for the project. All services can reach each other by service name:

services:
  web:
    image: nginx
    ports:
      - "80:80"
  api:
    image: myapp
    environment:
      DATABASE_URL: postgresql://user:pass@db:5432/mydb
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: pass

Notice db in the connection string. Compose’s network makes db resolvable from any service in the same file. The db service has no ports mapping — it’s only accessible from other containers on the network, not from the host or outside. That’s a security advantage.
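You can check that isolation yourself; a sketch, assuming the compose file above is up (nc being available in the api image is an assumption):

```shell
# From the host: 5432 was never published, so nothing answers
nc -z localhost 5432 || echo "db not reachable from host"

# From a container on the compose network: the name resolves and the port is open
docker compose exec api nc -z db 5432 && echo "db reachable from api"
```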

Host Mode — When You Need Raw Performance

docker run -d --network host nginx

Host mode removes network isolation entirely. The container shares the host’s network stack — same IP, same interfaces, no NAT.

Nginx in the container now listens on port 80 of the host directly. No port mapping needed. No NAT overhead.
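A quick way to see this, assuming nothing else on the host is already using port 80:

```shell
# No -p flag anywhere, yet the host answers on port 80
docker run -d --name web-host --network host nginx
curl -I http://localhost:80
```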

When to use host mode:

  • Network monitoring tools that need access to all host interfaces
  • Applications with many dynamic ports where mapping each one is impractical
  • Maximum network performance (eliminates NAT/iptables processing)
  • Applications that need to bind to the host’s specific network interface

When NOT to use host mode:

  • Multiple containers needing the same port — they’ll conflict
  • When you need network isolation between containers
  • On macOS or Windows — host mode only works on Linux

For most web applications, the performance difference between bridge and host is negligible — under 1% latency difference. You’d need to be pushing millions of packets per second (network appliances, high-frequency trading) for host mode to matter.

Overlay Networks — Multi-Host Communication

If you have containers running on different machines that need to communicate, overlay networks handle it:

docker swarm init

docker network create --driver overlay --attachable production

docker service create --name api --network production --replicas 3 myapp

Overlay networks:

  • Span multiple Docker hosts
  • Tunnel traffic between hosts using VXLAN (encryption is opt-in via --opt encrypted, which enables IPsec)
  • Provide DNS-based service discovery across hosts
  • Required for Docker Swarm services

Each container gets an IP on the overlay network and can reach other containers by name, regardless of which physical host they’re running on.
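Because the overlay above was created with --attachable, plain containers — not just Swarm services — can join it from any node in the swarm. A sketch, assuming the hypothetical myapp image includes ping:

```shell
# On a second host that has already joined the swarm
# (join command printed by: docker swarm join-token worker)
docker run -d --name batch --network production myapp

# Swarm's DNS resolves the api service across hosts
docker exec batch ping -c 1 api
```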

For single-host deployments (most development environments and many production setups), bridge networks are simpler and sufficient. Use overlay only when you genuinely have multi-host requirements.

None and Macvlan — The Specialty Modes

None mode — complete network isolation:

docker run --network none myapp

No network interfaces except loopback. Use it for containers that process data from volumes but should never make network connections, such as security-sensitive batch jobs.
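You can confirm the isolation directly (busybox is used here because it bundles basic networking tools):

```shell
# Only the loopback interface exists inside the container
docker run --rm --network none busybox ip addr

# Any outbound attempt fails
docker run --rm --network none busybox wget -T 2 -q http://example.com || echo "no network, as intended"
```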

Macvlan — containers appear as physical devices on your network:

docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o parent=eth0 macnet

Containers get IPs from your physical network’s subnet. Other devices on the network see them as separate machines. Useful for legacy applications that expect to be on the physical network, or for containers that need to be directly accessible without port mapping.
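You can also pin a macvlan container to a fixed address on the LAN (the address and image name here are illustrative):

```shell
# Give the container a stable IP on the physical subnet
docker run -d --name legacy --network macnet --ip 192.168.1.50 myapp
# Other devices on 192.168.1.0/24 can reach 192.168.1.50 directly; no -p needed
```

One well-known caveat: by default the Docker host itself cannot reach its own macvlan containers, because the kernel filters that traffic at the parent interface.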

Debugging Docker Networking

When containers can’t communicate, work through this sequence:

Check which network the containers are on:

docker inspect mycontainer | grep -A 20 Networks

If they’re on different networks, they can’t communicate directly.

Check DNS resolution:

docker exec mycontainer nslookup other-container

If this fails, you’re on the default bridge. Create a custom network.

Check connectivity:

docker exec mycontainer ping other-container

docker exec mycontainer curl -s http://other-container:80/

Use netshoot for advanced debugging:

docker run --rm -it --network mynet nicolaka/netshoot

This image has every network debugging tool: curl, ping, dig, nslookup, tcpdump, netstat, ss, iperf, and more.
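netshoot is also handy for packet captures on a container network; for example (port 5432 here is illustrative):

```shell
# Watch traffic on the shared network while reproducing the failure
docker run --rm -it --network mynet nicolaka/netshoot \
    tcpdump -i eth0 -n port 5432
```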

Check iptables rules Docker created:

sudo iptables -L DOCKER -n -v

Docker manages iptables rules for port mapping and inter-network isolation. Corrupted rules can cause mysterious connectivity issues. Restarting the daemon rebuilds them:

sudo systemctl restart docker

Check port mapping:

docker port mycontainer

Shows which host ports map to which container ports. If empty and you expected a mapping, you forgot -p when starting the container.

The Networking Decision Tree

  • Single container, no communication needed → default bridge is fine
  • Multiple containers on one host → custom bridge network
  • Need maximum network performance → host mode (Linux only)
  • Containers across multiple hosts → overlay network
  • Need containers on the physical network → macvlan
  • Need complete isolation → none

For 90% of deployments, a custom bridge network is the right answer. Docker Compose creates one automatically. Start there, and only reach for other modes when you have a specific reason.



Step-by-Step Guide

1. Understand the default bridge and its limitations

Docker creates a default network called bridge, backed by the docker0 device. Containers started without a network flag connect here. The critical limitation is no DNS resolution between containers on the default bridge. They can only communicate by IP address. Always create a custom bridge instead.

2. Create custom bridge networks for multi-container apps

Run docker network create mynet. Start containers with --network mynet. Containers on custom bridges resolve each other by name. Docker Compose creates custom bridges automatically for each project. All services in the same compose file reach each other by service name.

3. Use host mode when you need maximum performance

Host mode removes network isolation entirely. The container uses the host network stack directly. No NAT overhead. Use it for network monitoring tools, apps needing dynamic port ranges, or when NAT latency matters. Only works on Linux.

4. Use overlay for multi-host communication

Overlay networks span multiple Docker hosts. Required for Docker Swarm or any multi-node container deployment. VXLAN tunnels (optionally encrypted) connect the hosts. Containers on different machines communicate as if on the same network.

5. Debug networking issues

Use docker network inspect to see connected containers and IPs. Use docker exec to run ping, curl, or nslookup inside containers. Check DNS with docker exec container nslookup service_name. Use the nicolaka/netshoot image for full network debugging tools.

Frequently Asked Questions

Why can't my containers find each other by name?
You are probably using the default bridge network, which has no DNS. Create a custom bridge with docker network create and connect containers to it. Docker Compose does this automatically.

What is the performance difference between bridge and host?
Host mode eliminates NAT and iptables overhead. For most web apps the difference is under 1 percent in latency. For high-throughput apps processing millions of packets per second, host mode matters.

When should I use overlay networks?
When containers on different physical or virtual hosts need to communicate. Required for Docker Swarm services. For single-host deployments, bridge is sufficient and simpler.

How do I expose a container port to the outside?
Use -p hostport:containerport when running the container. On a bridge network, only published ports are accessible from outside. In host mode the container binds directly to host ports without mapping.
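A minimal port-publishing example for that last answer (host port 8080 is arbitrary):

```shell
docker run -d --name pub -p 8080:80 nginx
curl -I http://localhost:8080   # host port 8080 forwards to container port 80
```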
banditz

Bug bounty research at javahack team

Freelance bug bounty research
