My personal reasons to not run my Nginx reverse-proxy inside Docker
This text explains why, in my situation, wrapping my reverse-proxy (Nginx) with Docker is a disadvantage. Your situation is likely different.
Docker is a useful technology that serves us software engineers well in many areas. I use it at work all the time, although I do not consider myself a Docker pro.
As with any technology, it has merits and demerits depending on the situation. I will weigh the pros and cons carefully and explain my judgment.
My setup
Explaining what my infrastructure looks like will give you a baseline to understand it and compare it to your own situation.
I use Nginx as a reverse-proxy that serves 10 websites and web applications. They live on a single VPS that runs Ubuntu Linux.
I update my Nginx configuration infrequently, typically to add a new website by configuring a new virtual host.
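For context, adding a website mostly means dropping one more virtual host block into the configuration. A minimal sketch of such a block; the domain, certificate paths and backend port are made up for illustration:

```nginx
server {
    listen 443 ssl;
    server_name example.org;    # hypothetical domain

    # hypothetical certificate paths
    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        # hypothetical backend listening on localhost
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```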
Problems that Docker solves
A refresher on what Docker does for us developers.
- Ensures compatibility of the service (process) with the underlying OS
- Ensures compatibility of the service with the libraries and dependencies (avoids dependency hell)
- Prevents configuration drift (infrastructure is written as code)
- Eases spinning up more instances (e.g. for testing or scaling)
- Infrastructure-as-code serves as documentation
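For comparison, if I did containerize the reverse proxy, the usual pattern would look roughly like this; the image tag and mount paths are illustrative, not my actual setup:

```bash
docker run -d --name reverse-proxy \
  --restart unless-stopped \
  -p 80:80 -p 443:443 \
  -v /srv/nginx/conf.d:/etc/nginx/conf.d:ro \
  -v /srv/nginx/certs:/etc/nginx/certs:ro \
  nginx:1.27
```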
My assessment process
To assess situations like this in a qualitative way, I take inspiration from the "Failure mode and effects analysis" (FMEA).
The FMEA was developed in the aerospace and military industries in the 50s to assess the reliability of (technical) solutions before building them.
Three factors are rated:
- Severity
- Occurrence
- Detection
This sounds abstract, but the concept is straightforward.
The FMEA gives you a framework to answer the question:
How much should I care about a certain problem and therefore bother solving it?
For a trivial example:
If a problem is unlikely to occur, easily detectable and fixable by the user, and has low severity, it is not worth solving.
All other cases are non-trivial. The outcome varies because each person has a different risk perception.
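To make this concrete: classic FMEA multiplies the three ratings (each on a scale of 1 to 10) into a Risk Priority Number, RPN = Severity × Occurrence × Detection, where a high Detection score means the failure is hard to detect. A glitch that is unlikely (Occurrence 2), easy to spot (Detection 2) and merely annoying (Severity 3) scores 12 out of a possible 1000; an outage of the single entry point to all my sites scores far higher on Severity alone. I do not compute exact numbers, but this framing guides my judgment below.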
Docker is one more dependency
You may object to this section with the argument, "I use Docker already, so there is no additional dependency disadvantage".
This is about Nginx itself depending on Docker. If Nginx runs inside Docker and the Docker daemon goes down, Nginx goes down with it, and I cannot even show an error page. Every additional dependency adds failure modes.
That is why I keep the dependency list as short as possible.
I would put Nginx into Docker if the problems Docker solves were likely to occur in practice when running Nginx as a reverse proxy. If not, Docker becomes an unnecessary dependency since it solves a hypothetical problem in my particular situation.
Nginx has few, stable dependencies
The external dependencies that Nginx relies on are:
libc, zlib, libssl, libcrypt, libpcre and iproute2.
Apart from iproute2, all of these libraries themselves depend only on libc and have proven their maturity over decades.
My server contains no customized versions of these libraries. I have never encountered a compatibility issue. Therefore, the safety net of dependency isolation that Docker provides is unnecessary here.
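If you want to verify this on your own machine, the linked libraries are easy to inspect; this assumes nginx is on your PATH, and the output will vary per distribution and build:

```bash
# List the shared libraries the nginx binary is linked against
ldd "$(command -v nginx)"

# Show the version and compile-time configuration (modules, library paths)
nginx -V
```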
I will re-evaluate the decision to forgo dependency isolation for Nginx if I ever suffer from a dependency issue with it.
Docker makes it harder to have Nginx "always on"
Nginx is designed for the use case of never shutting down. I only shut it down for OS kernel security patches.
You can upgrade the binary in-place under load. Nginx gracefully finishes serving existing connections, and new worker processes start with the new binary. It does not tear down active connections.
If you change the configuration, you issue nginx -s reload and Nginx applies it by a similar graceful mechanism: new worker processes start with the new configuration while the old workers finish serving their connections.
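A rough sketch of both procedures; the signals follow the Nginx documentation, and the PID file path is the common /run/nginx.pid default, which may differ on your system:

```bash
# Graceful configuration reload: validate first, then signal the master process
nginx -t && nginx -s reload

# In-place binary upgrade: USR2 starts a new master with the new binary,
# WINCH gracefully stops the old workers, QUIT stops the old master
kill -USR2 "$(cat /run/nginx.pid)"
kill -WINCH "$(cat /run/nginx.pid.oldbin)"
kill -QUIT  "$(cat /run/nginx.pid.oldbin)"
```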
Docker gets in the way here. Hot-reloading and in-place upgrades of Nginx go against the containerization philosophy. Containers are about immutability. Instead of updating existing instances, you tear down the old instance and spawn a new one.
I could tolerate a brief service interruption for going the immutable-container way of spawning new instances.
However, the investment does not seem worthwhile. At some point, I would likely make an error in my Docker setup, and my reverse proxy would go down because of it.
I do not build five-nines reliable systems, but when Nginx is down, I cannot show an error page, which is suboptimal. My paying customers should see at least an error page with contact information.
Configuration of the reverse proxy in production is unique
The virtual host configurations are tied to specific domain names, and the DNS entries for those domains point to the server that runs my reverse proxy. If I ran another Nginx instance with the same configuration as production, for instance for development, I would need a dedicated DNS setup to make those names resolve to it.
The reverse proxy is a "singleton service". It is the entry point to everything.
Having more than one entry point only pushes the problem up a level: something else, itself a single entry point, has to decide through which instance traffic enters your system.
Hence, you typically have only one reverse proxy unless you do round-robin DNS load balancing or some Google-scale hierarchical load-balancer setup.
That said, I have never needed to spin up another instance. If I ever do, I have a bash script that rsyncs the configuration files from my project directory to the server.
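Conceptually, that script does little more than the following; the hostname and paths are made up for illustration:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Copy the virtual host configs to the server (hypothetical host and paths)
rsync -avz --delete ./nginx/conf.d/ deploy@vps.example.org:/etc/nginx/conf.d/

# Validate the configuration on the server, then reload gracefully
ssh deploy@vps.example.org 'sudo nginx -t && sudo nginx -s reload'
```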
Summary
For my setup, Docker doesn't solve a problem I have actually faced. Following the KISS principle and my FMEA-style assessment, implementing a solution for a non-existent problem only creates its own problems.
I will revisit this opinion if my situation changes or new insights make these points invalid.