“It is generally recommended that you separate areas of concern by using one service per container. … It’s OK to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application.”
Those words from the official Docker documentation site are typical of commercial practice with containers. By default, Docker runs one process per container.
But is there ever a good reason to change this prescription? When is it acceptable to run multiple processes in a container?
The short answers are “Yes” and “More often than people generally think.” To understand the long answers, consider first the essential context of containerization.
When to run containers
We host applications in containers to:
- Make underlying infrastructure more manageable
- Isolate services to make them more secure and reliable
- Create a more consistent operational environment
Containers aren’t the only way to achieve these effects, of course; good techniques exist for running directly within native operating systems, virtualized machines, or serverless hosts, for instance. Containers aren’t a magical technology, but thanks to a great deal of productive research and effort, they have become a widely used practice in the industry.
Different technologies have different profiles. Virtual machines and containers are similar in their standardization of operating environments and security concerns, but their practical application diverges a great deal. The best practice for containers, as the Docker documentation recommended, is to limit each container to a single service; with virtual machines, in contrast, conventional advice is to load them up with all the dependencies of a single application.
These are largely conventions, though: I maintain plenty of applications configured as distinct virtual machines operating together, and simultaneously I sometimes expand the borders of containers to run multiple distinct processes. Architectural rules such as “one application per container” are as much stylistic as functional.
True wisdom in container management, then, isn’t so much to follow “best practices” blindly as it is to know how to apply those best practices and when to adjust them.
Consider this example. An organization maintains a complex equipment inventory in a database. All access to the database is through the application; it has no other entry points or uses. There is no particular need to scale the application horizontally or vertically.
A decade ago, a virtual machine might have naturally hosted such an application, although specific measurements should have been done to ensure that the database performed adequately in a virtualized filesystem. Nowadays, however, containerization might be more standard for that organization’s IT department. A conventional implementation would have separate containers for the database and its CRUD (create-read-update-delete) access. Separate containers can be tested and maintained in isolation.
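That conventional two-container layout might look something like the following Compose file. This is a hedged sketch, not a prescription: the service names, images, paths, and the `DATABASE_URL` variable are all illustrative assumptions, not anything from the scenario above.

```yaml
# docker-compose.yml — hypothetical two-container layout (names are illustrative)
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: inventory
      POSTGRES_PASSWORD: example      # use a secrets mechanism in real deployments
    volumes:
      - db-data:/var/lib/postgresql/data
  crud:
    build: ./crud-app                 # assumes the CRUD app has its own Dockerfile here
    depends_on:
      - db
    environment:
      # hypothetical connection string; "db" resolves via Compose's internal network
      DATABASE_URL: postgres://postgres:example@db:5432/inventory
volumes:
  db-data:
```

Each service can then be rebuilt, tested, and restarted independently, which is exactly the isolation benefit the convention promises.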
Is that truly an advantage, though? Depending on the details of usage, perhaps not.
The CRUD application can’t be tested without at least a mock of the database. There are no scalability requirements. The description here makes it unlikely that the database will be run apart from the application, and while an efficient infrastructure department might need to re-host the application, there’s no compelling reason to split up the two component processes. In fact, realistic testing and monitoring might be easier with in-host communications and logging; there’s no need to introduce sophisticated, centralized scheduling and logging. Even Docker itself has written about how to manage multiple processes within a single container.
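One pattern Docker’s documentation describes for this is a small wrapper script that the container runs as its main command, launching both processes and exiting when either one dies. The sketch below assumes a Debian-based image and a Python CRUD app; every package name, path, and command here is a hypothetical stand-in, not part of the scenario described above.

```dockerfile
# Dockerfile — hypothetical single container hosting both the database and the CRUD app
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y postgresql python3   # illustrative dependencies
COPY crud-app/ /opt/crud-app/
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
```

The wrapper script starts the database, starts the application, and then waits so that the container stops (and can be restarted by its supervisor) if either process fails:

```shell
#!/bin/bash
# start.sh — launch the database, then the CRUD app (paths are hypothetical)
service postgresql start             # database as a background service
python3 /opt/crud-app/app.py &       # hypothetical CRUD entrypoint

# Exit when either background job exits, propagating its status,
# so the container doesn't linger with a dead process inside it.
wait -n
exit $?
```

Docker’s documentation also mentions full process managers such as supervisord for the same purpose, which is the more robust choice when the processes need restart policies of their own.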
Best practices exist for reasons, and they’re generally good reasons. At the same time, a sensitive, thoughtful DevOps team knows how to analyze specific requirements. An application with low or unusual scaling needs, little coupling to other applications, and a well-understood problem domain might be best designed as a monolith in a container, rather than a collection of microservices. Increase the coupling to other systems, or loosen the tight coupling between the database and its CRUD front end, and the design decisions shift.
In any case, it’s more important to analyze your requirements and make the most of your resources than to apply received rules blindly.
What should be inside a container? Most likely the answer will be a single process. If you have good reasons for a different design, though, and sound plans to test and maintain that design, it’s OK to bend the single-process rule. In fact, your careful analysis may well yield an even better result than automatic adherence to the usual “best practices.”