Giddy elevator pitches often promise technology that costlessly enables explosive customer growth. In the real world of application development and operation, though, our biggest scaling challenge has to do with teamwork. The latest best practices, from Kubernetes on, are first about scaling teams.
Explanations of containers, for instance, often emphasize that “containerization increases scalability anywhere from 10 to 100 times that of traditional VM environments,” or, more generally, “can scale up to handle the additional load or ramp down to conserve resources during a lull.” Similarly, most discussions about scaling microservices focus on operational aspects: how to measure a microservice’s capacity and actual usage by customers, and techniques for responding as the latter approaches the former.
These platform efficiencies are particularly important to me personally, as I spent so much of my own career in their pursuit. The reduction of data center capital and operating costs from a migration to Docker, say, is easy to describe and to value. Containerization and microservices bring even greater gains in other dimensions, though: responsiveness, time to market, and thrifty use of expertise.
However, these technologies do considerably more than just pinch the pennies spent on platform power.
The greater achievement of containerization and microservices is to save on human costs. And those savings only grow day after day: Moore’s law keeps driving hardware costs down over time, while the cost of skilled people does not follow suit.
How does containerization lower human costs? Consider a few examples:
- Your organization hires a new software developer. Instead of spending the first two days trying to recreate the development environment, her colleagues try to explain to her, she just retrieves containers to her desktop with the standard services they all jointly need. She has a working environment in minutes, with no surprises because her particular operating system happens to embed a database incompatibility, or her workstation is on the wrong subnet, or the host issued her is too old to run particular tools. All she requires is the ability to run containers; with that in place, everything else quickly takes shape.
- Development and IT operations don’t need a lengthy negotiation before every update about the exact platform requirements for the update. It all reduces to, “Can the ops hosts run containers? Do they have enough memory and disk space?” Given agreement on those points, whatever developers put into their containers generally runs adequately on production servers.
- When a prospective enterprise customer $CUSTOMER asks whether an application can run on-premises under the peculiar customized operating system $CUSTOMER runs in its Tulsa data center, there’s no need to scramble a team to fly to Oklahoma and compare notes. Instead, the conversation simplifies to: “Can your data center run containers?” Everyone jumps ahead by a week and has the opportunity to focus on deeper contractual and strategic questions, with fewer distractions and uncertainties about whether the software will do what it should once in place.
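The onboarding scenario above can be made concrete with a minimal Docker Compose sketch. The file name and service choices here are hypothetical, standing in for whatever services a team jointly depends on; the point is that every developer pulls the same images and gets identical versions, regardless of host operating system:

```yaml
# dev-stack.yml — hypothetical shared development stack.
# A new hire runs one command and gets the same services as everyone else,
# instead of reconstructing the environment by hand.
services:
  db:
    image: postgres:16          # everyone gets the same database version
    environment:
      POSTGRES_PASSWORD: dev-only   # local development only; not a production secret
    ports:
      - "5432:5432"
  cache:
    image: redis:7              # same cache, same version, on every desktop
    ports:
      - "6379:6379"
```

With such a file checked into the repository, setup reduces to `docker compose -f dev-stack.yml up -d` — the “working environment in minutes” the bullet describes, with no per-machine surprises.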
More generally, containers help streamline and elevate planning by a whole range of departments, including product, design, sales, and response. Lowering or eliminating the time collaborators devote to puzzling over why and how the software works differently in different environments liberates all of them to focus on core values and respond more quickly and decisively.
Containerization is no panacea. Naive early pronouncements to the contrary, containers do not solve thorny security problems, and container deployment itself is a specialty in short supply in many organizations. What containers reliably provide, though, is a common target that at least gives teammates from different departments a starting point for their analyses and investigations.
Simon Willison, co-creator of the Django web framework, is among those who argue, “The most important quality factor of any piece of software is how easy it is to change.” With the enormous premium software development puts on time to market and management of abstractions, any simplification and clarification of software and its changes pays off in a big way.
Jeff Bezos is credited with the two-pizza rule, which recognizes that teams quickly lose efficiency as they grow. When container use shaves a few distractions from each developer’s daily work, the result isn’t just the apparent improvement in code; it’s a substantial multiple of output for the team as a whole. The team travels much farther without accreting the overhead of more programmers, more administrators, more managers for those additional workers, more policy to handle platform variations, and so on.
Microservices compound the savings
The microservices story is similar. Microservices’ technical advantages are, at best, mixed; plenty can go wrong with a microservices architecture, and it takes discipline to make the most of microservices.
When conditions — technical, cultural, and historical — are right, though, microservices bring powerful team-level advantages: on-boarding and modularity become easier, risk and fault management simplify, and development pace quickens. That’s a powerful combination.
When evaluating containers and microservices, of course, you’ll need to get the technical details right. Teams generally make those adjustments quickly. The bigger change — the one with the potential to boost your effectiveness to a new qualitative level — will be the impact on your team itself.
If the fit is a good one, you’ll find containers and microservices enable quicker deliveries to customers with level or smaller teams. It’s a result worth the initial investment.