
Is there a point to the Make program in 2020? Sure, especially if your legacy infrastructure already depends on it. It also remains widely used on Unix systems. But after four decades, is it still a top choice for creating or managing new build processes? Will learning Make in 2020 elevate your DevOps career?

While some training programs have lately presented Make as an esoteric tool for advanced practitioners, that kind of marketing is neither faithful to its real history nor helpful to newcomers to the subject.

Fundamentally, Make automates builds. Nowadays, that’s a chore most often assigned to integrated development environments (IDEs), continuous integration (CI) tools, build automation utilities such as Maven and Ant, package managers, and “homegrown” pieces in bash or PowerShell to script and configure what all the other tools do. While all these other tools have inherited concepts and techniques from Make, they’ve taken over so much of the build-automation burden that there’s little left for Make to do on its own.

Another symptom of Make’s historical success is how poorly it’s standardized now. Visual Studio’s NMAKE has inference rules that generalize those of standard Make, and GNU Make and BSD Make both diverged from the POSIX specification for the tool.

Still, Make influenced many of the build tools that followed it. From its creation in April 1976, Make emphasized two distinct themes in build automation:

  • Comprehensive expression of a build
  • Inference of dependencies, and optimization over the resulting dependency graph

An example will help.

Foundational build concepts

Underlying all build work is a crucial distinction between the artifacts users consume and the artifacts from which those are constructed. Readers of a website experience it as content, images, perhaps shopping carts, and so on; the developers of that same site work with HTML, CSS, and other source files.

We call the transformation from the source files to the useful object a "build." Managing that transformation as a concept is one of the great hurdles of programming: keeping in a single creative mind the correspondences between source files and the results built from them.

Consider a minimal example that would have been recognizable from the first years of Make: the program my_application coded in two source files in the C language, my_application.c and my_library.c. This might be built as

cc -o my_application my_application.c my_library.c

Before launching my_application, we arrange its construction through such a recipe. What if we want to run it a second time? There’s no need to rebuild, of course; the my_application executable is still present in the file system and can be rerun.

Or can it? What if the source files changed? What if the sources didn’t change, but something in the underlying operating system, such as a header file, did? What if compilation is a time-consuming task, and only one of the sources changed — is there a way to update just part of the executable, and leave untouched the part that needn’t vary? What if the sources are just as we want them, but now the application is needed on a completely different operating system?

In all these circumstances, the executable might be available, but invalid: It no longer corresponds to the source and environment it’s supposed to embody.
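These questions are exactly what a makefile answers. Here is a minimal sketch for the two-file program above; the filenames come from the example, while the compiler flags and the split into object files are illustrative assumptions:

```make
# Build my_application from two C sources, recompiling only what changed.
CC = cc
CFLAGS = -Wall

my_application: my_application.o my_library.o
	$(CC) -o my_application my_application.o my_library.o

# Each object file depends on its own source. Make compares timestamps
# and recompiles a .o only when its .c (or another prerequisite) is newer.
my_application.o: my_application.c
	$(CC) $(CFLAGS) -c my_application.c

my_library.o: my_library.c
	$(CC) $(CFLAGS) -c my_library.c
```

Running `make` the first time compiles both sources and links them. Touch only my_library.c and a second `make` recompiles just that one object file before relinking; run `make` with everything up to date and it does nothing at all. That timestamp-driven partial rebuild is the optimization at the heart of Make.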


Programmers’ needs have evolved

These were the kinds of problems Make addressed in the 1980s and '90s. Make's functionality mostly has to do with expressing build dependencies and performing specific optimizations over the graphs of those dependencies. Make can still do that work today, but we don't need it as much as programmers did in those earlier times.

Disk space is less expensive. Programmers today more often work with IDEs. Provisioning and deploying a new CPU can take seconds rather than months. Programmers now need faithful horizontal replication of build processes more than baroquely hand-tuned ways to skip a few compilation passes.

The best use of programmers’ effort is to express a build as simply as possible and apply enough inexpensive computing power to bring the runtime of the build down to an acceptable level. Build automation is more important than ever because our ambitions for applications keep growing — we count on the leverage that automation brings us. Make is just a little off-center for the requirements we put on automation now.

Conclusion

Exceptions certainly exist. Legacy systems built around Make, the cross-compilation setups often found in embedded work, and unusual circumstances constrained by under-powered hardware all have the potential to reward expertise in Make.

For the majority of us programming in the 2020s, though, we're better off with modern tooling that better supports modern approaches like Twelve-Factor, IDEs, functional programming, and agile.

Make can play a role in all these approaches; it’s rarely at its best when doing so, though. Construct simple automations that make sense to everyone on your development team, take advantage of the latest tooling for continuous integration, and recognize that regrets about a different way a build might have been expressed are just distractions.

You’re best off doing what you must to keep any use of Make running smoothly, while focusing your training time on newer technologies that better fit the needs of this decade.


About the Author

Cameron Laird is an award-winning software developer and author. Cameron participates in several industry support and standards organizations, including voting membership in the Python Software Foundation. A long-time resident of the Texas Gulf Coast, Cameron's favorite applications are for farm automation.
