While containerization’s basic premise of isolation from an underlying platform is widely understood, a properly running container requires attention to quite a few technical details. Port forwarding, in particular, is a common hurdle. Here are the essentials a DevOps team needs for reliable port management.
Networking in the Docker world
Docker and most of its alternatives, including Podman, containerd, and CRI-O, isolate applications and services: They operate in consistent environments, protected from outside hazards. Isolation can’t usefully be absolute, though. A container needs some way to deliver its results to the world outside the container.
In container-land, communication is generally implemented through familiar-looking TCP/IP or UDP/IP networking on defined ports. A default Docker container exposes no ports to the host; making a containerized service reachable requires explicit configuration. This configuration is often referred to as “publishing” ports of interest.
“Hello, world” for a useful container is a couple of steps more involved than for a programming language or conventional tool. A newcomer to Docker not only needs to build a container that does something but also a configuration that communicates that result to the world outside the container. Similarly, a good design for a container implementation encompasses not only the containerized service or application but also details of how clients access that application.
DevOps practitioners need to know how to manage not only construction and maintenance of containers, but also their exposure, or publication. Containers’ lifecycles go beyond creation, activation, and removal to include a plan with enough specifics to make communication possible.
“Plan” is an important word here. Many containers provide a service on a popular port, such as 80 (conventionally used by HTTP) or 443 (HTTPS). The container host might expect to use that same port for a dashboard or other responsibility it supports, however. While TCP and UDP each offer an abundance of ports (65,535 of them), it’s wise to map explicitly how your particular container project uses the ones it needs.
A little planning ahead of time helps keep use of ports consistent and coherent, and thus minimizes conflicts and confusion between different projects. Think of an organization where one particular service relies on ports 2001, 2002, 2005, and 6080, while another uses 5100, 5101, 5102, and 2004. That certainly can work; the gaps are likely to puzzle the humans involved, though, or at least distract them from deeper problems.
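One lightweight way to follow through on such a plan is to keep a single source of truth for port assignments and check it automatically. The sketch below assumes a hypothetical ports.txt registry; the file name and project names are illustrative only.

```shell
# A minimal, hypothetical port registry: one place to record which
# project owns which host port, so collisions surface before deploy.
cat > ports.txt <<'EOF'
2001 billing-api
2002 billing-admin
6080 billing-dashboard
5100 search-api
EOF

# Fail loudly if any host port is claimed twice.
dupes=$(cut -d' ' -f1 ports.txt | sort | uniq -d)
if [ -n "$dupes" ]; then
  echo "conflict on port(s): $dupes"
else
  echo "no conflicts"   # prints "no conflicts" for the registry above
fi
```

A check like this can run in CI, so a new service can’t silently claim a port another project already documented.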
A few concrete examples of container actions will help
First, “Hello, world”
Assume Docker is properly installed on a convenient desktop or server. The first test to run is:
docker run hello-world
which typically returns something along the lines of:
Hello from Docker! This message shows that your installation appears to be working correctly. ...
To display this text, the Docker host launched the container, received its output, and reported it back to the visible window in which we’re working.
Forward a port
For a more ambitious model, launch
docker run -it -p 22022:22 ubuntu
This brings up a minimal Ubuntu command line. If you happen to run
ssh root@localhost -p 22022
from the host, that is, the command line outside the container, the connection will appropriately be refused: nothing is yet listening inside the container on port 22, so there is no service behind the host’s 22022.
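You can observe the same behavior without Docker at all: probing a port with no listener fails in exactly this way. The sketch below uses bash’s /dev/tcp pseudo-device and assumes nothing on the host currently listens on 22022.

```shell
# Probe host port 22022. With no listener there, the connect attempt
# fails and the else branch runs. Assumes bash for /dev/tcp support.
if (exec 3<>/dev/tcp/127.0.0.1/22022) 2>/dev/null; then
  echo "port 22022 answered"
else
  echo "port 22022: connection refused"
fi
```

Once the container’s sshd is up and port 22 is published to 22022, the same probe succeeds instead.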
Extend the experiment: Command the Ubuntu container to do the following:
apt-get update && apt-get install -y openssh-server
Edit /etc/ssh/sshd_config to include the following:
PermitRootLogin yes
Set a convenient password for root:
passwd root
and also, within the Ubuntu container:
mkdir -p /run/sshd
Launch a login server:
/usr/sbin/sshd -D &
At this point, from within the Ubuntu container, you can successfully log in using the password just established:
ssh root@localhost
That’s not all, though! Back on the “outside,” in the container host, you’ll find that the command:
ssh root@localhost -p 22022
also logs into the Ubuntu command line. You forwarded the container’s port 22 to the host’s 22022 and can now use networking to open any number of connections to the container.
Note that this example is for educational purposes only. No production system should enable PermitRootLogin. Real-world containers are most often built from a Dockerfile, sit behind proxies and firewalls, harden their networking protections, and so on.
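For reference, the interactive steps above might be captured in a Dockerfile along these lines. This is a sketch for the demo only: it deliberately keeps the insecure PermitRootLogin setting, and the password and image tag are illustrative, not recommendations.

```dockerfile
FROM ubuntu

# Install the SSH server non-interactively and create its runtime directory.
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /run/sshd

# Demo-only settings: root login over SSH with a throwaway password.
RUN echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config && \
    echo 'root:change-me' | chpasswd

EXPOSE 22

# Run sshd in the foreground as the container's main process.
CMD ["/usr/sbin/sshd", "-D"]
```

With a Dockerfile like this, something along the lines of docker build -t ssh-demo . followed by docker run -d -p 22022:22 ssh-demo reproduces the manual experiment in one repeatable step.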
As a starting point, though, the example meets our purposes. It exercises fundamental techniques DevOps workers need to practice:
- Construction of a specific service within a container (in this case, sshd within Ubuntu)
- Publication of that service on a convenient port where external processes can access it
Extend this example to build your own services and forward them to the ports you choose. Keep in mind that any one host can run only a single TCP service on any one port. With that, you’re on the road to fluency with individual containers.
With this base in container operation and networking, you are ready to tackle orchestration, the next level in container use.