Deployment: Fixing the local cloud
Whether you are deploying your Docker containers on your laptop using Docker Compose or on your local servers using Docker Swarm or some other orchestration tool, chances are you’ll need to tweak the deployment tools to handle the inherent limitations of each environment so that the overall user experience is the same regardless of how the application is deployed. One step toward this is to use a consistent DNS naming convention for your application components. DNS lets you decouple deployment from access: your application could run on your laptop, on local servers, or in the cloud, and you could always reach it using its fully qualified domain name (FQDN).
Docker on Windows/Mac is private to your machine
While Docker Toolbox makes it relatively easy to set up an almost-native Docker experience on both Windows and Mac, it’s not as seamless as it appears. When Docker Machine creates a VM to host the Docker engine, it assigns two network interfaces to the machine. One is a host-only interface, which lets the local Docker client communicate with the Docker daemon running inside the VM. The other is a NAT interface, which allows the Docker containers to make outgoing connections to your local LAN and the Internet.
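You can see these two adapters for yourself with VirtualBox’s command-line tool. A minimal check, assuming the VM carries the usual Docker Toolbox name of default:

$ VBoxManage showvminfo default | grep "^NIC"

The output shows one adapter attached to a host-only network and one attached to NAT, and nothing that bridges the VM onto your LAN.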
This is a point worth emphasizing, since all of the examples I’ve seen on the Internet explaining how to use Docker Machine and Docker Compose seem to gloss over this limitation by assuming you do all your testing from the same machine the Docker VM is hosted on. If you needed to access your Docker application from a different machine, it wouldn’t work.
Fixing Docker Toolbox network access
Fortunately, it’s really not too difficult to mimic a native Docker-on-Linux experience on a Windows or Mac machine. The solution to local network access is to get the Docker VM directly connected to your local network as a first-class citizen. It needs to be bridged with your local network interface so it can get an IP from the DHCP server, if that’s what’s configured on your network. To do that:
Step 1 – Shut down the Docker VM using docker-machine:
$ docker-machine stop [machine-name]
If you only have one Docker machine and your Docker Toolbox is relatively new, you can get by with omitting the machine name.
Step 2 – Open the VirtualBox application.
Step 3 – Select the Docker VM. If you have multiple Docker machines, you’ll know which one; otherwise, pick the Linux machine named default.
Step 4 – Click on Settings -> Network -> Adapter 3.
Step 5 – Check the “Enable Network Adapter” checkbox.
Step 6 – In the “Attached to” dropdown, select the interface you use to connect to your local LAN. If you use wired Ethernet, pick that; otherwise, select your WiFi adapter from the list.
Step 7 – Expand the “Advanced” section, and in the “Adapter Type” dropdown select one of the “Intel PRO” models. I normally use the “Intel PRO/1000 MT Server” option. This is important, as we’ll see soon.
Step 8 – While you’re at it, you might also adjust the VM’s CPU and memory allocation to better suit your actual runtime needs.
Step 9 – That’s it. Save the machine configuration and start the Docker machine using:
$ docker-machine start [machine-name]
When the Docker machine comes up, you might be presented with an error message regarding TLS certificates. Follow the instructions it prints to regenerate the TLS certificates if needed.
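Usually that boils down to regenerating the machine’s certificates, for example:

$ docker-machine regenerate-certs [machine-name]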
To verify your setup, you need to log in to the Docker VM and check whether the newly created interface is bridged correctly to your LAN and has a valid IP address. To do that, log into the VM using:
$ docker-machine ssh [machine-name]
You should then be logged into the VM. Run:
# ifconfig eth2
This should print the configuration of the eth2 network interface. Look for a line that says “inet addr”; the IP address given there should belong to the same network as your host machine’s IP address.

If it doesn’t show a valid IP address, go back to the steps above and make sure you selected the correct interface in Step 6. If it shows a valid IP address that isn’t on your local network, your network interface order is likely broken; to fix it, experiment with the adapter model from Step 7 above. If that fails to resolve the problem, you might need to change the interface’s MAC address and reboot the VM.

If the ifconfig check succeeds, you’re done. Your Docker VM is now a new machine on your network and is as good as a native Linux Docker setup. You needn’t redo this VirtualBox configuration unless you happen to destroy the VM and recreate it.
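As a quick sanity check from outside the VM, you can pull the bridged address out over SSH and ping it from your host or from any other machine on the LAN. A rough sketch, assuming the machine is named default:

VM_IP=$(docker-machine ssh default "ifconfig eth2 | grep 'inet addr'" | awk '{ sub("addr:", "", $2); print $2 }')
echo "Bridged IP: $VM_IP"
ping -c 3 "$VM_IP"    # on Windows, use ping -n 3 instead

If the ping succeeds from another machine on the LAN, the bridge is working.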
All of the above steps could also be scripted with VirtualBox’s VBoxManage command-line tool to automate this to some extent.
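For example, here’s a minimal sketch of that automation, assuming the machine is named default and that the host interface to bridge to is the one shown (pick yours from the output of VBoxManage list bridgedifs); 82545EM is VBoxManage’s identifier for the Intel PRO/1000 MT Server adapter:

#!/bin/sh
# Automates steps 1-9 above for a Docker machine named "default".
MACHINE=default
HOST_IF="en0: Wi-Fi (AirPort)"    # adjust: pick a name from `VBoxManage list bridgedifs`

docker-machine stop "$MACHINE"

# Enable adapter 3, bridge it to the chosen host interface, and select the
# Intel PRO/1000 MT Server model (which VBoxManage calls 82545EM).
VBoxManage modifyvm "$MACHINE" --nic3 bridged \
    --bridgeadapter3 "$HOST_IF" \
    --nictype3 82545EM

docker-machine start "$MACHINE"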
TLS/HTTPS, SSL certificates and host naming
One of the big challenges of working with SSL/TLS/HTTPS is matching SSL certificates to host names.
In a nutshell, if you connect to a machine using name ‘X’ (hostname), then the certificate it produces when negotiating a secure connection must be owned by ‘X’ (Subject) and this fact needs to be asserted by someone you trust (CA). So in addition to using a trusted CA and creating certificates properly, you also need to name your machines according to the way you plan to connect to them.
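You can see both pieces of that check by inspecting the certificate a server actually presents. A quick sketch with openssl, using the illustrative hostname from later in this article:

$ openssl s_client -connect foobar.example.com:443 -servername foobar.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

The subject (or one of the subject alternative names) must match the name you connected with, and the issuer must chain up to a CA your client already trusts.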
The most secure way of setting up the machines, therefore, is to buy a certificate pair from a trusted CA for each host that needs to serve TLS. But not only would this be prohibitively expensive, it would also be overkill for a test environment. An alternative path is to create a private CA and explicitly add trust for this CA to every machine that will connect to your test/dev setup. This involves a lot of work and potential breakage when you miss adding the trust to one of your test boxes.
A more flexible approach is to first register a test/dev domain, then buy a wildcard certificate for that domain and use it instead. This is a single certificate/key pair that ends up on all the TLS servers, and provided your machine names match the certificate’s wildcard pattern, clients will be able to validate the certificate correctly without any extra preparation on their side. You don’t strictly need a new domain for your test/dev environment, but for safety, it’s better to separate it from your production domain so that the dynamic DNS setup in the next section won’t interfere with your production hostnames.
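As a rough sketch, generating the private key and the certificate signing request for such a wildcard certificate could look like this (dev.example.com stands in for whatever test/dev domain you registered):

$ openssl req -new -newkey rsa:2048 -nodes \
    -keyout wildcard.dev.example.com.key \
    -out wildcard.dev.example.com.csr \
    -subj "/CN=*.dev.example.com"

You submit the CSR to the CA of your choice and then distribute the resulting certificate and key to every TLS-terminating component in the test/dev environment.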
Dynamic DNS and Work-from-home
Whether or not you now have a wildcard certificate for your test/dev domain, the next step is to name your machines appropriately and set up DNS entries so clients can connect to them using FQDNs. While it might be straightforward to set up a dynamic DNS mapping mechanism using off-the-shelf open source components, using an online service makes more sense, especially if you don’t really need it tied to your office infrastructure. Note that the DNS entries, though resolvable from anywhere, refer to IPs on your local network. (Nothing wrong with that.) Pick the online service that best fits your needs and budget.
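For instance, once a record is published, it resolves from anywhere, even though the answer is a private LAN address. Using the illustrative name from the next section:

$ dig +short foobar.example.com

The answer should be the machine’s RFC 1918 address on your LAN, which is exactly what we want here.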
We use Amazon’s Route 53 service to manage our DNS entries. We use a pre-baked Docker container that embeds a Python script and credentials to modify the DNS entries in Route 53 for the test/dev domain. The test/dev domain name and application FQDN are passed in via Docker Compose environment variables. The container runs early in the Docker Compose order and runs with its network mode set to host. Running it this way means the container uses the same network stack as the host machine, so instead of getting a private IP from Docker, it actually sees all the interfaces of the Docker VM itself, including the LAN-accessible eth2 interface. The container’s entry-point shell script fetches the IP from the eth2 interface (the one we set up in bridged mode) and then invokes the Python script with this IP, which, in turn, creates all the DNS entries needed for the app. Here’s an example of how that works:
Part of the docker-compose.yml file:
update-dns:
  image: local-registry/update-dns:sometag
  container_name: update-dns
  env_file: app.env
  network_mode: host
The app.env looks a bit like this:
APPNAME=foobar
DOMAIN=example.com
FQDN=foobar.example.com
BRIDGE_IFACE=eth2
As you can see, the application configuration (which each user needs to modify before firing up the application) is kept in the app.env file, allowing you to check the docker-compose.yml file into source control along with the application sources.
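For reference, the entry-point logic described above boils down to something like the following sketch. Our actual container uses a Python script against Route 53; the version below does the record update with the AWS CLI instead, purely for illustration, and assumes HOSTED_ZONE_ID and AWS credentials are supplied alongside the variables from app.env:

#!/bin/sh
# Illustrative entry point for the update-dns container (not the script we ship).
# BRIDGE_IFACE and FQDN come from app.env; HOSTED_ZONE_ID and AWS credentials
# must be provided separately.
set -e

# Pick up the IPv4 address of the bridged interface (eth2 in our setup).
IP=$(ifconfig "$BRIDGE_IFACE" | awk '/inet addr/ { sub("addr:", "", $2); print $2 }')
echo "Pointing ${FQDN} at ${IP}"

# UPSERT an A record so the FQDN always resolves to the VM's LAN address.
aws route53 change-resource-record-sets \
    --hosted-zone-id "$HOSTED_ZONE_ID" \
    --change-batch "{
      \"Changes\": [{
        \"Action\": \"UPSERT\",
        \"ResourceRecordSet\": {
          \"Name\": \"${FQDN}\",
          \"Type\": \"A\",
          \"TTL\": 60,
          \"ResourceRecords\": [{\"Value\": \"${IP}\"}]
        }
      }]
    }"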
Admittedly, it is a fair bit of work to build out the whole infrastructure this way, but the long-term benefits far outweigh the upfront effort involved. All that’s required of a user is to check out the sources, modify a configuration file and fire up Docker Compose. What they get in return is a fully functional application deployment that runs from the checked-out source tree and is accessible on the LAN using a valid FQDN, with full SSL support if so configured. Could it get any easier?
Editor’s Note: Dockerizing Apps that Scale Using Docker Compose is published by the Sumo Logic DevOps Community. If you’d like to learn more or contribute, visit devops.sumologic.com. Also, be sure to check out the Sumo Logic Developers Open Source page for free tools, APIs and example code that will enable you to monitor and troubleshoot applications from code to production.