On June 9, 2014, the world of IT infrastructure was quietly revolutionized with the release of Docker 1.0. At first glance, it seems benign enough: Docker is a project that “containerizes” applications. Built using the (also up-and-coming) Go language, Docker builds on the same Linux kernel containment features (namespaces and control groups) that LXC (Linux Containers) has used since 2008. Conceptually, the easiest way to think of Docker is to visualize the role that standard-sized intermodal shipping containers have played in the freight industry: put anything you want inside, and it is easily moved around the world by ship, train, plane, or truck.

Containers in the port of Odessa

There’s no need for specialized transport or handling of your composite wood flooring or Baby-Don’t-Shake-Me-Doll – everything looks the same. Shipping existed before intermodal containers, of course, but cargo at freight terminals everywhere had to be hand-loaded and hand-optimized for space and logistics. Intermodal containers standardized all of that, and had a huge impact on the cost and accessibility of freight transport. Docker is just like that – for applications. From docker.com: “Build, Ship and Run Any App, Anywhere.”

Specifically, Docker provides the following (adapted from material posted by Solomon Hykes, creator of Docker):

  • Portable deployment across machines. Docker defines a format for bundling an application and all its dependencies into a single object that can be transferred to any Docker-enabled machine.
  • Application-centric. Docker is optimized for the deployment of applications, as opposed to machines.
  • Automatic build. Docker includes a tool for developers to automatically assemble a container from their source code, with full control over application dependencies, build tools, packaging, etc. (a build sketch follows this list).
  • Versioning. Docker includes git-like capabilities for tracking successive versions of a container, inspecting the diff between versions, committing new versions, rolling back, etc.
  • Component re-use. Any container can be used as a “base image” to create more specialized components.
  • Sharing. Docker has access to a public registry (http://index.docker.io) where thousands of people have uploaded useful containers.
  • Tool ecosystem. Docker defines an API for automating and customizing the creation and deployment of containers.
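
To make the “Automatic build,” “Versioning,” and “Component re-use” points concrete, here is a minimal sketch of what a build looks like. The application, file names, and port are hypothetical – any small Python web app would do – but the Dockerfile instructions and commands are standard Docker:

    # Dockerfile – describes a hypothetical Python web app (all names are illustrative)
    FROM ubuntu:14.04                        # reuse a public base image as the starting layer
    RUN apt-get update && apt-get install -y python python-pip
    ADD . /srv/myapp                         # copy the application source into the image
    RUN pip install -r /srv/myapp/requirements.txt
    EXPOSE 8000                              # port the app is assumed to listen on
    CMD ["python", "/srv/myapp/app.py"]      # command executed when the container starts

    # Build an image from the Dockerfile in the current directory, then run it
    $ docker build -t myapp .
    $ docker run -d -p 8000:8000 myapp

    # Git-like introspection: the image’s layered history, and changes inside a running container
    $ docker history myapp
    $ docker diff <container-id>

Each instruction in the Dockerfile produces a cached, reusable layer, which is what makes the git-like versioning and “base image” re-use described above possible.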

So what? Just another virtualization technology to add to an already long list of in-house (bare-metal) and cloud virtualization solutions? No, my friend, Docker is the genius combination of existing and new technology that will push those other solutions aside. Mark this date in your IT history diary: this is ground zero for throwing out years of lock-in from virtualization vendors (VMware, Microsoft, etc.) and cloud providers (Google, AWS, etc.) that has been tying the hands of IT infrastructure engineers everywhere.

The key problem with traditional virtualization technologies is that they only virtualize the underlying hardware. (There are many different forms of this, but the concept is the same.) At the end of the day, you have yet another instance of a full operating system to manage, patch, secure, monitor, and otherwise maintain (system administration, license fees, technology lifecycle management, etc.). As shown in the figure below, Docker really is fundamentally different – it effectively virtualizes the OS, so you have one base OS to maintain, and only the components your application specifically needs live in the Docker container. Oh, and you can move that Docker container in seconds to a different server running Docker, even if that server is in a different part of the world, on different hardware, with a different provider, or under different administrative control. Just like an intermodal shipping container, it doesn’t matter whether the Docker host looks like a plane, train, ship, or truck: your application runs. Flawlessly. Every time. And you didn’t have to spend $100K on virtualization licenses (plus annual support fees) or send four engineers to three weeks of training.

source: http://tiewei.github.io/cloud/Docker-Getting-Start/
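
To give a rough feel for that portability, here is a hedged sketch of moving the hypothetical image from the earlier example to another Docker host. The host name and user are made up, and in practice you could just as easily go through a registry:

    # On the current host: serialize the image and stream it straight to another Docker host
    $ docker save myapp | ssh ops@host2.example.com "docker load"

    # On the new host: the application starts exactly as it did before
    $ docker run -d -p 8000:8000 myapp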

The punch line is that Docker is disruptive because it breaks the vendor/provider lock-in model by providing a standardized application container. Sure, I can migrate a VMware virtual machine to a cloud provider such as AWS (or vice versa), but it takes work, it is sometimes finicky, and it sometimes fails outright. The result is a tremendous amount of inertia in virtualized environments: when I’m in a bind and need a new place to put an application, out of convenience (or even fear) I put it in the same technology environment and buy more instances or licenses as needed. In this model, vendors and providers win while the rest of us lose.

With Docker, I can change my mind hourly about which hardware, cloud provider, or data center to use. And I can “wrap” applications in containers and hand them to my friends and colleagues with everything they need – just launch and go. No more hours spent installing a long list of packages and support libraries. No more one-of-a-kind “snowflake servers.” All of this means I can shop for compute and network resources based on price, customer service, availability, geography, and so on.
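
A hedged sketch of that hand-off, using the public registry mentioned earlier (the account name, image name, and tag are made up):

    # Publish the image under my registry account
    $ docker tag myapp myusername/myapp:1.0
    $ docker push myusername/myapp:1.0

    # A colleague on any Docker-enabled machine pulls it and runs it, dependencies included
    $ docker pull myusername/myapp:1.0
    $ docker run -d -p 8000:8000 myusername/myapp:1.0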

Long term, application containerization with Docker will commoditize all hardware and OS virtualization. Game on!

Want to see how Docker can simplify your IT Infrastructure? Reach out to see how OpsBot can help!