Virtually every high-performing application we use on our phones relies on containers for deployment. From Google Search to Gmail, YouTube, Spotify, Netflix, Airbnb, and Uber, everything runs on containers.
To understand what containers are and how they've come to touch almost every aspect of our digital lives, it helps to understand where they come from. Containers were born from the need to better isolate processes in environments that were growing too large and complex to manage. Modern applications deliver an end-user experience unlike anything in the past: downtime is almost unheard of, and updates are fast and frequent. That's not how things always were, however. Before we had mobile applications, updating software was a long and tiresome process, and servers crashing or websites going down were common occurrences.
Virtual Machines (VMs)
Before we had virtual machines, deploying software meant running a dedicated server. Any additional software with different dependencies, different libraries, a different OS, or a different version of a framework needed a separate server to run on. One business application per machine is quite expensive, considering applications often use only a fraction of the power available on a server. Virtual machines changed that by enabling users to run multiple operating systems on a single server. This was done using hypervisors, which virtualize and allocate resources like CPU cores and RAM to virtual systems (or sandboxes) that look and run just like physical machines.
The problem, however, is that a virtual machine needs roughly the same resources that a physical machine would need to do the same job. For example, if the minimum requirement to run a particular OS is an i5 processor with 4GB of memory and 500GB of storage, that's how much you will need to allocate to a virtual machine running an application that requires that OS. This leads to a lot of unused capacity, and it is why resources allocated to VMs are still rarely utilized fully and efficiently.
Similar to the problems encountered when trying to load multiple applications onto a single physical server, deploying multiple services on a single VM can cause a number of issues, from conflicting components to incompatible libraries or versions. This means no component of an application can be executed independently without placing it in a separate VM. Factor in modern applications that consist of multiple modular elements and services, and management becomes cumbersome, with each element of an application requiring a separate VM that must store and run its own OS as well as its own execution environment.
Having to run a separate OS for every piece of your application is a problem that keeps getting bigger as your application grows and acquires more moving parts. Additionally, modern environments are often a hybrid mix of multiple public clouds, private clouds, and on-premises resources. This makes moving an application workload from one cloud to another, or even from one physical server to another, a complex affair involving migrating the entire OS along with the execution environment and all dependencies and libraries. Again, when you factor in modern applications with multiple services, this can go from complex to nightmarish quite easily.
Containers
As opposed to VMs, which virtualize hardware and emulate processing power, memory, network, and storage in order to run multiple operating systems, containers virtualize the operating system in order to run multiple isolated application instances on a single OS kernel. So while a dozen VMs would mean a dozen copies of an OS running and on disk, even 100 or 1,000 containers would all share the host's single OS kernel. The absence of a full guest OS makes containers a lot lighter than VMs and a lot quicker to spin up. Whereas VMs require gigabytes of storage and memory and take several minutes to boot, a container requires mere megabytes and can be created in seconds.
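The kernel sharing described above is easy to see for yourself. A minimal sketch, assuming Docker is installed and the public "alpine" image is reachable: a container reports the host's kernel version, because there is no guest OS underneath it, and starting one takes seconds rather than minutes.

```shell
# The host's kernel version:
uname -r

# The same command inside a container prints the SAME kernel version,
# because the container shares the host kernel rather than booting its own OS.
docker run --rm alpine uname -r

# Starting a container is fast enough to time casually:
time docker run --rm alpine echo "hello from a container"
```

By contrast, the equivalent VM experiment would require allocating disk and memory for a full guest OS and waiting through its boot sequence.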
In addition to the absence of a heavy OS, containers come bundled with all the necessary code, dependencies, and configurations, so they can be deployed anywhere, irrespective of platform or environment. These self-sufficient bundles are referred to as container images. Container images are stored in registries like Docker Hub or Azure Container Registry. A user can then either "pull" an image from the registry or "push" a new image to it as required. Additionally, any changes made to container images can be tracked and version-controlled, making the life of everyone involved a whole lot simpler.
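The pull/push workflow looks like this in practice. A hedged sketch, assuming Docker is installed and you are logged in to your own registry (`docker login`); the registry hostname `myregistry.example.com` is a hypothetical placeholder.

```shell
# Pull a specific, versioned image from Docker Hub:
docker pull nginx:1.25

# Re-tag it for your own registry (hypothetical hostname) and publish it:
docker tag nginx:1.25 myregistry.example.com/web/nginx:1.25
docker push myregistry.example.com/web/nginx:1.25

# Because images are versioned by tag, rolling back is just pulling an older tag:
docker pull nginx:1.24
```

The tags are what make the version tracking mentioned above possible: every change to an image can be published under a new tag, and any environment can reproduce an exact, known-good version on demand.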
Enhancing the end-user experience
As for the impact containers have on our everyday lives, it is hard to overstate: as noted at the outset, nearly every major application we use on our phones is deployed on them. This isn't to say VMs are obsolete, but even where people still use VMs, they are mostly used with containers running inside them. The levels of service we see today, with modern applications that rarely crash and are quick to ship updates and bug fixes, can in no small way be attributed to the advent of containers and their ability to run thousands of "microservices" on a fraction of the resources available on modern servers.
About the author:
With a background in Linux system administration, Nigel Pereira began his career with Symantec Antivirus Tech Support. He has now been a technology journalist for over six years, and his interests lie in cloud computing, DevOps, AI, and enterprise technologies.
More by Nigel Pereira:
5G, a software-defined future
How video games changed the way we process data