The traditional way of software development is cumbersome: a developer writes code, builds it, and runs it in one environment, where it functions as intended. When the same application is shipped to and run in another environment, however, it may fail with errors and defects. Containerization, the encapsulation of an application together with its required environment, addresses this problem by packaging the application in containers, popularized by tools such as Docker. This helps organizations modernize legacy applications and create new cloud-native applications that are both scalable and agile.
Container engines such as Docker, and frameworks such as Kubernetes, provide a standardized way to package applications — including the code, runtime, and libraries. This enables them to run in a consistent manner across their entire software development life cycle.
Containers are continuing to increase in popularity and demand as they provide a powerful tool to address several development concerns — including the need for faster delivery, agility, portability, modernization, and life cycle management.
How do containers work?
Containers abstract the application platform, its dependencies, and the underlying infrastructure. In a nutshell, a container bundles the runtime environment, application, libraries, binaries, and configuration files to run in an efficient and bug-free way across different computing environments. Refer to the diagram below for a depiction of high-level container architecture.
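As a concrete illustration, a minimal Dockerfile sketch (assuming a hypothetical Python web application consisting of an `app.py` and a `requirements.txt`) shows how the runtime, libraries, and application code are bundled into one portable image:

```dockerfile
# Start from a minimal base image that provides the runtime
FROM python:3.12-slim

# Copy the application and its declared dependencies into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# The container starts the same way regardless of the host it runs on
CMD ["python", "app.py"]
```

Because everything the application needs is declared in the image, the same container behaves consistently on a developer laptop, a test server, or a cloud host.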
Container orchestration automates the deployment, management, scaling, and networking of containers. Enterprises that need to deploy and manage hundreds or thousands of Linux-based application containers and hosts can benefit from container orchestration.
This automation can be used in any environment where you use containers. It can simplify the deployment of applications across different environments and eliminate the need to redesign them. Kubernetes is commonly used for container orchestration.
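As a sketch of how orchestration is expressed in practice, a minimal Kubernetes Deployment manifest (the image name `example.com/web:1.0` is hypothetical) declares the desired number of container replicas; Kubernetes then handles scheduling, restarts, and scaling to match that declaration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # Kubernetes keeps three containers running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # hypothetical image name
          ports:
            - containerPort: 8080
```

If a container crashes or a host fails, the orchestrator automatically replaces it to restore the declared replica count, with no manual redeployment.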
Containers vs. virtual machines
Containers are often compared with virtual machines (VMs) — both technologies can host multiple applications on a single server in a confined environment.
VMs are an abstraction of the hardware layer. VM technology uses one physical server host to run many VMs. Though multiple VMs run on one physical server host, each VM has its own operating system, applications, associated libraries, binaries, and dependencies. Refer to the diagram below for a depiction of high-level VM architecture.
Furthermore, the table below lists the significant differences between the two technologies.
As IT technologies have continued to grow and develop, infrastructure and applications have evolved with them: from on-premises hardware to virtualization via hypervisors to the cloud. Applications typically move to the cloud in one of two ways:
- Rehosting or “lift and shift” — The quickest way to move applications from on-premises to the cloud, but one that often increases expenses. Because the applications are not optimized to operate in the cloud, the goal of reducing total cost of ownership (TCO) by migrating is defeated.
- Refactoring — A deeper modernization process, broken down into the two levels described below.
As containers can be hosted anywhere, on-premises, in the cloud, or both, containerization plays a major role in application modernization. Instead of rehosting, software and application architects perform refactoring.
- Applications are re-architected into service-oriented architectures (SOA).
- An application that used to run as a single stack on one server is refactored to run as a service in the cloud, as a cloud-native application.
How do you refactor an application?
Traditionally, the entire stack of an application is hosted on a single operating system. An application can typically be broken down into three tiers: a front end (web server), middleware (application binaries and libraries), and a record keeper (a database, whether flat file, DBMS, or RDBMS). Often the first two tiers are hosted on one server and the third on another, with the servers communicating over the network.
Instead of rewriting the code to make it a cloud-native application, application processes are analyzed for dependencies. The identified processes are then containerized and checked into a repository.
As an example, a two-tier application stack can be broken down into various processes, such as webserver and application binaries. Both of these processes are containerized. As their dependencies are mapped, each containerized process would include these dependencies in the runtime of the containers as shown in the diagram below.
- Traditional application server
- Process identification, segregation, and analysis
- Optimization and containerization of processes
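The result of this process can be sketched with a Docker Compose file (image names are hypothetical): the web server and the application binaries each become their own container, and the dependency between them is expressed explicitly rather than baked into a shared server:

```yaml
services:
  webserver:
    image: nginx:1.27            # front-end tier in its own container
    ports:
      - "80:80"
    depends_on:
      - app                      # the mapped dependency, made explicit
  app:
    image: example.com/app:1.0   # hypothetical image of the application binaries
    expose:
      - "8080"
```

Each containerized process now carries its own runtime dependencies, so the two tiers can be scaled, updated, and redeployed independently.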
Containerization offers the following benefits for application modernization in the cloud:
- Scalability and agility — Containerization promotes flexibility. Applications can grow and shrink to meet IT demand. Developers automate management and focus efforts on code development.
- Compliance and security — Containerized applications are isolated from each other, offering better security.
Challenges in containerization and their solutions
Containerization is the future!
Gartner predicts that “by 2022, more than 75% of global organizations will be running containerized applications in production, up from less than 30% today.”¹ Application modernization opens up new business opportunities to the enterprise by means of portability, scalability, security, and agility. It equips the organization to better leverage cloud computing and differentiate itself from the competition.
To learn more about containerization best practices and their implementation with tools such as Kubernetes, listen to the Druva podcast: No Hardware Required; and explore additional cloud backup and management innovations and best practices in the Innovation Series section of Druva’s blog archive.
¹ Gartner, “Gartner Forecasts Strong Revenue Growth for Global Container Management Software and Services Through 2024,” Susan Moore, June 25, 2020.