Docker containers and why they are important

As the official site of Docker says:

“Docker containers allow you to package an application with all of its dependencies into a standardized unit for software development.”

In other words, with Docker you can build containers that bundle all the dependencies your application needs.

Why is this so important?

If you package your app with all of its dependencies, you can deploy that application in seconds on any other system. For example, you don’t have to worry about whether the production server has everything the app requires; you just deploy the container and that’s it.
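
As a minimal sketch of what this looks like in practice (the python:3 base image is real, but the app.py entry point and requirements.txt file are hypothetical placeholders):

    # Write a minimal Dockerfile that packages a Python app with its dependencies.
    cat > Dockerfile <<'EOF'
    FROM python:3
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]
    EOF

    # Build the image once, then run it on any machine that has Docker.
    docker build -t myapp .
    docker run myapp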

Virtualization vs. Containers

Containers are like virtual machines, but the abstraction happens at a different level.

With virtualization, the abstraction is made at the level of hardware devices and the whole operating system. Docker, on the other hand, makes the abstraction at the operating system kernel.

Docker containers still get their own file system, storage, CPU share and RAM.

One thing hypervisors can do that containers can’t is to use different operating systems or kernels.
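
You can see both points from the command line. A quick sketch, assuming Docker is installed and the stock ubuntu image is available:

    # The host and the container report the same kernel version:
    # containers share the host's kernel.
    uname -r
    docker run --rm ubuntu uname -r

    # Each container nevertheless gets its own root file system.
    docker run --rm ubuntu ls /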

Containers are not a new idea

While containers have mostly come into use in the last few years, the idea dates back to at least the year 2000 and FreeBSD jails. A jail lets users run applications in a sandbox, separating them from one another. A FreeBSD jail had access to the operating system kernel and to a very limited set of other system resources.

Oracle Solaris developed a similar concept called Zones. Other companies, such as Parallels, Google, and Docker, have worked on open-source projects like OpenVZ and LXC (Linux Containers) to make containers work well and securely.

While hypervisors can run different operating systems or kernels, containers share a single operating system, which makes them much more efficient in terms of system resources. Instead of virtualizing the hardware, containers run on top of a single Linux instance.

That makes it possible to run as many as four to six times the number of server application instances as you could using Xen or KVM VMs on the same hardware. You can run more apps on the servers you already have, and packaging and shipping programs becomes very easy.

With Docker you can set up live-server-like environments for local app development.

“Specifically, Docker makes it possible to set up local development environments that are exactly like a live server, run multiple development environments from the same host that each have unique software, operating systems, and configurations, test projects on new or different servers, and allow anyone to work on the same project with the exact same settings, regardless of the local host environment.”
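
A rough sketch of such a setup (the port and the app.py entry point are placeholders): mounting the project directory into a container gives you a server-like environment that tracks your local edits.

    # Mount the current project into the container so edits on the host
    # are immediately visible inside the server-like environment.
    docker run --rm -it \
      -v "$(pwd)":/app -w /app \
      -p 8080:8080 \
      python:3 python app.py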

Containers run as isolated processes in userspace on the host operating system.

While the hypervisor plays a role in virtualization in a cloud system, Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Linux. It enables the creation of independent containers that run on top of a single Linux instance, using cgroups and kernel namespaces (the resource isolation features of the Linux kernel) along with aufs and other union-capable file systems.

The Linux kernel’s support for namespaces mostly isolates an application’s view of the operating environment, including process trees, network, user IDs and mounted file systems, while the kernel’s cgroups provide resource limiting, including the CPU, memory, block I/O and network. Since version 0.9, Docker includes the libcontainer library as its own way to directly use virtualization facilities provided by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC (Linux Containers) and systemd-nspawn.
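
Docker exposes the cgroup side of this through ordinary run flags. A minimal sketch (flag names as in current Docker releases):

    # cgroups in action: cap the container at 256 MB of RAM and half a CPU core.
    docker run --rm --memory=256m --cpus=0.5 ubuntu sleep 5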

Containers make sure that resources are isolated, services are limited and processes are provisioned so that each instance has its own space and a private view of the operating system, with its own process ID space, file system structure and network interfaces. Containers share the same kernel, but each is limited in the resources it can use, such as CPU, memory and I/O.
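
The private process ID space is easy to observe directly; a sketch assuming the stock alpine image:

    # Inside its own PID namespace the container's first process is PID 1,
    # and ps shows only the container's own processes.
    docker run --rm alpine sh -c 'echo "PID inside container: $$"; ps'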

The ease of creating and managing containers that Docker brings simplifies the building of highly distributed systems, because it allows multiple applications, worker tasks and other processes to run autonomously on a single physical machine or across multiple virtual machines. Nodes can be deployed as new resources become available or as they are needed, which enables the deployment and scaling of platform-as-a-service (PaaS) systems. Docker also makes the creation and operation of task or workload queues simpler.
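
For instance, starting several instances of the same workload takes seconds; a sketch using a hypothetical worker image called myqueue-worker:

    # Start four isolated instances of the same worker on one host.
    for i in 1 2 3 4; do
      docker run -d --name "worker-$i" myqueue-worker
    done

    # List the running instances.
    docker ps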

Docker Containers in the Cloud

Docker containers can run in different environments: on any computer, on any infrastructure and in any cloud.

Docker containers are easy to deploy in a cloud. As Ben Lloyd Pearson wrote: “Docker has been designed in a way that it can be incorporated into most DevOps applications, including Puppet, Chef, Vagrant, and Ansible, or it can be used on its own to manage development environments.”

VapourApps

VapourApps is a private cloud platform that brings application support using popular technologies such as OpenStack and Docker. It consists of:

  • an installer (that sets up the OpenStack base on multiple nodes),
  • an App Suite (pre-existing common apps for businesses and developers),
  • Monitoring & Backup (for health checks, performance stats and backing up all apps and data), and
  • an API (for integrating new apps).

It is targeted at developers and at small, medium and large companies.

The tenant owner or the IT administrator can manage virtual servers, users and groups, and monitor the status of the deployed applications from a single dashboard.

Download and try VapourApps 1.0 Beta

Download VapourApps