In the DevOps era, every organization wants to streamline its deployment strategy. At the center of that effort sits the IT infrastructure, more precisely the different kinds of servers that host applications and are managed by operations teams.
Every organization wants its applications to be scalable. But while "scalable" is easy to say, believe me, it is very hard to implement.
What do we mean by scalable?
Suppose a project hosts images and, on a normal day, receives 10,000 requests. The project sits behind a load balancer, and under this balancer it maintains two pools:
Pool A and Pool B. The pools act in Active/Passive mode because the project supports Blue/Green deployment, so at any given moment only one pool is active.
Furthermore, each pool contains 5 web servers.
Now, if each web server can handle a maximum of 10,000 requests, the total request-handling capacity of this project is
5 × 10,000 = 50,000 requests at any given moment.
So I can say this project can handle 50,000 requests at a time.
But consider a situation where there is far more traffic than on a normal day, say during the Olympics. After all, this project is built for hosting images.
Suppose that during the Olympics the project receives 1,000,000 (10 lakh) requests per moment. What happens then?
Obviously, the servers go down. The active pool can handle at most 50,000 requests per moment, so each moment 950,000 requests are left waiting while another 1,000,000 arrive. The queue of pending requests keeps growing, and eventually the system falls over.
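The backlog growth is easy to see with a little illustrative arithmetic (the numbers are the ones from the example above; the script is just a sketch, not part of any real system):

```shell
# Illustrative arithmetic only: how the request backlog grows when
# arrivals (1,000,000 per moment) exceed capacity (50,000 per moment).
capacity=50000
arrivals=1000000
backlog=0
for moment in 1 2 3; do
  backlog=$(( backlog + arrivals - capacity ))
  echo "moment $moment: backlog = $backlog"
done
```

Each moment the backlog grows by another 950,000 requests, so the system can never catch up.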
How do we manage such a situation?
There are two ways to handle it:
1. Vertical scaling: make your server a supercomputer so it can serve many more requests. But the cost of a supercomputer is very high; most organizations can't afford it.
2. Horizontal scaling: add more commodity servers under the load balancer so it can distribute requests among them.
The second one looks promising, right?
But with traditional infrastructure, the problem is adding a new server to the pool. To add new servers you need to configure them: a. install the OS, b. install the necessary software, c. add the network configuration, d. add a firewall, e. update the load balancer, and so on.
All of this takes a lot of time; even if you have a golden image, it still takes time.
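To get a feel for why manual provisioning is slow, here is a rough, hypothetical outline of steps a. through e. for one new server. Every package name, address, and config path below is invented for illustration; real provisioning varies from site to site:

```shell
# Hypothetical manual provisioning of ONE new web server (slow, error-prone).
# All names, addresses, and paths below are illustrative, not a real setup.

# a. install the OS (via PXE boot or an image; this alone can take many minutes)
# b. install the necessary software (package name is made up)
apt-get update && apt-get install -y weblogic-server
# c. add network configuration (illustrative address)
echo "address 10.0.0.42/24" >> /etc/network/interfaces
# d. add a firewall rule so the server accepts HTTP traffic
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# e. register the new server with the load balancer and reload it
echo "server web6 10.0.0.42:80" >> /etc/haproxy/haproxy.cfg
service haproxy reload
```

Multiply this by every server you need during a traffic spike, and the delay becomes obvious.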
So the solution is Docker.
Docker is nothing but a slick container runtime. It runs directly on top of the host operating system and uses the host machine's hardware, but it can spawn multiple containers, and each container acts like a separate machine: each has its own address space and its own filesystem containing only the minimum software needed to host the application. So if hosting a piece of software needs one WebLogic server, the container contains just a minimal userland plus the WebLogic server.
Docker is faster than a virtual machine (VM). A VM contains one or more guest OSes that run on top of the host machine's OS, with a hypervisor layer in between that manages the guest OS's calls and translates them into host-specific OS calls. The hypervisor is mainly responsible for communication between the host OS and the guest OSes.
Unlike a VM, a Docker container runs directly on top of your OS and shares its kernel, network, and hardware, while keeping its own isolated address space and a filesystem that contains only the software required for the job. This makes Docker containers very lightweight and easy to spawn; as you will see later, a single command spawns a container, so horizontal scale-up becomes very easy.
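As a quick sketch of how cheap spawning is (this assumes Docker is installed and an `ubuntu` image is available locally; the container names are made up):

```shell
# Sketch: spawning several containers from one image takes seconds,
# which is exactly what makes horizontal scaling easy.
# Assumes a running Docker daemon and a local "ubuntu" image.
for i in 1 2 3; do
  docker run -d --name "web$i" ubuntu sleep infinity   # detached container
done
docker ps   # shows the three running containers
```

Compare those few seconds with the manual provisioning steps described earlier.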
Difference between Docker and VM
Docker follows a client-server architecture, where the Docker client talks to the Docker daemon via a socket or a RESTful API. The daemon and client can be on the same host, or the daemon can be hosted on a different machine; in that case, the Docker client communicates with the remote Docker daemon over the network.
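For example, the stock Docker client can be pointed at a remote daemon with the `-H` flag or the `DOCKER_HOST` environment variable (the hostname and port below are placeholders, and the remote daemon must be configured to listen on TCP):

```shell
# Talk to the local daemon (default UNIX socket):
docker ps

# Talk to a remote daemon over TCP (host and port are placeholders):
docker -H tcp://remote-host:2375 ps
# or, equivalently, via an environment variable:
DOCKER_HOST=tcp://remote-host:2375 docker ps
```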
Docker has three main parts:
1. Docker daemon: the daemon does most of the work in response to commands from the Docker client or the RESTful API. Depending on the command, it builds an image, spawns a new container, runs a container, updates an image, pushes an image, and so on.
2. RESTful API: Docker publishes a RESTful API, so if you want to control the Docker daemon from a program, you can call this API.
3. Docker client: the client is a CLI (command-line interface) where you type commands; the client then talks to the Docker daemon via the REST API or a socket, and the daemon performs the task for you. You can think of it like a Linux terminal talking to the kernel.
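As a small sketch of the RESTful API, you can query the daemon directly over its UNIX socket with `curl`, bypassing the CLI entirely (this assumes Docker is running locally and your `curl` supports `--unix-socket`):

```shell
# Ask the daemon for its version over the REST API:
curl --unix-socket /var/run/docker.sock http://localhost/version

# List all containers via the API (the same data `docker ps -a` shows):
curl --unix-socket /var/run/docker.sock "http://localhost/containers/json?all=1"
```

This is exactly the channel the Docker client itself uses under the hood.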
Docker Architecture – Picture Courtesy: Docker Site
To understand how Docker works, you need to know a few Docker terms:
1. Docker images: a Docker image is nothing but a logical template from which new Docker containers can be spawned. For the image-hosting project described above, we need an image that has an OS, say Ubuntu 13.04, with a WebLogic server. An image can be created with exactly this content.
An image can be built locally using a Dockerfile, or pulled from a global repository (if one with the image you need exists) managed by Docker, called Docker Hub or a registry.
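Here is a minimal sketch of building an image locally from a Dockerfile. The base image tag matches the Ubuntu 13.04 example above, but the installed package and the image tag are illustrative; a real WebLogic image needs considerably more setup:

```shell
# Write a minimal Dockerfile and build an image from it.
# The installed package is illustrative; a real WebLogic install is more involved.
cat > Dockerfile <<'EOF'
FROM ubuntu:13.04
RUN apt-get update && apt-get install -y openjdk-7-jre
CMD ["/bin/bash"]
EOF

docker build -t my-weblogic-base .   # "my-weblogic-base" is a made-up tag
```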
2. Docker Hub/registry: Docker Hub contains images that you can pull to satisfy your requirements. It holds plain OS images as well as hybrid images, such as an OS bundled with Tomcat. You can also push your own image to Docker Hub; to do that you need a Docker account.
Docker Hub is a central repository; you can think of it like a Maven repository. And like Git, you can push your images to it.
3. Docker containers: a Docker container is the actual runtime environment spawned from an image. It acts as a separate machine.
Some important commands:
To download an image from Docker Hub, type in the Docker client:
docker pull <image-name>   # <image-name> is the name of the image
To spawn a container and execute a command:
docker run -i -t <image-name> <command>
docker run -i -t ubuntu:13.04 ls
To push an image to Docker Hub:
docker push <image-name>
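A typical push session looks roughly like this (the account and image names are placeholders, and `docker login` must succeed with your Docker account first):

```shell
# Log in, tag a local image under your Docker Hub account, and push it.
docker login                                          # prompts for your Docker account
docker tag my-weblogic-base myaccount/my-weblogic-base:1.0
docker push myaccount/my-weblogic-base:1.0
```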
I will discuss more commands in the next Docker section, where I will guide you through setting up a Docker container.