Docker is a tool that simplifies how software is installed and run. One of the latest crazes, it is an amazing and powerful tool for packaging, shipping, and running applications. However, understanding and setting up Docker for your specific application can take a bit of time.
A Brief Overview of Docker Core Components
Docker images are read-only templates used to create Docker containers. They are built from instructions written in a Dockerfile, which defines the application and its dependencies. Think of an image as a snapshot of your application at a certain point in time.
Docker containers are running instances of Docker images. They include the operating system, application code, runtime, system tools, system libraries, and so on. You can connect multiple containers together, such as having a Node.js application in one container connected to a Redis database in another.
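As a sketch of how two containers can be connected, the commands below put a Node.js app and Redis on a shared network (the network name and the app image name are hypothetical placeholders; this assumes a running Docker daemon):

```shell
# Create a user-defined bridge network so containers can reach each other by name
docker network create my-net

# Start Redis on that network; other containers on it can reach this one as "redis"
docker run -d --name redis --network my-net redis

# Start the Node.js app on the same network; inside the container,
# the app can connect to the database at host "redis", port 6379
docker run -d --name my-node-app --network my-net my-node-app
```

Using a user-defined network like this lets Docker's built-in DNS resolve each container by its name, so the application does not need hard-coded IP addresses.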
A Docker Registry is a place for you to store and distribute Docker images.
Docker Compose is a tool that lets you build and start multiple Docker containers at once. Instead of running the same set of commands every time you want to start your application, you can do it all with a single command, once you provide a configuration file.
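A minimal docker-compose.yml for the Node.js-plus-Redis setup mentioned above might look like this sketch (service names and ports are illustrative assumptions):

```yaml
services:
  web:
    build: .          # build the app image from the Dockerfile in this directory
    ports:
      - "3000:3000"   # map container port 3000 to the host
    depends_on:
      - redis         # start the redis service first
  redis:
    image: redis      # pull the official redis image
```

With this file in place, `docker compose up` builds and starts both containers in one command.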
Docker Hub is what makes Docker truly powerful. It is to Docker what GitHub is to Git: an open platform for sharing Docker images. You can always build a Docker image locally using docker build, but it is good practice to push the image to Docker Hub so that the next person simply has to pull it.
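The build-push-pull workflow looks roughly like this (the username and image name are placeholders, and pushing assumes you have run `docker login` first):

```shell
# Build the image locally from the Dockerfile in the current directory
docker build -t your-username/my-app:latest .

# Push it to Docker Hub
docker push your-username/my-app:latest

# Anyone else can now pull the image instead of rebuilding it
docker pull your-username/my-app:latest
```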
How Does it Work?
Docker employs the concept of (reusable) layers: every instruction you write inside the Dockerfile creates a layer. For example, you would usually start with:
FROM ubuntu
RUN apt-get update && apt-get install -y python3
This Dockerfile installs python3 (as a layer) on top of the Ubuntu base image layer.
Essentially, for each project you write all the apt-get install, pip install, etc. commands into your Dockerfile instead of executing them locally.
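Collecting those install commands into a Dockerfile might look like the following sketch for a small Python project (the file names requirements.txt and app.py are assumptions):

```dockerfile
FROM ubuntu:22.04

# System packages; each RUN instruction becomes a cached layer
RUN apt-get update && apt-get install -y python3 python3-pip

# Python dependencies; copying requirements.txt before the app code
# lets Docker reuse this layer when only the code changes
COPY requirements.txt .
RUN pip3 install -r requirements.txt

# Application code
COPY . /app
WORKDIR /app

CMD ["python3", "app.py"]
```

Because layers are cached in order, putting the slow, rarely-changing install steps first means rebuilds after a code change are nearly instant.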