Today, we'll cover the basics of Kubernetes, including its history, why it's important, how it works, its architecture, and its main components. We'll also look at how Kubernetes and Docker work together.
What is Kubernetes?
Kubernetes, often called K8s, is an open-source platform that helps automate the deployment, scaling, and operation of application containers. It was originally created by Google and is now managed by the Cloud Native Computing Foundation (CNCF).
History of Kubernetes
Kubernetes originated from Google's internal system called Borg, which managed thousands of containerized applications. In 2014, Google released Kubernetes as an open-source project, drawing on a decade of experience in container orchestration.
Why Kubernetes?
Kubernetes addresses the challenges of managing containerized applications in a distributed environment by providing:
Automated Deployment: Simplifies the deployment and management of applications.
Scaling: Automatically scales applications based on demand.
Self-Healing: Automatically restarts failed containers and replaces or reschedules them to keep the declared number of replicas running (a short sketch follows this list).
Service Discovery and Load Balancing: Easily manages network traffic to applications.
Storage Orchestration: Manages persistent storage for stateful applications.
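To make the scaling and self-healing points concrete, here is a minimal sketch using the official Kubernetes Python client. It assumes you have kubectl access to a cluster and a hypothetical Deployment named web in the default namespace; the names are placeholders, not part of any particular setup.

```python
# A minimal sketch: scale a hypothetical Deployment named "web" to 5 replicas.
# Kubernetes then works to keep 5 pods running; if one crashes or its node
# fails, the control plane replaces it (self-healing).
from kubernetes import client, config

config.load_kube_config()          # reuses your local kubectl credentials
apps = client.AppsV1Api()

# Update only the desired replica count via the "scale" subresource.
apps.patch_namespaced_deployment_scale(
    name="web",                    # hypothetical Deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},
)

scale = apps.read_namespaced_deployment_scale(name="web", namespace="default")
print("desired replicas:", scale.spec.replicas)
```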
Why is it Called K8s?
The abbreviation "K8s" is derived from the 8 letters between the "K" and the "s" in "Kubernetes". It's a common shorthand in the tech community.
How Kubernetes Works
Kubernetes orchestrates containers across multiple hosts, providing a unified platform for deploying, scaling, and managing containerized applications. It abstracts the underlying infrastructure, making it easier to deploy applications consistently across different environments.
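As a rough sketch of this declarative model, the snippet below uses the official Python client to submit a desired state ("run three replicas of an nginx container") to the cluster. The Deployment name, labels, and image are illustrative assumptions, not values from this article.

```python
# A sketch of the declarative model: describe the desired state and let
# Kubernetes converge the cluster toward it.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),           # illustrative name
    spec=client.V1DeploymentSpec(
        replicas=3,                                            # desired state: 3 pods
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",                    # any container image works
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```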
Kubernetes Architecture
Kubernetes has a master-worker architecture consisting of a master node (the control plane) and one or more worker nodes. The master node maintains the desired state of the cluster, while the worker nodes run the containerized applications. Let's dive deeper into each part of the architecture to understand how Kubernetes orchestrates containers.
Architecture Diagram
Master Node (Control Plane):
The Master Node is responsible for maintaining the desired state of the cluster, managing workloads, and ensuring that the cluster operates smoothly. It consists of several key components:
API Server (kube-apiserver):
Acts as the front-end of the Kubernetes Master.
Exposes the Kubernetes API, which is the entry point for all REST commands used to control the cluster.
All administrative tasks are performed through the API server.
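As a small illustration, every call the official Python client makes below is just an authenticated REST request handled by kube-apiserver, the same endpoint kubectl talks to (the cluster contents will vary):

```python
# Every operation here is a REST call served by kube-apiserver.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

print("Namespaces:", [ns.metadata.name for ns in core.list_namespace().items])
print("Nodes:", [node.metadata.name for node in core.list_node().items])
```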
etcd:
A consistent and highly available key-value store used to store all cluster data.
Holds the configuration data, state information, and metadata of the cluster.
Scheduler (kube-scheduler):
Assigns newly created pods to nodes based on resource requirements and constraints.
Considers factors like resource availability, quality of service, and affinity/anti-affinity rules.
Ensures efficient use of cluster resources.
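For example, the resource requests and node selector in the sketch below are exactly the kind of inputs kube-scheduler weighs when picking a node. The pod name, image, and the disktype: ssd label are made-up values for illustration.

```python
# A sketch of scheduling inputs: resource requests and a node selector.
# kube-scheduler will only place this pod on a node with enough free CPU and
# memory that also carries the label disktype=ssd (an illustrative label).
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="demo-scheduling"),
    spec=client.V1PodSpec(
        node_selector={"disktype": "ssd"},
        containers=[
            client.V1Container(
                name="app",
                image="nginx:1.25",
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "250m", "memory": "128Mi"},
                    limits={"cpu": "500m", "memory": "256Mi"},
                ),
            )
        ],
    ),
)

core.create_namespaced_pod(namespace="default", body=pod)
```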
Controller Manager (kube-controller-manager):
Runs controller processes to regulate the state of the cluster.
Includes several controllers, such as the Node Controller, Replication Controller, and Endpoints Controller.
Ensures that the desired state of the cluster matches the actual state by performing actions like scaling pods, maintaining node health, and managing endpoint objects.
Cloud Controller Manager:
Integrates with cloud service providers to manage cloud-specific resources.
Runs controllers for cloud-specific operations like node management, route management, and service management.
Allows Kubernetes to be cloud-agnostic, supporting multiple cloud providers.
Worker Nodes:
Worker nodes, also known as minions, are the machines that run the containerized applications. Each node contains the necessary services to run and manage the containers. The key components of a node are:
Kubelet:
An agent running on each node, ensuring that containers are running as expected.
Communicates with the Control Plane to receive instructions and report the node's status.
Manages pod lifecycle and ensures the desired state is maintained.
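The status that each kubelet reports back is visible through the API. The sketch below prints every node's Ready condition and the pods scheduled onto it; the output obviously depends on your cluster.

```python
# Inspect what each kubelet reports: node conditions and the pods it runs.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    ready = next(c.status for c in node.status.conditions if c.type == "Ready")
    print(f"{node.metadata.name}: Ready={ready}")

    # Pods scheduled to this node, as tracked by the API server.
    pods = core.list_pod_for_all_namespaces(
        field_selector=f"spec.nodeName={node.metadata.name}"
    )
    for pod in pods.items:
        print("  ", pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```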
Kube-Proxy:
Maintains network rules on each node.
Enables network communication between pods within the cluster and handles traffic arriving from outside it.
Provides the load balancing behind Kubernetes Services, distributing traffic across the pods that back each Service.
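A Service is the object kube-proxy acts on: once the Service below exists, kube-proxy programs rules on every node so that traffic to the Service's cluster IP is spread across the pods matching its label selector. The app: hello-web label matches the hypothetical Deployment sketched earlier.

```python
# A ClusterIP Service: kube-proxy on every node routes traffic sent to the
# Service's virtual IP to whichever pods carry the matching label.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",
        selector={"app": "hello-web"},                       # pods to load-balance across
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

created = core.create_namespaced_service(namespace="default", body=service)
print("Service cluster IP:", created.spec.cluster_ip)
```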
Container Runtime:
The software that runs the containers, such as Docker or containerd.
Responsible for pulling images from container registries and for starting and managing containers.
Interaction Between Components
Pod Creation and Scheduling:
When a user submits a deployment request, the API server receives the request and stores the desired state in etcd.
The scheduler assigns the pod to an appropriate node based on resource requirements.
The kubelet on the selected node receives the pod specification and ensures the pod is running.
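One way to see this flow end to end is to watch pod events through the API server: a new pod first has no node assigned, then the scheduler sets spec.nodeName, and finally the kubelet reports it Running. The namespace below is just an assumption.

```python
# Watch pods in the "default" namespace and print scheduling/run progress.
from kubernetes import client, config, watch

config.load_kube_config()
core = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(core.list_namespaced_pod, namespace="default", timeout_seconds=60):
    pod = event["object"]
    print(
        f"{event['type']:<8} {pod.metadata.name:<30} "
        f"node={pod.spec.node_name or '<unscheduled>'} phase={pod.status.phase}"
    )
```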
Service Discovery and Load Balancing:
Kube-proxy sets up network rules to manage communication between pods and services.
Ensures efficient routing of traffic to the appropriate pods.
Cluster State Management:
Controllers continuously monitor the state of the cluster and perform necessary actions to maintain the desired state.
The API server provides a single point of interaction for all cluster operations.
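This "desired versus actual" comparison that the controllers perform is easy to observe from the outside. The sketch below reads the hypothetical Deployment from earlier and prints both numbers.

```python
# Compare the desired state (spec) with the observed state (status) that the
# controllers continuously reconcile.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(name="hello-web", namespace="default")
print("desired replicas:", dep.spec.replicas)
print("ready replicas:  ", dep.status.ready_replicas or 0)
```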
Conclusion
Kubernetes is a powerful tool for managing containerized applications in a distributed environment. Its architecture and components work together to provide a strong, scalable platform for modern applications.