Concepts of Kubernetes for Beginners

Manoj Kumar
5 min read · Sep 21, 2024

--

Explanation of Kubernetes Architecture

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.

A Kubernetes user defines how applications should run and how they should interact with other applications and the outside world. Services can be scaled up or down, updates can be rolled out and rolled back in case of failure, and a required number of instances can be kept running at all times.
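To make this concrete, here is a minimal sketch of that declarative model using the official Kubernetes Python client (`pip install kubernetes`). The deployment name, image, and replica count are illustrative placeholders, not part of any real setup:

```python
# A minimal sketch of declarative deployment using the official
# Kubernetes Python client (pip install kubernetes). The deployment
# name, image, and replica count are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # read the local ~/.kube/config

apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state: Kubernetes keeps 3 pod replicas running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Scaling then becomes a matter of changing `replicas` to a new number, and changing the image triggers a rolling update that can be rolled back if it misbehaves.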

What is Kubernetes?

· It is a project managed by the CNCF (Cloud Native Computing Foundation).
· Kubernetes is a container management and orchestration system.
· It is an open-source container orchestration platform originally developed by Google.
· Most importantly, it exposes the same API across bare metal, virtual machines, and every major cloud provider.

Things we can do with Kubernetes:

• Autoscaling and self-healing of applications (see the sketch after this list).
• Efficient and robust application deployments.
• Easy deployment strategies such as blue/green and canary deployments for updates and scalability.
• Management of both stateless and stateful applications (databases, etc.).
• Role-based access control (RBAC).
• Integration with third-party services (service catalog).
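As a rough illustration of the autoscaling point above, the sketch below attaches a HorizontalPodAutoscaler to a hypothetical `web` Deployment using the same Python client; the names and thresholds are assumptions chosen for the example:

```python
# A rough autoscaling sketch with the Python client: attach a
# HorizontalPodAutoscaler to a hypothetical "web" Deployment so the
# control plane scales it between 2 and 10 replicas based on CPU load.
from kubernetes import client, config

config.load_kube_config()

autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"  # assumed target
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above ~70% average CPU
    ),
)

autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```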

Key features that Kubernetes provides:

• Scheduling: Decides where containers should run (see the sketch after this list).
• Lifecycle and health: Keeps containers running despite failures.
• Scaling: Makes sets of containers bigger or smaller.
• Storage volumes: Provides data to the containers.
• Logging and monitoring: Tracks what is happening with the containers.
• Load balancing: Distributes traffic across a set of containers based on load.
• Debugging and introspection: Lets us enter or attach to running containers.
• Identity and authorization: Controls who can do what with the containers.
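Several of these features surface directly on a container's spec. The sketch below (Python client again) shows resource requests, which the scheduler uses for placement, and a liveness probe, which drives the health and lifecycle behaviour; the image, health-check path, port, and sizes are made-up values:

```python
# A sketch of how some of these features appear on a single container spec
# (Python client); the image, health-check path, port, and sizes are made up.
from kubernetes import client

container = client.V1Container(
    name="api",
    image="example/api:1.0",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
    resources=client.V1ResourceRequirements(
        # scheduling: the scheduler uses these requests to pick a node that fits
        requests={"cpu": "250m", "memory": "128Mi"},
        limits={"cpu": "500m", "memory": "256Mi"},
    ),
    liveness_probe=client.V1Probe(
        # lifecycle and health: restart the container if this endpoint stops answering
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
)
```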

What is a Container?

A containerized application is built with cloud-native architecture and is designed to run within containers. Containers are lightweight, portable, and can host either the entire application or specific microservices.

Containers enable the use of microservices, allowing applications to be broken down into smaller, distributed components. This approach enhances scalability, maintainability, and flexibility.

Developing, packaging, and deploying applications in containers is known as containerization. This process provides consistency in running applications across diverse environments, contributing to portability and ease of deployment.

Containerized apps are designed to run consistently across different environments. This minimizes compatibility issues and makes applications easier to deploy and manage in different settings.

Containers offer isolation, allowing developers to identify and fix issues within a specific container without affecting the entire application.
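As a small illustration of containerization itself, the following sketch uses the Docker SDK for Python (`pip install docker`) and assumes a local Docker daemon; the image, port mapping, and container name are arbitrary examples:

```python
# A small containerization sketch using the Docker SDK for Python
# (pip install docker); assumes a local Docker daemon is running.
# The image, port mapping, and container name are arbitrary examples.
import docker

docker_client = docker.from_env()

# Run a packaged image: the same image runs unchanged on a laptop,
# a CI runner, or a Kubernetes node.
container = docker_client.containers.run(
    "nginx:1.25",             # the packaged, portable unit
    detach=True,              # run in the background
    ports={"80/tcp": 8080},   # map container port 80 to host port 8080
    name="demo-web",
)

print(container.short_id, container.status)

container.stop()
container.remove()
```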

What Is A Kubernetes Pod?

A Kubernetes pod is a group of one or more application containers and provides a useful way to organize them.

It adds an extra layer that includes shared elements such as storage, an IP address, and communication channels between containers. Here’s a simple guide to understand how it works:

In Kubernetes, containers are not run directly on the virtual or physical machines; pods are the unit through which containers are started and stopped.

Containers that must communicate directly in order to function are housed in the same pod. These containers are also co-scheduled onto the same node because they work within the same context.

The shared storage volumes also allow data to persist across container restarts. In addition, Kubernetes scales or replicates the number of pods up and down to meet changing load, traffic, or performance requirements, and similar pods scale together.

Another distinctive aspect of Kubernetes is that, rather than creating containers directly, it creates pods that wrap the containers. Whenever you create a K8s pod, the platform automatically schedules it to run on a node. The pod remains active until its process completes, the resources supporting it run out, the pod object is removed, or the host node terminates or fails.
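Putting this together, here is a minimal pod sketch with the Python client: two containers packaged in one pod, sharing its IP address (so they can talk over localhost) and co-scheduled onto the same node. The pod name, images, and sidecar command are placeholders:

```python
# A minimal pod sketch with the Python client: two containers packaged in
# one pod, sharing its IP address and co-scheduled onto the same node.
# Pod name, images, and the sidecar command are placeholders.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-with-sidecar"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="web",
                image="nginx:1.25",
                ports=[client.V1ContainerPort(container_port=80)],
            ),
            # a helper container in the same pod; it can reach "web" on localhost
            client.V1Container(
                name="log-agent",
                image="busybox:1.36",
                command=["sh", "-c", "tail -f /dev/null"],
            ),
        ]
    ),
)

created = v1.create_namespaced_pod(namespace="default", body=pod)
print(created.status.phase)  # "Pending" until the scheduler places it on a node
```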

Each pod runs on a Kubernetes node, and if a pod fails, its work can fail over to a logically identical pod running on a different node. And speaking of Kubernetes nodes…

What is a Kubernetes Node?

A Kubernetes node is a virtual or physical machine that one or more Kubernetes pods run on. It is a worker machine that provides the services necessary to run pods, along with the CPU and memory resources they need.

Every node consists of three essential elements:

1. Kubelet: Kubelet serves as an agent operating within each node, overseeing the proper execution of pods. It facilitates communication between the Master and nodes, ensuring seamless pod operation.
2. Container Runtime: The container runtime is the software responsible for executing containers. It handles various tasks, such as fetching container images from repositories or registries, unpacking them, and running the respective applications.
3. Kube-proxy: Kube-proxy operates as a network proxy within each node, managing the networking rules both within the node (among its pods) and across the entire Kubernetes cluster. This includes overseeing communication and connectivity aspects within the node and throughout the broader cluster.
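A quick way to see these worker machines from the outside is to list them through the API. This sketch (Python client, assuming a working kubeconfig) prints each node's reported CPU and memory capacity and its "Ready" condition:

```python
# An introspection sketch (Python client, assuming a working kubeconfig):
# list the cluster's worker machines with the CPU and memory capacity the
# kubelet reports, plus the "Ready" condition.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    capacity = node.status.capacity  # e.g. {"cpu": "4", "memory": "16374560Ki", ...}
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: cpu={capacity['cpu']} "
          f"memory={capacity['memory']} ready={ready}")
```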

What is a Kubernetes Cluster?

A Kubernetes cluster is a set of machines, called nodes, that collectively run containerized applications orchestrated by Kubernetes.

The cluster acts as a unified computing environment where workloads are deployed, managed, and scaled. Each node within the cluster can be a virtual or physical machine.
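Seen through the API, the cluster behaves as one unit. The sketch below lists every pod in every namespace together with the node the scheduler placed it on (Python client again, assuming a working kubeconfig):

```python
# A cluster-level view (Python client, assuming a working kubeconfig):
# every pod in every namespace, with the node the scheduler placed it on.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")
```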

--

Manoj Kumar

Passionate about cloud security and about sharing experience with friends and fellow DevOps engineers.