All Kubernetes Terms & commands

K8s terms

Container:

  • A lightweight, portable executable image that packages software and all of its dependencies (binaries/libraries)

  • Containers run on a set of machines/instances, called nodes, where the required tools are installed.

  • Containerized applications running on the nodes are managed by Kubernetes.

Cluster:

  • Has at least one worker node and at least one master node

  • A cluster has a desired state, which defines which applications or other workloads should be running.

minikube / Docker Desktop – Tools for running Kubernetes locally.

kubectl – A command line tool for communicating with Kubernetes.

kubelet – An agent that runs on each node in the cluster.

Control Plane – The backbone of Kubernetes.

K8s API / Desired state – A high-level description of what should be running in the cluster

Pod – The smallest deployable object in the Kubernetes object model

ReplicaSet: Maintains the desired number of running Pod replicas

Deployment: Manages Pods and ReplicaSets (see the sketch after these terms)

Services: Abstract a set of Pods and expose them on a port; Pods are matched to a Service by labels

Namespaces: Divide your cluster into logical partitions (for example, a dedicated monitoring namespace)

Volume: A directory which is accessible to the containers in a Pod

Job: Creates one or more Pods and retries execution of the Pods until a specified number of them successfully terminate

DaemonSet: Runs a Pod on all (or some) Nodes.

StatefulSet: Used to manage stateful applications.
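
As a minimal sketch tying several of the terms above together: a Deployment keeps a desired number of Pod replicas running (through the ReplicaSet it creates), and a Service selects those Pods by label. The names, image and ports below (web, nginx, port 80) are illustrative assumptions, not part of these notes.

#web.yaml (illustrative)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # desired number of Pod replicas, maintained by a ReplicaSet
  selector:
    matchLabels:
      app: web
  template:                      # Pod template used for every replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                     # matches Pods carrying this label
  ports:
    - port: 80
      targetPort: 80

Applying this file (kubectl apply -f web.yaml) creates the Deployment and the Service; kubectl get all then lists the Deployment, its ReplicaSet, the Pods and the Service.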

Kubernetes:

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management.

It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

A Kubernetes Cluster is primarily made up of the following components:

  • Master:

    • Kube API Server

    • Control Plane (kube-scheduler + kube-controller-manager + Cloud-controller-manager)

    • Etcd

  • Node:

    • Kubelet

    • Kube-proxy

    • Container Runtime

  • Addons:

    • DNS

    • WebUI

    • Container Resource Monitoring

    • Cluster-Level Logging

Why Kubernetes is important

  • Kubernetes allows developers of containerized applications, like those created with Docker, to build more reliable infrastructure, a critical need for applications and platforms that must respond to events such as rapid spikes in traffic or the need to restart failed services.

  • Kubernetes manages all container activities.

  • Optimizes container deployment and orchestration on cloud infrastructure by grouping containers into Pods that scale with application demand, without requiring additional code.

  • The key benefits of moving to container-centric infrastructure with Kubernetes are a self-healing infrastructure and environmental consistency from development to production.

Container Orchestration

In Dev environments, running containers on a single host for the development and testing of applications may be an option. However, when migrating to Quality Assurance (QA) and Production (Prod) environments, that is no longer viable, because applications and services need to meet specific requirements:

  • Fault-tolerance

  • On-demand scalability

  • Optimal resource usage

  • Accessibility from the outside world

  • Seamless updates/rollbacks without any downtime

Kubernetes runs containers on a cluster of virtual machines (VMs). It determines where to run containers, monitors the health of containers, and manages the full lifecycle of VM instances. This collection of tasks is known as Container Orchestration.

All about Kubernetes

Kubernetes Cluster Architecture

A Kubernetes cluster includes a cluster master node and one or more worker nodes. These are referred to as the master and nodes.

The master node manages the cluster.

Cluster services, such as the Kubernetes API server, resource controllers, and schedulers, run on the master.

The K8s API Server is the coordinator for all communications to the cluster.

The master determines what containers and workloads should run on each node.

When a Kubernetes cluster is created, either through a console or from the command line, nodes are created as well. (On Google Kubernetes Engine, these are Compute Engine VMs.)

Master and a Worker Node

We have two kinds of servers – a Master and a Worker Node

These can be VMs or physical servers. Together, these servers form a cluster controlled by the services that make up the Control Plane.

K8s Master, Nodes and Control Plane are the essential components that run and maintain the cluster.

The Control Plane refers to the functions that make decisions about cluster maintenance, whereas the Master is what you interact with on the command line interface to assess cluster state.

Kubernetes Master

The K8s master is normally a separate server responsible for maintaining the desired state of the cluster. It tells the Nodes how many instances of your application they should run.
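
The desired instance count is part of that desired state; as a sketch, for a Deployment it can be adjusted from the command line (the deployment name web is an assumed example):

  • kubectl scale deployment web --replicas=3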

Nodes

K8s Nodes are worker servers that run your application(s).

The user determines the number of Nodes and creates them.

In addition to running your application, each node runs two processes:

  • Kubelet: receives descriptions of the desired state of a pod from the API server, and ensures the pod is healthy and running on the Node

  • Kube-proxy: a network proxy that handles the UDP, TCP and SCTP traffic of each Node and provides load balancing. Kube-proxy is only used to connect to Services.
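
As a quick way to inspect these per-node components, kubectl's wide output lists each Node's kubelet version and container runtime (no assumptions beyond a running cluster):

  • kubectl get nodes -o wide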

Control Plane

The Control Plane is responsible for making decisions about the cluster and pushing it towards the desired state.

kube-apiserver, kube-controller-manager, and kube-scheduler are processes, and etcd is a database. The K8s master runs all four.

  • kube-apiserver is the front end of the Kubernetes API.

  • kube-controller-manager is a daemon that manages the Kubernetes control loop.

  • kube-scheduler is a function that looks for newly created pods that have no Nodes and assigns them a Node based on a host of requirements.

  • Etcd is a highly available key-value store that provides the backend database for Kubernetes.
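
On many self-managed clusters (for example, ones bootstrapped with kubeadm), these control-plane processes run as static Pods in the kube-system namespace, so they can be listed like any other Pods; managed offerings typically hide them:

  • kubectl get pods -n kube-system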

Visualize the Kubernetes cluster as two parts: the Control Plane (master) and the compute machines, or Nodes (workers).

Each node is its own environment and could be either a physical or Virtual machine.

Kubernetes Architecture

Kubernetes basic concepts

Workloads are distributed across nodes in a Kubernetes cluster. To understand how work is distributed, it is important to understand the basic concepts below:

  • Pods – the smallest deployable units; each Pod wraps one or more containers

  • Services – a stable front end (name and port) for a set of Pods

  • Volumes – storage for the data a Pod uses (see the sketch after this list)

  • Namespaces – logical partitions that divide the cluster between applications or teams
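
As a minimal sketch of how a Volume is made accessible to a Pod's container, the manifest below mounts an emptyDir volume (a temporary directory that exists for the Pod's lifetime); the names and image are illustrative assumptions:

#volpod.yaml (illustrative)
---
apiVersion: v1
kind: Pod
metadata:
  name: vol-demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data       # the Volume is visible to this container at /data
  volumes:
    - name: scratch
      emptyDir: {}               # temporary directory created when the Pod starts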

kubectl basic commands

Info

  • kubectl cluster-info

  • kubectl config get-contexts

Get the details

  • kubectl get all

  • kubectl get namespaces   (short name: ns)

  • kubectl get nodes   (short name: no)

  • kubectl get pods   (short name: po)

Describe

  • kubectl describe no

  • kubectl describe po

Delete

  • kubectl delete no <nodename>

  • kubectl delete po <podname>

  • kubectl delete rs <replicasetname>

Logs

  • kubectl logs <podname> / kubectl logs -f <podname>

List pods in a namespace / in all namespaces

  • kubectl get pods -n <namespace>

  • kubectl get pods --all-namespaces

Create

  • kubectl create -f ./nsdev.yaml

Apply

  • kubectl apply -f ./nsdev.yaml

Sample YAML to create a Namespace

#nsdev.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
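
The Namespace can then be created and verified with the commands from the sections above:

  • kubectl apply -f nsdev.yaml

  • kubectl get ns

kubectl apply is idempotent and can be re-run after editing the file, whereas kubectl create fails if the Namespace already exists.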