Table of contents
- Questions:
- What is Kubernetes and why is it important?
- What is the difference between Docker Swarm and Kubernetes?
- How does Kubernetes handle network communication between containers?
- How does Kubernetes handle the scaling of applications?
- What is a Kubernetes Deployment and how does it differ from a ReplicaSet?
- Can you explain the concept of rolling updates in Kubernetes?
- How does Kubernetes handle network security and access control?
- Can you give an example of how Kubernetes can be used to deploy a highly available application?
- What is a namespace in Kubernetes? Which namespace does a pod use if we don't specify one?
- How does Ingress help in Kubernetes?
- Explain different types of services in Kubernetes.
- Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
- How does Kubernetes handle storage management for containers?
- How does the NodePort service work?
- What is a multinode cluster and a single-node cluster in Kubernetes?
- Difference between "create" and "apply" in Kubernetes?
- Happy Learning :)
Questions:
What is Kubernetes and why is it important?
Kubernetes is an open-source container orchestration tool developed by Google to manage containerized applications across different types of environments, such as physical, virtual, and cloud infrastructure. It automates the deployment, scaling, and management of containerized applications. Kubernetes is important because it lets you deploy containerized applications to a cluster without tying them to specific individual machines; it automates the distribution and scheduling of application containers across the cluster in an efficient way.
What is the difference between Docker Swarm and Kubernetes?
Docker Swarm is Docker's native clustering and orchestration tool: it lets you create and access a pool of Docker hosts using the full suite of Docker tools. Kubernetes is an open-source container orchestration tool developed by Google; it is more extensive than Docker Swarm and more widely adopted.
- Docker Swarm is easy to set up and use; Kubernetes is more complex to set up and operate.
- Kubernetes offers stronger security controls than Docker Swarm.
- Kubernetes is more scalable than Docker Swarm.
- Kubernetes is more flexible, portable, and efficient than Docker Swarm.
How does Kubernetes handle network communication between containers?
Kubernetes uses kube-proxy to handle network communication between containers. Kube-proxy is a network proxy that runs on each node in the cluster and maintains network rules on those nodes. These rules allow network traffic to reach your Pods from sessions inside or outside of the cluster. Kube-proxy uses the iptables utility to set up its rules, for example forwarding traffic to Services of type=NodePort or type=LoadBalancer.
How does Kubernetes handle the scaling of applications?
Kubernetes lets you automate many management tasks, including provisioning and scaling. Its autoscaling mechanism works at two layers:
- Pod-based scaling, supported by the Horizontal Pod Autoscaler (HPA) and the newer Vertical Pod Autoscaler (VPA).
- Node-based scaling, supported by the Cluster Autoscaler.
- Horizontal Pod Autoscaler (HPA):
When the level of application usage changes, you need a way to add or remove pod replicas. Once configured, the Horizontal Pod Autoscaler manages workload scaling automatically. The HPA automatically scales the number of pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization (or, with custom metrics support, on other application-provided metrics).
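As a sketch (the Deployment name `web` is hypothetical), an HPA manifest that keeps a Deployment between 2 and 10 replicas, targeting roughly 70% average CPU utilization, could look like:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:          # the workload this HPA scales
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds ~70%
```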
- Vertical Pod Autoscaler (VPA):
The Vertical Pod Autoscaler (VPA) frees you from having to think about the resource requirements of your containers: it sets resource requests automatically based on observed usage patterns. Some workloads need short periods of high utilization; raising their requests permanently would waste unused resources and limit which nodes can run those workloads.
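The VPA is shipped separately as a custom resource rather than as part of core Kubernetes; assuming it is installed in the cluster (and again using a hypothetical Deployment name), a minimal manifest might look like:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:               # the workload whose requests the VPA manages
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Auto"     # VPA may evict pods to apply new resource requests
```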
- Cluster Autoscaler:
The Cluster Autoscaler automatically adjusts the number of nodes in a cluster. It adds nodes when pods fail to launch due to a lack of resources, and removes nodes when they are underutilized and their pods can be moved onto other nodes in the cluster.
What is a Kubernetes Deployment and how does it differ from a ReplicaSet?
A ReplicaSet is a Pod controller that ensures a specified number of Pod replicas are running at any given time. However, a ReplicaSet on its own does not provide declarative updates or rollbacks. A Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods, along with many other useful features. Therefore, you should generally use Deployments rather than managing ReplicaSets directly.
Can you explain the concept of rolling updates in Kubernetes?
Rolling updates allow a Deployment's update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods are scheduled onto nodes with available resources.
How does Kubernetes handle network security and access control?
Networking is a particularly complex part of Kubernetes. Networks can be configured in a variety of ways: you might use a service mesh, or you might not. Some resources in your cluster may interface only with internal networks, while others require direct access to the Internet. Ports, IP addresses, and other network attributes are usually configured dynamically, which can make it difficult to keep track of what is happening at the network level.
Network policies define rules that govern how pods can communicate with each other at the network level. In addition to providing a systematic means of controlling pod communications, network policies let admins define resources and associated networking rules based on contexts such as pod labels and namespaces.
Access control:
Access control in Kubernetes involves authentication and authorization mechanisms.
- RBAC roles and role bindings are used for defining permissions.
- Cluster roles and cluster role bindings provide cluster-wide access control.
- Admission controllers validate and enforce access policies.
- Security contexts and auditing enhance security and accountability.
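To make both halves concrete, here is a minimal sketch (the pod labels and names are hypothetical): a NetworkPolicy that only lets `frontend` pods reach `backend` pods on port 8080, followed by an RBAC Role granting read-only access to pods in a namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  podSelector:             # the pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:         # only pods labeled app=frontend may connect
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```

A RoleBinding would then attach `pod-reader` to a specific user, group, or service account.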
Can you give an example of how Kubernetes can be used to deploy a highly available application?
Kubernetes achieves high availability for applications through features such as replication, scaling, pod anti-affinity, health checks, self-healing, service discovery, load balancing, and persistent storage. Together these features distribute the workload, monitor application health, ensure uninterrupted service, balance traffic, and maintain data integrity, resulting in a highly available application deployment.
What is a namespace in Kubernetes? Which namespace does a pod use if we don't specify one?
In Kubernetes, a namespace is a virtual cluster within a physical cluster. It provides a way to divide and segregate resource objects, such as pods, services, and deployments, into distinct groups. Namespaces are primarily used to create logical boundaries and enable multi-tenancy in a Kubernetes cluster.
If you don't specify a namespace for a pod, it is created in the default namespace. The default namespace is the initial namespace created by Kubernetes, and all objects without an explicit namespace are assumed to belong to it.
Note: you can create and use custom namespaces to organize and manage resources based on your requirements, enabling better isolation and resource allocation within the cluster.
How does Ingress help in Kubernetes?
Ingress acts as a traffic controller and load balancer in Kubernetes:
- It provides external access to services running within the cluster.
- It enables routing of incoming traffic based on host, path, or other criteria.
- It supports load balancing to distribute traffic across multiple backend services.
- It allows for TLS termination, handling SSL/TLS encryption at the edge.
- It simplifies the management and exposure of multiple services behind a single entry point.
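A minimal Ingress sketch (the hostname, Service name, and TLS Secret here are hypothetical) showing host/path routing and TLS termination:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls    # TLS is terminated at the edge using this Secret
  rules:
  - host: example.com          # route by host...
    http:
      paths:
      - path: /api             # ...and by path
        pathType: Prefix
        backend:
          service:
            name: api-svc      # backend Service receiving the traffic
            port:
              number: 80
```

An Ingress controller (e.g., NGINX Ingress Controller) must be running in the cluster for this resource to take effect.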
Explain different types of services in Kubernetes.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster that all provide the same functionality. When created, each Service is assigned a unique IP address (also called the clusterIP). This address is tied to the lifespan of the Service and will not change while the Service is alive. Pods can be configured to talk to the Service, knowing that communication to the Service will be automatically load-balanced to some Pod that is a member of the Service.
Types of Services in Kubernetes:
- ClusterIP: exposes the Service on a cluster-internal IP (the default type).
- NodePort: exposes the Service on a static port on each node's IP.
- LoadBalancer: exposes the Service externally using a cloud provider's load balancer.
- ExternalName: maps the Service to an external DNS name.
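As a sketch (the names and ports are hypothetical), a Service of type NodePort selecting pods labeled `app: web` could look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort       # omit this field for the default ClusterIP type
  selector:
    app: web           # pods with this label become Service endpoints
  ports:
  - port: 80           # cluster-internal Service port
    targetPort: 8080   # port the container actually listens on
    nodePort: 30080    # must fall in the 30000-32767 range
```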
Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
Auto-healing: auto-healing is a feature that allows Kubernetes to automatically restart containers that fail for various reasons. It is a very useful feature that helps keep your applications up and running.
Auto-scaling: auto-scaling is a feature that allows Kubernetes to automatically scale the number of pods in a deployment based on the resource usage of the existing pods.
As an example, you can deliberately kill a pod to observe the auto-healing feature: the controller notices the missing replica and recreates it. For maintaining the desired replica count, Kubernetes uses a ReplicaSet.
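Self-healing at the container level is typically driven by probes. A sketch of a Pod with a liveness probe (the image and path are illustrative) that Kubernetes restarts whenever the probe fails:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx                 # example image
    livenessProbe:
      httpGet:                   # kubelet polls this endpoint
        path: /
        port: 80
      initialDelaySeconds: 5     # wait before the first check
      periodSeconds: 10          # check every 10 seconds
```

If the HTTP check fails repeatedly, the kubelet kills the container and restarts it according to the Pod's restart policy.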
How does Kubernetes handle storage management for containers?
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster, just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the storage implementation, be that NFS, iSCSI, or a cloud-provider-specific storage system.
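As a sketch, an administrator-provisioned PV and a claim that could bind to it (names are hypothetical, and the hostPath backing is suitable for demos only):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:              # local-disk backing; real clusters use NFS, iSCSI, cloud disks, etc.
    path: /data/pv-demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi       # the claim binds to a PV that satisfies size and access mode
```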
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request specific sizes and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany, or ReadWriteMany; see AccessModes).
How does the NodePort service work?
A NodePort service in Kubernetes exposes a specific port of a service to the outside world:
- Each worker node in the cluster listens on the assigned NodePort and forwards incoming traffic to the service.
- NodePort services are assigned a port from the range 30000-32767 (chosen automatically unless you specify one).
- External clients access the NodePort service using a node's IP address or hostname together with the assigned NodePort.
- Load balancing across the service's Pods is handled automatically, whichever worker node receives the traffic.
- Security measures such as firewall rules or network policies should be implemented to control access and ensure security.
What is a multinode cluster and a single-node cluster in Kubernetes?
Multinode cluster:
A multinode cluster consists of multiple worker nodes and a control plane. Each worker node is a separate physical or virtual machine that runs containerized applications. The control plane, typically consisting of multiple master nodes, manages and orchestrates the worker nodes. A multinode cluster offers scalability, high availability, and fault tolerance, as the workload is distributed across multiple nodes.
Single-node cluster:
A single-node cluster, as the name suggests, comprises only one node that acts as both worker node and control plane: all Kubernetes components and the workload run on the same physical or virtual machine. A single-node cluster is often used for development, testing, or learning purposes when you don't need the full capabilities of a multinode cluster.
Difference between "create" and "apply" in Kubernetes?
kubectl apply is used to create and update a resource in Kubernetes: if the resource does not exist it is created, and if it already exists it is updated with the new configuration. kubectl create is used only to create a resource; if the resource already exists, it throws an error.