What is Kubernetes? Container orchestration explained
Many modern workloads run in containers, so organizations need container orchestration platforms like Kubernetes to easily spin containerized workloads up and down at scale. Read on to learn the basics of Kubernetes and Kubernetes security.
Kubernetes defined
Kubernetes, sometimes referred to as K8s, is the leading open source orchestration platform for automating deployment and management of containerized workloads at scale.
You can use either an imperative or declarative approach with Kubernetes, but the latter is generally more reliable and better suited for scaling. Kubernetes is designed to be configurable, extensible, and portable, and to manage the ephemeral nature of containers.
Modern applications increasingly run in containers, which are lightweight, quick to deploy, and provide application-level isolation. Containers are great for organizations running cloud workloads because they are simple to scale and easy to spin up or down as demand dictates.
The challenge, then, becomes managing all your short-lived containerized workloads without going mad. That’s where Kubernetes comes in and why it’s important for organizations looking to scale.
With Kubernetes, you get automation capabilities to:
- Deploy or shut down containers as needed.
- Replace faulty containers that fail a health check.
- Adjust compute resources based on real-time demand.
- Implement secrets management and policy enforcement.
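These automations are typically driven by declarative manifests. As an illustrative sketch (the `web` name and the `nginx:1.27` image are placeholders, not from this article), the Deployment below asks Kubernetes to keep three replicas running and to replace any container that fails its HTTP health check:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # Kubernetes keeps exactly 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        ports:
        - containerPort: 80
        livenessProbe:    # failing this check triggers a container restart
          httpGet:
            path: /
            port: 80
          periodSeconds: 10
```

Applied with `kubectl apply -f deployment.yaml`, this expresses the desired state; Kubernetes continuously reconciles the cluster toward it.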
To understand where Kubernetes fits, let’s look at a few common use cases:
- Microservices architectures: For cloud-native organizations, many services and applications consist of small components, aka microservices, which are often deployed via containers. These organizations can use K8s to manage all the different components while providing high availability and uptime.
- CI/CD pipelines: Many DevOps teams utilize continuous integration/continuous deployment (CI/CD) pipelines to automate and speed up app development. K8s enables your DevOps teams to automate building, testing, and releasing applications efficiently.
- Hybrid/multi-cloud deployments: Many organizations use a combination of different cloud services and on-premises systems, and Kubernetes provides a standardized operational model that works across any cloud service and on-prem.
- AI/ML workloads: Kubernetes has become a popular choice for deploying AI and machine-learning (ML) services (e.g., via Amazon EKS), because K8s can scale up compute resources and optimize performance. The 2026 CNCF Annual Cloud Native Survey showed that 66% of organizations already use K8s for generative AI models.
This is just the tip of the iceberg when it comes to K8s use cases, as other possibilities include virtual machine (VM) orchestration, running serverless functions and high-performance computing, and managing Internet of Things (IoT) devices.
If you’re looking for a more lightweight version of Kubernetes for edge computing and IoT devices, you could deploy K3s instead. If you want to run a Kubernetes cluster on your local systems, then you could try MicroK8s.
One way to wrap your head around Kubernetes is to deploy Minikube, a version of K8s intended to create a local Kubernetes environment for experimenting and testing. Other options for managing production clusters include OpenShift and Rancher. Learn more about them and other Kubernetes alternatives here.
Kubernetes vs. Docker
You may understand what containers are – otherwise you likely wouldn’t be trying to learn about Kubernetes – but one area that can still cause confusion is the difference between K8s and Docker.
They aren’t competing platforms, but rather complementary ones. At a high level, Kubernetes provides the orchestration and maintenance of containers running in different clusters and environments, while Docker focuses on the creation, runtime, and management of containers themselves.
Like K8s, Docker is open source. Docker images are used to create each container, bundling the minimum needed to run it: libraries, binaries, configuration files, and a minimal base OS layer.
Docker does have its own orchestration tool called Docker Swarm. Docker is ideal for small container deployments, while Kubernetes is best for large and complex environments.
It’s very common to use Kubernetes and Docker together: Docker provides the container runtime and builds and manages container images, while Kubernetes orchestrates the running containers and automatically decides when to scale up or down.
Benefits of Kubernetes
- Scalability: If your organization deploys a large number of applications to users, then K8s’ autoscaling capabilities spin applications up and down as your needs change, so you don’t have too many containers in use or, conversely, not enough.
- High availability: If a containerized application fails, demand changes, or other issues arise, Kubernetes can use features like self-healing and autoscaling to respond and keep services up and running.
- Cloud agnostic: You don’t need to use a specific cloud service to run Kubernetes. Rather, it provides a standard way no matter where you wish to use K8s. For hybrid cloud, you can keep sensitive workloads and data on-prem, while using the cloud for all other workloads. This makes it easy to migrate to other cloud services as needed.
- OOTB security: Kubernetes features workload protection, secrets management, role-based access control (RBAC), and more.
Challenges of Kubernetes
Kubernetes is not without its fair share of challenges, especially when first learning how to deploy it. Some Kubernetes challenges include:
- Complexity and learning curve: Getting a cluster running isn’t too difficult, but while you’re learning it’s easy to cause poor resource usage and operational issues, such as not specifying compute requests for pods or hitting errors like CrashLoopBackOff.
- Cost management: Kubernetes makes scaling easy, but can also potentially lead to higher cloud service costs if it’s optimized poorly.
- Vulnerabilities and misconfigurations: Kubernetes has a few common vulnerabilities and exposures (CVEs) to be aware of, and misconfigurations can lead to inefficient resource usage, container escapes, and a larger attack surface.
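One way to sidestep the resource pitfalls above is to declare compute requests and limits on every container, so the scheduler can place pods sensibly and runaway workloads get capped. A minimal sketch (the pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:1.0        # hypothetical image
    resources:
      requests:             # what the scheduler reserves for the pod
        cpu: "250m"
        memory: "128Mi"
      limits:               # hard cap enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```

Requests inform scheduling decisions; limits prevent a single container from starving its neighbors on the node.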
Kubernetes architecture
The Kubernetes cluster architecture is broken out into a control plane and individual worker nodes. You run what is called a “cluster” that consists of one control plane and at least one worker node, but usually more (up to 5,000 nodes per cluster, if desired).
The different Kubernetes components to learn about include:
- Control plane: Like its name suggests, this component is a set of tools to manage K8s clusters and workloads running in them, and consists of four main components and one optional one.
- API Server: The kube-apiserver hosts the APIs used to manage your Kubernetes clusters and enables admins to interact with these clusters.
- etcd: Kubernetes etcd serves as the key-value store for data on managing clusters. It is not for data storage like storage volumes, but rather stores the critical data to keep clusters running.
- Scheduler: kube-scheduler identifies new pods that aren't currently assigned to a specific node and selects the ideal one for the new pods.
- Controller Manager: kube-controller-manager is a daemon that runs controllers to manage the state and make changes to achieve your desired state.
- Cloud Controller Manager: The cloud-controller-manager is an optional component that enables Kubernetes to interact with cloud services for load balancing and storage configuration.
- Worker nodes: Kubernetes nodes provide the virtual or physical environment where workloads run in pods. Each node runs three components:
- kubelet: This component ensures that workloads in the node are running as expected and registers new nodes to the kube-apiserver.
- kube-proxy: An optional network proxy that maintains network rules for communication to and from pods.
- Container runtime: The container runtime is responsible for managing the lifecycle of containers and enabling containers to operate on the host system (e.g., containerd, Docker Engine, etc.).
Key Kubernetes objects
Alongside the main components of Kubernetes listed above, some important K8s objects to understand include:
- Pods: Kubernetes pods are the smallest deployable object and are one or more containers running in a worker node that share storage and resources (though best practice is one container per pod). Learn more about Kubernetes pods here.
- Deployments: Kubernetes Deployments provide desired state management for how to handle the creation and rollout of pods. Learn more about Kubernetes Deployments here.
- ReplicaSets: The Kubernetes ReplicaSet maintains a stable set of replicated pods in a cluster to scale service availability. Learn more about ReplicaSet here.
- StatefulSets: Kubernetes StatefulSets are an option that allows databases and other data stores to persist even after a container is shut down or fails. Learn more about Kubernetes StatefulSets here.
- Dashboard: The Kubernetes Dashboard is a web application that enables admins to monitor clusters, review logs, and other system information. Learn more about Kubernetes Dashboard here.
- ConfigMaps: This is an API object for storing non-sensitive data in key-value pairs. Learn more about ConfigMaps here.
- Secrets: Kubernetes secrets are similar to ConfigMaps, except they are intended for confidential data. Secrets are stored unencrypted in etcd and accessible by anyone with API access by default, so enable encryption and role-based access (RBAC). Learn more about Kubernetes secrets here.
- Namespaces: Kubernetes Namespaces are used to organize resources within a single cluster among multiple users within that cluster. Learn more about Kubernetes Namespaces here.
- Audit log: A Kubernetes audit log records information from the Kubernetes auditing service to provide visibility into user activity and find potential security issues. Learn more about Kubernetes audit log here.
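To make the Secrets caveat above concrete, here is a minimal sketch of a Secret manifest (the name and values are placeholders). Values supplied via `stringData` end up base64-encoded, not encrypted, in etcd, which is why encryption at rest and RBAC matter:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # placeholder name
type: Opaque
stringData:                 # plain values; the API server base64-encodes them
  username: app-user
  password: example-password   # placeholder value; never commit real secrets
```

Pods can then consume this Secret as environment variables or a mounted volume, keeping credentials out of container images and pod specs.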
Networking in Kubernetes
Kubernetes networking enables cluster components to communicate with each other without relying on Network Address Translation (NAT) inside the cluster. Kubernetes uses a flat network model to limit complexity.
Communication in Kubernetes is broken out into four areas:
- Pod-to-pod: Every pod gets assigned an IP address and network namespace; this is how pods communicate with each other. To communicate with pods in other nodes, you’ll need to create a virtual network overlay for the cluster (e.g., Calico, Weave, Flannel).
- Pod-to-Service: A Kubernetes Service assigns a group of pods a stable virtual IP address (the clusterIP); traffic sent to that IP is routed to one of the backing pods by kube-proxy.
- Internet-to-Service: To let workloads reach outside sources without being publicly exposed, you can route egress traffic through your cloud service provider’s NAT gateway; inbound traffic from the internet typically enters through a load balancer or Ingress instead.
- Container-to-container: Containers within the same pod can communicate freely with each other via localhost.
Kubernetes Network Policies define which pods are allowed to communicate with each other. A Kubernetes Ingress controller provides traffic routing and load balancing, using rules to determine how inbound requests are routed.
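As a sketch of a NetworkPolicy (the `frontend`/`backend` labels are hypothetical), the manifest below allows only pods labeled `app: frontend` to reach pods labeled `app: backend` on TCP port 8080, denying all other ingress to those pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  podSelector:              # the pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:          # only traffic from frontend pods is allowed
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Note that enforcement requires a CNI plugin that supports Network Policies (e.g., Calico); without one, the policy is accepted but has no effect.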
Learn more about Kubernetes networking here.
How Kubernetes workload scaling works
Kubernetes enables the automatic scaling of pods, containers, etc. based on an organization’s needs in real time. Scaling can be done horizontally or vertically. Horizontal scaling increases capacity by adding more pod replicas (or cluster nodes) to handle growing demand, while vertical scaling increases the resources assigned to an individual pod or host.
For the former, you use the HorizontalPodAutoscaler (HPA), which is deployed as a Kubernetes API resource and controller. HPA runs a control loop that queries resource utilization periodically (default is every 15 seconds) against specified metrics.
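For example, an HPA manifest (the target Deployment name `web` is a placeholder) that keeps average CPU utilization near 70% across 2 to 10 replicas might look like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:           # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add/remove replicas to hold ~70% CPU
```

The HPA compares observed utilization against this target on each control-loop tick and adjusts the replica count accordingly.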
To scale vertically, you use the VerticalPodAutoscaler (VPA), an add-on installed via CustomResourceDefinitions that automatically adjusts the CPU and memory requests of pods based on observed usage.
You can also manually scale workloads, should you desire. Horizontal manual scaling uses the kubectl command-line interface (CLI), for example `kubectl scale deployment/web --replicas=5`. For vertical manual scaling, you resize the CPU and memory resources assigned to containers.
Kubernetes security best practices
Kubernetes includes security capabilities to protect the clusters, control plane, secrets, and workloads. Ensure you understand what Kubernetes can and can’t do security-wise before rolling it out into your production environment.
To effectively secure K8s from threats, vulnerabilities, and other risks, implement the following Kubernetes security best practices:
- Use TLS for Kubernetes control plane security.
- Encrypt all other Kubernetes data.
- Implement RBAC for Kubernetes API security.
- Harden node security.
- Isolate workloads and use network policies.
- Secure pods to protect applications.
- Use audit logs.
- Isolate workloads with separate namespaces.
- Secure the Kubernetes dashboard.
- Adopt third-party Kubernetes security tools for extra protection.
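For instance, RBAC for API security can be as simple as a namespaced Role plus a RoleBinding. A sketch (the user `jane` is hypothetical) granting read-only access to pods:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]           # "" = the core API group (pods live here)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Following least privilege, grant narrow, namespaced Roles like this rather than cluster-wide admin rights.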
Read our Kubernetes security best practices article for more in-depth explanations for each method.
Additional Kubernetes security tutorials
With all the different components of Kubernetes, it can be difficult to secure the entire infrastructure without accidentally introducing visibility gaps.
Use the following how-to’s to better protect your Kubernetes deployment:
- Kubernetes cluster security
- Kubernetes network security
- Kubernetes architecture security
- Kubernetes pod security
- Kubernetes admission controllers for security
- Kubernetes secrets management
- Kubernetes security posture management (KSPM)
- OWASP Kubernetes security projects
- Amazon EKS security best practices
- AWS Fargate security
Keep Kubernetes deployments secure with Sysdig
Cloud threats move fast, and you need to be able to respond just as quickly. With Sysdig Kubernetes and container security, you get deep visibility into your container and orchestration infrastructure to detect and respond to threats before they become breaches.
One option is to adopt Sysdig KSPM to mitigate risks, such as misconfigurations, poor identity policies, and compliance violations. But that’s only part of our comprehensive Kubernetes and container security solution.
