Linux Foundation KCNA - Kubernetes and Cloud Native Associate
Which of the following is a challenge derived from running cloud native applications?
The operational costs of maintaining the data center of the company.
Cost optimization is complex to maintain across different public cloud environments.
The lack of different container images available in public image repositories.
The lack of services provided by the most common public clouds.
The Answer Is:
B
Explanation:
The correct answer is B. Cloud-native applications often run across multiple environments—different cloud providers, regions, accounts/projects, and sometimes hybrid deployments. This introduces real cost-management complexity: pricing models differ (compute types, storage tiers, network egress), discount mechanisms vary (reserved capacity, savings plans), and telemetry/charge attribution can be inconsistent. When you add Kubernetes, the abstraction layer can further obscure cost drivers because costs are incurred at the infrastructure level (nodes, disks, load balancers) while consumption happens at the workload level (namespaces, Pods, services).
Option A is less relevant because cloud-native adoption often reduces dependence on maintaining a private datacenter; many organizations adopt cloud-native specifically to avoid datacenter CapEx/ops overhead. Option C is generally untrue—public registries and vendor registries contain vast numbers of images; the challenge is more about provenance, security, and supply chain than “lack of images.” Option D is incorrect because major clouds offer abundant services; the difficulty is choosing among them and controlling cost/complexity, not a lack of services.
Cost optimization being complex is a recognized challenge because cloud-native architectures include microservices sprawl, autoscaling, ephemeral environments, and pay-per-use dependencies (managed databases, message queues, observability). Small misconfigurations can cause big bills: noisy logs, over-requested resources, unbounded HPA scaling, and egress-heavy architectures. That’s why practices like FinOps, tagging/labeling for allocation, and automated guardrails are emphasized.
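One common guardrail from the paragraph above is labeling workloads for cost allocation. A minimal sketch of what that can look like on a Deployment; the label keys (team, cost-center, env) and all names here are illustrative conventions chosen by an organization, not Kubernetes-defined fields:

```yaml
# Hypothetical cost-allocation labels; a FinOps/chargeback tool can
# aggregate spend per team or environment by grouping on these labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  labels:
    team: payments        # illustrative allocation labels
    cost-center: cc-1234
    env: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
        team: payments
        cost-center: cc-1234
        env: production
    spec:
      containers:
      - name: checkout
        image: example.com/checkout:1.0   # placeholder image
        resources:
          requests:                       # explicit requests also help curb over-provisioning
            cpu: 250m
            memory: 256Mi
```

Propagating the labels onto the Pod template matters because infrastructure cost tools typically attribute node usage to Pods, not to the Deployment object itself.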
So the best answer describing a real, common cloud-native challenge is B.
=========
What default level of protection is applied to the data in Secrets in the Kubernetes API?
The values use AES symmetric encryption
The values are stored in plain text
The values are encoded with SHA256 hashes
The values are base64 encoded
The Answer Is:
D
Explanation:
Kubernetes Secrets are designed to store sensitive data such as tokens, passwords, or certificates and make them available to Pods in controlled ways (as environment variables or mounted files). However, the default protection applied to Secret values in the Kubernetes API is base64 encoding, not encryption. That is why D is correct. Base64 is an encoding scheme that converts binary data into ASCII text; it is reversible and does not provide confidentiality.
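The reversibility is easy to demonstrate with the standard base64 utility (the value here is illustrative):

```shell
# Encode a value the way a Secret manifest stores it.
printf 'admin' | base64
# -> YWRtaW4=

# Decoding recovers the original with no key or password needed,
# which is why base64 is encoding, not encryption.
printf 'YWRtaW4=' | base64 -d
# -> admin
```

Anyone who can read the Secret object can run the second command on its values.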
By default, Secret objects are stored in the cluster’s backing datastore (commonly etcd) as base64-encoded strings inside the Secret manifest. Unless the cluster is configured for encryption at rest, those values are effectively stored unencrypted in etcd and may be visible to anyone who can read etcd directly or who has API permissions to read Secrets. This distinction is critical for security: base64 can prevent accidental issues with special characters in YAML/JSON, but it does not protect against attackers.
Option A is only correct if encryption at rest is explicitly configured on the API server using an EncryptionConfiguration (for example, AES-CBC or AES-GCM providers). Many managed Kubernetes offerings enable encryption at rest for etcd as an option or by default, but that is a deployment choice, not the universal Kubernetes default. Option C is incorrect because hashing is used for verification, not for secret retrieval; you typically need to recover the original value, so hashing isn’t suitable for Secrets. Option B (“plain text”) is misleading: the stored representation is base64-encoded, but because base64 is reversible, the security outcome is close to plain text unless encryption at rest and strict RBAC are in place.
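As a sketch of what enabling encryption at rest can look like, the API server is started with --encryption-provider-config pointing at a file along these lines (the key value is a placeholder, not a real key):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets                # encrypt Secret objects at rest in etcd
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <BASE64-ENCODED-32-BYTE-KEY>   # placeholder; generate your own
  - identity: {}           # fallback so existing unencrypted data stays readable
```

Provider order matters: the first provider is used for writes, while the rest are tried for reads during migration.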
The correct operational stance is: treat Kubernetes Secrets as sensitive; lock down access with RBAC, enable encryption at rest, avoid broad Secret read permissions, and consider external secret managers when appropriate. But strictly for the question’s wording—default level of protection—base64 encoding is the right answer.
=========
What element allows Kubernetes to run Pods across the fleet of nodes?
The node server.
The etcd static pods.
The API server.
The kubelet.
The Answer Is:
D
Explanation:
The correct answer is D (the kubelet) because the kubelet is the node agent responsible for actually running Pods on each node. Kubernetes can orchestrate workloads across many nodes because every worker node (and control-plane node that runs workloads) runs a kubelet that continuously watches the API server for PodSpecs assigned to that node and then ensures the containers described by those PodSpecs are started and kept running. In other words, the kube-scheduler decides where a Pod should run (sets spec.nodeName), but the kubelet is what makes the Pod run on that chosen node.
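The division of labor is visible in the Pod object itself: the scheduler fills in spec.nodeName, and the kubelet on that node acts on it. A simplified view of a Pod after scheduling (all names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-7d4b9
spec:
  nodeName: worker-2      # set by kube-scheduler; the kubelet on worker-2 now owns this Pod
  containers:
  - name: web
    image: nginx:1.25     # pulled and started by that node's kubelet via the CRI runtime
```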
The kubelet integrates with the container runtime (via CRI) to pull images, create sandboxes, start containers, and manage their lifecycle. It also reports node and Pod status back to the control plane, executes liveness/readiness/startup probes, mounts volumes, and performs local housekeeping that keeps the node aligned with the declared desired state. This node-level reconciliation loop is a key Kubernetes pattern: the control plane declares intent, and the kubelet enforces it on the node.
Option C (API server) is critical but does not run Pods; it is the control plane’s front door for storing and serving cluster state. Option A (“node server”) is not a Kubernetes component. Option B (etcd static pods) is a misunderstanding: etcd is the datastore for Kubernetes state and may run as static Pods in some installations, but it is not the mechanism that runs user workloads across nodes.
So, Kubernetes runs Pods “across the fleet” because each node has a kubelet that can realize scheduled PodSpecs locally and keep them healthy over time.
=========
What is the goal of load balancing?
Automatically measure request performance across instances of an application.
Automatically distribute requests across different versions of an application.
Automatically distribute instances of an application across the cluster.
Automatically distribute requests across instances of an application.
The Answer Is:
D
Explanation:
The core goal of load balancing is to distribute incoming requests across multiple instances of a service so that no single instance becomes overloaded and so that the overall service is more available and responsive. That matches option D, which is the correct answer.
In Kubernetes, load balancing commonly appears through the Service abstraction. A Service selects a set of Pods using labels and provides stable access via a virtual IP (ClusterIP) and DNS name. Traffic sent to the Service is then forwarded to one of the healthy backend Pods. This spreads load across replicas and provides resilience: if one Pod fails, it is removed from endpoints (or becomes NotReady) and traffic shifts to remaining replicas. The actual traffic distribution mechanism depends on the networking implementation (kube-proxy using iptables/IPVS or an eBPF dataplane), but the intent remains consistent: distribute requests across multiple backends.
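A minimal Service showing that pattern (names assumed for illustration):

```yaml
# Requests to web-svc:80 are spread across all healthy Pods
# matching the selector, regardless of which node they run on.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web            # every healthy Pod with this label becomes a backend
  ports:
  - port: 80            # Service port clients connect to
    targetPort: 8080    # container port on the backend Pods
```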
Option A describes monitoring/observability, not load balancing. Option B describes progressive delivery patterns like canary or A/B routing; that can be implemented with advanced routing layers (Ingress controllers, service meshes), but it’s not the general definition of load balancing. Option C describes scheduling/placement of instances (Pods) across cluster nodes, which is the role of the scheduler and controllers, not load balancing.
In cloud environments, load balancing may also be implemented by external load balancers (cloud LBs) in front of the cluster, then forwarded to NodePorts or ingress endpoints, and again balanced internally to Pods. At each layer, the objective is the same: spread request traffic across multiple service instances to improve performance and availability.
=========
Which Kubernetes Service type exposes a service only within the cluster?
ClusterIP
NodePort
LoadBalancer
ExternalName
The Answer Is:
A
Explanation:
In Kubernetes, a Service provides a stable network endpoint for a set of Pods and abstracts away their dynamic nature. Kubernetes offers several Service types, each designed for different exposure requirements. Among these, ClusterIP is the Service type that exposes an application only within the cluster, making it the correct answer.
When a Service is created with the ClusterIP type, Kubernetes assigns it a virtual IP address that is reachable exclusively from within the cluster’s network. This IP is used by other Pods and internal components to communicate with the Service through cluster DNS or environment variables. External traffic from outside the cluster cannot directly access a ClusterIP Service, which makes it ideal for internal APIs, backend services, and microservices that should not be publicly exposed.
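A sketch of an internal-only Service (the name and port are illustrative); from other Pods it would be reachable via cluster DNS, e.g. orders-api.default.svc.cluster.local, but not from outside the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  type: ClusterIP        # also what you get if type is omitted
  selector:
    app: orders
  ports:
  - port: 8080           # virtual IP + this port, in-cluster only
```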
Option B (NodePort) is incorrect because NodePort exposes the Service on a static port on each node’s IP address, allowing access from outside the cluster. Option C (LoadBalancer) is incorrect because it provisions an external load balancer—typically through a cloud provider—to expose the Service publicly. Option D (ExternalName) is incorrect because it does not create a proxy or internal endpoint at all; instead, it maps the Service name to an external DNS name outside the cluster.
ClusterIP is also the default Service type in Kubernetes. If no type is explicitly specified in a Service manifest, Kubernetes automatically assigns it as ClusterIP. This default behavior reflects the principle of least exposure, encouraging internal-only access unless external access is explicitly required.
From a cloud native architecture perspective, ClusterIP Services are fundamental to building secure, scalable microservices systems. They enable internal service-to-service communication while reducing the attack surface by preventing unintended external access.
According to Kubernetes documentation, ClusterIP Services are intended for internal communication within the cluster and are not reachable from outside the cluster network. Therefore, ClusterIP is the correct and fully verified answer, making option A the right choice.
=========
What is the primary purpose of a Horizontal Pod Autoscaler (HPA) in Kubernetes?
To automatically scale the number of Pod replicas based on resource utilization.
To track performance metrics and report health status for nodes and Pods.
To coordinate rolling updates of Pods when deploying new application versions.
To allocate and manage persistent volumes required by stateful applications.
The Answer Is:
A
Explanation:
The Horizontal Pod Autoscaler (HPA) is a core Kubernetes feature designed to automatically scale the number of Pod replicas in a workload based on observed metrics, making option A the correct answer. Its primary goal is to ensure that applications can handle varying levels of demand while maintaining performance and resource efficiency.
HPA works by continuously monitoring metrics such as CPU utilization, memory usage, or custom and external metrics provided through the Kubernetes metrics APIs. Based on target thresholds defined by the user, the HPA increases or decreases the number of replicas in a scalable resource like a Deployment, ReplicaSet, or StatefulSet. When demand increases, HPA adds more Pods to handle the load. When demand decreases, it scales down Pods to free resources and reduce costs.
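A minimal HPA using the autoscaling/v2 API illustrates those pieces (target names are illustrative):

```yaml
# Scale the "web" Deployment between 2 and 10 replicas,
# targeting ~70% average CPU utilization across its Pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:          # the scalable resource HPA adjusts
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Note that CPU utilization targets are computed against the Pods' resource requests, so workloads need requests set for this to work.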
Option B is incorrect because tracking performance metrics and reporting health status is handled by components such as the metrics-server, monitoring systems, and observability tools—not by the HPA itself. Option C is incorrect because rolling updates are managed by Deployment strategies, not by the HPA. Option D is incorrect because persistent volume management is handled by Kubernetes storage resources and CSI drivers, not by autoscalers.
HPA operates at the Pod replica level, which is why it is called “horizontal” scaling—scaling out or in by changing the number of Pods, rather than adjusting resource limits of individual Pods (which would be vertical scaling). This makes HPA particularly effective for stateless applications that can scale horizontally to meet demand.
In practice, HPA is commonly used in production Kubernetes environments to maintain application responsiveness under load while optimizing cluster resource usage. It integrates seamlessly with Kubernetes’ declarative model and self-healing mechanisms.
Therefore, the correct and verified answer is Option A, as the Horizontal Pod Autoscaler’s primary function is to automatically scale Pod replicas based on resource utilization and defined metrics.
=========
Which of the following sentences is true about container runtimes in Kubernetes?
If you let iptables see bridged traffic, you don't need a container runtime.
If you enable IPv4 forwarding, you don't need a container runtime.
Container runtimes are deprecated, you must install CRI on each node.
You must install a container runtime on each node to run pods on it.
The Answer Is:
D
Explanation:
A Kubernetes node must have a container runtime to run Pods, so D is correct. Kubernetes schedules Pods to nodes, but the actual execution of containers is performed by a runtime such as containerd or CRI-O. The kubelet communicates with that runtime via the Container Runtime Interface (CRI) to pull images, create sandboxes, and start/stop containers. Without a runtime, the node cannot launch container processes, so Pods cannot transition into running state.
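In recent Kubernetes versions the kubelet is pointed at that runtime through its CRI socket in the KubeletConfiguration; a sketch, assuming a typical containerd install (the socket path varies by runtime and distribution):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# The kubelet speaks CRI over this socket; without a runtime
# listening here, Pods scheduled to the node cannot start.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```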
Options A and B confuse networking kernel settings with runtime requirements. iptables bridged traffic visibility and IPv4 forwarding can be relevant for node networking, but they do not replace the need for a container runtime. Networking and container execution are separate layers: you need networking for connectivity, and you need a runtime for running containers.
Option C is also incorrect and muddled. Container runtimes are not deprecated; rather, Kubernetes removed the built-in Docker shim integration from kubelet in favor of CRI-native runtimes. CRI is an interface, not “something you install instead of a runtime.” In practice you install a CRI-compatible runtime (containerd/CRI-O), which implements CRI endpoints that kubelet talks to.
Operationally, the runtime choice affects node behavior: image management, logging integration, performance characteristics, and compatibility. Kubernetes installation guides explicitly list installing a container runtime as a prerequisite for worker nodes. If a cluster has nodes without a properly configured runtime, workloads scheduled there will fail to start (often stuck in ContainerCreating/ImagePullBackOff/Runtime errors).
Therefore, the only fully correct statement is D: each node needs a container runtime to run Pods.
=========
How are ReplicaSets and Deployments related?
Deployments manage ReplicaSets and provide declarative updates to Pods.
ReplicaSets manage stateful applications, Deployments manage stateless applications.
Deployments are runtime instances of ReplicaSets.
ReplicaSets are subsets of Jobs and CronJobs which use imperative Deployments.
The Answer Is:
A
Explanation:
In Kubernetes, a Deployment is a higher-level controller that manages ReplicaSets, and ReplicaSets in turn manage Pods. That is exactly what option A states, making it the correct answer.
A ReplicaSet’s job is straightforward: ensure that a specified number of Pod replicas matching a selector are running. It continuously reconciles actual state to desired state by creating new Pods when replicas are missing or removing Pods when there are too many. However, ReplicaSets alone do not provide the richer application rollout lifecycle features most teams need.
A Deployment adds those features by managing ReplicaSets across versions of your Pod template. When you update a Deployment (for example, change the container image tag), Kubernetes creates a new ReplicaSet with the new Pod template and then gradually scales the new ReplicaSet up and the old one down according to the Deployment strategy (RollingUpdate by default). Deployments also maintain rollout history, support rollback (kubectl rollout undo), and allow pause/resume of rollouts. This is why the common guidance is: you almost always create Deployments rather than ReplicaSets directly for stateless apps.
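A minimal Deployment showing the pieces involved (names illustrative); changing the Pod template, e.g. the image tag, is what causes a new ReplicaSet to be created and rolled in:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # the default strategy
    rollingUpdate:
      maxUnavailable: 1        # at most one Pod below desired count during rollout
      maxSurge: 1              # at most one extra Pod above desired count
  selector:
    matchLabels:
      app: web
  template:                    # editing this template triggers a new ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # bump this tag to roll out a new version
```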
Option B is incorrect because stateful workloads are typically handled by StatefulSets, not ReplicaSets. Deployments can run stateless apps, but ReplicaSets are also used under Deployments and are not “for stateful only.” Option C gets the relationship backwards: Deployments are not runtime instances of ReplicaSets; rather, Deployments create and manage ReplicaSets. Option D is incorrect because Jobs/CronJobs are separate controllers for run-to-completion workloads and do not define ReplicaSets as subsets.
So the accurate relationship is: Deployment → manages ReplicaSets → which manage Pods, enabling declarative updates and controlled rollouts.
=========
What is a key feature of a container network?
Proxying REST requests across a set of containers.
Allowing containers running on separate hosts to communicate.
Allowing containers on the same host to communicate.
Caching remote disk access.
The Answer Is:
B
Explanation:
A defining requirement of container networking in orchestrated environments is enabling workloads to communicate across hosts, not just within a single machine. That’s why B is correct: a key feature of a container network is allowing containers (Pods) running on separate hosts to communicate.
In Kubernetes, this idea becomes the Kubernetes network model: every Pod gets an IP address, and Pods can communicate with Pods on other nodes without NAT. Achieving that across a cluster requires a networking layer (typically implemented by a CNI plugin) that can route traffic between nodes so that Pod-to-Pod communication works regardless of placement. This is crucial because schedulers dynamically place Pods; you cannot assume two communicating components will land on the same node.
Option C is true in a trivial sense—containers on the same host can communicate—but that capability alone is not the key feature that makes orchestration viable at scale. Cross-host connectivity is the harder and more essential property. Option A describes application-layer behavior (like API gateways or reverse proxies) rather than the foundational networking capability. Option D describes storage optimization, unrelated to container networking.
From a cloud native architecture perspective, reliable cross-host networking enables microservices patterns, service discovery, and distributed systems behavior. Kubernetes Services, DNS, and NetworkPolicies all depend on the underlying ability for Pods across the cluster to send traffic to each other. If your container network cannot provide cross-node routing and reachability, the cluster behaves like isolated islands and breaks the fundamental promise of orchestration: “schedule anywhere, communicate consistently.”
=========
Which of the following is the correct command to run an nginx deployment with 2 replicas?
kubectl run deploy nginx --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --replicas=2
kubectl create nginx deployment --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --count=2
The Answer Is:
B
Explanation:
The correct answer is B: kubectl create deploy nginx --image=nginx --replicas=2. This uses kubectl create deployment (shorthand create deploy) to generate a Deployment resource named nginx with the specified container image. The --replicas=2 flag sets the desired replica count, so Kubernetes will create two Pod replicas (via a ReplicaSet) and keep that number stable.
Option A is incorrect because kubectl run is primarily intended to run a Pod (and in older versions could generate other resources, but it’s not the recommended/consistent way to create a Deployment in modern kubectl usage). Option C is invalid syntax: kubectl subcommand order is incorrect; you don’t say kubectl create nginx deployment. Option D uses a non-existent --count flag for Deployment replicas.
From a Kubernetes fundamentals perspective, this question tests two ideas: (1) Deployments are the standard controller for running stateless workloads with a desired number of replicas, and (2) kubectl create deployment is a common imperative shortcut for generating that resource. After running the command, you can confirm with kubectl get deploy nginx, kubectl get rs, and kubectl get pods -l app=nginx (label may vary depending on kubectl version). You’ll see a ReplicaSet created and two Pods brought up.
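The declarative equivalent of that imperative command, roughly what kubectl create deploy nginx --image=nginx --replicas=2 --dry-run=client -o yaml would emit once defaulted and empty fields are trimmed, looks like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2              # the value set by --replicas=2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx       # the value set by --image=nginx
        name: nginx
```

Saving this and running kubectl apply -f is the declarative path most teams use in production.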
In production, teams typically use declarative manifests (kubectl apply -f) or GitOps, but knowing the imperative command is useful for quick labs and validation. The key is that replicas are managed by the controller, not by manually starting containers—Kubernetes reconciles the state continuously.
Therefore, B is the verified correct command.
=========
