Linux Foundation KCNA - Kubernetes and Cloud Native Associate
Which is an industry-standard container runtime with an "emphasis" on simplicity, robustness, and portability?
CRI-O
LXD
containerd
kata-runtime
The Answer Is:
C
Explanation:
containerd is a widely adopted, industry-standard container runtime known for simplicity, robustness, and portability, so C is correct. containerd originated as a core component extracted from Docker and has become a common runtime across Kubernetes distributions and managed services. It implements container lifecycle management (image pull, unpack, container execution, snapshotting) and typically delegates low-level container execution to an OCI runtime like runc.
In Kubernetes, kubelet communicates with container runtimes through CRI. containerd provides a CRI plugin (or can be integrated via CRI implementations) that makes it a first-class choice for Kubernetes nodes. This aligns with the runtime landscape after dockershim removal: Kubernetes users commonly run containerd or CRI-O as the node runtime.
Option A (CRI-O) is also a CRI-focused runtime and is valid in Kubernetes contexts, but the phrasing "industry-standard … emphasis on simplicity, robustness, and portability" is strongly associated with containerd's positioning and broad cross-platform adoption beyond Kubernetes. Option B (LXD) is a system container manager (often associated with LXC) and not the standard Kubernetes runtime in mainstream CRI discussions. Option D (kata-runtime) is associated with Kata Containers, which focuses on stronger isolation by running containers inside lightweight VMs; that is a security-oriented sandbox approach rather than a simplicity/portability "industry standard" baseline runtime.
From a cloud-native operations point of view, containerd’s popularity comes from its stable API, strong ecosystem support, and alignment with OCI standards. It integrates cleanly with image registries, supports modern snapshotters, and is heavily used in production by many Kubernetes providers. Therefore, the best verified answer is C: containerd.
=========
What is the default value for authorization-mode in Kubernetes API server?
--authorization-mode=RBAC
--authorization-mode=AlwaysAllow
--authorization-mode=AlwaysDeny
--authorization-mode=ABAC
The Answer Is:
B
Explanation:
The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server’s default authorization mode was AlwaysAllow, meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B.
However, it’s crucial to distinguish "default flag value" from "recommended configuration." In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls—any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups commonly enable RBAC by default as part of cluster bootstrap profiles, even if the API server’s historical default flag value is AlwaysAllow.
So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.
In short: AlwaysAllow is the API server’s default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.
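To make the RBAC recommendation concrete, here is a minimal sketch of a namespaced Role and RoleBinding (the namespace, names, and subject are illustrative, not from the question):

```yaml
# Illustrative Role granting read-only access to Pods in the "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]          # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to a user; the subject name "jane" is hypothetical
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With RBAC enabled, a request is allowed only if some Role/ClusterRole bound to the requester grants that verb on that resource.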
=========
What does "Continuous Integration" mean?
The continuous integration and testing of code changes from multiple sources manually.
The continuous integration and testing of code changes from multiple sources via automation.
The continuous integration of changes from one environment to another.
The continuous integration of new tools to support developers in a project.
The Answer Is:
B
Explanation:
The correct answer is B: Continuous Integration (CI) is the practice of frequently integrating code changes from multiple contributors and validating them through automated builds and tests. The “continuous†part is about doing this often (ideally many times per day) and consistently, so integration problems are detected early instead of piling up until a painful merge or release window.
Automation is essential. CI typically includes steps like compiling/building artifacts, running unit and integration tests, executing linters, checking formatting, scanning dependencies for vulnerabilities, and producing build reports. This automation creates fast feedback loops that help developers catch regressions quickly and maintain a releasable main branch.
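As one concrete sketch of such automation (written in GitHub Actions-style syntax; the workflow name, targets, and image name are illustrative assumptions, not part of the question):

```yaml
# Hypothetical CI workflow: build, lint, and test on every push or pull request
name: ci
on: [push, pull_request]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: make test        # assumes the repo provides a "test" target
      - name: Lint
        run: make lint        # assumes a "lint" target
      - name: Build container image
        run: docker build -t example/app:${{ github.sha }} .
```

The key point is that every integration triggers the same automated validation, so a broken change is flagged within minutes of being pushed.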
Option A is incorrect because manual integration/testing does not scale and undermines the reliability and speed that CI is meant to provide. Option C confuses CI with deployment promotion across environments (which is more aligned with Continuous Delivery/Deployment). Option D is unrelated: adding tools can support CI, but it isn’t the definition.
In cloud-native application delivery, CI is tightly coupled with containerization and Kubernetes: CI pipelines often build container images from source, run tests, scan images, sign artifacts, and push to registries. Those validated artifacts then flow into CD processes that deploy to Kubernetes using manifests, Helm, or GitOps controllers. Without CI, Kubernetes rollouts become riskier because you lack consistent validation of what you’re deploying.
So, CI is best defined as automated integration and testing of code changes from multiple sources, which matches option B.
=========
Which of the following options include only mandatory fields to create a Kubernetes object using a YAML file?
apiVersion, template, kind, status
apiVersion, metadata, status, spec
apiVersion, template, kind, spec
apiVersion, metadata, kind, spec
The Answer Is:
D
Explanation:
D is correct: the mandatory top-level fields for creating a Kubernetes object manifest are apiVersion, kind, metadata, and (for most objects you create) spec. These fields establish what the object is and what you want Kubernetes to do with it.
apiVersion tells Kubernetes which API group/version schema to use (e.g., apps/v1, v1). This determines valid fields and behavior.
kind identifies the resource type (e.g., Pod, Deployment, Service).
metadata contains identifying information like name, namespace, and labels/annotations used for organization, selection, and automation.
spec describes the desired state. Controllers and the kubelet reconcile actual state to match spec.
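A minimal manifest showing exactly these four fields (the Pod name and image tag are illustrative):

```yaml
apiVersion: v1          # which API group/version schema to use
kind: Pod               # the resource type
metadata:
  name: example-pod     # identifying information (name, labels, namespace)
spec:                   # desired state: what should run
  containers:
  - name: app
    image: nginx:1.25   # illustrative image tag
```

Note that no status field appears: Kubernetes fills that in after the object is created.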
Why other choices are wrong:
status is not a mandatory input field. It’s generally written by Kubernetes controllers and reflects observed state (conditions, readiness, assigned node, etc.). Users typically do not set status when creating objects.
template is not a universal top-level field. It exists inside some resources (notably Deployment.spec.template), but it’s not a required top-level field across Kubernetes objects.
It’s true that some resources can be created without a spec (or with minimal fields), but in the exam-style framing ("mandatory fields… using a YAML file") the canonical expected set is exactly the four in D. This aligns with how Kubernetes documentation and examples present manifests: identify the API schema and kind, give object metadata, and declare desired state.
Therefore, apiVersion + metadata + kind + spec is the only option that includes only the mandatory fields, making D the verified correct answer.
=========
Which tools enable Kubernetes HorizontalPodAutoscalers to use custom, application-generated metrics to trigger scaling events?
Prometheus and the prometheus-adapter.
Graylog and graylog-autoscaler metrics.
Graylog and the kubernetes-adapter.
Grafana and Prometheus.
The Answer Is:
A
Explanation:
To scale on custom, application-generated metrics, the Horizontal Pod Autoscaler (HPA) needs those metrics exposed through the Kubernetes custom metrics (or external metrics) API. A common and Kubernetes-documented approach is Prometheus + prometheus-adapter, making A correct. Prometheus scrapes application metrics (for example, request rate, queue depth, in-flight requests) from /metrics endpoints. The prometheus-adapter then translates selected Prometheus time series into the Kubernetes Custom Metrics API so the HPA controller can fetch them and make scaling decisions.
Why not the other options: Grafana is a visualization tool; it does not provide the metrics API translation layer required by HPA, so "Grafana and Prometheus" is incomplete. Graylog is primarily a log management system; it’s not the standard solution for feeding custom metrics into HPA via the Kubernetes metrics APIs. The "kubernetes-adapter" term in option C is not the standard named adapter used in the common Kubernetes ecosystem for Prometheus-backed custom metrics (the recognized component is prometheus-adapter).
This matters operationally because HPA is not limited to CPU/memory. CPU and memory use resource metrics (often from metrics-server), but modern autoscaling often needs application signals: message queue length, requests per second, latency, or business metrics. With Prometheus and prometheus-adapter, you can define HPA rules such as "scale to maintain queue depth under X" or "scale based on requests per second per pod." This can produce better scaling behavior than CPU-based scaling alone, especially for I/O-bound services or workloads with uneven CPU profiles.
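Assuming prometheus-adapter exposes a per-pod requests-per-second series through the custom metrics API, an HPA could target it like this (the metric name, workload name, and thresholds are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # hypothetical target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods                   # per-pod custom metric via the custom metrics API
    pods:
      metric:
        name: http_requests_per_second   # hypothetical series exposed by prometheus-adapter
      target:
        type: AverageValue
        averageValue: "100"      # scale to keep ~100 req/s per pod
```

The HPA controller periodically queries this metric and adjusts replicas toward the target average value.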
So the correct tooling combination in the provided choices is Prometheus and the prometheus-adapter, option A.
=========
Which kubectl command is useful for collecting information about any type of resource that is active in a Kubernetes cluster?
describe
list
expose
explain
The Answer Is:
A
Explanation:
The correct answer is A (describe), used as kubectl describe <resource-type> <resource-name>. It prints a detailed, human-readable summary of a resource along with related events, and it works for any resource type active in the cluster.
kubectl get (not listed) is typically used for listing objects and their summary fields, but kubectl describe goes deeper: for a Pod it will show container images, resource requests/limits, probes, mounted volumes, node assignment, IPs, conditions, and recent scheduling/pulling/starting events. For a Node it shows capacity/allocatable resources, labels/taints, conditions, and node events. Those event details often explain why something is Pending, failing to pull images, failing readiness checks, or being evicted.
Option B ("list") is not a standard kubectl subcommand for retrieving resource information (you would use get for listing). Option C (expose) is for creating a Service to expose a resource (like a Deployment). Option D (explain) is for viewing API schema/field documentation (e.g., kubectl explain deployment.spec.replicas) and does not report what is currently happening in the cluster.
So, for gathering detailed live diagnostics about a resource in the cluster, the best kubectl command is kubectl describe, which corresponds to option A.
=========
What is the correct hierarchy of Kubernetes components?
Containers → Pods → Cluster → Nodes
Nodes → Cluster → Containers → Pods
Cluster → Nodes → Pods → Containers
Pods → Cluster → Containers → Nodes
The Answer Is:
C
Explanation:
The correct answer is C: Cluster → Nodes → Pods → Containers. This expresses the fundamental structural relationship in Kubernetes. A cluster is the overall system (control plane + nodes) that runs your workloads. Inside the cluster, you have nodes (worker machines—VMs or bare metal) that provide CPU, memory, storage, and networking. The scheduler assigns workloads to nodes.
Workloads are executed as Pods, which are the smallest deployable units Kubernetes schedules. Pods represent one or more containers that share networking (one Pod IP and port space) and can share storage volumes. Within each Pod are containers, which are the actual application processes packaged with their filesystem and runtime dependencies.
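The Pod-to-container containment can be seen directly in a manifest; this sketch (names and images are illustrative) declares one Pod holding two containers that share the Pod's IP and port space:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web                # both containers share one Pod IP
    image: nginx:1.25
  - name: sidecar            # illustrative helper container
    image: busybox:1.36
    command: ["sh", "-c", "sleep infinity"]
```

The scheduler then places this whole Pod onto a single node within the cluster, completing the Cluster → Node → Pod → Container chain.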
The other options are incorrect because they break these containment relationships. Containers do not contain Pods; Pods contain containers. Nodes do not exist "inside" Pods; Pods run on nodes. And the cluster is the top-level boundary that contains nodes and orchestrates Pods.
This hierarchy matters for troubleshooting and design. If you’re thinking about capacity, you reason at the node and cluster level (node pools, autoscaling, quotas). If you’re thinking about application scaling, you reason at the Pod level (replicas, HPA, readiness probes). If you’re thinking about process-level concerns, you reason at the container level (images, security context, runtime user, resources). Kubernetes intentionally uses this layered model so that scheduling and orchestration operate on Pods, while the container runtime handles container execution details.
So the accurate hierarchy from largest to smallest unit is: Cluster → Nodes → Pods → Containers, which corresponds to C.
=========
What is the reference implementation of the OCI runtime specification?
lxc
CRI-O
runc
Docker
The Answer Is:
C
Explanation:
The verified correct answer is C (runc). The Open Container Initiative (OCI) defines standards for container image format and runtime behavior. The OCI runtime specification describes how to run a container (process execution, namespaces, cgroups, filesystem mounts, capabilities, etc.). runc is widely recognized as the reference implementation of that runtime spec and is used underneath many higher-level container runtimes.
In common container stacks, Kubernetes nodes typically run a CRI-compliant runtime such as containerd or CRI-O. Those runtimes handle image management, container lifecycle coordination, and CRI integration, but they usually invoke an OCI runtime to actually create and start containers. In many deployments, that OCI runtime is runc (or a compatible alternative). This layering helps keep responsibilities separated: CRI runtime manages orchestration-facing operations; OCI runtime performs the low-level container creation according to the standardized spec.
Option A (lxc) is an older Linux containers technology and tooling ecosystem, but it is not the OCI runtime reference implementation. Option B (CRI-O) is a Kubernetes-focused container runtime that implements CRI; it uses OCI runtimes (often runc) underneath, so it’s not the reference implementation itself. Option D (Docker) is a broader platform/tooling suite; while Docker historically used runc under the hood and helped popularize containers, the OCI reference runtime implementation is runc, not Docker.
Understanding this matters in container orchestration contexts because it clarifies what Kubernetes depends on: Kubernetes relies on CRI for runtime integration, and runtimes rely on OCI standards for interoperability. OCI standards ensure that images and runtime behavior are portable across tools and vendors, and runc is the canonical implementation that demonstrates those standards in practice.
Therefore, the correct answer is C: runc.
=========
The IPv4/IPv6 dual stack in Kubernetes:
Translates an IPv4 request from a Service to an IPv6 Service.
Allows you to access the IPv4 address by using the IPv6 address.
Requires NetworkPolicies to prevent Services from mixing requests.
Allows you to create IPv4 and IPv6 dual stack Services.
The Answer Is:
D
Explanation:
The correct answer is D: Kubernetes dual-stack support allows you to create Services (and Pods, depending on configuration) that use both IPv4 and IPv6 addressing. Dual-stack means the cluster is configured to allocate and route traffic for both IP families. For Services, this can mean assigning both an IPv4 ClusterIP and an IPv6 ClusterIP so clients can connect using either family, depending on their network stack and DNS resolution.
Option A is incorrect because dual-stack is not about protocol translation (that would be NAT64/other gateway mechanisms, not the core Kubernetes dual-stack feature). Option B is also a form of translation/aliasing that isn’t what Kubernetes dual-stack implies; having both addresses available is different from "access IPv4 via IPv6." Option C is incorrect: dual-stack does not inherently require NetworkPolicies to "prevent mixing requests." NetworkPolicies are about traffic control, not IP family separation.
In Kubernetes, dual-stack requires support across components: the network plugin (CNI) must support IPv4/IPv6, the cluster must be configured with both Pod CIDRs and Service CIDRs, and DNS should return appropriate A and AAAA records for Service names. Once configured, you can specify preferences such as ipFamilyPolicy (e.g., PreferDualStack) and ipFamilies (IPv4, IPv6 order) for Services to influence allocation behavior.
Operationally, dual-stack is useful for environments transitioning to IPv6, supporting IPv6-only clients, or running in mixed networks. But it adds complexity: address planning, firewalling, and troubleshooting need to consider two IP families. Still, the definition in the question is straightforward: Kubernetes dual-stack enables dual-stack Services, which is option D.
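A sketch of a dual-stack Service using the fields mentioned above (the Service name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack   # request both families when the cluster supports them
  ipFamilies: ["IPv4", "IPv6"]      # preferred allocation order
  selector:
    app: my-app                     # hypothetical label selector
  ports:
  - port: 80
    targetPort: 8080
```

On a dual-stack cluster this Service receives both an IPv4 and an IPv6 ClusterIP, and DNS can return A and AAAA records for its name.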
=========
In Kubernetes, what is the primary purpose of using annotations?
To control the access permissions for users and service accounts.
To provide a way to attach metadata to objects.
To specify the deployment strategy for applications.
To define the specifications for resource limits and requests.
The Answer Is:
B
Explanation:
Annotations in Kubernetes are a flexible mechanism for attaching non-identifying metadata to Kubernetes objects. Their primary purpose is to store additional information that is not used for object selection or grouping, which makes Option B the correct answer.
Unlike labels, which are designed to be used for selection, filtering, and grouping of resources (for example, by Services or Deployments), annotations are intended purely for informational or auxiliary purposes. They allow users, tools, and controllers to store arbitrary key–value data on objects without affecting Kubernetes’ core behavior. This makes annotations ideal for storing data such as build information, deployment timestamps, commit hashes, configuration hints, or ownership details.
Annotations are commonly consumed by external tools and controllers rather than by the Kubernetes scheduler or control plane for decision-making. For example, ingress controllers, service meshes, monitoring agents, and CI/CD systems often read annotations to enable or customize specific behaviors. Because annotations are not used for querying or selection, Kubernetes places no strict size or structure requirements on their values beyond general object size limits.
Option A is incorrect because access permissions are managed using Role-Based Access Control (RBAC), which relies on roles, role bindings, and service accounts—not annotations. Option C is incorrect because deployment strategies (such as RollingUpdate or Recreate) are defined in the specification of workload resources like Deployments, not through annotations. Option D is also incorrect because resource limits and requests are specified explicitly in the Pod or container spec under the resources field.
In summary, annotations provide a powerful and extensible way to associate metadata with Kubernetes objects without influencing scheduling, selection, or identity. They support integration, observability, and operational tooling while keeping core Kubernetes behavior predictable and stable. This design intent is clearly documented in Kubernetes metadata concepts, making Option B the correct and verified answer.
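For example, an object's metadata might carry both labels (for selection) and annotations (for non-identifying metadata); all keys and values below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web                                 # labels: used for selection/grouping
  annotations:                               # annotations: informational only
    example.com/git-commit: "3f9a2c1"        # hypothetical build metadata
    example.com/deployed-by: "ci-pipeline"   # hypothetical ownership hint
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

The selector matches only the label; the annotations are ignored by the controller and exist purely for tools and humans to read.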
