Linux Foundation KCNA - Kubernetes and Cloud Native Associate
What Linux namespace is shared by default by containers running within a Kubernetes Pod?
Host Network
Network
Process ID
Process Name
The Answer Is: B
Explanation:
By default, containers in the same Kubernetes Pod share the network namespace, which means they share the same IP address and port space. Therefore, the correct answer is B (Network).
This shared network namespace is a key part of the Pod abstraction. Because all containers in a Pod share networking, they can communicate with each other over localhost and coordinate tightly, which is the basis for patterns like sidecars (service mesh proxies, log shippers, config reloaders). It also means containers must coordinate port usage: if two containers try to bind the same port on 0.0.0.0, they’ll conflict because they share the same port space.
Option A ("Host Network") is different: hostNetwork: true is an optional Pod setting that puts the Pod into the node’s network namespace, not the Pod’s shared namespace. It is not the default and is generally used sparingly due to security and port-collision risks. Option C ("Process ID") is not shared by default in Kubernetes; PID namespace sharing requires explicitly enabling it (e.g., shareProcessNamespace: true). Option D ("Process Name") is not a Linux namespace concept.
The Pod model also commonly implies shared storage volumes (if defined) and shared IPC namespace in some configurations, but the universally shared-by-default namespace across containers in the same Pod is the network namespace. This default behavior is why Kubernetes documentation explains a Pod as a "logical host" for one or more containers: the containers are co-located and share certain namespaces as if they ran on the same host.
So, the correct, verified answer is B: containers in the same Pod share the Network namespace by default.
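As an illustration, here is a minimal two-container Pod sketch: because both containers share the Pod's network namespace, the sidecar can reach the web server over localhost. The image tags and the curl loop are placeholders chosen for this example, not part of any standard manifest.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx:1.25            # serves HTTP on port 80
  - name: sidecar
    image: curlimages/curl:8.5.0
    # Both containers share the Pod's network namespace, so the
    # sidecar reaches nginx at localhost:80 without any Service.
    command: ["sh", "-c", "while true; do curl -s http://localhost:80 >/dev/null; sleep 5; done"]
```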
=========
What is the main role of the Kubernetes DNS within a cluster?
Acts as a DNS server for virtual machines that are running outside the cluster.
Provides a DNS as a Service, allowing users to create zones and registries for domains that they own.
Allows Pods running in dual stack to convert IPv6 calls into IPv4 calls.
Provides consistent DNS names for Pods and Services for workloads that need to communicate with each other.
The Answer Is: D
Explanation:
Kubernetes DNS (commonly implemented by CoreDNS) provides service discovery inside the cluster by assigning stable, consistent DNS names to Services and (optionally) Pods, which makes D correct. In a Kubernetes environment, Pods are ephemeral—IP addresses can change when Pods restart or move between nodes. DNS-based discovery allows applications to communicate using stable names rather than hardcoded IPs.
For Services, Kubernetes creates DNS records like service-name.namespace.svc.cluster.local, which resolve to the Service’s virtual IP (ClusterIP) or, for headless Services, to the set of Pod endpoints. This supports both load-balanced communication (standard Service) and per-Pod addressing (headless Service, commonly used with StatefulSets). Kubernetes DNS is therefore a core building block that enables microservices to locate each other reliably.
Option A is not Kubernetes DNS’s purpose; it serves cluster workloads rather than external VMs. Option B describes a managed DNS hosting product (creating zones/registries), which is outside the scope of cluster DNS. Option C describes protocol translation, which is not the role of DNS. Dual-stack support relates to IP families and networking configuration, not DNS translating IPv6 to IPv4.
In day-to-day Kubernetes operations, DNS reliability impacts everything: if DNS is unhealthy, Pods may fail to resolve Services, causing cascading outages. That’s why CoreDNS is typically deployed as a highly available add-on in kube-system, and why DNS caching and scaling are important for large clusters.
So the correct statement is D: Kubernetes DNS provides consistent DNS names so workloads can communicate reliably.
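For example, from inside any Pod in the cluster, Service resolution can be checked with standard DNS tooling. The service and namespace names below are placeholders for this sketch:

```shell
# A Service is resolvable by its stable, fully qualified name:
nslookup my-service.my-namespace.svc.cluster.local

# Shorter forms also work thanks to the Pod's DNS search domains:
nslookup my-service.my-namespace
nslookup my-service   # resolves within the Pod's own namespace
```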
=========
What is an advantage of using the Gateway API compared to Ingress in Kubernetes?
To automatically scale workloads based on CPU and memory utilization.
To provide clearer role separation between infrastructure providers and application developers.
To configure routing rules through annotations directly on Ingress resources.
To expose an application externally by creating only a Service resource.
The Answer Is: B
Explanation:
The Gateway API is a newer Kubernetes networking API designed to address several limitations of the traditional Ingress resource. One of its most significant advantages is the clear separation of roles and responsibilities between infrastructure providers (such as platform teams or cluster administrators) and application developers. This design principle is a core motivation behind the Gateway API and directly differentiates it from Ingress.
With Ingress, a single resource often combines concerns such as load balancer configuration, TLS settings, routing rules, and application-level details. This frequently leads to heavy reliance on annotations, which are controller-specific, non-standardized, and blur ownership boundaries. Application developers may need elevated permissions to modify Ingress objects, even when changes affect shared infrastructure, creating operational risk.
The Gateway API introduces multiple distinct resources—such as GatewayClass, Gateway, and route resources (e.g., HTTPRoute)—each aligned with a specific role. Infrastructure providers manage GatewayClass and Gateway resources, which define how traffic enters the cluster and what capabilities are available. Application developers interact primarily with route resources to define how traffic is routed to their Services, without needing access to the underlying infrastructure configuration. This separation improves security, governance, and scalability in multi-team environments.
Option A is incorrect because automatic scaling based on CPU and memory is handled by the Horizontal Pod Autoscaler, not by Gateway API or Ingress. Option C describes a characteristic of Ingress, not an advantage of Gateway API; in fact, Gateway API explicitly reduces reliance on annotations by using structured, portable fields. Option D is incorrect because exposing applications externally requires more than just a Service; traffic management resources like Ingress or Gateway are still necessary.
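The role split described above can be sketched with two resources: the platform team owns the Gateway, and the application team owns the HTTPRoute in its own namespace. All names (GatewayClass, namespaces, Service) are placeholders for this illustration.

```yaml
# Managed by the platform/infrastructure team:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-class   # placeholder GatewayClass
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All                   # let app namespaces attach routes
---
# Managed by the application team, in their own namespace:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: team-a
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /app
    backendRefs:
    - name: app-service             # placeholder Service
      port: 8080
```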
Therefore, the correct and verified answer is Option B, as the Gateway API’s role-oriented design is a key advancement over Ingress and is clearly documented in Kubernetes networking architecture guidance.
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
The Answer Is: A
Explanation:
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
Option B ("GitOps Toolkit") is closely related: Flux is built from the GitOps Toolkit’s component libraries. But the question asks for a tool that keeps clusters in sync, and the recognized tools in this list are Flux and Argo CD. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
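As a sketch of this model, an Argo CD Application resource ties a Git repository to a cluster destination and enables automated reconciliation. The repository URL, paths, and names below are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/demo-config.git  # placeholder repo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```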
=========
What is the main purpose of the Ingress in Kubernetes?
Access HTTP and HTTPS services running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their path.
Access HTTP and HTTPS services running in the cluster based on their path.
The Answer Is: D
Explanation:
D is correct. Ingress is a Kubernetes API object that defines rules for external access to HTTP/HTTPS services in a cluster. The defining capability is Layer 7 routing—commonly host-based and path-based routing—so you can route requests like example.com/app1 to one Service and example.com/app2 to another. While the question mentions "based on their path," that’s a classic and correct Ingress use case (and host routing is also common).
Ingress itself is only the specification of routing rules. An Ingress controller (e.g., NGINX Ingress Controller, HAProxy, Traefik, cloud-provider controllers) is what actually implements those rules by configuring a reverse proxy/load balancer. Ingress typically terminates TLS (HTTPS) and forwards traffic to internal Services, giving a more expressive alternative to exposing every service via NodePort/LoadBalancer.
Why the other options are wrong:
A suggests routing by IP address; Ingress is fundamentally about HTTP(S) routing rules (host/path), not direct Service IP access.
B and C describe non-HTTP protocols; Ingress is specifically for HTTP/HTTPS. For TCP/UDP or other protocols, you generally use Services of type LoadBalancer/NodePort, Gateway API implementations, or controller-specific TCP/UDP configuration.
Ingress is a foundational building block for cloud-native application delivery because it centralizes edge routing, enables TLS management, and supports gradual adoption patterns (multiple services under one domain). Therefore, the main purpose described here matches D.
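A minimal path-routing example looks like this. The host, Service names, and ingress class are placeholders; the sketch assumes an NGINX ingress controller is installed in the cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx       # assumes an NGINX ingress controller
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1             # example.com/app1 -> app1-service
        pathType: Prefix
        backend:
          service:
            name: app1-service  # placeholder Service names
            port:
              number: 80
      - path: /app2             # example.com/app2 -> app2-service
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
```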
=========
What is the main purpose of the Open Container Initiative (OCI)?
Accelerating the adoption of containers and Kubernetes in the industry.
Creating open industry standards around container formats and runtimes.
Creating industry standards around container formats and runtimes for private purposes.
Improving the security of standards around container formats and runtimes.
The Answer Is: B
Explanation:
B is correct: the OCI’s main purpose is to create open, vendor-neutral industry standards for container image formats and container runtimes. Standardization is critical in container orchestration because portability is a core promise: you should be able to build an image once and run it across different environments and runtimes without rewriting packaging or execution logic.
OCI defines (at a high level) two foundational specs:
Image specification: how container images are packaged (layers, metadata, manifests).
Runtime specification: how to run a container (filesystem setup, namespaces/cgroups behavior, lifecycle).
These standards enable interoperability across tooling. For example, higher-level runtimes (like containerd or CRI-O) rely on OCI-compliant components (often runc or equivalents) to execute containers consistently.
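To make the image specification concrete, an OCI image manifest is a small JSON document that points at a config blob and an ordered list of layers by digest. The digests and sizes below are placeholders for illustration:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 1469
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 2811969
    }
  ]
}
```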
Why the other options are not the best answer:
A (accelerating adoption) might be an indirect outcome, but it’s not the OCI’s core charter.
C is contradictory ("industry standards" but "for private purposes"); OCI is explicitly about open standards.
D (improving security) can be helped by standardization and best practices, but OCI is not primarily a security standards body; its central function is format and runtime interoperability.
In Kubernetes specifically, OCI is part of the “plumbing†that makes runtimes replaceable. Kubernetes talks to runtimes via CRI; runtimes execute containers via OCI. This layering helps Kubernetes remain runtime-agnostic while still benefiting from consistent container behavior everywhere.
Therefore, the correct choice is B: OCI creates open standards around container formats and runtimes.
=========
What component enables end users, different parts of the Kubernetes cluster, and external components to communicate with one another?
kubectl
AWS Management Console
Kubernetes API
Google Cloud SDK
The Answer Is: C
Explanation:
The Kubernetes API is the central interface that enables communication between users, controllers, nodes, and external integrations, so C is correct. Kubernetes is fundamentally an API-driven system: all cluster state is represented as API objects, and all operations—create, update, delete, watch—flow through the API server.
End users typically interact with the Kubernetes API using tools like kubectl, client libraries, or dashboards. But those tools are clients; the shared communication “hub†is the API itself. Inside the cluster, core control plane components (controllers, scheduler) continuously watch the API for desired state and write status updates back. Worker nodes (via kubelet) also communicate with the API server to receive Pod specs, report node health, and update Pod statuses. External systems—cloud provider integrations, CI/CD pipelines, GitOps controllers, monitoring and policy engines—also integrate primarily through the Kubernetes API.
Option A (kubectl) is a CLI that talks to the Kubernetes API; it is not the underlying component that all parts use to communicate. Options B and D are cloud-provider tools and are not universal to Kubernetes clusters. Kubernetes runs across many environments, and the consistent interoperability layer is the Kubernetes API.
This API-centric architecture is what enables Kubernetes’ declarative model: you submit desired state to the API, and controllers reconcile actual state to match. It also enables extensibility: CRDs and admission webhooks expand what the API can represent and enforce. Therefore, the correct answer is C: Kubernetes API.
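The following command sketch shows that kubectl is just one client of the same API server; it assumes a working kubeconfig and an accessible cluster:

```shell
# The familiar client view:
kubectl get pods -n kube-system

# The same data fetched over the raw REST API, via the same server:
kubectl get --raw /api/v1/namespaces

# Or proxy the API locally and use any HTTP client:
kubectl proxy &
curl http://127.0.0.1:8001/api/v1/nodes
```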
=========
How long should a stable API element in Kubernetes be supported (at minimum) after deprecation?
9 months
24 months
12 months
6 months
The Answer Is: C
Explanation:
Kubernetes has a formal API deprecation policy to balance stability for users with the ability to evolve the platform. For a stable (GA) API element, Kubernetes commits to supporting that API for a minimum period after it is deprecated. The correct minimum in this question is 12 months, which corresponds to option C.
In practice, Kubernetes releases occur roughly every three to four months, and the deprecation policy is commonly communicated in terms of “releases†as well as time. A GA API that is deprecated in one release is typically kept available for multiple subsequent releases, giving cluster operators and application teams time to migrate manifests, client libraries, controllers, and automation. This matters because Kubernetes is often at the center of production delivery pipelines; abrupt API removals would break deployments, upgrades, and tooling. By guaranteeing a minimum support window, Kubernetes enables predictable upgrades and safer lifecycle management.
This policy also encourages teams to track API versions and plan migrations. For example, workloads might start on a beta API (which can change), but once an API reaches stable, users can expect a stronger compatibility promise. Deprecation warnings help surface risk early. In many clusters, you’ll see API server warnings and tooling hints when manifests use deprecated fields/versions, allowing proactive remediation before the removal release.
Options of 6 or 9 months would be too short for many enterprises to coordinate changes across multiple teams and environments. 24 months may hold in other ecosystems, but the Kubernetes deprecation policy states a minimum of 12 months (or three releases, whichever is longer) for GA API elements. The key operational takeaway is: don’t ignore deprecation notices; they are your clock for migration planning. Treat API version upgrades as part of routine cluster lifecycle hygiene to avoid being blocked during Kubernetes version upgrades when deprecated APIs are finally removed.
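As a concrete historical example of such a migration, the Ingress resource moved from a deprecated beta API group to its stable group before the old version was removed in Kubernetes 1.22; migrating usually means bumping apiVersion and adjusting any renamed fields:

```yaml
# Before: a long-deprecated API version (removed in Kubernetes 1.22)
apiVersion: extensions/v1beta1
kind: Ingress
# ... spec in the old schema ...
---
# After: the stable replacement
apiVersion: networking.k8s.io/v1
kind: Ingress
# ... spec in the stable schema ...
```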
=========
What is a DaemonSet?
It’s a type of workload that ensures a specific set of nodes run a copy of a Pod.
It’s a type of workload responsible for maintaining a stable set of replica Pods running in any node.
It’s a type of workload that needs to be run periodically on a given schedule.
It’s a type of workload that provides guarantees about ordering, uniqueness, and identity of a set of Pods.
The Answer Is: A
Explanation:
A DaemonSet ensures that a copy of a Pod runs on each node (or a selected subset of nodes), which matches option A and makes it correct. DaemonSets are ideal for node-level agents that should exist everywhere, such as log shippers, monitoring agents, CNI components, storage daemons, and security scanners.
DaemonSets differ from Deployments/ReplicaSets because their goal is not “N replicas anywhere,†but “one replica per node†(subject to node selection). When nodes are added to the cluster, the DaemonSet controller automatically schedules the DaemonSet Pod onto the new nodes. When nodes are removed, the Pods associated with those nodes are cleaned up. You can restrict placement using node selectors, affinity rules, or tolerations so that only certain nodes run the DaemonSet (for example, only Linux nodes, only GPU nodes, or only nodes with a dedicated label).
Option B sounds like a ReplicaSet/Deployment behavior (stable set of replicas), not a DaemonSet. Option C describes CronJobs (scheduled, recurring run-to-completion workloads). Option D describes StatefulSets, which provide stable identity, ordering, and uniqueness guarantees for stateful replicas.
Operationally, DaemonSets matter because they often run critical cluster services. During maintenance and upgrades, DaemonSet update strategy determines how those node agents roll out across the fleet. Since DaemonSets can tolerate taints (like master/control-plane node taints), they can also be used to ensure essential agents run across all nodes, including special pools. Thus, the correct definition is A.
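A sketch of a typical node-agent DaemonSet follows, combining the placement controls mentioned above (a toleration for control-plane nodes and a nodeSelector for Linux nodes). The agent name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule        # also run on control-plane nodes
      nodeSelector:
        kubernetes.io/os: linux   # restrict to Linux nodes
      containers:
      - name: agent
        image: example/log-agent:1.0   # placeholder image
```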
=========
When modifying an existing Helm release to apply new configuration values, which approach is the best practice?
Use helm upgrade with the --set flag to apply new values while preserving the release history.
Use kubectl edit to modify the live release configuration and apply the updated resource values.
Delete the release and reinstall it with the desired configuration to force an updated deployment.
Edit the Helm chart source files directly and reapply them to push the updated configuration values.
The Answer Is: A
Explanation:
Helm is a package manager for Kubernetes that provides a declarative and versioned approach to application deployment and lifecycle management. When updating configuration values for an existing Helm release, the recommended and best-practice approach is to use helm upgrade, optionally with the --set flag or a values file, to apply the new configuration while preserving the release’s history.
Option A is correct because helm upgrade updates an existing release in a controlled and auditable manner. Helm stores each revision of a release, allowing teams to inspect past configurations and roll back to a previous known-good state if needed. Using --set enables quick overrides of individual values, while using -f values.yaml supports more complex or repeatable configurations. This approach aligns with GitOps and infrastructure-as-code principles, ensuring consistency and traceability.
Option B is incorrect because modifying Helm-managed resources directly with kubectl edit breaks Helm’s state tracking. Helm maintains a record of the desired state for each release, and manual edits can cause configuration drift, making future upgrades unpredictable or unsafe. Kubernetes documentation and Helm guidance strongly discourage modifying Helm-managed resources outside of Helm itself.
Option C is incorrect because deleting and reinstalling a release discards the release history and may cause unnecessary downtime or data loss, especially for stateful applications. Helm’s upgrade mechanism is specifically designed to avoid this disruption while still applying configuration changes safely.
Option D is also incorrect because editing chart source files directly and reapplying them bypasses Helm’s release management model. While chart changes are appropriate during development, applying them directly to a running release without helm upgrade undermines versioning, rollback, and repeatability.
According to Helm documentation, helm upgrade is the standard and supported method for modifying deployed applications. It ensures controlled updates, preserves operational history, and enables safe rollbacks, making option A the correct and fully verified best practice.
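The workflow above can be sketched with a few commands; the release and chart names are placeholders, and the commands assume an existing release in the current cluster context:

```shell
# Quick override of a single value:
helm upgrade my-release my-chart --set replicaCount=3

# Repeatable, reviewable configuration via a values file:
helm upgrade my-release my-chart -f values-prod.yaml

# Inspect the preserved revision history, and roll back if needed:
helm history my-release
helm rollback my-release 2
```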
