AWS EKS exam
Single Choice
1)
Your customer wants to use AWS EKS to run a service. All the accounts are separated by permissions. The development team has asked the platform team to create an AWS EKS cluster. The EKS cluster was created successfully, but the customer cannot see the resources inside the cluster in the console.
Why are the resources not visible to the development team?
- The Cluster IAM Role is missing AssumeRole.
- The IAM User that the platform team used to create the AWS EKS does not have the appropriate permissions.
- The IAM User used by the development team does not have the appropriate permissions.
- The development team IAM Role is not mapped to the aws-auth configmap.
Comments: To see resources in the EKS cluster in the console, the user's IAM identity must be mapped in the aws-auth ConfigMap of the cluster.
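A minimal sketch of such a mapping; the account ID, role name, and username below are placeholders, and `system:masters` should be replaced with a narrower RBAC group in practice:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/dev-team-role  # placeholder role ARN
      username: dev-team
      groups:
        - system:masters  # full access for illustration only; prefer a narrower group
```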
2)
Your organization has several applications developed in different languages. Managing centralized logging is a challenge. The centralized monitoring team has developed a monitoring helper microservice that can standardize logs and metrics from each of the ten microservices in a common format before ingesting them into a centralized log store.
What is the BEST way to run the monitoring helper container?
- Run the monitoring helper as a node agent on every node in the cluster.
- Run the monitoring helper as a deployment with 2 replicas for high availability. Configure the application microservices to send logs to the helper deployment.
- Run the monitoring helper as a sidecar container in application pods.
Comments: This is the right answer that best addresses the constraints defined. By running the helper as a sidecar container, it will scale with the application pod with minimal application changes.
- Run the monitoring helper as a library in the application containers.
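As a sketch of the sidecar pattern (image names, labels, and paths below are hypothetical), the helper runs in the same pod as the application and reads its logs from a shared volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: app
      image: example/web-app:1.0            # hypothetical application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: monitoring-helper               # sidecar reads the same log volume
      image: example/monitoring-helper:1.0  # hypothetical helper image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}
```

Because the sidecar is part of the pod template, it scales automatically with every application replica.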
3)
Your customer is maturing their FinOps practice and cost efficiency is a big part of that effort. They host multiple applications on their shared EKS clusters. They are looking to charge back the EKS costs to the respective application teams.
What is the MOST efficient way to allocate costs across the application teams?
- Cost Explorer with Savings Plans.
- Spot Instances with Karpenter.
- Cluster Autoscaler Priority expander.
- Kubecost for all Kubernetes resources.
Comments: Kubecost allocates cost at the EKS/Kubernetes resource level.
4)
A developer needs to use the Docker Container Runtime to start a container locally, using docker container run, and interact with it through the command line.
What command line flag is required so the session will be interactive using TTY?
- --link
- --expose
- -it
Comments: "-it" combines -i (keep STDIN open for an interactive session) and -t (allocate a pseudo-TTY).
- --attach
Score: 1.00
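For reference, a minimal interactive session might look like this (the image and shell are just examples):

```shell
# -i keeps STDIN open, -t allocates a pseudo-TTY; they are commonly combined as -it
docker container run -it ubuntu:22.04 /bin/bash
```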
Single Choice
5)
You have developed an application using a container hosting a web service. This web service is accessible via an ingress controller on your intranet and reads and writes data from a DynamoDB table. You want to be able to deploy this solution into different stages: dev, test, preprod and prod. You also want to be able to deploy the whole product with one command using parameters to specify the stage.
Which solution fulfills all your requirements?
- Manually deploy all the resources for each environment.
- Package all the k8s objects in one yaml file and call kubectl create -f to install the objects.
- Use kubectl and call all the yaml files in one command to install the k8s objects.
- Create a helm chart containing all the resources and use helm to deploy the product.
Comments: This is the only solution that deploys all the k8s objects in one go. The helm templating feature allows you to parameterize these objects so that they can be adapted to the different stages. The helm command accepts values as parameters to be used by the templating engine.
Score: 1.00
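A sketch of the one-command deployment, assuming a hypothetical chart that exposes a `stage` value:

```shell
# Deploy the whole product to a given stage with a single command
helm install myapp ./myapp-chart --set stage=dev

# Alternatively, keep one values file per stage
helm install myapp ./myapp-chart -f values-prod.yaml
```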
Single Choice
6)
You are a DevOps engineer tasked with building a new continuous delivery pipeline to deploy applications on an EKS cluster. Your team is curious about using GitOps but needs to know the key difference between GitOps and traditional CI/CD workflow.
Which of the following BEST summarizes this difference?
- GitOps workflows use declarative manifests as a descriptor of services to be deployed while traditional CI/CD workflows do not.
- GitOps workflows use git for source control while traditional CI/CD workflows do not.
- GitOps uses a push process in which changes are pushed onto the cluster by the agent while in a traditional CI/CD workflow the CD server pulls changes into the cluster.
- GitOps uses a pull process in which changes are pulled into the cluster by the agent while in a traditional CI/CD workflow the CD server pushes changes onto the cluster.
Comments: In GitOps workflows, an agent watches a git repo and, when any changes are detected, pulls them into the cluster, overwriting the currently deployed version of an application. <br><br>This way, the state of the cluster is always in sync with what is in source control, which is why git is considered the "source of truth" in GitOps workflows.
Score: 1.00
Single Choice
7)
A company's security team needs to be able to detect whenever production containers attempt to communicate with known IP addresses associated with cryptocurrency-related activity. Automated vulnerability scanning of container images is performed in the CI/CD pipeline before deployment into managed node groups in EKS.
Which solution should the security team leverage to meet their requirement?
- Enable EKS Runtime Monitoring with GuardDuty.
Comments: This scenario requires a runtime monitoring solution to detect malicious activity while the containers are running, which GuardDuty's EKS Runtime Monitoring provides.
- Vulnerability scanning is already performed on the container images in the CI/CD pipeline so no other solution is required.
- Enable EKS control plane logging to send the Kubernetes API server logs to CloudWatch Logs and query for events using CloudWatch Logs Insights.
- Configure the deployments to run on AWS Fargate instead since access to the underlying host is restricted.
Score: 1.00
Single Choice
8)
You are a DevOps engineer in a Travel Booking company that has recently deployed its critical application to the Amazon Elastic Kubernetes Service (EKS). During the holiday season, the application experienced a sudden drop in performance, causing disruption to the end users. Following the recent challenges, your manager has emphasized the need for preventive measures to avoid similar issues in the future.
Which of the following observability strategies would be the most effective in detecting, investigating, and mitigating the underlying problem in the EKS cluster using metrics?
- Install the CloudWatch agent on your cluster to gather and display metrics from the control and data planes. Set alarms for anomalies and use filters and dashboards to spot unusual patterns.
Comments: On an EKS Cluster, the CloudWatch Agent collects metrics from both data and control planes. Visualize these on the CloudWatch dashboard and set alarms for performance issues.
- Analyze the control plane's metrics and evaluate its scaling configuration for improvements.
- Use FluentBit to aggregate essential metrics and forward them to CloudWatch.
- Increase EKS cluster size and nodes using EKS Metrics data when anomalies are detected.
Score: 1.00
Single Choice
9)
You are a DevOps engineer at a financial services company. Your team is responsible for managing the company's AWS EKS cluster, which hosts hundreds of critical microservices. You need to create a new node group for a microservice with high performance requirements and must be highly available.
Which type of AWS EKS node group should you use?
- AWS Fargate node group
- Custom node group with on-demand instances
- Managed node group with spot instances
Comments: A managed node group with spot instances is a good choice for workloads that can tolerate interruptions, but it may not be suitable for workloads with high performance requirements or that need to be highly available.
- Custom node group with spot instances
Score: 0.00
Correct answer(s):
- AWS Fargate node group
- Custom node group with on-demand instances
- Managed node group with spot instances
- Custom node group with spot instances
Multiple Choice
10)
You are the administrator for an EKS cluster that runs your company's applications. To comply with security requirements, you enabled network policies on your EKS cluster and implemented a default-deny policy obtained from the security team. After this change, the monitoring team is complaining that they are not getting application health metrics on their monitoring dashboard.
Assuming that the monitoring pod runs in the same namespace as the application pods, how will you remediate this issue, while complying with security requirements? (Select TWO)
- Apply a default-allow policy to allow monitoring pods to get application health metrics
- Delete the default-deny policy from this namespace
- Disable Network Policy
- Add an Ingress policy to allow traffic from monitoring pod to application pods
Comments: This allows the monitoring pod to scrape metrics from the application pods
- Add an Egress policy to allow traffic from application pods to monitoring pod
Comments: This will allow metrics to be sent from the application pods to the monitoring pod
Score: 1.00
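A sketch of such an Ingress policy; the pod labels and metrics port below are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrape
spec:
  podSelector:
    matchLabels:
      app: web-app             # hypothetical application pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: monitoring # hypothetical monitoring pod label
      ports:
        - protocol: TCP
          port: 8080           # hypothetical metrics port
```

The default-deny policy stays in place; this policy only opens the one path the monitoring pod needs.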
Single Choice
11)
As a tech lead, you are tasked with selecting the right Kubernetes solution for your company's needs.
Which of the following statements accurately describes the business value and features of EKS?
- With EKS, you get full control over the underlying Kubernetes infrastructure, allowing you to customize it as needed.
- EKS simplifies Kubernetes management by handling the control plane and providing automated updates, allowing your team to focus on application developments.
Comments: EKS indeed simplifies Kubernetes management by handling the control plane and providing automated updates. This allows your team to concentrate on developing and running applications within the Kubernetes cluster, making it a valuable feature.
- EKS is exclusively designed for large enterprises and may not be suitable for smaller businesses due to its complexity.
- EKS offers a highly cost-effective solution compared to other cloud providers, helping to reduce infrastructure expenses significantly.
Score: 1.00
Single Choice
12)
You are asked to provide a high-level summary of the Kubernetes cluster architecture to your team.
Which of the following statements BEST describes the key components?
- A Kubernetes cluster has control plane nodes with components like the API server, scheduler, and controllers. It also has worker nodes to run applications, and a distributed data store like etcd.
Comments: Worker nodes don't run etcd; the control plane runs it.
- The main components of a Kubernetes cluster are pods, services, replica sets and namespaces that all run on top of the nodes.
- The Kubernetes API server, etcd, controller manager, scheduler and DNS run on the master. Worker nodes run kubelet, kube-proxy and container runtimes.
- A Kubernetes cluster consists of a set of worker nodes and a master node that manages them. Worker nodes run pod containers while the master handles scheduling.
Score: 0.00
Correct answer(s):
13)
You are deploying a new application to Kubernetes. You need to understand the core concepts of Kubernetes to successfully deploy and manage your application.
Which Kubernetes resource provides the BEST way to create and manage pods?
- Pods
- Namespaces
- Deployments
Comments: Deployments are used to create and manage pods. They allow you to specify the desired state of your application, and Kubernetes will ensure that your application is in that state.
- Services
Score: 1.00
Single Choice
14)
A new startup company recently launched an E-commerce site hosted on an Amazon EKS cluster that has multiple microservices. Their CEO asked the operations team to build a solution to capture application logs across the cluster, so that they can identify which microservices can be improved.
What action should the operations team take in order to capture application logs generated across the cluster?
- Configure cluster-wide log collector agent like FluentBit to capture application logs and send them to a centralized logging destination like CloudWatch or Elasticsearch and build a dashboard.
Comments: Cluster-wide log collector systems like Fluentd or FluentBit can tail log files on the nodes and ship logs for retention.
- Setup AWS Distro for OpenTelemetry (ADOT) collector to capture application logs and store in Amazon Managed Service for Prometheus and visualize using Amazon Managed Grafana.
- Turn on Kubernetes native solution to collect application logs and send them to a centralized logging destination like CloudWatch or Elasticsearch and build a dashboard.
- Setup CloudWatch Agent to capture application logs and store in Amazon Managed Service for Prometheus and visualize using Amazon Managed Grafana.
Score: 1.00
Multiple Choice
15)
After deploying an EKS cluster you discover that all pods within the cluster can communicate with each other. Security has determined multi-tenant EKS clusters are acceptable, but individual pods should NOT have network access to other resources internal or external to the cluster. As the Solutions Architect, you decide to use the Amazon VPC CNI to enforce Network Policies to secure the traffic within the Kubernetes clusters.
What elements of Network Policies can you use to restrict or allow a pod's traffic to other pods or external resources? (Select THREE)
- Subnets
- Label Selectors
Comments: A label selector in Kubernetes is a core grouping primitive that allows users to identify a set of objects. Label Selectors are native to Kubernetes cluster and are used by the Network Policy API.
- Security Groups
- IP Blocks
Comments: A network addressable range assigned to pods and resource that can be referenced in the Network Policy API.
- Namespaces
Comments: Namespaces are a way to isolate, group, and organize resources within a Kubernetes cluster. Namespaces are native to Kubernetes cluster and are used by the Network Policy API.
- Network ACLs
Score: 1.00
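A sketch combining all three elements in a single policy; the labels and CIDR ranges below are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
spec:
  podSelector:
    matchLabels:
      app: payments              # label selector picks the pods the policy applies to
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:     # namespaces: allow traffic to a whole namespace
            matchLabels:
              team: shared-services
        - ipBlock:               # IP blocks: allow a CIDR range, with exceptions
            cidr: 10.0.0.0/16
            except:
              - 10.0.5.0/24
```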
Single Choice
16)
Your company is planning to move some on-premises applications to AWS. As a development lead, you consider moving these applications as containers to be a good idea.
What is the primary advantage of using containers for deploying and managing applications?
- Containers in Kubernetes enable seamless integration with physical hardware, eliminating the overhead of virtualization layers and providing direct hardware access.
- Containers encapsulate applications and their dependencies, ensuring consistent runtime environments across development, testing, and production.
Comments: One of the main concepts of containerization lies in the encapsulation of applications, dependencies, and the ability to ensure consistent runtime environments.
- Containers in Kubernetes automatically optimize application code for performance, reducing the need for manual tuning and optimization efforts.
- Containers in Kubernetes simplify networking configuration by using software-defined networking (SDN) principles, reducing the need for complex routing tables.
Score: 1.00
Single Choice
17)
You are the Kubernetes administrator for an organization that operates a shared cluster to host various applications. You need to ensure proper access control for different teams and different team members like developers, operators, security admin, etc., allowing them to manage resources in their namespaces while maintaining cluster-wide security standards.
Which configuration is the most efficient way to grant teams the right permissions within their namespaces and manage cluster-wide permissions?
- Use cluster role bindings to grant permissions at the namespace level.
- Create a common cluster role for all teams that complies with the organization's security standards.
- Assign cluster-admin privileges to individual team members.
- Define custom roles for each team and bind them at the cluster level
Comments: Defining custom roles for each team and binding them at the cluster level can be overly complex and does not follow best practices for namespace-level access control, which is the premise in the question.
Score: 0.00
Correct answer(s):
Single Choice
18)
An e-commerce application team has more than 25 microservices running within a Kubernetes cluster. The platform architect for this Kubernetes cluster needs to expose 15 of these microservices to the internet.
What is the advantage of exposing these microservices via an Ingress resource compared to exposing them individually via a load balancer?
- Loadbalancer has a max limit of 10 targets and cannot handle 15 microservices
- Ingress resource is more performant when compared to load balancer and better for e-commerce applications
- Ingress resource is more secure than a Loadbalancer
- Ingress resource reduces the cost and complexity of managing individual cloud-native loadbalancers
Comments: CORRECT - As described in the video, large-scale applications deployed as hundreds of microservices should prefer to be exposed via an ingress. This way, a load balancer handles external load balancing, and the ingress acts as an internal load balancer to route traffic to the appropriate Kubernetes service. This reduces the need for hundreds of load balancers (one per service), thus reducing the cost and operational overhead of managing individual cloud-native load balancers.
Score: 1.00
Single Choice
19)
You manage an EKS cluster with one autoscaling group using an instance type that is covered by an EC2 Instance Savings Plan and another autoscaling group using instance types that are on demand.
In order to optimize costs, which feature of Cluster Autoscaler can favour the autoscaling group covered by the Instance Savings Plan to be used first in a scale-out event?
- Node Termination Handler
- Spot Instances
- Priority Expanders
- Weighted Provisioners
Comments: This is a Karpenter feature that allows you to favor certain instance types when scaling out, not a Cluster Autoscaler feature.
Score: 0.00
Correct answer(s):
Single Choice
20)
You have developed a microservices based application that is being deployed to your Amazon EKS cluster. The application is deployed as multiple Kubernetes deployments and has various endpoints that need to be exposed outside of the cluster to allow for external users to make HTTP based API calls against. To reduce complexity for your end users, you would like to expose the different application endpoints on a single URL with different URL paths directing users to the proper endpoint.
Based on these requirements, what is the best way to accomplish exposing the application to your end users?
- Create a service object of type ClusterIP
- Create a service object of type NodePort
- Create service object of type LoadBalancer
- Create an ingress object using the AWS Load Balancer Controller
Comments: Creating an Ingress object with the AWS Load Balancer controller will meet the defined requirements of allowing a single external endpoint with path based routing. The application is HTTP based and aligns with the Ingress's layer 7 based routing capabilities.
Score: 1.00
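A sketch of such an Ingress; the service names and paths are hypothetical, while the annotations are standard AWS Load Balancer Controller settings:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /orders            # hypothetical endpoint path
            pathType: Prefix
            backend:
              service:
                name: orders-svc     # hypothetical service name
                port:
                  number: 80
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: users-svc
                port:
                  number: 80
```

All endpoints share one ALB URL, with path-based routing directing users to the proper backend.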
Single Choice
21)
You are deploying a front-end web app pod called web-app-pod that will handle user traffic in a Kubernetes cluster. The development team wants to ensure the pod does not use too many resources.
What is the most important thing to do when deploying web-app-pod?
- Set resource limits on the pod so it doesn't use too much CPU/memory.
Comments: Setting resource limits on a pod is important to ensure it does not use too many cluster resources.
- Ensure the pod has the proper Kubernetes labels so it can be discovered correctly.
- Expose the pod via a ClusterIP service so other pods can access it.
Score: 1.00
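A sketch of the pod spec with limits (and requests) set; the image and values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app-pod
spec:
  containers:
    - name: web-app
      image: example/web-app:1.0   # hypothetical image
      resources:
        requests:                  # what the scheduler reserves for the pod
          cpu: 250m
          memory: 128Mi
        limits:                    # hard ceiling the container cannot exceed
          cpu: 500m
          memory: 256Mi
```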
Multiple Choice
22)
Your organization's software engineering team is new to containerizing applications and wants to know the artifacts to be included in their container images.
Which of the following elements would you advise the Software Engineering team to include in their container images? (Select THREE)
- Operating System (OS) Drivers
- External volumes
- Software Binary Files
- Application Code
Comments: Typically, you want to have your application code part of the container image, especially if you are trying to containerize an application.
- Credentials
Comments: You don't want to hardcode any username/password in your image. Use the environment variables to retrieve that information from outside the container.
- Software Libraries Files
Comments: Software Libraries are needed to run application code or other supporting software within the container.
Score: 0.67
Correct answer(s):
Multiple Choice
23)
A customer wants to expose an application to the internet. The application is running on multiple pods on EKS.
What options does the customer have? (Select TWO)
- Use a Kubernetes service of type LoadBalancer. The AWS Load Balancer Controller will provision a classic Load Balancer and expose the application to the internet
Comments: The AWS Load Balancer Controller provisions Application Load Balancers for Ingresses and Network Load Balancers for Services; it does not provision Classic Load Balancers.
- Use a kubernetes service of type ClusterIP. The AWS Load Balancer Controller will provision a Network Load Balancer and expose the application to the internet
- Configure a kubernetes ingress object. The application is exposed to the internet through the ingress controller
Comments: The Ingress Controller is responsible for providing external access to the cluster. Provide an ingress object with the right routing configuration (path/host).
- Manually configure a Load Balancer in front of the EKS cluster
- Do nothing. The application is automatically exposed to the internet
Score: 0.50
Correct answer(s):
Single Choice
24)
As a Solutions Architect, you are asked to design a Kubernetes environment on AWS for a customer.
Which of the following statements BEST describes the components of an EKS cluster?
- EKS clusters contain master nodes that manage scheduling and orchestration, while application pods run on separate worker nodes.
Comments: EKS handles provisioning and managing the Kubernetes control plane, while users are responsible for worker nodes that run application workloads. The control plane and worker nodes comprise the full EKS architecture.
- EKS does not provide a managed control plane - users must deploy their own Kubernetes masters to create a complete cluster.
- EKS clusters contain worker nodes that run pods, but the control plane runs on customer infrastructure.
- The customer has full control over both the master and worker nodes, being responsible for managing the underlying EC2 instances.
Score: 1.00
Single Choice
25)
You are working on a microservice application as a developer. While testing in your local environment, you found that your application container needs at least 128 MiB memory to run on the Kubernetes cluster efficiently. You need to prepare this application to run on a Kubernetes cluster.
How will you ensure your application container will get sufficient memory when deployed in a Kubernetes cluster?
- Configure the kube-scheduler on the cluster.
- Configuring specific resource requirements for a Pod is not supported in Kubernetes.
- Specify an environment variable named MEM_REQ with value "128Mi" in your container definition YAML.
- Specify resource request under "spec.containers[].resources.requests.memory" with value "128Mi" in your container definition YAML.
Comments: You can specify memory and CPU requirements under `spec.containers[].resources.requests.memory` and `spec.containers[].resources.requests.cpu` configuration of the container definition YAML. When you create a Pod, the Kubernetes scheduler selects a node for the pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node. Note that although memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.
Score: 1.00
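A minimal fragment of the container definition showing the request (image name is a placeholder):

```yaml
spec:
  containers:
    - name: app
      image: example/app:1.0   # hypothetical image
      resources:
        requests:
          memory: "128Mi"      # scheduler only places the pod on a node with 128Mi available
```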
Single Choice
26)
The development team at Company A regularly pushes new images directly to ECR for deployment. With Company A's recent concerns for security, specifically with common vulnerabilities and exposures, the DevOps team is tasked to create a plan to scan these images as soon as they are pushed to ECR AND have reports on vulnerability and exposure findings available on SecurityHub for the security team.
How can the DevOps team address ALL the requirements of this task?
- Enable Amazon Inspector, when a new image is pushed and automatically scanned, the reports will be available on SecurityHub.
Comments: As mentioned in the video, when Amazon Inspector is enabled, images pushed to ECR are automatically scanned and the reports are sent automatically to SecurityHub as well.
- Scan the image using Hadolint, save the reports on a txt file, send to the Security team when requested.
- Scan the image using Hadolint, when a new image is pushed and automatically scanned, the reports will be available on SecurityHub.
- Enable ECS, when a new image is pushed and automatically scanned, the reports will be available on SecurityHub.
Score: 1.00
Multiple Choice
27)
Your team has decided to move to a microservices architecture running on Kubernetes. You are tasked with deploying a new Kubernetes cluster running on AWS.
Which of the following statements describes the functions of a control plane? (Select THREE)
- Control plane manages the container runtime engine, which is responsible for running containers.
- Control plane interacts with data plane nodes using kube-proxy, an agent deployed on each node to monitor the health of data plane nodes.
- The main components of a control plane are - API Server, Scheduler, Controllers, etcd.
Comments: These are the main components of a Kubernetes control plane.
- Control plane is responsible for scheduling pods on specific nodes according to automated workflows and user-defined conditions.
Comments: The scheduler is a component in the control plane responsible for scheduling pods to nodes.
- Control plane manages the state of the Kubernetes cluster.
Comments: Control plane is responsible for managing the cluster state and ensuring that the current state is equal to the desired state.
Score: 1.00
Single Choice
28)
You are teaching a class on containerization to a group of aspiring software developers. To assess their understanding of container concepts and features, you decide to ask the following question.
Which of the following BEST describes the key concepts and features of a container?
- Containers are only compatible with Windows-based applications and cannot run Linux-based software.
- Containers encapsulate both the application code and its runtime dependencies, ensuring consistency across different environments.
Comments: Containers encapsulate the application code and its runtime dependencies, ensuring consistency across different environments. This is a fundamental concept of containers.
- Containers are primarily used for data storage and do not execute applications.
- Containers are virtual machines that emulate the entire operating system, including the kernel.
Score: 1.00
Single Choice
29)
You have developed a microservices based web application that is being deployed to your Amazon EKS cluster. The application has multiple services that need to be exposed externally. You have decided that exposing the application endpoints using a Kubernetes Ingress controller and AWS ALB would meet all requirements. A central administration team deployed the EKS cluster you are leveraging and is using the default configuration, with no additional components installed.
You have created a Kubernetes Ingress manifest with the proper configuration options for the ALB and your applications, but upon applying the manifest, nothing happened. No load balancer was created and your application is not accessible externally.
What is a potential reason that your ingress resource is not being satisfied?
- A NLB must be used with an Ingress resource, not an ALB.
- The ALB must be created prior to the Ingress resource being applied.
- An ingress resource should not be used for this use case. A service of type LoadBalancer should be created instead.
- The AWS Load Balancer Controller was not installed on the EKS cluster.
Comments: You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.<br><br>EKS clusters do not come with the AWS Load Balancer Controller installed by default.
Score: 1.00
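A common way to install the missing controller is via its official Helm chart; the cluster name below is a placeholder, and this sketch assumes the IAM-enabled service account was created beforehand:

```shell
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```

Once the controller is running, applying the same Ingress manifest will provision the ALB.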
Multiple Choice
30)
Engineers working for a SaaS company noticed one of their Microservices running on EKS is sending packets to an unknown IP address. After a deeper investigation, they realized the application container had been compromised.
Knowing their deployment pipeline runs container image scanning, what are the possible causes for the breach? (Select THREE)
- Social engineering hack
- Embedded malware
Comments: Malware scanning is based on signature sets and behavioral detection heuristics of known attacks, so new malware will probably escape detection.
- Zero-Day Vulnerability
Comments: Image scanning is based on known CVEs published by multiple trusted organizations. If a vulnerability is not known, it will not be in the published database and will be missed by image scanners.
- Using immutable tags for images
- Distroless Images
- Privileged escalation due to wrong configuration
Comments: Adding a user as part of the docker group could lead to escalation of privileges to root access.
Score: 1.00
Single Choice
31)
A DevOps engineer needs to revisit multiple recently deployed web applications on Amazon EKS. Each web application was exposed with the NodePort service type and a URL path, using an AWS Application Load Balancer (ALB) and Amazon Route 53 to connect customers' requests to the web applications. Web applications must handle HTTP/HTTPS traffic and be reachable on the Internet.
Which recommendation is MOST likely cost-effective and has a reduced security risk?
- Create a Service resource for each application. Change the service type to NodePort.
- Create multiple Ingress resources. Change the service type to ClusterIP.
- Create a Service resource for each application. Change the service type to LoadBalancer.
- Create a single Ingress resource with multiple routing rules. Change the service type to ClusterIP.
Comments: This will create one ELB for all the applications and support HTTP/HTTPS traffic. You save on cost and reduce security risk.
Score: 1.00
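A sketch of the service side of this pattern (names and labels are hypothetical): each application keeps an inexpensive ClusterIP Service, and the single Ingress carries one routing rule per application pointing at these services.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-svc        # hypothetical; one ClusterIP service per application
spec:
  type: ClusterIP         # not reachable from outside; only the Ingress/ALB fronts it
  selector:
    app: orders           # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080
```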
Single Choice
32)
A DevOps engineer is setting up a GitOps pipeline using GitHub Actions to enable the Deployment of microservices to their EKS cluster. As a consultant, you are asked how to configure the EKS cluster credentials in the GitHub Actions workflow.
What will you advise?
- Store AWS Access Key and Secret Key as secrets in the GitHub repository, configure environment variables to use these secrets along with the EKS cluster details (name, region) to connect to EKS cluster
Comments: As shown in the demo in the course video, this is the correct approach of storing ACCESS and SECRET keys in a GitHub secret and referencing them as env values in the workflow, along with cluster details.
- Configure GitHub Actions workflow to use an IAM role that is authorized to connect to the EKS cluster
- Configure GitHub Actions workflow to use a kube config stored as a GitHub secret in the source code repository
- The username and password for the EKS cluster administrator can be stored as environment variables and referenced by the GitHub Actions workflow
Score: 1.00
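The approach in the marked answer can be sketched as a workflow file; the cluster name, region, and manifest path below are illustrative placeholders:

```yaml
name: deploy-to-eks
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      # Secrets defined in the GitHub repository settings
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: us-east-1          # example region
    steps:
      - uses: actions/checkout@v4
      - name: Update kubeconfig with EKS cluster details
        run: aws eks update-kubeconfig --name my-cluster --region $AWS_DEFAULT_REGION
      - name: Deploy manifests
        run: kubectl apply -f k8s/           # example manifest directory
```

The keys never appear in the repository source; they are injected at run time from the repository's encrypted secrets.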
Multiple Choice
33)
You've created an EKS cluster and then created a node group in the cluster. You noticed that there are no worker nodes visible inside the EKS cluster, although EC2 instances are being created in the EC2 management console. After troubleshooting, you observed that the EC2 instances have no IAM policies attached.
Which of the following policies would you attach to the worker node IAM role to help worker nodes join the EKS cluster without granting permission to do anything else? (SELECT TWO)
- AmazonEKSWorkerNodePolicy
Comments: AmazonEKSWorkerNodePolicy is an AWS managed policy that allows Amazon EKS worker nodes to connect to Amazon EKS clusters.
- AmazonEC2FullAccess
- AmazonEKSClusterPolicy
Comments: This policy provides Kubernetes the permissions it requires to manage resources on your behalf, including ec2:CreateTags to place identifying information on EC2 resources such as instances, security groups, and elastic network interfaces. This policy is not required by worker nodes to join the EKS cluster.
- AmazonEC2ContainerRegistryReadOnly
- AdministratorAccess
Score: 0.50
Correct answer(s):
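For reference, node IAM policies can be declared up front when creating the node group. A sketch using eksctl's `attachPolicyARNs` field (cluster name and region are illustrative; when this field is set, the node's required policies must be listed explicitly):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # example cluster name
  region: us-east-1       # example region
managedNodeGroups:
  - name: ng-1
    iam:
      attachPolicyARNs:
        # AWS managed policies commonly attached to the worker node role
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
```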
Single Choice
34)
Your team manages an Amazon EKS cluster. The nodegroups are all in private subnets. You have a non-HTTP application running on TCP port 5000.
How can you expose this application to allow access over the public internet?
- Create an endpoint using Kubernetes Service with Type = ClusterIP.
- Create an endpoint using Kubernetes Service with Type = NodePort.
- Create an endpoint using Kubernetes Service with Type = LoadBalancer configured to use an NLB (Network Load Balancer).
Comments: A LoadBalancer Service backed by an NLB allows communication for a non-HTTP application over the public internet.
- Create an endpoint using Kubernetes Ingress.
Score: 1.00
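The marked answer can be sketched as a Service manifest, assuming the AWS Load Balancer Controller is installed; the Service name and pod label are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-app
  annotations:
    # Annotations understood by the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: tcp-app          # label on the application pods
  ports:
    - protocol: TCP
      port: 5000          # port exposed by the NLB
      targetPort: 5000    # container port of the non-HTTP application
```

Because an NLB operates at layer 4, it forwards raw TCP on port 5000 even though the nodes sit in private subnets.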
Single Choice
35)
A company is planning to containerize an existing Java application. Your team needs to review and prepare the code for containerization.
Which component will your team be responsible for?
- Container runtime
- Container image
Comments: A container image contains the application code, application configuration, and the application library dependencies. The container image is pushed to a container registry. Your team is responsible for porting the code, configuring the application, and providing the application libraries into the container, so it can later be instantiated by the container runtime.
- Container instance
- Container registry
Score: 1.00
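For illustration, a container image for a Java application is typically defined in a Dockerfile; the base images and artifact path below are assumptions, not part of the question:

```dockerfile
# Build stage: compile the application with Maven (paths are illustrative)
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -q package -DskipTests

# Runtime stage: ship only the JRE and the built artifact
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The resulting image bundles the code, configuration, and library dependencies, and is pushed to a registry for the container runtime to instantiate.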
Single Choice
36)
You are the cluster administrator for your organization's EKS cluster. You have been informed that your Organization purchased an EC2 instance savings plan for m6g.2xlarge and c6g.2xlarge instances.
How will you influence Karpenter to prefer these instance types first during a scale-out event?
- Use Weighted provisioners to prefer savings plans instance types first
- Create managed nodegroups consisting of m6g.2xlarge and c6g.2xlarge instances to ensure the EC2 Instance savings plans are utilized. Karpenter will use this existing capacity to schedule pods
Comments: INCORRECT - This is not a cost-effective solution, as it will create an overprovisioned cluster.
- Limit Karpenter provisioners to only use m6g.2xlarge and c6g.2xlarge
- Karpenter cannot prefer specific instances during scheduling
- Use Karpenter's priority expanders to prefer savings plans instance types first
Score: 0.00
Correct answer(s):
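For reference, Karpenter supports a `weight` field on its provisioning resources (a `NodePool` in the v1 API, a `Provisioner` in older versions) so that one pool is evaluated before others. An abridged sketch assuming the v1 API (the `nodeClassRef` and other required fields are omitted):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: savings-plan-pool
spec:
  weight: 100              # higher weight: considered before lower-weight pools
  template:
    spec:
      requirements:
        # restrict this pool to the instance types covered by the savings plan
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["m6g.2xlarge", "c6g.2xlarge"]
```

A second, lower-weight pool with broader requirements can then act as the fallback once the preferred capacity is exhausted.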
Single Choice
37)
As a developer, you have deployed your application to an Amazon EKS cluster. Your cluster's administrator has explained that the cluster has been configured with Cluster Autoscaler to automatically scale cluster nodes as needed to meet the demand of workloads.
Your application experiences varying levels of demand, with higher traffic occurring during business hours. Since the Amazon EKS cluster has been configured with Cluster Autoscaler, your expectation was that your workloads would automatically scale out to meet demand; however, you have noticed that the number of application pods your deployment is using remains static despite the amount of load on your application.
The configuration of what resource may be missing that would allow the automatic scale out of application pods based on utilization?
- Horizontal Pod Autoscaler
Comments: HPA will add and remove pods based on selected metrics.
- ReplicaSet
- Vertical Pod Autoscaler
- Karpenter
Score: 1.00
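The missing resource from the marked answer might look like the sketch below; the Deployment name and thresholds are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

The HPA adjusts the pod count based on utilization; Cluster Autoscaler then adds nodes only when those new pods cannot be scheduled on existing capacity.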
Single Choice
38)
Your organization is concerned about unusual network traffic involving two pods in its EKS cluster. The DevOps team needs to create a Network Policy to block ingress and egress connections on Pods A and B only, both running in the same namespace. A deny-all ingress and egress Network Policy was applied to the pods' namespace, but testing shows that ALL pods in the namespace have been blocked.
What should the DevOps team do to fix the Network Policy and block only Pods A and B communication?
- Add labels to both Pods A and B, edit the Network Policy's field named podSelector to match the newly created labels.
Comments: An empty or incorrect podSelector field matches every pod in the namespace, which is why all traffic was blocked. Adding labels to Pods A and B and referencing those labels in the Network Policy's podSelector ensures the deny-all policy matches only those two pods.
- Restart Pods A and B for the Network Policy to take effect.
- Remove policyType Ingress from the Network Policy, so that it blocks all ingress and egress traffic for Pods A and B.
- Remove policyType Egress from the Network Policy, so that it blocks all ingress and egress traffic for Pods A and B.
Score: 1.00
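The fix in the marked answer can be sketched as follows; the namespace and label are hypothetical names for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-a-b
  namespace: team-ns        # hypothetical namespace of Pods A and B
spec:
  podSelector:
    matchLabels:
      restricted: "true"    # label added only to Pods A and B
  policyTypes:              # no ingress/egress rules follow, so both are denied
    - Ingress
    - Egress
```

With the label applied only to Pods A and B, the rest of the namespace keeps its normal traffic.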
Single Choice
39)
A company is running a microservices application on Amazon EKS. The application consists of a front-end service, several back-end services, and a MongoDB database for persistence. The DevOps engineer wants to deploy MongoDB as a stateful workload.
Which Kubernetes resource should be used to deploy MongoDB for data persistence?
- DaemonSet
- Deployment
- StatefulSet
Comments: StatefulSets are used for stateful applications like databases that need stable network identifiers, persistent storage, ordered deployment and scaling, and graceful deletion.
- ReplicaSet
Score: 1.00
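An abridged StatefulSet for MongoDB might look like the sketch below; the names, image tag, and storage size are assumptions, and the headless Service referenced by `serviceName` must exist separately:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb      # headless Service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:7
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:     # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Each replica (mongodb-0, mongodb-1, ...) keeps its own volume and identity across restarts, which a Deployment does not guarantee.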
Single Choice
40)
You are responsible for managing the company's Kubernetes cluster, which hosts a handful of microservices. You are planning to add new nodes to the cluster to meet the increasing demand and are considering two types of nodes: worker nodes and control plane nodes.
What are the key differences between worker nodes and the control plane?
- Worker nodes are responsible for running containerized applications, while the control plane is responsible for managing the cluster.
Comments: Worker nodes are responsible for running containerized applications, such as pods. The control plane is responsible for managing the cluster, such as scheduling pods to worker nodes and monitoring the health of the cluster.
- Worker nodes can be deployed on-premises or in the cloud, while master nodes must be deployed on-premises.
- Worker nodes are more expensive than the control plane.
- Control plane is more scalable than worker nodes.
Score: 1.00
Single Choice
41)
You are working as an SRE (Site Reliability Engineer) for a new company responsible for monitoring their Amazon EKS clusters. You started to receive complaints from the development team regarding one particular cluster. The developers are unable to observe new pods being created after they have created a new deployment. You confirmed that the deployment exists, but pods do not exist.
What could you check to figure out the root cause of the issue?
- Check necessary RBAC permissions for the developer to ensure correct permissions.
- Check the Kube Scheduler logs
Comments: Kube Scheduler logs can be helpful for pod scheduling issues, but not for a pod creation issue.
- Check the ETCD logs
- Check the Kube Controller Manager logs
Score: 0.00
Correct answer(s):
Single Choice
42)
You are the DevOps lead for your organization, and your team has manually deployed workloads to the EKS cluster until now. You want to improve this process.
Which automated process in GitHub Actions will you utilize to run deployment steps whenever application code is pushed to the main branch?
- Github Actions Jobs.
- Github Actions Runners.
- Github Actions Workflows configured with push Event on main branch
Comments: A workflow is a configurable automated process that will run one or more jobs. Workflows will run when triggered by an event in your repository.
- Github Actions Workflows configured with schedule trigger.
Score: 1.00
Single Choice
43)
You manage a critical web application hosted on multiple pods in an Amazon EKS cluster. The development team releases a new version of the web application image multiple times monthly. You want to incorporate these new changes and do a rolling update to prevent any downtime for the web application.
In this scenario, which Kubernetes object(s) can you use to support a rolling update?
- Only Deployments support rolling updates
- Only ReplicaSets support rolling updates
- Both Deployments and ReplicaSets support rolling updates.
Comments: ReplicaSets do not support rolling updates; Deployments manage rolling updates by creating and scaling ReplicaSets.
- Neither Deployments nor ReplicaSets support rolling updates.
Score: 0.00
Correct answer(s):
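A rolling update is configured on the Deployment itself; a sketch with hypothetical names and image tag:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one pod down during the rollout
      maxSurge: 1          # at most one extra pod above the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.2.0   # bump this tag to trigger a rolling update
```

Changing the image tag causes the Deployment to roll pods over gradually, replacing old ReplicaSet pods with new ones without downtime.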
Single Choice
44)
You are configuring a Helm values file for a microservices-based application. Your team wants to ensure that the application can scale easily and that sensitive information, such as database credentials, is stored securely.
Which of the following options demonstrates the correct way to structure a Helm values file for this scenario?
- Utilize the helm-secrets plugin to store secrets in a cloud-native secret manager like AWS Secrets Manager and reference them inside values.yaml
Comments: This ensures that sensitive data is not exposed in plain text, access is managed, and security best practices are followed.
- Place all configuration settings, including sensitive data in the values.yaml file. This ensures that everything is in one place for easy access during deployments.
- Store sensitive information, such as database credentials, in a separate file outside the Helm chart repository, but reference it in the values.yaml file using an absolute file path.
Score: 1.00
Single Choice
45)
You are managing a Kubernetes cluster hosting an application consisting of a backend and a database container. To optimize resource utilization, both containers are deployed within the same pod. The backend needs to communicate with the database container.
What is the most suitable method to ensure effective communication between microservices?
- The containers use localhost to communicate with each other
- Deploy an ingress object
- Create a service of type ClusterIP
Comments: Containers within the same pod do not need a Service to communicate, since they share a common network namespace and can reach each other over localhost.
- Create a service of type NodeIP
Score: 0.00
Correct answer(s):
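For illustration, a two-container pod sharing a network namespace might look like this; the image names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend-with-db
spec:
  containers:
    - name: backend
      image: example/backend:1.0   # hypothetical application image
      # reaches the database at localhost:27017 -- no Service needed
    - name: database
      image: mongo:7
      ports:
        - containerPort: 27017
```

Because both containers share the pod's IP and port space, the backend connects over localhost; a ClusterIP Service is only needed for traffic between pods.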
Single Choice
46)
A software engineer is deploying two microservices, orders and products, to an Amazon EKS cluster. The microservices need to be accessible over the Internet. The engineer wants to ensure fault tolerance for the microservices.
Which combination of Kubernetes resources should the engineer use to achieve this?
- Use a ReplicaSet for each microservice to ensure the desired number of replicas and a Service of type NodePort to expose them to the Internet.
- Use a single Deployment for both microservices and a Service of type ExternalName to expose them to the internet.
- Use a Deployment for each microservice to ensure the desired number of replicas and a Service of type LoadBalancer to expose them to the Internet.
Comments: Deployments ensure the desired number of pod replicas are maintained, providing fault tolerance. Services of type LoadBalancer expose the microservices to the Internet by creating a cloud provider's load balancer that routes traffic to the service.
- Use a Deployment for each microservice without specifying replicas and a Service of type ClusterIP to expose them to the Internet.
Score: 1.00
Single Choice
47)
As a new developer on your team, you have been tasked with deploying an application to Kubernetes.
Which of the following BEST describes how kubectl can be used?
- Kubectl is the command line tool that allows you to run commands against Kubernetes clusters, such as deploying applications, viewing status, and managing containerized workloads.
Comments: Kubectl is the official CLI for working with Kubernetes clusters. It allows you to deploy, manage, and monitor applications running on Kubernetes from the command line.
- Kubectl should be avoided in favor of third-party UIs and dashboards for managing Kubernetes.
- Kubectl enables you to create Kubernetes manifests locally without connecting to a cluster.
- Kubectl allows you to directly modify infrastructure resources like nodes and networking configurations within a cluster.
Score: 1.00
Single Choice
48)
As a solutions architect, you are evaluating Kubernetes for a new application.
Which statement BEST describes the differences between the control plane and worker nodes?
- Control plane nodes only run controllers while worker nodes run application workloads.
- There is no difference between the control plane and worker nodes - both can perform all tasks.
- Worker nodes manage the Kubernetes cluster while control plane nodes run applications.
- Control plane nodes run the Kubernetes control plane to manage the cluster, while worker nodes run applications and services.
Comments: The control plane manages and schedules workloads on the worker nodes. Worker nodes actually run the applications and services deployed by users.
Score: 1.00
Single Choice
49)
A DevOps engineer created an EKS cluster using a deployment IAM role. When the engineer tries to connect to the cluster with a personal IAM role through kubectl, they get "An error occurred (InvalidClientTokenId) when calling the AssumeRole operation: The security token included in the request is invalid". They repeatedly ran the eks update-kubeconfig command, which completed without errors.
What actions will help to resolve this error most efficiently?
- Assume the deployment IAM role used to create the cluster and add the personal role to the aws-auth ConfigMap.
Comments: When an EKS cluster is created, the role assumed to create it is automatically granted access via the aws-auth ConfigMap. For another role/user to run kubectl commands, it has to be added to the aws-auth ConfigMap.
- Create a support ticket to AWS and get them to provide a valid token id.
- Add in Kubernetes RBAC for the user so that they have permission to run kubectl commands.
- Delete the cluster and manually create it on the console with the DevOps engineer's IAM credentials.
Score: 1.00
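The aws-auth ConfigMap entry described in the marked answer might look like the sketch below; the account ID, role name, and group are hypothetical (system:masters grants cluster-admin and should be scoped down in practice):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/personal-role   # hypothetical ARN
      username: personal-role
      groups:
        - system:masters   # example group; use least-privilege groups in practice
```

Only an identity that already has cluster access (here, the deployment role) can apply this change, which is why the engineer must assume that role first.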
Single Choice
50)
What command option, when used with kubectl, will list all resources running in the "workshop" namespace?
- kubectl get all --all-namespaces
- kubectl get all
- kubectl get pods
- kubectl get all -n workshop
Comments: -n or --namespace is used to scope a command to resources running in a given namespace.