
Datadog
VIII - Conclusion and evaluation
In software development, DevOps monitoring involves tracking and measuring the performance and health of systems and applications in order to identify and correct problems at the earliest possible stage. This includes collecting data on everything from CPU usage to disk space to application response times. By identifying problems at an early stage, DevOps monitoring can help teams avoid outages or service degradation.
This may sound similar to the monitoring used in any well-designed IT operation. However, DevOps monitoring goes further. The DevOps methodology guides teams through short cycles of planning, development, deployment and review. For monitoring to be fully integrated into those cycles, it must be continuous. Datadog is therefore a key tool: it supports the entire monitoring cycle and integrates with a wide range of open-source and enterprise tools.
With Datadog, we're able to set up continuous monitoring, which involves regularly and vigilantly checking systems, networks, applications, tools, continuous integration pipelines and data for signs of performance degradation.
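As a sketch of how that continuous monitoring is typically bootstrapped on Kubernetes, the Datadog Agent can be installed with the official Helm chart. The release name, site and API key below are placeholders, and chart values can vary between chart versions, so treat this as a starting point rather than a definitive command:

```shell
# Add the official Datadog Helm repository
helm repo add datadog https://helm.datadoghq.com
helm repo update

# Install the Agent across the cluster.
# <DATADOG_API_KEY> is a placeholder for your organization's API key;
# datadog.apm.portEnabled turns on trace (APM) collection.
helm install datadog-agent datadog/datadog \
  --set datadog.apiKey=<DATADOG_API_KEY> \
  --set datadog.apm.portEnabled=true \
  --set datadog.site=datadoghq.com
```

Once the Agent runs as a DaemonSet, node and container metrics flow to Datadog without further per-application configuration.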
Exercise: Monitoring a Kubernetes Infrastructure
Context and objective:
You've been recruited as a DevOps Engineer at a bank, working primarily within the monitoring team to help the company visualize its metrics and application traces. This will allow the relevant teams to be alerted very quickly, with the goal of meeting the SLA (Service Level Agreement) the bank has with its customers.
As a production environment, you'll monitor several Kubernetes clusters through Datadog, since the company has set up a multi-cloud environment to guarantee high availability and increased resilience for its workloads.
You'll be using the application at https://github.com/datascientest/apm-datadog.git, which will need to be deployed within your Kubernetes cluster. You can use Kompose (https://kompose.io/) to convert the docker-compose.yml file into Kubernetes manifests so you can deploy your application with confidence.
You need to set up, within the Kubernetes cluster, application traces for the GET, POST, PUT and DELETE API call methods on the notes service.
You need to set up email alerts whenever CPU usage exceeds 60 percent or RAM usage exceeds 80 percent on the Kubernetes clusters.
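One way to express those alert rules is as metric monitors created through the Datadog Monitors API (POST /api/v1/monitor). The sketch below only builds the request payloads; the metric names (`system.cpu.idle`, `system.mem.pct_usable`) and the `@your-team@example.com` notification handle are assumptions you should check against what your cluster actually reports in the Metrics Explorer:

```python
# Hedged sketch of Datadog monitor definitions for the exercise's alerts.
# Posting these dicts to the Monitors API (with valid API/APP keys)
# would create the email alerts; here we only construct the payloads.

def cpu_monitor(threshold: float = 60.0) -> dict:
    """Metric monitor: alert when average CPU usage exceeds `threshold` percent."""
    return {
        "name": "High CPU usage on Kubernetes nodes",
        "type": "metric alert",
        # 100 - idle CPU = overall CPU usage, averaged over 5 minutes per host
        "query": f"avg(last_5m):100 - avg:system.cpu.idle{{*}} by {{host}} > {threshold}",
        "message": f"CPU usage is above {int(threshold)}% @your-team@example.com",
        "options": {"thresholds": {"critical": threshold}},
    }

def memory_monitor(threshold: float = 80.0) -> dict:
    """Metric monitor: alert when RAM usage exceeds `threshold` percent."""
    usable = (100 - threshold) / 100  # pct_usable is the *free* fraction
    return {
        "name": "High memory usage on Kubernetes nodes",
        "type": "metric alert",
        "query": f"avg(last_5m):avg:system.mem.pct_usable{{*}} by {{host}} < {usable}",
        "message": f"RAM usage is above {int(threshold)}% @your-team@example.com",
        "options": {"thresholds": {"critical": usable}},
    }

if __name__ == "__main__":
    print(cpu_monitor()["query"])
    print(memory_monitor()["query"])
```

In practice you would create these monitors in the Datadog UI or post the payloads with an API client; the `@`-handle in the message is what routes the alert to email.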
You'll need to set up email alerts whenever a deployed Pod is not in the running state.
You'll need to deploy 2 replicas of the notes service and 2 replicas of the calendar service within the clusters. For your review, one Kubernetes cluster will be sufficient.
Deliverables:
To validate the exercise, you will need to send the following to help@datascientest.com in ZIP format:
The application configuration files deployed within the cluster, as well as the pod log files.
A PDF file describing the process you implemented, together with screenshots of the results obtained.
Name the exercise using the following convention:
exercise_datadog_name_firstname
For any questions relating to the evaluation, please send an e-mail to help@datascientest.com.
