Managing hundreds or thousands of containers has quickly become the standard for many organizations. With infrastructures growing more complex, we want every user to find value with Elastic (regardless of where or how they operate). We created Elastic Cloud on Kubernetes (ECK) — the official Operator — to simplify setup, upgrades, scaling, and more for running Elasticsearch and Kibana on Kubernetes. When you use ECK on OpenShift, Red Hat's Kubernetes container platform, you can fully orchestrate and manage a stateful Elastic Stack deployment in minutes, as well as take advantage of ECK's built-in best practices well beyond day 1.

Of course, with diverse infrastructures, monitoring becomes both more challenging and more critical. With Elastic Observability, you can store and analyze the logs and metrics from your OpenShift ecosystem alongside your other infrastructure monitoring data for a unified view. In this blog post we're going to see how easy it is to get the Elastic Stack up and running on OpenShift with ECK, as well as how to start monitoring OpenShift.

You will learn to:
Set up Elasticsearch and Kibana on OpenShift using ECK.
Ship OpenShift logs and metrics to Elasticsearch using Beats (also on ECK).
Prerequisites

To run the following instructions, you must first:
Be a system:admin user or a user with the privileges to create Projects, CRDs, and RBAC resources at the cluster level.
Set virtual memory settings on the Kubernetes nodes (as described in Step 1 below).
Part 1: Set up an Elasticsearch deployment

In this first part, we're going to walk through deploying Elasticsearch and Kibana on OpenShift using ECK. By the end of this section, you'll have a fully functional Elastic Stack deployment up and running.

Step 1: Increase your virtual memory (recommended)

Elasticsearch uses a mmapfs directory by default to store its indices efficiently. The default operating system limits on mmap counts are likely to be too low, which may result in out-of-memory exceptions. For production workloads, it is strongly recommended to increase the kernel setting vm.max_map_count to 262144. This setting can be applied on the host either directly or through a dedicated init container, which must be privileged.

echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
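If you prefer the init container approach, here is a minimal sketch based on the ECK documentation; it would go under the podTemplate of each nodeSet in the Elasticsearch manifest shown in Step 4:

  podTemplate:
    spec:
      initContainers:
      # Privileged init container that raises vm.max_map_count on the node before Elasticsearch starts
      - name: sysctl
        securityContext:
          privileged: true
        command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']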
Step 2: Create a new project

Create an elastic OpenShift project:

oc new-project elastic

Step 3: Install the ECK operator

oc apply -f https://download.elastic.co/downloads/eck/1.2.1/all-in-one.yaml

Step 4: Deploy Elasticsearch

Create an Elasticsearch monitoring cluster with an OpenShift route:

cat <<EOF | oc apply -f -
# This sample sets up an Elasticsearch cluster with an OpenShift route
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitoring
spec:
  version: 7.9.3
  nodeSets:
  - name: default
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: monitoring
spec:
  host: elasticsearch.example.com # override if you don't want to use the host that is automatically generated by OpenShift (<route-name>[-<namespace>].<suffix>)
  tls:
    termination: passthrough # Elasticsearch is the TLS endpoint
    insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
    name: monitoring-es-http
EOF
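Once the manifest is applied, the operator creates the cluster pods. A quick way to check progress (the HEALTH column should eventually turn green) is, for instance:

oc get elasticsearch monitoring
oc get pods -l elasticsearch.k8s.elastic.co/cluster-name=monitoring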
Step 5: Deploy Kibana

cat <<EOF | oc apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 7.9.3
  count: 1
  elasticsearchRef:
    name: "monitoring"
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 1Gi
            cpu: 1
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: kibana
spec:
  host: kibana.example.com # override if you don't want to use the host that is automatically generated by OpenShift (<route-name>[-<namespace>].<suffix>)
  tls:
    termination: passthrough # Kibana is the TLS endpoint
    insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
    name: kibana-kb-http
EOF

Step 6: Collect the Elasticsearch admin password

To extract the password of the built-in elastic admin user, run the following command:

PW=$(oc get secret "monitoring-es-elastic-user" -o go-template='{{.data.elastic | base64decode }}')
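As a quick sanity check, you can use that password to query the cluster through the route created in Step 4. The hostname depends on your route, so the sketch below simply reads it back from OpenShift:

ES_HOST=$(oc get route monitoring -o jsonpath='{.spec.host}')
# -k skips CA verification because the certificate is signed by the ECK-managed self-signed CA
curl -k -u "elastic:$PW" "https://$ES_HOST"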
Part 2: Monitoring OpenShift using Metricbeat and Filebeat

Now that the Elastic Stack is deployed, we can start using it to monitor OpenShift. We'll deploy Metricbeat and Filebeat to ingest observability data directly from OpenShift, and then we'll be able to easily monitor system health right from Elastic Observability.

Step 1: Import Elasticsearch certs

By default, the ECK operator manages a self-signed certificate with a custom CA for each resource. The CA, the certificate, and the private key are each stored in a separate secret. To enable secured communication between the Beat agents and Elasticsearch, the self-signed certificates need to be imported into the openshift-monitoring namespace, from which they will be mounted into the Beats containers.

oc get secret "monitoring-es-http-certs-public" -o go-template='{{index .data "tls.crt" | base64decode }}' > es.crt
oc -n openshift-monitoring create secret generic monitoring-es-http-certs-public --from-file=tls.crt=./es.crt

oc get secret "kibana-kb-http-certs-public" -o go-template='{{index .data "tls.crt" | base64decode }}' > kb.crt
oc -n openshift-monitoring create secret generic kibana-kb-http-certs-public --from-file=tls.crt=./kb.crt

Step 2: Create dedicated Beats writer users

Create a user in Elasticsearch for both Metricbeat and Filebeat with the setup and writer roles as specified in the documentation:

Metricbeat: https://www.elastic.co/guide/en/beats/metricbeat/7.10/feature-roles.html
Filebeat: https://www.elastic.co/guide/en/beats/filebeat/7.10/feature-roles.html
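As an illustration, creating the Metricbeat writer user through the Elasticsearch security API could look like the sketch below, run with the elastic superuser and the $ES_HOST/$PW values from Part 1. The role name and privilege lists here are only examples — use the exact setup and writer role definitions from the documentation linked above, and repeat the same pattern for Filebeat:

# Illustrative role; take the authoritative privilege list from the Beats feature-roles docs
curl -k -u "elastic:$PW" -X POST "https://$ES_HOST/_security/role/metricbeat_writer" \
  -H 'Content-Type: application/json' -d'
{
  "cluster": ["monitor", "read_ilm"],
  "indices": [
    {
      "names": ["metricbeat-*"],
      "privileges": ["create_doc", "create_index", "view_index_metadata"]
    }
  ]
}'

# The user name matches the ELASTICSEARCH_USERNAME referenced later in the manifest (metricbeat-writer)
curl -k -u "elastic:$PW" -X POST "https://$ES_HOST/_security/user/metricbeat-writer" \
  -H 'Content-Type: application/json' -d'
{
  "password": "PASSWORD",
  "roles": ["metricbeat_writer"]
}'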
Step 3: Import Beats users as secrets

oc create secret generic metricbeat --from-literal=user=PASSWORD
oc create secret generic filebeat --from-literal=user=PASSWORD

Step 4: Set up Metricbeat

Download the Metricbeat manifest and modify it as follows:

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/kubernetes/metricbeat-kubernetes.yaml

Step 4.1: Enable privileged containers

sed -i 's/#privileged: true/privileged: true/g' metricbeat-kubernetes.yaml

Step 4.2: Modify the DaemonSet container spec in the manifest file

    kubernetes.yml: |-
      - module: kubernetes
        metricsets:
          - node
          - system
          - pod
          - container
          - volume
        period: 10s
        host: ${NODE_NAME}
        hosts: ["https://${NODE_NAME}:10250"]
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl.verification_mode: "none"
If there is a CA bundle that contains the issuer of the certificate used in the Kubelet API, remove the ssl.verification_mode entry and use the CA instead, for instance:

        ssl.certificate_authorities:
          - /run/secrets/kubernetes.io/serviceaccount/service-ca.crt

Step 4.3: Add resources to the ClusterRole

Under the Metricbeat service account ClusterRole, add the following resources (the sketch after the list shows how the resulting rule could look):
- nodes/metrics
- nodes/stats
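For context, after this change the core rule in the Metricbeat ClusterRole would look roughly like the following; the exact resource list in the downloaded manifest may differ slightly between Beats versions:

- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - events
  - pods
  - nodes/metrics   # added
  - nodes/stats     # added
  verbs: ["get", "list", "watch"]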
Step 5: Set up Metricbeat and Filebeat secured settings

Download the Filebeat manifest and modify both the Metricbeat and Filebeat manifests as follows:
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/kubernetes/filebeat-kubernetes.yaml
Step 5.1: Use the openshift-monitoring namespace

sed -i 's/kube-system/openshift-monitoring/g' metricbeat-kubernetes.yaml
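Since this step applies to both Beats, presumably the same substitution is needed in the Filebeat manifest as well:

sed -i 's/kube-system/openshift-monitoring/g' filebeat-kubernetes.yaml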
Step 5.2: Edit ELASTICSEARCH_HOST, ELASTICSEARCH_USERNAME, and ELASTICSEARCH_PASSWORD

The Metricbeat secret is named metricbeat and the Filebeat secret is named filebeat (as created in Step 3). In the Metricbeat manifest, the metricbeat secret supplies the password for both the DaemonSet and the Deployment:
        - name: ELASTICSEARCH_HOST
          value: monitoring-es-http.elastic.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: metricbeat-writer
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: metricbeat
              key: user

Step 5.3: Add the CA path for Kibana and Elasticsearch as an environment variable

        - name: ES_CA_PATH
          value: "/etc/ssl/es/es-ca.crt"
        - name: KB_CA_PATH
          value: "/etc/ssl/kb/kb-ca.crt"
Step 5.4: Mount the CA secrets

Under volumes:

      - name: es-ca
        secret:
          secretName: monitoring-es-http-certs-public
          items:
          - key: tls.crt
            path: es-ca.crt
      - name: kb-ca
        secret:
          secretName: kibana-kb-http-certs-public
          items:
          - key: tls.crt
            path: kb-ca.crt

Under volumeMounts:
      - name: es-ca
        mountPath: /etc/ssl/es
        readOnly: true
      - name: kb-ca
        mountPath: /etc/ssl/kb
        readOnly: true

Step 5.5: Configure setup and output.elasticsearch

    setup:
      kibana:
        host: "https://kibana-kb-http.elastic.svc.cluster.local:5601"
        ssl.certificate_authorities: ['${KB_CA_PATH}']
        ssl.verification_mode: "none"
      dashboards.enabled: true
    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities: ['${ES_CA_PATH}']
      ssl.verification_mode: "none"

Step 6: Add permissions to the Beats service accounts

Now that the Beats are configured, they're going to need proper access to OpenShift.

oc adm policy add-scc-to-user privileged system:serviceaccount:openshift-monitoring:metricbeat
oc adm policy add-scc-to-user privileged system:serviceaccount:openshift-monitoring:filebeat

Step 7: Patch node-selector

Override the default node selector for the openshift-monitoring namespace (or your custom namespace) to allow scheduling on any node:

oc patch namespace openshift-monitoring -p \
'{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'

Step 8: Enable the network between projects

oc adm pod-network join-projects --to=elastic openshift-monitoring

Step 9: Deploy the Beats

oc apply -f metricbeat-kubernetes.yaml
oc apply -f filebeat-kubernetes.yaml

Monitor OpenShift with Elastic Observability
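Before exploring the data in Kibana, a quick way to confirm that the Beats are running and shipping data is to check their pods and the indices they create. The label selector below comes from the stock manifests, and $ES_HOST/$PW are the values captured earlier in Part 1, so adjust if yours differ:

oc -n openshift-monitoring get pods -l k8s-app=metricbeat
oc -n openshift-monitoring get pods -l k8s-app=filebeat
# List the Beats indices to verify documents are being written
curl -k -u "elastic:$PW" "https://$ES_HOST/_cat/indices/metricbeat-*,filebeat-*?v"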