In this article, we’ll deploy the ECK Operator to a Kubernetes cluster using Helm and build a ready-to-use logging solution with Elasticsearch, Kibana, and Filebeat.
What is ECK?
Built on the Kubernetes Operator pattern, Elastic Cloud on Kubernetes (ECK) extends the basic Kubernetes orchestration capabilities to support the setup and management of Elasticsearch, Kibana, APM Server, Enterprise Search, Beats, Elastic Agent, and Elastic Maps Server on Kubernetes.
With Elastic Cloud on Kubernetes, we can streamline critical operations, such as:
Managing and monitoring multiple clusters
Scaling cluster capacity and storage
Performing safe configuration changes through rolling upgrades
Securing clusters with TLS certificates
Setting up hot-warm-cold architectures with availability zone awareness
Install ECK
In our case, we use helmfile to manage the Helm deployments (helmfile.yaml).
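A minimal helmfile.yaml for installing the operator might look like the following sketch (the chart version and the elastic-system namespace are assumptions; use the ECK version you need):

```yaml
repositories:
  # official Elastic Helm chart repository
  - name: elastic
    url: https://helm.elastic.co

releases:
  # the ECK operator itself; version 2.2.0 is an assumption
  - name: elastic-operator
    namespace: elastic-system
    chart: elastic/eck-operator
    version: 2.2.0
```

Running `helmfile apply` then installs the operator, which watches for the Elasticsearch and Kibana custom resources we create below.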
Of the components listed above, we’ll use only Elasticsearch, Kibana, and Beats (Filebeat), because we just want to deploy a classic EFK stack.
Let’s deploy the following, in order:
1. Elasticsearch cluster: three nodes, each with 100Gi of persistent storage, communicating over TLS with a self-signed certificate.
```yaml
# This sample sets up an Elasticsearch cluster with 3 nodes.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-logging
  namespace: monitoring
spec:
  version: 8.2.0
  nodeSets:
    - name: default
      config:
        # most Elasticsearch configuration parameters are possible to set, e.g.:
        # node.attr.attr_name: attr_value
        node.roles: ["master", "data", "ingest", "ml"]
        # this allows ES to run on nodes even if their vm.max_map_count
        # has not been increased, at a performance cost
        node.store.allow_mmap: true
      podTemplate:
        metadata:
          labels:
            # additional labels for pods
            purpose: logging
        spec:
          containers:
            - name: elasticsearch
              # specify resource limits and requests
              resources:
                limits:
                  memory: 4Gi
                  cpu: 2
              env:
                - name: ES_JAVA_OPTS
                  value: "-Xms2g -Xmx2g"
      count: 3
      # request 100Gi of persistent data storage for pods in this topology element
      volumeClaimTemplates:
        - metadata:
            # Do not change this name unless you set up a volume mount for the data path.
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
            storageClassName: gp2
  http:
    tls:
      selfSignedCertificate:
        # add a list of SANs into the self-signed HTTP certificate
        subjectAltNames:
          - dns: elasticsearch-logging-es-http.monitoring.svc.cluster.local
          - dns: elasticsearch-logging-es-http.monitoring.svc
          - dns: "*.monitoring.svc"
          - dns: "*.monitoring.svc.cluster.local"
```
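With the manifest saved (the filename elasticsearch.yaml is an assumption), apply it and watch the cluster come up:

```shell
# create the cluster; the filename is an assumption
kubectl apply -f elasticsearch.yaml

# HEALTH should eventually become green and PHASE become Ready
kubectl get elasticsearch -n monitoring

# watch the operator create the pods one by one
kubectl get pods -n monitoring \
  -l elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-logging
```

The `elasticsearch.k8s.elastic.co/cluster-name` label is set by the ECK operator on every pod it manages, which makes it easy to filter the cluster's pods.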
2. Kibana: very simple; the Kibana object just references the Elasticsearch cluster by name.
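A minimal Kibana manifest might look like the following sketch (the name kibana-logging and the single replica are assumptions; elasticsearchRef must match the Elasticsearch object name above):

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-logging  # name is an assumption
  namespace: monitoring
spec:
  version: 8.2.0
  count: 1
  # connect Kibana to the Elasticsearch cluster created above;
  # the operator wires up the URL, credentials, and certificates automatically
  elasticsearchRef:
    name: elasticsearch-logging
```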
3. Let’s log in to Kibana (http://localhost:5601) with the user elastic and the password we retrieved earlier, go to the Analytics → Discover section, and check the logs:
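For reference, the elastic user's password lives in a secret created by the operator, and Kibana can be reached locally with a port-forward. The service and secret names follow ECK's `<name>-es-elastic-user` and `<name>-kb-http` patterns; `kibana-logging` is the assumed Kibana object name:

```shell
# read the auto-generated password for the "elastic" user
kubectl get secret elasticsearch-logging-es-elastic-user -n monitoring \
  -o go-template='{{.data.elastic | base64decode}}'

# forward the Kibana HTTP service to localhost:5601
kubectl port-forward service/kibana-logging-kb-http 5601 -n monitoring
```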