We are going to use Kubernetes to put together an EFK stack,
that is, deploy Elasticsearch, Kibana and Filebeat (the log shipper) on the cluster.
First we check the official Elastic documentation to see whether a Kubernetes-native installation is provided:
https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html
It describes how to run Elasticsearch on Kubernetes; the Quickstart section covers everything we need for the deployment.
On Kubernetes, Elastic ships this as an operator (ECK), which we install with:
kubectl apply -f https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml
This manifest creates a namespace and deploys the operator into it.
With the operator in place we can deploy the remaining ECK components; the basic guides for each of them live in the same documentation.
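Before moving on, it is worth confirming that the operator actually came up. A quick check, assuming the all-in-one manifest created the default elastic-system namespace:

# the operator runs as a StatefulSet in the elastic-system namespace
kubectl get pods -n elastic-system
# follow its logs if anything looks wrong
kubectl logs -n elastic-system statefulset/elastic-operator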
First up is an Elasticsearch cluster; we will deploy one with three master nodes and four data nodes.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-cluster   # a namespace can be specified here as well
spec:
  version: 7.13.1
  nodeSets:
  - name: masters
    count: 3
    config:
      node.roles: ["master"]
      xpack.ml.enabled: true
    volumeClaimTemplates:
    - metadata:
        name: es-master
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: "rook-ceph-block"
  - name: data
    count: 4
    config:
      node.roles: ["data", "ingest", "ml", "transform"]
    volumeClaimTemplates:
    - metadata:
        name: es-node
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: "rook-ceph-block"
We declare the node count for each nodeSet together with its volumeClaimTemplates, so after a kubectl apply -f the operator brings up a three-master, four-data-node Elasticsearch cluster.
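A minimal way to apply the manifest and watch the cluster come up, assuming the spec above is saved as elasticsearch.yaml (the file name is arbitrary):

kubectl apply -f elasticsearch.yaml
# HEALTH turns green and PHASE becomes Ready once all seven pods have joined
kubectl get elasticsearch es-cluster
# the operator labels the pods with the cluster name
kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=es-cluster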
The operator also creates the Services needed to reach the cluster, so we can test access right away.
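The client-facing Service follows the <cluster-name>-es-http naming pattern, so for this cluster it should be es-cluster-es-http:

# list the Services the operator created for the cluster
kubectl get svc | grep es-cluster
# expect es-cluster-es-http on port 9200, which is what the Ingress below points at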
Elasticsearch generates a password for the elastic user automatically, so we need to fetch it first:
kubectl get secret es-cluster-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
Then we can make a basic request (from inside the cluster, using the Service name):
curl -u "elastic:xxxxxxx" -k https://es-cluster-es-http:9200
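For a quick test from a workstation without any Ingress, one option (a sketch, assuming kubectl access to the cluster) is to capture the password in a shell variable and port-forward the HTTP Service:

PASSWORD=$(kubectl get secret es-cluster-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
kubectl port-forward service/es-cluster-es-http 9200 &
# -k skips TLS verification because the operator issues its own self-signed certificates by default
curl -u "elastic:$PASSWORD" -k https://localhost:9200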
For access from outside the cluster, we deploy an Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elastic-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_verify off;
spec:
  tls:
  - hosts:
    - elastic.base.com
    secretName: base.com
  rules:
  - host: elastic.base.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: es-cluster-es-http
            port:
              number: 9200
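Assuming elastic.base.com resolves to the ingress controller (a DNS assumption for this example), a quick check through the Ingress looks like this:

# -k because the example TLS secret is not issued by a public CA
curl -u "elastic:xxxxxxx" -k https://elastic.base.com/
# a basic health check through the same path
curl -u "elastic:xxxxxxx" -k https://elastic.base.com/_cluster/health?pretty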
With that, the Elasticsearch cluster is reachable from outside; what is still missing is Kibana.
Deploying it is even simpler, all it takes is:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 7.13.1
  count: 1
  elasticsearchRef:
    name: es-cluster
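The operator reports Kibana's health the same way it does for Elasticsearch, and thanks to the elasticsearchRef the login is the same elastic user whose password we read from the secret earlier:

# HEALTH should turn green once Kibana has connected to es-cluster
kubectl get kibana kibana
# the operator exposes Kibana through a <name>-kb-http Service on port 5601
kubectl get svc kibana-kb-http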
Then we deploy another Ingress to reach it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_verify off;
spec:
  tls:
  - hosts:
    - kibana.base.com
    secretName: base.com
  rules:
  - host: kibana.base.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana-kb-http
            port:
              number: 5601
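As with the Elasticsearch Ingress, a quick check once kibana.base.com resolves to the ingress controller (again a DNS assumption):

# Kibana's status endpoint; expect HTTP 200 once everything is green
curl -u "elastic:xxxxxxx" -k -I https://kibana.base.com/api/status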
Finally, we deploy a Beat inside the cluster to collect the cluster's container logs:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: beats
spec:
  type: filebeat
  version: 7.13.1
  elasticsearchRef:
    name: es-cluster
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
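Once applied, there should be one Filebeat pod per schedulable node; a quick way to confirm (beats-beat-filebeat follows ECK's <name>-beat-<type> naming convention):

# health of the Beat resource itself
kubectl get beat beats
# the DaemonSet the operator generated from the podTemplate above
kubectl get daemonset beats-beat-filebeat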
Filebeat runs as a DaemonSet on the host network, and mounts the host's container log directories (/var/log/containers, /var/log/pods and /var/lib/docker/containers) so it can pick up the logs of every Docker container running on that node.
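To confirm that logs are actually flowing, one option is to ask Elasticsearch for the Filebeat indices through the authenticated endpoint used earlier (filebeat-* is Filebeat's default index naming):

# document counts should grow as container logs get shipped
curl -u "elastic:xxxxxxx" -k "https://elastic.base.com/_cat/indices/filebeat-*?v"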