Let's go over the upper-level objects other than Deployment. Let's get started.

1. RC and RS

RC: ReplicationController, the replication controller; it is basically no longer used.

RS: ReplicaSet, the replica set; this is what objects such as Deployment actually control.

If we use explain to compare the two, the biggest difference lies in the selector.

RS supports label expressions in its selector, which RC does not. The explain output is as follows:

kubectl explain rs.spec.selector
KIND:     ReplicaSet
VERSION:  apps/v1

RESOURCE: selector <Object>

DESCRIPTION:
     Selector is a label query over pods that should match the replica count.
     Label keys and values that must match in order to be controlled by this
     replica set. It must match the pod template's labels. More info:
     https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors

     A label selector is a label query over a set of resources. The result of
     matchLabels and matchExpressions are ANDed. An empty label selector matches
     all objects. A null label selector matches no objects.

FIELDS:
   matchExpressions     <[]Object>
     matchExpressions is a list of label selector requirements. The requirements
     are ANDed.

   matchLabels  <map[string]string>
     matchLabels is a map of {key,value} pairs. A single {key,value} in the
     matchLabels map is equivalent to an element of matchExpressions, whose key
     field is "key", the operator is "In", and the values array contains only
     "value". The requirements are ANDed.

For selectors:

besides the original matchLabels, which only needs a map, matchExpressions was added; this expression style requires an array of objects.

DESCRIPTION:
     matchExpressions is a list of label selector requirements. The requirements
     are ANDed.

     A label selector requirement is a selector that contains values, a key, and
     an operator that relates the key and values.

FIELDS:
   key  <string> -required-
     key is the label key that the selector applies to.

   operator     <string> -required-
     operator represents a key's relationship to a set of values. Valid
     operators are In, NotIn, Exists and DoesNotExist.

   values       <[]string>
     values is an array of string values. If the operator is In or NotIn, the
     values array must be non-empty. If the operator is Exists or DoesNotExist,
     the values array must be empty. This array is replaced during a strategic
     merge patch.

Each object consists of three parts: key, values, and operator.

An example:

matchExpressions:
  - key: pod-name
    operator: DoesNotExist

(Note that, as the explain output above states, Exists and DoesNotExist require the values array to be empty, so no values are given here.)

The operators were listed above:

In: values (e.g. [aaa, bbb]) must be non-empty; the label identified by key must have a value inside this set.

NotIn: values (e.g. [aaa, bbb]) must be non-empty; the label identified by key must have a value outside this set.

Exists: the Pod only needs to carry the label identified by key, whatever its value.

DoesNotExist: the Pod must not carry the label identified by key, whatever its value.
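Putting this together, a minimal ReplicaSet sketch that selects Pods with matchExpressions; the names and labels here are illustrative, not taken from the output above:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-expr-test        # illustrative name
spec:
  replicas: 2
  selector:
    matchExpressions:
    - key: app              # match Pods whose "app" label value is in the set below
      operator: In
      values: [aaa, bbb]
  template:
    metadata:
      labels:
        app: aaa            # must satisfy the selector above
    spec:
      containers:
      - name: nginx
        image: nginx
```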


2. DaemonSet

A DaemonSet is a set of daemon Pods, similar to a daemon thread in Java. Concretely, in a K8S cluster every node runs one copy of a program that serves the cluster; accordingly, the corresponding yaml has no replicas field.

Common scenarios:

1. Storage management: e.g. ceph, glusterd

2. Log collection: e.g. fluentd, logstash

3. Cluster monitoring: e.g. Prometheus Node Exporter, Sysdig Agent, collectd, Dynatrace OneAgent, AppDynamics Agent, Datadog agent, New Relic agent, Ganglia gmond, Instana Agent, etc.

A sample manifest:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: test-deamon
spec:
  selector:
    matchLabels:
      name: deamon
  template:
    metadata:
      labels:
        name: deamon
    spec:
      containers:
      - name: test-deamon-pod
        image: nginx

3. StatefulSet, the key object for stateful applications

It is used to expose a stable, unique network identity to the outside.

Combined with the Service object, this is achieved by creating corresponding DNS records.

It can also bind fixed storage to each replica.

Deployment and updates of the replicas can likewise be performed in order.

As before, we use explain to look at how to write it:

kubectl explain sts.spec

FIELDS:
   podManagementPolicy  <string>
     podManagementPolicy controls how pods are created during initial scale up,
     when replacing pods on nodes, or when scaling down. The default policy is
     `OrderedReady`, where pods are created in increasing order (pod-0, then
     pod-1, etc) and the controller will wait until each pod is ready before
     continuing. When scaling down, the pods are removed in the opposite order.
     The alternative policy is `Parallel` which will create pods in parallel to
     match the desired scale without waiting, and on scale down will delete all
     pods at once.

   replicas     <integer>
     replicas is the desired number of replicas of the given Template. These are
     replicas in the sense that they are instantiations of the same Template,
     but individual replicas also have a consistent identity. If unspecified,
     defaults to 1.

   revisionHistoryLimit <integer>
     revisionHistoryLimit is the maximum number of revisions that will be
     maintained in the StatefulSet's revision history. The revision history
     consists of all revisions not represented by a currently applied
     StatefulSetSpec version. The default value is 10.

   selector     <Object> -required-
     selector is a label query over pods that should match the replica count. It
     must match the pod template's labels. More info:
     https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors

   serviceName  <string> -required-
     serviceName is the name of the service that governs this StatefulSet. This
     service must exist before the StatefulSet, and is responsible for the
     network identity of the set. Pods get DNS/hostnames that follow the
     pattern: pod-specific-string.serviceName.default.svc.cluster.local where
     "pod-specific-string" is managed by the StatefulSet controller.

   template     <Object> -required-
     template is the object that describes the pod that will be created if
     insufficient replicas are detected. Each pod stamped out by the StatefulSet
     will fulfill this Template, but have a unique identity from the rest of the
     StatefulSet.

   updateStrategy       <Object>
     updateStrategy indicates the StatefulSetUpdateStrategy that will be
     employed to update Pods in the StatefulSet when a revision is made to
     Template.

   volumeClaimTemplates <[]Object>
     volumeClaimTemplates is a list of claims that pods are allowed to
     reference. The StatefulSet controller is responsible for mapping network
     identities to claims in a way that maintains the identity of a pod. Every
     claim in this list must have at least one matching (by name) volumeMount in
     one container in the template. A claim in this list takes precedence over
     any volumes in the template, with the same name.

Among these fields, let's first write out the required ones:

apiVersion: apps/v1
kind: StatefulSet  ### stateful replica set
metadata:
  name: sts-test
  namespace: default
spec:
  selector:
    matchLabels:
      app: sts-nginx
  serviceName: "nginx"
  replicas: 3  # three replicas
  template:  ## Pod template
    metadata:
      labels:
        app: sts-nginx
    spec:
      containers:
      - name: nginx
        image: nginx

This gives us a basic StatefulSet with nginx as the image.

Then, for the network stability described above, a headless Service is needed to go with it.

How do we make a Service headless?

Simply set clusterIP to None in the Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx  ## matches the serviceName above
  namespace: default
spec:
  selector:
    app: sts-nginx
  clusterIP: None  ## do not allocate a ClusterIP; headless service, Pods across the cluster can reach it directly
  # not reachable from a browser for now; if browser access is needed, consider Ingress
  # NodePort would let any machine access it, as long as it can reach any node of the cluster
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80

Note that the Service's metadata.name must be kept identical to the serviceName in the StatefulSet above.

The names can then be used as follows:

serviceName: access the Service directly

podName.serviceName: access a specific Pod

service.namespace.svc.cluster.local: access the Service

pod.service.namespace.svc.cluster.local: access a specific Pod

An example: sts-test-0.nginx.default.svc.cluster.local

After this, we can look at the names of the Pods managed by the sts:

they are sts-test-0, sts-test-1, sts-test-2.

Even after a restart, the Pod names are guaranteed to stay the same, which is what guarantees network stability.
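The stable DNS names can be checked from inside the cluster; a minimal sketch, assuming the sts-test StatefulSet and the nginx headless Service above have been applied:

```shell
# launch a throwaway Pod and resolve the per-Pod DNS record
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup sts-test-0.nginx.default.svc.cluster.local
```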

Next, two fields that differ from Deployment:

1. podManagementPolicy, the Pod management policy

It sits under sts.spec.

The official site has a related tutorial:

https://kubernetes.io/zh/docs/tutorials/stateful-application/basic-stateful-set/

The default setting is OrderedReady:

on startup, Pods are launched one at a time, in order, counting up from 0, 1, 2, 3, 4.

The other setting is Parallel: Pods are started concurrently with no ordering; in practice it is rarely needed.

2. The updateStrategy field, the update strategy

It takes an object

with two fields: rollingUpdate and type.

type can be set to two values:

OnDelete / RollingUpdate

OnDelete is an uncommon mode: the K8S cluster will not delete Pods on its own; the user must delete a Pod manually before its upgrade is triggered.

RollingUpdate is the default rolling-update strategy.

Inside rollingUpdate you can set partition, to upgrade by partition:

if it is set to 2, Pods in the sts with ordinal >= 2 will be upgraded;

if it is set to 1, those with ordinal >= 1 will be upgraded.

Of course, the upgrade also proceeds in order.
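A sketch of how the two fields above look on the sts-test example; the values here are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-test
spec:
  podManagementPolicy: OrderedReady  # the default; Parallel starts/deletes Pods concurrently
  updateStrategy:
    type: RollingUpdate              # or OnDelete
    rollingUpdate:
      partition: 2                   # only Pods with ordinal >= 2 are updated
  # selector / serviceName / replicas / template as in the manifest above
```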

4. Job

In K8S a one-off task is described by a Job; K8S creates the specified number of Pods to ensure the task finishes.

The basic manifest:

apiVersion: batch/v1
kind: Job
metadata:
  name: job-test
spec:
  template:
    spec:
      containers:
      - name: pi
        image: busybox  ## for Job Pods, do not use a blocking image such as nginx; blocking is what Deployment is for
        command: ["/bin/sh", "-c", "ping -c 10 baidu.com"]
      restartPolicy: Never  # under Job, Always is not supported

Note that for the chosen image you should avoid blocking images, which may prevent the Job from ever finishing.

Also, restartPolicy must be Never or OnFailure; Always is not allowed.

And in a Job, you do not need to declare a pod selector.

There are a few more fields under spec worth noting:

completions: how many Pods must finish successfully for the Job to count as succeeded

parallelism: the maximum number of Pods running at the same time

backoffLimit: after how many Pod failures the Job is considered failed

activeDeadlineSeconds: the lifetime of the whole Job; no matter how many Pods there are, if the completions requirement is not met in time, the Job is terminated immediately

ttlSecondsAfterFinished: how long after finishing the Job deletes itself; without it, the Job is not deleted automatically

apiVersion: batch/v1
kind: Job
metadata:
  name: job-test-04
spec:
  completions: 5  ## the Job succeeds once 5 Pods have finished successfully
  parallelism: 3
  template:
    spec:
      containers:
      - name: pi
        image: busybox  ## for Job Pods, do not use a blocking image such as nginx; blocking is what Deployment is for
        command: ["/bin/sh", "-c", "ping -c 10 baidu.com"]
      restartPolicy: Never  # under Job, Always is not supported
  # backoffLimit: 4  # after 4 failures, consider the Job failed
  activeDeadlineSeconds: 600  ## lifetime of the whole Job; killed automatically once exceeded
  ttlSecondsAfterFinished: 10  ### delete itself after finishing

5. CronJob

It simply creates a Job on a fixed schedule,

and the Job then starts Pods to run the command.

For the cron syntax, which goes in the schedule field, see:

https://en.wikipedia.org/wiki/Cron

It also has some fields worth noting:

concurrencyPolicy, the concurrency policy:

Allow: a run is allowed to overlap with the next Job

Forbid: forbidden; if the previous run has not finished, this run is skipped

Replace: the new task starts and the old one is terminated

failedJobsHistoryLimit: upper limit on the number of failed Jobs kept

successfulJobsHistoryLimit: upper limit on the number of successful Jobs kept

startingDeadlineSeconds: the maximum start delay; within this window K8S will still start the run, beyond it the run is given up
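As a sketch, these fields sit at the top level of the CronJob spec; the values here are illustrative:

```yaml
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid      # skip this run if the previous one is still going
  startingDeadlineSeconds: 200   # give up starting a missed run after 200 seconds
  successfulJobsHistoryLimit: 3  # keep at most 3 successful Jobs
  failedJobsHistoryLimit: 1      # keep at most 1 failed Job
  # jobTemplate: the Job spec goes here
```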

Finally, a usage recommendation for CronJob:

if a Job's run time is fairly long,

it is best to set startingDeadlineSeconds to a fairly large value, and set concurrencyPolicy to Allow.

The yaml:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"  # minute, hour, day of month, month, day of week
  jobTemplate:
    # kind: Job
    spec:  ## from here on it is written like a Job
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: Never

Finally, a brief word on garbage collection of K8S objects.

Some objects, such as RS and Pod, are usually created and managed by a more upper-level object, such as Deployment or StatefulSet.

If the upper-level object is deleted, the managed objects are normally deleted along with it.

If you do not want them deleted together,

you can control this with the --cascade flag:

kubectl delete deploy xxx --cascade=orphan

which keeps the lower-level objects from being cleaned up.
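As a sketch, kubectl (v1.20+) accepts three cascade modes; the Deployment name xxx follows the example above:

```shell
kubectl delete deploy xxx --cascade=background  # default: return immediately, children are garbage-collected in the background
kubectl delete deploy xxx --cascade=foreground  # block until the managed RS/Pods are deleted first
kubectl delete deploy xxx --cascade=orphan      # delete only the Deployment, leaving the RS/Pods behind
```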
