Deploying a Stateful Nacos Cluster on Kubernetes (StatefulSet) with Dynamic NFS Persistence

Deng YongJie's blog 302 2022-09-18

Deploying the stateful Nacos cluster with StatefulSet

Create everything from the YAML files below. In this mode, when a container starts it runs the peer-finder plugin, which dynamically discovers the current pod's name; cluster members then reach each other at <pod-name>.<headless-service-name>.<namespace>.svc.cluster.local.
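The naming scheme above can be sketched as a small shell loop that prints the peer list peer-finder effectively produces; the names nacos, nacos-headless, and pub-service are taken from the manifests later in this post:

```shell
# Sketch: print the per-pod cluster addresses for a 3-replica StatefulSet,
# following the pattern <pod-name>.<headless-service>.<namespace>.svc.cluster.local
SERVICE_NAME=nacos-headless
NAMESPACE=pub-service
REPLICAS=3

peers() {
  i=0
  while [ "$i" -lt "$REPLICAS" ]; do
    echo "nacos-${i}.${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local:8848"
    i=$((i + 1))
  done
}

peers
# nacos-0.nacos-headless.pub-service.svc.cluster.local:8848
# nacos-1.nacos-headless.pub-service.svc.cluster.local:8848
# nacos-2.nacos-headless.pub-service.svc.cluster.local:8848
```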

First, deploy the NFS (or NAS) server.

Then deploy the provisioner plugin that dynamically creates NFS-backed PVs:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: pub-service
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-provisioner
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.187.10
            - name: NFS_PATH
              value: /mnt/xxxx/nacos
      volumes:
        - name: nfs-client-provisioner
          nfs:
            server: 192.168.187.10
            path: /mnt/xxxx/nacos   # must match NFS_PATH above
---

Then update the NFS server address and export path in the YAML above to match your environment.

The dynamic nfs-client provisioner will automatically bind PVs to PVCs.

Note: if the cluster's kube-apiserver has the selfLink field disabled (it was removed by default starting in Kubernetes 1.20), this provisioner cannot bind PVCs to PVs automatically; either re-enable selfLink via an apiserver feature gate or create the PVs manually.
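On kubeadm-based clusters the selfLink workaround is typically applied by editing the kube-apiserver static pod manifest; the path below is the kubeadm default (adjust for your setup), and the flag only exists up to Kubernetes 1.23:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm default path)
spec:
  containers:
    - command:
        - kube-apiserver
        - --feature-gates=RemoveSelfLink=false   # re-enables selfLink; removed in v1.24+
        # ...keep all existing flags unchanged...
```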

The PVs must use 3 different mount directories; otherwise all 3 pods mount the same directory and write to it concurrently, which produces errors (e.g. conflicting class/data files).

For example:

The PV for nacos-0 mounts the NFS directory /mnt/nacos-0

The PV for nacos-1 mounts the NFS directory /mnt/nacos-1

The PV for nacos-2 mounts the NFS directory /mnt/nacos-2

If HPA autoscaling is configured, continue the pattern: create a matching directory on the NFS server for each additional Nacos replica to mount.
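When PVs have to be created manually (see the selfLink note above), a minimal sketch for nacos-0 might look like the following; repeat it for nacos-1 and nacos-2 with their own directories. The server address, StorageClass name, and 20Gi capacity mirror the values used elsewhere in this post:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nacos-pv-0
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: managed-nfs-storage
  nfs:
    server: 192.168.187.10
    path: /mnt/nacos-0      # one directory per pod, as described above
```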

The volume name mounted by the peer-finder plugin image must match the name defined for the dynamic NFS volumeClaimTemplate; otherwise the mount fails, the init-plugin initContainer fails with a "not found data" error (data directory not found), and 0 pods, not even a single container, will come up.

Every pod runs a peer-finder init plugin that discovers the pod's own name to form the cluster, so the NFS mount directories must not collide: the 3 PVs must not point at the same NFS directory.

Then create the manifests in order: rbac-1.yml, storageclass-2.yaml, configmap-3.yml, statefulset-4.yaml, service-5.yaml.

If Nacos registers peers by pod IP address instead of the ordered pod names, inter-node communication has failed: the peer-finder plugin did not run successfully.

If the nfs-client-provisioner pod stays in Back-off for a long time

and the first creation never succeeded, manually clean up the RBAC objects and grant them to nfs-client-provisioner again:

kubectl delete -f rbac-1.yml -n pub-service
kubectl apply -f rbac-1.yml -n pub-service

Deploy the Nacos cluster, referencing the NFS StorageClass:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
  namespace: pub-service
  labels:
    app: nacos
spec:
  serviceName: nacos-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: nfs-client-provisioner
      initContainers:
        - name: peer-finder-plugin-install
          image: nacos/nacos-peer-finder-plugin:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /home/nacos/plugins/peer-finder
              name: data   # must match the volumeClaimTemplate name below
              subPath: peer-finder
      containers:
        - name: nacos
          imagePullPolicy: IfNotPresent
          #image: nacos/nacos-server:v1.4.3
          image: xxxxx/ops-test/nacos-server:v2.0.4
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client-port
            - containerPort: 9848
              name: client-rpc
            - containerPort: 9849
              name: raft-rpc
            - containerPort: 7848
              name: old-raft-rpc
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: DOMAIN_NAME
              value: "cluster.local"
            - name: MODE
              value: "cluster"
            - name: SPRING_DATASOURCE_PLATFORM
              value: "mysql"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.name
            - name: MYSQL_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.port
            - name: MYSQL_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.user
            - name: MYSQL_SERVICE_HOST
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.host
            - name: MYSQL_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.password
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: NACOS_APPLICATION_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
            - name: MYSQL_SERVICE_DB_PARAM
              #value: "characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useSSL=false&allowPublicKeyRetrieval=true"
              value: "characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC"  # with the 2.x JDBC driver, serverTimezone=UTC must be appended or timezone errors occur
            - name: MYSQL_DATABASE_NUM
              value: "1"
          volumeMounts:
            - name: data
              mountPath: /home/nacos/plugins/peer-finder
              subPath: peer-finder
            - name: data
              mountPath: /home/nacos/data
              subPath: data
            - name: data
              mountPath: /home/nacos/logs
              subPath: logs
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"  # dynamic nfs-client StorageClass
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 20Gi     # a 20Gi PVC per pod, automatically bound to a PV
  selector:
    matchLabels:
      app: nacos
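service-5.yaml is referenced above but not shown; a minimal sketch of the headless Service it would contain, with the shape assumed and the ports matching the container ports in the StatefulSet:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  namespace: pub-service
  labels:
    app: nacos
spec:
  clusterIP: None          # headless: gives each pod a stable DNS name
  selector:
    app: nacos
  ports:
    - name: client-port
      port: 8848
      targetPort: 8848
    - name: client-rpc
      port: 9848
      targetPort: 9848
    - name: raft-rpc
      port: 9849
      targetPort: 9849
    - name: old-raft-rpc
      port: 7848
      targetPort: 7848
```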