(9) Manual Kubernetes Deployment: Deploying a Highly Available kube-controller-manager Cluster

> The original tutorial comes from [github/opsnull](https://github.com/opsnull/follow-me-install-kubernetes-cluster); this post builds on it and records the problems I ran into while following it.

The cluster contains 3 nodes. After startup, a competitive election produces one leader node while the other nodes block. When the leader becomes unavailable, the blocked nodes run a new election and produce a new leader, which keeps the service available.

To secure communication, this document first generates an x509 certificate and private key. kube-controller-manager uses the certificate in two situations:

- when communicating with kube-apiserver's secure port;
- when serving Prometheus-format metrics on its own secure port (https, 10252).

#### Create the kube-controller-manager certificate and private key

Create the certificate signing request. The hosts list contains all kube-controller-manager node IPs; CN and O are both system:kube-controller-manager, so the built-in ClusterRoleBinding system:kube-controller-manager grants the process the permissions it needs:

```
cd /opt/k8s/work
cat > kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.1.31",
      "192.168.1.32",
      "192.168.1.33"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "4Paradigm"
      }
    ]
}
EOF
```

Generate the certificate and private key:

```
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*.pem
```

Distribute the generated certificate and private key to all master nodes:

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-controller-manager*.pem root@${node_ip}:/etc/kubernetes/cert/
  done
```

#### Create and distribute the kubeconfig file

kube-controller-manager uses a kubeconfig file to access the apiserver; the file provides the apiserver address, the embedded CA certificate, and the kube-controller-manager client certificate:

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
```

Distribute the kubeconfig to all master nodes:

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-controller-manager.kubeconfig root@${node_ip}:/etc/kubernetes/
  done
```

#### Create the kube-controller-manager systemd unit template file

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --profiling \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials=true \\
  --concurrent-service-syncs=2 \\
  --bind-address=##NODE_IP## \\
  --secure-port=10252 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
  --port=0 \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=8760h \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --pod-eviction-timeout=6m \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

- --port=0: disables the insecure http port, so only https (10252) is served;
- --secure-port=10252 and --bind-address=##NODE_IP##: serve https on the node IP;
- --kubeconfig: the kubeconfig file used to connect to the apiserver;
- --authentication-kubeconfig and --authorization-kubeconfig: delegate authentication and authorization of requests to the secure port to the apiserver;
- --cluster-signing-cert-file and --cluster-signing-key-file: the CA used to sign TLS bootstrap certificates; --experimental-cluster-signing-duration sets how long those certificates stay valid;
- --root-ca-file: the CA certificate embedded in ServiceAccount tokens; --service-account-private-key-file: the private key used to sign those tokens;
- --leader-elect: enables leader election among the instances;
- --use-service-account-credentials=true: each controller runs with its own ServiceAccount credentials (see the permissions section below).

Generate a unit file for each node from the template:

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-controller-manager.service.template > kube-controller-manager-${NODE_IPS[i]}.service
  done
ls kube-controller-manager*.service
```

- NODE_NAMES and NODE_IPS are bash arrays of the same length, holding the node names and their corresponding IPs.

Distribute the unit files to all master nodes:

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-controller-manager-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-controller-manager.service
  done
```

- The file is renamed to kube-controller-manager.service as it is copied.

#### Start the kube-controller-manager service

```
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
  done
```

- The working directory must be created before the service is started.

#### Check the service status

```
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-controller-manager|grep Active"
  done
```

Make sure the state is active (running); otherwise inspect the logs to find the cause:

```
journalctl -u kube-controller-manager
```

#### kube-controller-manager listens on port 10252 and serves https requests

```
# netstat -lnpt | grep kube-cont
tcp        0      0 192.168.1.31:10252      0.0.0.0:*               LISTEN      12242/kube-controll
```

#### View the exported metrics

Note: run the following command on a kube-controller-manager node.

```
# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.1.31:10252/metrics | head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
```
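Besides metrics, the same secure port also answers a /healthz probe, which makes it easy to check every instance in one pass. A minimal sketch, not part of the original tutorial, that reuses the admin client certificate from the curl example above and assumes the NODE_IPS array from environment.sh:

```
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    # a healthy instance answers with the plain string "ok"
    curl -s --cacert /opt/k8s/work/ca.pem \
      --cert /opt/k8s/work/admin.pem \
      --key /opt/k8s/work/admin-key.pem \
      https://${node_ip}:10252/healthz
    echo ""
  done
```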
#### kube-controller-manager permissions

The ClusterRole system:kube-controller-manager has very limited permissions: it can only create secrets, serviceaccounts, and a few other resource objects. The permissions of the individual controllers are split out into the ClusterRoles system:controller:XXX:

```
# kubectl describe clusterrole system:kube-controller-manager
Name:         system:kube-controller-manager
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                   Non-Resource URLs  Resource Names  Verbs
  ---------                                   -----------------  --------------  -----
  secrets                                     []                 []              [create delete get update]
  endpoints                                   []                 []              [create get update]
  serviceaccounts                             []                 []              [create get update]
  events                                      []                 []              [create patch update]
  tokenreviews.authentication.k8s.io          []                 []              [create]
  subjectaccessreviews.authorization.k8s.io   []                 []              [create]
  configmaps                                  []                 []              [get]
  namespaces                                  []                 []              [get]
  *.*                                         []                 []              [list watch]
```

The --use-service-account-credentials=true flag must be present in kube-controller-manager's startup parameters. With it, the main controller creates a ServiceAccount XXX-controller for each controller, and the built-in ClusterRoleBinding system:controller:XXX grants that ServiceAccount the corresponding ClusterRole system:controller:XXX:

```
# kubectl get clusterrole|grep controller
system:controller:attachdetach-controller                              2d6h
system:controller:certificate-controller                               2d6h
system:controller:clusterrole-aggregation-controller                   2d6h
system:controller:cronjob-controller                                   2d6h
system:controller:daemon-set-controller                                2d6h
system:controller:deployment-controller                                2d6h
system:controller:disruption-controller                                2d6h
system:controller:endpoint-controller                                  2d6h
system:controller:expand-controller                                    2d6h
system:controller:generic-garbage-collector                            2d6h
system:controller:horizontal-pod-autoscaler                            2d6h
system:controller:job-controller                                       2d6h
system:controller:namespace-controller                                 2d6h
system:controller:node-controller                                      2d6h
system:controller:persistent-volume-binder                             2d6h
system:controller:pod-garbage-collector                                2d6h
system:controller:pv-protection-controller                             2d6h
system:controller:pvc-protection-controller                            2d6h
system:controller:replicaset-controller                                2d6h
system:controller:replication-controller                               2d6h
system:controller:resourcequota-controller                             2d6h
system:controller:route-controller                                     2d6h
system:controller:service-account-controller                           2d6h
system:controller:service-controller                                   2d6h
system:controller:statefulset-controller                               2d6h
system:controller:ttl-controller                                       2d6h
system:kube-controller-manager                                         2d6h
```

#### View the current leader

```
# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"slave-32_49c0e734-b401-11e9-9fe5-000c29eca111","leaseDurationSeconds":15,"acquireTime":"2019-08-01T02:09:14Z","renewTime":"2019-08-01T02:13:32Z","leaderTransitions":11}'
  creationTimestamp: "2019-07-31T09:21:27Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "46796"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 92db14af-b374-11e9-8197-000c293d1de7
```

As shown, the current leader is the slave-32 node.
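Rather than reading the whole object, the leader record can be pulled straight out of the annotation. A small sketch using kubectl's jsonpath (dots inside the annotation key must be escaped); the second step assumes jq is installed:

```
# print the raw leader-election record stored in the annotation
kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'

# with jq available, extract just the holder identity
kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}' \
  | jq -r .holderIdentity
```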
#### Test the high availability of the kube-controller-manager cluster

Stop the kube-controller-manager service on one or two of the nodes, then watch the logs on the other nodes to see whether one of them acquires leadership.
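A concrete sketch of that test, assuming slave-32 at 192.168.1.32 is the current leader as in the output above; the grep pattern is only a heuristic for klog's leaderelection messages:

```
# stop kube-controller-manager on the current leader
ssh root@192.168.1.32 "systemctl stop kube-controller-manager"

# on another master, look for a log line showing it acquired the lock
ssh root@192.168.1.31 "journalctl -u kube-controller-manager | grep -i leaderelection | tail -5"

# confirm that holderIdentity in the leader annotation has changed
kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'

# bring the stopped instance back
ssh root@192.168.1.32 "systemctl start kube-controller-manager"
```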