(5) Manual k8s Deployment: Deploying the etcd Cluster

> The original tutorial comes from [github/opsnull](https://github.com/opsnull/follow-me-install-kubernetes-cluster); this post builds on it and records the problems I ran into during my own setup.

This document covers deploying a three-node, highly available etcd cluster:

- download and distribute the etcd binaries;
- create x509 certificates for the etcd cluster nodes, used to encrypt traffic between clients (such as etcdctl) and the cluster, and between cluster members;
- create the etcd systemd unit files and configure the service parameters;
- check the cluster's working status.

#### Download and distribute the etcd binaries

Download the release package from the etcd releases page:

```
cd /opt/k8s/work
wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
tar -xvf etcd-v3.3.13-linux-amd64.tar.gz
```

The extracted etcd and etcdctl binaries must also be copied to /opt/k8s/bin on every node; the systemd unit below expects them there.

#### Create the etcd certificate and private key

Create the certificate signing request (the JSON body follows the upstream tutorial; hosts must list the IPs of all etcd cluster members, and the names fields must match the ones used when the CA was created):

```
cd /opt/k8s/work
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.1.31",
    "192.168.1.32",
    "192.168.1.33"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
```

Generate the certificate and private key:

```
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
    -ca-key=/opt/k8s/work/ca-key.pem \
    -config=/opt/k8s/work/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*pem
```

Distribute the generated certificate and private key to each etcd node:

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
    scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
  done
```

#### Create the etcd systemd unit template file

The template comes from the upstream tutorial. ##NODE_NAME## and ##NODE_IP## are placeholders that sed fills in per node below, while the ${...} variables are expanded from environment.sh when the heredoc is written:

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=${ETCD_DATA_DIR} \\
  --wal-dir=${ETCD_WAL_DIR} \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Render a unit file for each node by substituting the placeholders:

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service
  done
ls *.service
```

- NODE_NAMES and NODE_IPS are bash arrays of equal length, holding the node names and their corresponding IPs.

Distribute the generated systemd unit files:

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
  done
```

- The file is renamed to etcd.service on each target node.

#### Start the etcd service

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
  done
```

- The etcd data directory and working directory must be created before the service starts;
- On first startup an etcd process waits for the other nodes to join the cluster, so `systemctl start etcd` appears to hang for a while; this is normal.

#### Check the startup result

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status etcd|grep Active"
  done
```

Make sure the state is active (running); otherwise inspect the log to find the cause:

```
journalctl -u etcd
```

#### Verify the service status

After the etcd cluster has been deployed, run the following on any etcd node:

```
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
      --endpoints=https://${node_ip}:2379 \
      --cacert=/etc/kubernetes/cert/ca.pem \
      --cert=/etc/etcd/cert/etcd.pem \
      --key=/etc/etcd/cert/etcd-key.pem endpoint health
  done
```

Expected output:

```
>>> 192.168.1.31
https://192.168.1.31:2379 is healthy: successfully committed proposal: took = 2.448665ms
>>> 192.168.1.32
https://192.168.1.32:2379 is healthy: successfully committed proposal: took = 3.380867ms
>>> 192.168.1.33
https://192.168.1.33:2379 is healthy: successfully committed proposal: took = 3.034316ms
```

When every endpoint reports healthy, the cluster is working normally.

#### Check the current leader

```
source /opt/k8s/bin/environment.sh
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status
```

Output:

```
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://192.168.1.31:2379 |  68afffba56612fd |  3.3.13 |   20 kB |     false |         2 |          8 |
| https://192.168.1.32:2379 |  22a9d61e6821c4d |  3.3.13 |   20 kB |      true |         2 |          8 |
| https://192.168.1.33:2379 | ff1f72bab5edb59f |  3.3.13 |   20 kB |     false |         2 |          8 |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
```

As shown, the current leader is 192.168.1.32.
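#### Read/write smoke test

Beyond the health and leader checks, a quick write/read round trip confirms that the cluster can commit proposals end to end. This is a minimal sketch that is not part of the original tutorial; `etcdctl3` is a hypothetical helper wrapping the same endpoints and certificate paths used throughout this page:

```
source /opt/k8s/bin/environment.sh

# Hypothetical helper: wraps the TLS flags used elsewhere on this page.
etcdctl3() {
  ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=${ETCD_ENDPOINTS} \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem "$@"
}

etcdctl3 put /test/hello world   # prints OK when the proposal commits
etcdctl3 get /test/hello         # prints the key and its value
etcdctl3 del /test/hello         # prints 1, the number of keys deleted
```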
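If any of these checks fail with certificate errors, a common cause is an etcd certificate whose hosts list does not include every node IP. The SANs baked into the generated certificate can be inspected with openssl (assuming it is installed on the node):

```
# Print the Subject Alternative Names of the etcd server certificate.
openssl x509 -in /etc/etcd/cert/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"
```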