Deploying Kubernetes with Kubeadm (Single Master)
Prerequisites
Environment:

Role | IP |
---|---|
master | 172.16.1.129 |
node01 | 172.16.1.133 |
Set the hostname on each node, making sure it is not localhost:

```bash
hostnamectl set-hostname master
```
Add hostname resolution for all nodes by editing /etc/hosts:

```
...
172.16.1.129 master master.agou-ops.com
172.16.1.133 node01 node01.agou-ops.com
```
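These entries must be present on every node. If root SSH access between the machines is already set up, a minimal sketch for pushing one shared file everywhere (the IPs come from the environment table above):

```bash
# Sketch: push the shared /etc/hosts to each node (assumes root SSH access
# and that overwriting the remote file wholesale is acceptable)
for host in 172.16.1.129 172.16.1.133; do
  scp /etc/hosts root@${host}:/etc/hosts
done
```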
Make sure the IP addresses used by the kubelet on every node are mutually reachable (directly, without NAT), and that no firewall or security-group rules isolate the nodes.
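For the two-node environment above, a quick check is to ping each node from the other:

```bash
# Run on each node; every other node's IP should answer with no NAT in between
ping -c 3 172.16.1.129
ping -c 3 172.16.1.133
```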
Disable the firewall and SELinux:

```bash
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# getenforce shows the current SELinux status
systemctl disable firewalld && systemctl stop firewalld
```
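To verify the change immediately (a small sketch; `setenforce 0` turns SELinux off for the running system only, while the config file edit takes effect on the next boot):

```bash
setenforce 0 || true              # disable SELinux for the current boot
getenforce                        # should print Permissive or Disabled
systemctl is-active firewalld     # should print inactive
```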
Install docker and kubelet:
```bash
# --------- Run on both the master and node ---------
# Use an Alibaba Cloud registry mirror to speed up docker pulls. You can request
# a dedicated mirror in the console; the public mirror is used here.
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
```

Create an installation script (preinstall.sh) with the following content:

```bash
#!/bin/bash
# Run on both the master and worker nodes

# Install docker
# Reference docs:
# https://docs.docker.com/install/linux/docker-ce/centos/
# https://docs.docker.com/install/linux/linux-postinstall/

# Remove old versions
yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

# Set up the yum repository
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install and start docker
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8 containerd.io
systemctl enable docker
systemctl start docker

# Install nfs-utils
# nfs-utils must be installed before NFS network volumes can be mounted
yum install -y nfs-utils
yum install -y wget

# Turn off swap (IMPORTANT)
swapoff -a && echo "vm.swappiness=0" >> /etc/sysctl.conf && sysctl -p && free -h

# Adjust /etc/sysctl.conf
# If a setting is already present, modify it in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# If a setting is missing, append it
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
# Apply the settings
sysctl -p

# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm and kubectl
# ${1} is replaced by the Kubernetes version number, e.g. 1.17.2
yum install -y kubelet-${1} kubeadm-${1} kubectl-${1}

# Change the docker cgroup driver to systemd
# i.e. change this line in /usr/lib/systemd/system/docker.service
#     ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# to
#     ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# Without this change you may hit the following error when adding a worker node:
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

# Set a docker registry mirror to speed up and stabilize image pulls
# If you can reach https://hub.docker.io quickly and reliably, skip this step
curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}

# Restart docker and start kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

docker version
```
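After the script finishes, it is worth confirming that docker actually picked up the systemd cgroup driver; otherwise kubeadm will later print the IsDockerSystemdCheck warning quoted in the script:

```bash
# Should report "Cgroup Driver: systemd" after the restart
docker info 2>/dev/null | grep -i "cgroup driver"
```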
:warning: WARNING

If you run `systemctl status kubelet` at this point, it will report that kubelet failed to start. Ignore this error: kubelet cannot start successfully until the `kubeadm init` step below has been completed.
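If you are curious what the failure looks like, the kubelet unit logs are in the journal; this is purely informational at this stage:

```bash
# Show the most recent kubelet log lines; expect errors until kubeadm init
# has generated /var/lib/kubelet/config.yaml
journalctl -u kubelet --no-pager | tail -n 20
```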
Make the script executable and run it:

```bash
chmod +x preinstall.sh
./preinstall.sh 1.18.2
```
Initialize the master node
About the environment variables used during initialization (see the sketch after this list):

- APISERVER_NAME must not be the master's hostname.
- APISERVER_NAME may contain only lowercase letters, digits, and dots; hyphens are not allowed.
- The subnet chosen for POD_SUBNET must not overlap with the network the master/worker nodes live on. Its value is a CIDR block; if you are not yet familiar with the CIDR concept, simply run `export POD_SUBNET=10.100.0.1/16` unchanged.
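As a concrete illustration of the overlap rule: 10.100.0.1/16 designates the 10.100.0.0/16 network (10.100.0.0 through 10.100.255.255), which stays clear of the 172.16.1.x node network used in this guide. A quick sketch with the ipcalc utility that ships with CentOS (any CIDR calculator works):

```bash
# Show the netmask, broadcast and network address implied by the CIDR value
ipcalc -bnm 10.100.0.1/16
```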
Run on the master node:

```bash
# Replace x.x.x.x with the master node's internal IP
# export only affects the current shell session; if you open a new shell
# window to continue the installation, re-run these export commands
export MASTER_IP=172.16.1.129
# Replace apiserver.demo with the dnsName you want
export APISERVER_NAME=apiserver.agou.com
# The subnet used by Kubernetes pods; it is created by kubernetes after
# installation and does not exist on your physical network beforehand
export POD_SUBNET=10.100.0.1/16
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
```
On the master node, create the following script (master_init.sh):

```bash
#!/bin/bash
# Run on the master node only

# Abort on any error
set -e

if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1mMake sure the environment variables POD_SUBNET and APISERVER_NAME are set\033[0m"
  echo "Current POD_SUBNET=$POD_SUBNET"
  echo "Current APISERVER_NAME=$APISERVER_NAME"
  exit 1
fi

# Full list of configuration options:
# https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v${1}
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF

# kubeadm init
# Depending on your network speed, this takes 3 - 10 minutes
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install the calico network plugin
# Reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "Installing calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml
```
Run the script:

```bash
bash master_init.sh 1.18.2
```
Check the master initialization result:

```bash
# Run the following and wait 3-10 minutes until all pods in kube-system are Running
watch kubectl get pod -n kube-system -o wide

# Inspect the master node initialization result
kubectl get nodes -o wide
```
Initialize the node

Obtain the join command parameters
On the master node, generate the join command:

```bash
kubeadm token create --print-join-command
```
:information_source: Validity period

The token is valid for 2 hours; within those 2 hours, you can use it to initialize any number of worker nodes.
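To see which bootstrap tokens currently exist and how long each remains valid, run this on the master:

```bash
# Lists tokens together with their TTL and expiration time
kubeadm token list
```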
Initialize the node

Edit the node's local /etc/hosts file:

```
# Add the following line
172.16.1.129 apiserver.agou.com
```
Run the join command obtained from the master node:

```bash
kubeadm join apiserver.agou.com:6443 --token 7t8xh1.2y6pzjpgl9eai0kd --discovery-token-ca-cert-hash sha256:a57479ae585f0c0a617d890c62c15c71023d85d4d55f4dd8fffeb56d9a467ef7
```
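If you ever need to reconstruct the --discovery-token-ca-cert-hash value (for example, for a token created without --print-join-command), it can be derived from the cluster CA certificate on the master; this is the standard openssl recipe from the kubeadm documentation:

```bash
# Compute the sha256 hash of the cluster CA public key (run on the master)
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```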
Check the initialization result on the master node:

```bash
kubectl get nodes -o wide
```
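A freshly joined worker shows an empty ROLES column; if you prefer node01 to display a role, you can add a purely cosmetic label:

```bash
# Optional: label node01 so that kubectl get nodes shows ROLES=worker
kubectl label node node01 node-role.kubernetes.io/worker=worker
```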
Install an Ingress Controller

Create the manifest file nginx-ingress.yaml with the following content:
```yaml
# If this is intended for production use, please refer to
# https://github.com/nginxinc/kubernetes-ingress/blob/v1.5.5/docs/installation.md
# and customize it further for your own situation
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
---
apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: Opaque
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN2akNDQWFZQ0NRREFPRjl0THNhWFhEQU5CZ2txaGtpRzl3MEJBUXNGQURBaE1SOHdIUVlEVlFRRERCWk8KUjBsT1dFbHVaM0psYzNORGIyNTBjbTlzYkdWeU1CNFhEVEU0TURreE1qRTRNRE16TlZvWERUSXpNRGt4TVRFNApNRE16TlZvd0lURWZNQjBHQTFVRUF3d1dUa2RKVGxoSmJtZHlaWE56UTI5dWRISnZiR3hsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUwvN2hIUEtFWGRMdjNyaUM3QlBrMTNpWkt5eTlyQ08KR2xZUXYyK2EzUDF0azIrS3YwVGF5aGRCbDRrcnNUcTZzZm8vWUk1Y2Vhbkw4WGM3U1pyQkVRYm9EN2REbWs1Qgo4eDZLS2xHWU5IWlg0Rm5UZ0VPaStlM2ptTFFxRlBSY1kzVnNPazFFeUZBL0JnWlJVbkNHZUtGeERSN0tQdGhyCmtqSXVuektURXUyaDU4Tlp0S21ScUJHdDEwcTNRYzhZT3ExM2FnbmovUWRjc0ZYYTJnMjB1K1lYZDdoZ3krZksKWk4vVUkxQUQ0YzZyM1lma1ZWUmVHd1lxQVp1WXN2V0RKbW1GNWRwdEMzN011cDBPRUxVTExSakZJOTZXNXIwSAo1TmdPc25NWFJNV1hYVlpiNWRxT3R0SmRtS3FhZ25TZ1JQQVpQN2MwQjFQU2FqYzZjNGZRVXpNQ0F3RUFBVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWpLb2tRdGRPcEsrTzhibWVPc3lySmdJSXJycVFVY2ZOUitjb0hZVUoKdGhrYnhITFMzR3VBTWI5dm15VExPY2xxeC9aYzJPblEwMEJCLzlTb0swcitFZ1U2UlVrRWtWcitTTFA3NTdUWgozZWI4dmdPdEduMS9ienM3bzNBaS9kclkrcUI5Q2k1S3lPc3FHTG1US2xFaUtOYkcyR1ZyTWxjS0ZYQU80YTY3Cklnc1hzYktNbTQwV1U3cG9mcGltU1ZmaXFSdkV5YmN3N0NYODF6cFErUyt1eHRYK2VBZ3V0NHh3VlI5d2IyVXYKelhuZk9HbWhWNThDd1dIQnNKa0kxNXhaa2VUWXdSN0diaEFMSkZUUkk3dkhvQXprTWIzbjAxQjQyWjNrN3RXNQpJUDFmTlpIOFUvOWxiUHNoT21FRFZkdjF5ZytVRVJxbStGSis2R0oxeFJGcGZnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdi91RWM4b1JkMHUvZXVJTHNFK1RYZUprckxMMnNJNGFWaEMvYjVyYy9XMlRiNHEvClJOcktGMEdYaVN1eE9ycXgrajlnamx4NXFjdnhkenRKbXNFUkJ1Z1B0ME9hVGtIekhvb3FVWmcwZGxmZ1dkT0EKUTZMNTdlT1l0Q29VOUZ4amRXdzZUVVRJVUQ4R0JsRlNjSVo0b1hFTkhzbysyR3VTTWk2Zk1wTVM3YUhudzFtMApxWkdvRWEzWFNyZEJ6eGc2clhkcUNlUDlCMXl3VmRyYURiUzc1aGQzdUdETDU4cGszOVFqVUFQaHpxdmRoK1JWClZGNGJCaW9CbTVpeTlZTW1hWVhsMm0wTGZzeTZuUTRRdFFzdEdNVWozcGJtdlFmazJBNnljeGRFeFpkZFZsdmwKMm82MjBsMllxcHFDZEtCRThCay90elFIVTlKcU56cHpoOUJUTXdJREFRQUJBb0lCQVFDZklHbXowOHhRVmorNwpLZnZJUXQwQ0YzR2MxNld6eDhVNml4MHg4Mm15d1kxUUNlL3BzWE9LZlRxT1h1SENyUlp5TnUvZ2IvUUQ4bUFOCmxOMjRZTWl0TWRJODg5TEZoTkp3QU5OODJDeTczckM5bzVvUDlkazAvYzRIbjAzSkVYNzZ5QjgzQm9rR1FvYksKMjhMNk0rdHUzUmFqNjd6Vmc2d2szaEhrU0pXSzBwV1YrSjdrUkRWYmhDYUZhNk5nMUZNRWxhTlozVDhhUUtyQgpDUDNDeEFTdjYxWTk5TEI4KzNXWVFIK3NYaTVGM01pYVNBZ1BkQUk3WEh1dXFET1lvMU5PL0JoSGt1aVg2QnRtCnorNTZud2pZMy8yUytSRmNBc3JMTnIwMDJZZi9oY0IraVlDNzVWYmcydVd6WTY3TWdOTGQ5VW9RU3BDRkYrVm4KM0cyUnhybnhBb0dCQU40U3M0ZVlPU2huMVpQQjdhTUZsY0k2RHR2S2ErTGZTTXFyY2pOZjJlSEpZNnhubmxKdgpGenpGL2RiVWVTbWxSekR0WkdlcXZXaHFISy9iTjIyeWJhOU1WMDlRQ0JFTk5jNmtWajJTVHpUWkJVbEx4QzYrCk93Z0wyZHhKendWelU0VC84ajdHalRUN05BZVpFS2FvRHFyRG5BYWkyaW5oZU1JVWZHRXFGKzJyQW9HQkFOMVAKK0tZL0lsS3RWRzRKSklQNzBjUis3RmpyeXJpY05iWCtQVzUvOXFHaWxnY2grZ3l4b25BWlBpd2NpeDN3QVpGdwpaZC96ZFB2aTBkWEppc1BSZjRMazg5b2pCUmpiRmRmc2l5UmJYbyt3TFU4NUhRU2NGMnN5aUFPaTVBRHdVU0FkCm45YWFweUNweEFkREtERHdObit3ZFhtaTZ0OHRpSFRkK3RoVDhkaVpBb0dCQUt6Wis1bG9OOTBtYlF4VVh5YUwKMjFSUm9tMGJjcndsTmVCaWNFSmlzaEhYa2xpSVVxZ3hSZklNM2hhUVRUcklKZENFaHFsV01aV0xPb2I2NTNyZgo3aFlMSXM1ZUtka3o0aFRVdnpldm9TMHVXcm9CV2xOVHlGanIrSWhKZnZUc0hpOGdsU3FkbXgySkJhZUFVWUNXCndNdlQ4NmNLclNyNkQrZG8wS05FZzFsL0FvR0FlMkFVdHVFbFNqLzBmRzgrV3hHc1RFV1JqclRNUzRSUjhRWXQKeXdjdFA4aDZxTGxKUTRCWGxQU05rMXZLTmtOUkxIb2pZT2pCQTViYjhibXNVU1BlV09NNENoaFJ4QnlHbmR2eAphYkJDRkFwY0IvbEg4d1R0alVZYlN5T294ZGt5OEp0ek90ajJhS0FiZHd6NlArWDZDODhjZmxYVFo5MWpYL3RMCjF3TmRKS2tDZ1lCbyt0UzB5TzJ2SWFmK2UwSkN5TGhzVDQ5cTN3Zis2QWVqWGx2WDJ1VnRYejN5QTZnbXo5aCsKcDNlK2JMRUxwb3B0WFhNdUFRR0xhUkcrYlNNcjR5dERYbE5ZSndUeThXczNKY3dlSTdqZVp2b0ZpbmNvVlVIMwphdmxoTUVCRGYxSjltSDB5cDBwWUNaS2ROdHNvZEZtQktzVEtQMjJhTmtsVVhCS3gyZzR6cFE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  server-names-hash-bucket-size: "1024"
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - update
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - k8s.nginx.org
  resources:
  - virtualservers
  - virtualserverroutes
  verbs:
  - list
  - watch
  - get
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: nginx-ingress
roleRef:
  kind: ClusterRole
  name: nginx-ingress
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9113"
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.5.5
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: prometheus
          containerPort: 9113
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        #- -v=3 # Enables extensive logging. Useful for troubleshooting.
        #- -report-ingress-status
        #- -external-service=nginx-ingress
        #- -enable-leader-election
        - -enable-prometheus-metrics
        #- -enable-custom-resources
```
Apply the manifest:

```bash
kubectl apply -f nginx-ingress.yaml
```
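Once the nginx-ingress pods are running, a quick smoke test is to create a trivial Ingress rule. A minimal sketch, assuming a Service named demo-svc on port 80 already exists in the default namespace (both the Service name and the hostname are hypothetical placeholders):

```bash
# Hypothetical smoke test: route demo.agou-ops.com to an existing Service demo-svc
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: demo.agou-ops.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-svc
          servicePort: 80
EOF
```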
Adapted, with minor changes, from https://kuboard.cn/.
References

- kuboard documentation: https://kuboard.cn/install/install-k8s.html
- image-pull-backoff troubleshooting: https://kuboard.cn/learning/faq/image-pull-backoff.html