I. Cluster Deployment Overview
1. kubeadm
kubeadm is a Kubernetes cluster deployment tool: kubeadm init creates the master node, and kubeadm join adds node machines to the cluster.
kubeadm init does roughly the following:
① Preflight checks: verifies system state (Linux cgroups, whether ports 10250/10251/10252 are available), prints warnings and errors, and aborts the kubeadm init run on fatal errors;
② Certificate generation: certificates are written to the /etc/kubernetes/pki directory, for later use when accessing the cluster;
③ Generates the YAML manifests for each control-plane component;
④ Installs the minimal set of add-ons needed for a usable cluster.
Other nodes then join the cluster by running kubeadm join with the token generated by kubeadm init. A node must have kubelet and kubeadm installed before it can join.
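The preflight step in ① can be illustrated with a small sketch. This is illustrative shell, not kubeadm's actual implementation; it probes the same control-plane ports using bash's built-in /dev/tcp mechanism:

```shell
#!/usr/bin/env bash
# Illustrative preflight-style check: verify that the control-plane ports
# kubeadm needs are not already in use on this host.
check_port_free() {
  local port=$1
  # Try to open a TCP connection to localhost; if it fails, nothing is listening.
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port}: in use"
  else
    echo "port ${port}: free"
  fi
}

for p in 6443 10250 10251 10252; do
  check_port_free "$p"
done
```

kubeadm's real preflight suite checks far more than ports (cgroups, kernel config, container runtime); this only mirrors the port-availability part mentioned above.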
For details on the kubeadm init and kubeadm join commands, see:
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-join
2. kubelet
kubelet is the component that manages pods and containers. It runs on every node in the cluster and must be installed directly on the host. During installation, kubeadm drives kubelet to carry out the work of kubeadm init.
3. kubectl
kubectl is the command-line tool for a Kubernetes cluster. With kubectl you can manage the cluster itself and install and deploy containerized applications onto it.
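A few representative kubectl invocations (illustrative only; they require a running cluster, and deployment.yaml is a placeholder file name):

```shell
kubectl get nodes                    # list cluster nodes and their status
kubectl get pods --all-namespaces    # list pods across all namespaces
kubectl describe node master         # inspect the detailed state of one node
kubectl apply -f deployment.yaml     # install a containerized application
kubectl delete -f deployment.yaml    # remove it again
```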
II. Cluster Installation
1. Environment
Ubuntu 18.04 LTS, 2 CPU cores, 4 GB RAM, 20 GB disk
Three machines of the same specification, with hostnames master, node1, and node2.
2. Installing the master node
(1) Set the hostname
root@k8s:/# hostnamectl --static set-hostname master
root@k8s:/# hostnamectl
   Static hostname: master
Transient hostname: k8s
         Icon name: computer-vm
           Chassis: vm
        Machine ID: e5c0d0f18ba04c0a8722ab9fff662987
           Boot ID: 74af5268dfe74f23b3dee608ab2afe41
    Virtualization: kvm
  Operating System: Ubuntu 18.04.2 LTS
            Kernel: Linux 4.15.0-122-generic
      Architecture: x86-64
(2) Disable system swap: run swapoff -a.
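Note that swapoff -a disables swap only until the next reboot. A common companion step (an assumption, not shown in the original) is to comment out the swap entry in /etc/fstab so the setting survives reboots. Sketched here against a demo file so it is safe to run anywhere:

```shell
# Demo fstab with a root filesystem and a swap entry
# (on a real host, the file to edit is /etc/fstab):
printf '/dev/sda1 / ext4 errors=remount-ro 0 1\n/swapfile none swap sw 0 0\n' > /tmp/fstab.demo

# Comment out any uncommented line whose filesystem type is swap:
sed -i -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /tmp/fstab.demo

cat /tmp/fstab.demo   # the swap line now starts with '#'
```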
(3) Install Docker CE
apt-get update
apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt-get -y update
apt-get -y install docker-ce
(4) Install the kubelet, kubeadm, and kubectl tools
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat ...
2021-12-30 20:47:49 (21.4 MB/s) - 'kube-flannel.yml' saved [5177/5177]
root@k8s:~# kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
root@k8s:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-ghvmj          1/1     Running   0          25m
kube-system   coredns-6d8c4cb4d-p45mv          1/1     Running   0          25m
kube-system   etcd-master                      1/1     Running   0          25m
kube-system   kube-apiserver-master            1/1     Running   0          25m
kube-system   kube-controller-manager-master   1/1     Running   0          25m
kube-system   kube-flannel-ds-ql282            1/1     Running   0          66s
kube-system   kube-proxy-xswwz                 1/1     Running   0          25m
kube-system   kube-scheduler-master            1/1     Running   0          25m
root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   26m   v1.23.1
(3) Join node machines to the cluster
Run the following command to join node1 to the cluster:
kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c
This command appears in the log printed when the master node initializes successfully; it can also be regenerated by running the following on the master:
kubeadm token create --print-join-command
Log output of a successful join:
root@k8s:~# kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1230 20:19:34.532570   26262 utils.go:69] The recommended value for resolvConf in KubeletConfiguration is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
The node list at this point:
root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   61m   v1.23.1
node1    Ready    <none>                 13m   v1.23.1
root@k8s:~# kubectl label nodes node1 node-role.kubernetes.io/node=
node/node1 labeled
root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   67m   v1.23.1
node1    Ready    node                   18m   v1.23.1
Possible issue: problem 1 from the master node setup can occur here as well; apply the same fix.
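The sha256:... value in the join command is the SHA-256 digest of the cluster CA's public key. If the init log is lost, it can be recomputed from the CA certificate using the standard openssl pipeline from the kubeadm documentation, wrapped here in a small helper for clarity:

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA certificate.
# On the master, the certificate generated by kubeadm init lives at
# /etc/kubernetes/pki/ca.crt.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# Usage on the master:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```

Alternatively, kubeadm token create --print-join-command (shown above) prints a complete join command with the hash already filled in.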
(3) node2: run the same steps as node1
root@k8s:/# kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1230 23:22:10.274581   28114 utils.go:69] The recommended value for resolvConf in KubeletConfiguration is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   7h2m    v1.23.1
node1    Ready    node                   6h13m   v1.23.1
node2    Ready    node                   3m13s   v1.23.1
III. Cluster Information
1. Kubernetes component deployment
# Most Kubernetes components run as pods
root@k8s:~# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-6d8c4cb4d-ghvmj          1/1     Running   0          17h   10.244.0.2   master   <none>           <none>
kube-system   coredns-6d8c4cb4d-p45mv          1/1     Running   0          17h   10.244.0.3   master   <none>           <none>
kube-system   etcd-master                      1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-flannel-ds-8qt6p            1/1     Running   0          16h   30.0.1.160   node1    <none>           <none>
kube-system   kube-flannel-ds-ql282            1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-flannel-ds-zkt47            1/1     Running   0          10h   30.0.1.47    node2    <none>           <none>
kube-system   kube-proxy-pb9gn                 1/1     Running   0          10h   30.0.1.47    node2    <none>           <none>
kube-system   kube-proxy-xswwz                 1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-proxy-zdfp5                 1/1     Running   0          16h   30.0.1.160   node1    <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   0          17h   30.0.1.180   master   <none>           <none>

# kubelet is installed directly on the host; it does not run as a Docker container
root@k8s:~# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Thu 2021-12-30 16:23:24 CST; 17h ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 6501 (kubelet)
    Tasks: 16 (limit: 4702)
   CGroup: /system.slice/kubelet.service
           └─6501 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/li

root@k8s:~# docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED        STATUS        PORTS   NAMES
28bfaeeaadf1   a4ca41631cc7                                        "/coredns -conf /etc…"   17 hours ago   Up 17 hours           k8s_coredns_coredns-6d8c4cb4d-p45mv_kube-system_4ce03d3c-1660-4975-8450-408515ec6a02_0
57a535a41123   a4ca41631cc7                                        "/coredns -conf /etc…"   17 hours ago   Up 17 hours           k8s_coredns_coredns-6d8c4cb4d-ghvmj_kube-system_1a67722e-a15f-4bf0-bbd7-e2af542d2621_0
7be45271357a   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 17 hours ago   Up 17 hours           k8s_POD_coredns-6d8c4cb4d-p45mv_kube-system_4ce03d3c-1660-4975-8450-408515ec6a02_0
79776dc797f4   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 17 hours ago   Up 17 hours           k8s_POD_coredns-6d8c4cb4d-ghvmj_kube-system_1a67722e-a15f-4bf0-bbd7-e2af542d2621_0
424b5047009f   e6ea68648f0c                                        "/opt/bin/flanneld -…"   17 hours ago   Up 17 hours           k8s_kube-flannel_kube-flannel-ds-ql282_kube-system_9cb2439b-e8f4-422f-a72d-83370e75043e_0
51bea3cfeef7   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 17 hours ago   Up 17 hours           k8s_POD_kube-flannel-ds-ql282_kube-system_9cb2439b-e8f4-422f-a72d-83370e75043e_0
e6149ade3a29   b46c42588d51                                        "/usr/local/bin/kube…"   18 hours ago   Up 18 hours           k8s_kube-proxy_kube-proxy-xswwz_kube-system_12dac07f-e07e-4eff-becc-7b40a92f3adb_0
3c365b2342a0   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours           k8s_POD_kube-proxy-xswwz_kube-system_12dac07f-e07e-4eff-becc-7b40a92f3adb_0
b60a3b02f427   25f8c7f3da61                                        "etcd --advertise-cl…"   18 hours ago   Up 18 hours           k8s_etcd_etcd-master_kube-system_5d83471f981b1644e30c11cc642c68f7_0
abd1e3377560   b6d7abedde39                                        "kube-apiserver --ad…"   18 hours ago   Up 18 hours           k8s_kube-apiserver_kube-apiserver-master_kube-system_df535ce9e2ccfb931f8e46a9b80a6218_0
df5e2a226999   f51846a4fd28                                        "kube-controller-man…"   18 hours ago   Up 18 hours           k8s_kube-controller-manager_kube-controller-manager-master_kube-system_85ff8159d8c894c53981716f8927f187_0
b45d17ab969f   71d575efe628                                        "kube-scheduler --au…"   18 hours ago   Up 18 hours           k8s_kube-scheduler_kube-scheduler-master_kube-system_77a51208064a0e9b17209ee62638dfcd_0
3cf0d75ad0f0   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours           k8s_POD_kube-apiserver-master_kube-system_df535ce9e2ccfb931f8e46a9b80a6218_0
6b447aa2fd93   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours           k8s_POD_etcd-master_kube-system_5d83471f981b1644e30c11cc642c68f7_0
f7f9a3cd677f   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours           k8s_POD_kube-scheduler-master_kube-system_77a51208064a0e9b17209ee62638dfcd_0
20e0b291d166   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours           k8s_POD_kube-controller-manager-master_kube-system_85ff8159d8c894c53981716f8927f187_0
2. Subnets: each Kubernetes node is allocated one subnet slice of the flannel network.
(1) master node
root@k8s:~# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true
(2) node1
root@k8s:~# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true
(3) node2
root@k8s:/# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.2.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true
3. Kubernetes node processes
(1) master node
root@k8s:~# ps -el | grep kube
4 S 0  6224  6152  0  80  0 - 188636 futex_ ?        00:05:00 kube-scheduler
4 S 0  6275  6196  1  80  0 - 206354 ep_pol ?        00:23:02 kube-controller
4 S 0  6287  6181  5  80  0 - 278080 futex_ ?        01:19:40 kube-apiserver
4 S 0  6501     1  3  80  0 - 487736 futex_ ?        00:46:38 kubelet
4 S 0  6846  6818  0  80  0 - 187044 futex_ ?        00:00:26 kube-proxy
(2) node1 and node2
# node1
root@k8s:~# ps -el | grep kube
4 S 0 22869 22845  0  80  0 - 187172 futex_ ?        00:00:23 kube-proxy
4 S 0 26395     1  2  80  0 - 505977 futex_ ?        00:28:10 kubelet
# node2
root@k8s:/# ps -el | grep kube
4 S 0 28227     1  1  80  0 - 487480 futex_ ?        00:17:26 kubelet
4 S 0 28724 28696  0  80  0 - 187044 futex_ ?        00:00:17 kube-proxy
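The /run/flannel/subnet.env files shown in section 2 are plain shell variable assignments, so they can be sourced directly. A small sketch (values copied from the node1 output above; on a real node, source /run/flannel/subnet.env instead of the demo file):

```shell
# Reproduce node1's subnet file (normally written by flanneld):
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true
EOF
. /tmp/subnet.env

# For a /16 cluster network, the first two octets of the node subnet
# must match the cluster-wide prefix:
net16=$(echo "$FLANNEL_NETWORK" | cut -d. -f1-2)
sub16=$(echo "$FLANNEL_SUBNET"  | cut -d. -f1-2)
if [ "$net16" = "$sub16" ]; then
  echo "OK: $FLANNEL_SUBNET is inside $FLANNEL_NETWORK"
else
  echo "MISMATCH: $FLANNEL_SUBNET is not inside $FLANNEL_NETWORK"
fi
```

This prefix comparison is a simplification that works only because the cluster network is a /16 on an octet boundary; a general check would need real CIDR arithmetic.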