
Setting Up Kubernetes Offline with kubeadm

Building a k8s cluster in an offline environment

1. Basic Environment Configuration

Set the hostnames

  • Master node
$ hostnamectl set-hostname k8s-master && bash
  • Worker nodes
$ hostnamectl set-hostname k8s-node1 && bash
$ hostnamectl set-hostname k8s-node2 && bash

Install required base packages

  • All hosts
# Install the required packages (in an offline environment, set up a local repo, or pre-download them with yum/dnf's --downloadonly option)
$ dnf install wget tar socat conntrack ipset ipvsadm -y

Configure /etc/hosts

  • All hosts
# Even better if you have DNS available for this
$ cat >> /etc/hosts <<EOF
<master node IP> k8s-master
<worker node IP> k8s-node1
<worker node IP> k8s-node2
EOF

Set up passwordless SSH login

This makes copying files later more convenient: the master node acts as the server, and the worker nodes can then copy files directly from it.

  • Worker nodes
# Generate a key pair
$ cd /root/.ssh
$ ssh-keygen

# Press Enter through every prompt
$ ssh-copy-id k8s-master
# Enter the password when prompted
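
To confirm passwordless login works, a quick check from each worker node (k8s-master being the hostname configured earlier):

# Should print the master's hostname without asking for a password
$ ssh k8s-master hostname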

Disable firewalld

  • All hosts
# Stop and disable firewalld (in theory, leaving it on and opening the required ports should also work)
$ systemctl stop firewalld && systemctl disable firewalld

Disable SELinux

  • All hosts
# Disable SELinux, otherwise it causes unpredictable permission problems
$ setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config

Enable the br_netfilter/overlay/ipvs kernel modules

  • All hosts
# Load them immediately
$ modprobe br_netfilter
$ modprobe overlay
$ modprobe -- ip_vs
$ modprobe -- ip_vs_rr
$ modprobe -- ip_vs_wrr
$ modprobe -- ip_vs_sh
$ modprobe -- nf_conntrack

# Load them automatically at boot
$ cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# Check that they actually loaded
$ lsmod | grep br_netfilter
$ lsmod | grep overlay
$ lsmod | grep ip_vs
$ lsmod | grep ip_vs_rr
$ lsmod | grep ip_vs_wrr
$ lsmod | grep ip_vs_sh
$ lsmod | grep nf_conntrack

Set sysctl parameters

  • All hosts
# Minimize swapping (you could also put these in a separate config file; I'm appending here for convenience)
$ echo "vm.swappiness=0" >> /etc/sysctl.d/99-sysctl.conf
# Raise max_map_count
$ echo "vm.max_map_count = 262144" >> /etc/sysctl.d/99-sysctl.conf
# Enable IP forwarding and bridge traffic filtering
$ cat >> /etc/sysctl.d/99-sysctl.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_timestamps = 1
EOF
# Apply the configuration (sysctl -p only reads /etc/sysctl.conf by default; --system also loads /etc/sysctl.d/*)
$ sysctl --system
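
To confirm the values took effect, you can read the keys back (a quick sanity check):

$ sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# Both should report 1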

Temporarily disable swap

  • All hosts
# This command takes effect immediately
$ swapoff -a
# echo "vm.swappiness=0" >> /etc/sysctl.d/99-sysctl.conf (already done above)
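
Note that swapoff -a does not survive a reboot. To keep swap off permanently, also comment out the swap entries in /etc/fstab; a minimal sketch (inspect the result before rebooting):

# Comment out every active swap line in /etc/fstab
$ sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
$ grep swap /etc/fstab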

2. Install containerd

Download containerd

  • Internet-connected machine
# (swap in newer versions as appropriate)
# Download containerd
$ wget https://github.com/containerd/containerd/releases/download/v1.6.18/containerd-1.6.18-linux-amd64.tar.gz

# Download runc
$ wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64

Install containerd

  • Master node
# Upload the files downloaded in the previous step to the master node

# Extract the containerd binaries
$ tar xvf containerd-1.6.18-linux-amd64.tar.gz
$ mv bin/* /usr/local/bin/

# Install the runc binary
$ mv runc.amd64 /usr/local/bin/runc
$ chmod +x /usr/local/bin/runc

# Generate the default config file
$ mkdir -p /etc/containerd
$ containerd config default | tee /etc/containerd/config.toml

# Edit the containerd config: use Aliyun mirrors and enable SystemdCgroup
# (newer containerd defaults reference registry.k8s.io instead of k8s.gcr.io; adjust the first sed if so)
$ sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g"  /etc/containerd/config.toml
$ sed -i 's/SystemdCgroup = false/#SystemdCgroup = false/' /etc/containerd/config.toml
$ sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
$ sed -i "s#https://registry-1.docker.io#https://registry.cn-hangzhou.aliyuncs.com#g"  /etc/containerd/config.toml

# Set containerd up as a systemd service
$ cat >/etc/systemd/system/containerd.service <<EOF
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

# Reload the systemd configuration
$ systemctl daemon-reload

# Enable it at boot
$ systemctl enable containerd

# Start containerd
$ systemctl start containerd
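
Before copying anything to the workers, it's worth a quick sanity check that containerd is actually up:

$ systemctl is-active containerd
$ ctr version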
  • Worker nodes
# Copy the containerd binaries over
$ scp -r k8s-master:/usr/local/bin/* /usr/local/bin/

# Copy the containerd config over
$ scp -r k8s-master:/etc/containerd /etc/

# Copy the containerd systemd unit over
$ scp k8s-master:/etc/systemd/system/containerd.service /etc/systemd/system/

# Reload the systemd configuration
$ systemctl daemon-reload

# Enable at boot and start immediately (--now already starts it; no separate start needed)
$ systemctl enable --now containerd

3. Install the CNI Plugins

  • Internet-connected machine
# (swap in a newer version as appropriate)
# Download cni-plugins
$ wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
  • Master node
# Upload the file downloaded in the previous step to the master node

# Extract cni-plugins
$ mkdir -p /opt/cni/bin/
$ tar xvf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/
  • Worker nodes
# Copy the cni-plugins binaries over
$ scp -r k8s-master:/opt/cni /opt

4. Install crictl

  • Internet-connected machine
# (swap in a newer version as appropriate)
# Download crictl
$ wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-amd64.tar.gz
  • Master node
# Upload the file downloaded in the previous step to the master node

# Extract the crictl archive
$ tar xvf crictl-v1.25.0-linux-amd64.tar.gz -C /usr/local/bin/
# Create the crictl config file
$ cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
pull-image-on-create: true
EOF
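
With containerd running and /etc/crictl.yaml in place, crictl should now be able to talk to the runtime; a quick check:

$ crictl version
# Should report both the client version and containerd as the runtime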
  • Worker nodes
# Copy the crictl binary over
$ scp k8s-master:/usr/local/bin/crictl /usr/local/bin/

# Copy the crictl config over
$ scp k8s-master:/etc/crictl.yaml /etc/

5. Install kubeadm, kubelet, and kubectl

  • Internet-connected machine
# (swap in newer versions as appropriate)
# Download kubeadm and kubelet
$ RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
$ ARCH="amd64"
$ curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}

# Download kubectl
$ wget https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/${ARCH}/kubectl

# Download the matching kubelet.service
$ RELEASE_VERSION="v0.4.0"
$ curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:/usr/local/bin:g" | tee kubelet.service

$ mkdir kubelet.service.d
$ curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:/usr/local/bin:g" | tee kubelet.service.d/10-kubeadm.conf
  • Master node
# Upload the files downloaded above to the master node

# Make them executable
$ chmod +x kubelet kubectl kubeadm
# Move them into a directory on the system PATH
$ mv kubelet \
	 kubectl \
	 kubeadm /usr/local/bin
# Move the systemd unit files into /etc/systemd/system/
$ mv kubelet.service /etc/systemd/system/
$ mv kubelet.service.d /etc/systemd/system/

# Reload the systemd configuration
$ systemctl daemon-reload

# Enable at boot and start immediately (kubelet will keep restarting until kubeadm init runs; that's expected)
$ systemctl enable --now kubelet
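
A quick check that the binaries are on the PATH and report consistent versions:

$ kubeadm version -o short
$ kubelet --version
$ kubectl version --client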
  • Worker nodes
# Copy the binaries and systemd units over from the master node
$ scp k8s-master:/usr/local/bin/kube* /usr/local/bin
$ scp -r k8s-master:/etc/systemd/system/kubelet.service* /etc/systemd/system/

# Reload the systemd configuration
$ systemctl daemon-reload

# Enable at boot and start immediately (again, kubelet restarts until the node joins the cluster)
$ systemctl enable --now kubelet

6. Download the k8s Images

  • Master node
# Generate the image list
$ kubeadm config images list > images.list

registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3

  • Internet-connected machine
# Upload images.list to the internet-connected machine
# Pull the images
$ cat images.list | awk '{print $1}' | xargs -L1 docker pull 

# Save them into a tar archive
$ docker save -o k8s_images.tgz $(cat images.list | tr -s "\n" " ")
  • Master node
# Upload the image archive k8s_images.tgz to the master node
# Import the images
$ ctr -n k8s.io image import k8s_images.tgz
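
You can verify the import before initializing; every entry from images.list should show up (a quick check):

$ ctr -n k8s.io image ls -q | grep registry.k8s.io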

7. Initialize with kubeadm

  • Master node
# Dump the default init configuration
$ kubeadm config print init-defaults > kubeadm.yaml

# Modify the config file
$ sed -i "s/kubernetesVersion:/#kubernetesVersion:/" kubeadm.yaml && \
sed -i "s/advertiseAddress: 1.2.3.4/advertiseAddress: $(ip addr | awk '/^[0-9]+: / {}; /inet.*global/ {print gensub(/(.*)\/(.*)/, "\\1", "g", $2)}' | awk 'NR<2{print $1}')/" kubeadm.yaml && \
sed -i "s/name: node/name: k8s-master/" kubeadm.yaml
$ echo "kubernetesVersion: $(kubeadm version -o short)" >> kubeadm.yaml

# If you pulled the images via the Aliyun mirror, also change imageRepository accordingly, otherwise the images won't be found
# sed -i "s#imageRepository: registry.k8s.io#imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers#" kubeadm.yaml

# Run the initialization
$ kubeadm init --config kubeadm.yaml

# When initialization completes, it prints something like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <apiserver IP>:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cbe9bd17dbdbeaf4acbf69b485c949f5db9b9ceee00895a2eab5cc9ab54cb4d0

  • Master node
# Set an environment variable
$ echo "export KUBECONFIG=/etc/kubernetes/admin.conf" > /etc/profile.d/kube.sh
$ source /etc/profile.d/kube.sh
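
At this point kubectl should be able to reach the apiserver. The master will report NotReady until the pod network is installed in the next step, which is expected:

$ kubectl get nodes
# NAME         STATUS     ROLES           AGE   VERSION
# k8s-master   NotReady   control-plane   1m    v1.26.1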

8. Install a Pod Network Add-on

After all the steps above, coredns still won't come up: you must deploy a Container Network Interface (CNI) pod network plugin so that pods can communicate with each other. The cluster DNS (CoreDNS) will not start until a network is installed.

Download the CNI manifest

  • Internet-connected machine
# Download the calico manifest
$ wget https://raw.githubusercontent.com/projectcalico/calico/release-v3.25/manifests/calico.yaml

Pull the required images

  • Internet-connected machine
# Extract the image list (the field after "image:" is the image ref)
$ grep "image:" calico.yaml | awk '{print $2}' | sort -u > calico_image.list

# Pull the images from the list
$ cat calico_image.list | awk '{print $1}' | xargs -L1 docker pull

# Export the images to an archive
$ docker save -o cni_images.tgz $(cat calico_image.list | tr -s "\n" " ")

Deploy calico

  • Master node
# Upload the calico manifest calico.yaml to the master node

# Upload the image archive cni_images.tgz to the master node
# Import the images
$ ctr -n k8s.io image import cni_images.tgz

# Deploy calico with kubectl
$ kubectl apply -f calico.yaml

Verify

  • Master node
# Check whether all the pods have come up
$ kubectl -n kube-system get pods
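
Once the calico and coredns pods all reach Running, the node should flip to Ready; you can watch for it:

$ kubectl -n kube-system get pods -w
$ kubectl get nodes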

9. Join the Worker Nodes to the Cluster

  • Worker nodes
# This is what kubeadm init printed at the end
# If you've lost it, regenerate it on the master with:
# kubeadm token create --print-join-command
$ kubeadm join <apiserver IP>:6443 --token abcdef.0123456789abcdef \
     --discovery-token-ca-cert-hash sha256:cbe9bd17dbdbeaf4acbf69b485c949f5db9b9ceee00895a2eab5cc9ab54cb4d0
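
One offline-specific caveat: the workers cannot pull images either, so import the k8s and calico image archives on each worker as well, otherwise kube-proxy and calico will sit in ImagePullBackOff. A sketch, assuming the archives still live under /root on the master (adjust the paths to wherever you uploaded them):

# Copy the archives from the master and import them
$ scp k8s-master:/root/k8s_images.tgz k8s-master:/root/cni_images.tgz .
$ ctr -n k8s.io image import k8s_images.tgz
$ ctr -n k8s.io image import cni_images.tgz

# Then confirm the join on the master:
$ kubectl get nodes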

10. Install the Dashboard

Download the dashboard manifest

  • Internet-connected machine
# Download the dashboard manifest
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Pull the images the dashboard needs

  • Internet-connected machine
# Extract the image list (the manifest downloaded above is recommended.yaml, not dashboard.yaml)
$ grep "image:" recommended.yaml | awk '{print $2}' | sort -u > dash_image.list

# Pull the images from the list
$ cat dash_image.list | awk '{print $1}' | xargs -L1 docker pull

# Export the images to an archive
$ docker save -o dash_images.tgz $(cat dash_image.list | tr -s "\n" " ")

Deploy the dashboard

  • Master node
# Upload the dashboard manifest recommended.yaml to the master node

# Upload the image archive dash_images.tgz to the master node
# Import the images
$ ctr -n k8s.io image import dash_images.tgz

# Deploy the dashboard
$ kubectl apply -f recommended.yaml

Configure dashboard access

Expose an access port

  • Master node
# There are other ways to expose it; here I'll use NodePort access
$ kubectl -n kubernetes-dashboard edit service kubernetes-dashboard

apiVersion: v1
kind: Service
...
...
  ports:
  - nodePort: <port>
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort	# Change only this line; it was originally type: ClusterIP
status:
  loadBalancer: {}
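
If you'd rather avoid the interactive edit, a one-line patch achieves the same, and you can then read back the port that was assigned:

$ kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
$ kubectl -n kubernetes-dashboard get svc kubernetes-dashboard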

Configure the login token

  • Master node
# Create a ServiceAccount and a ClusterRoleBinding from a YAML file

$ cat > dash_account.yaml <<EOF
# Creating a Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
# Creating a ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Create the account

  • Master node
$ kubectl apply -f dash_account.yaml

Get the access token

  • Master node
$ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
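
Note: on Kubernetes 1.24 and later (including the v1.26.1 used here), ServiceAccounts no longer get a long-lived token secret created automatically, so the command above may come back empty. In that case, request a token explicitly:

$ kubectl -n kubernetes-dashboard create token admin-user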

Log in from a browser

  • Any machine with a browser and network access to the k8s cluster
# <master-ip>      =>  the master node's IP
# <apiserver-port> =>  the nodePort: <port> value set above

# With NodePort access, visit https://<master-ip>:<apiserver-port>/ directly
# Paste in the token obtained in the previous step