OS: macOS 12.6
Vagrant: 2.3.2
VirtualBox: 6.1.40r154048
Create the Vagrant project directory:

```shell
mkdir k8s-vms
```
Enter the project directory:

```shell
cd k8s-vms
```
Create a Vagrantfile:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1804"
  vms = Array(101..102)
  vms.each do |seq|
    config.vm.define :"k8s-#{seq}" do |vagrant|
      vagrant.vm.hostname = "k8s-#{seq}"
      vagrant.vm.network "private_network", ip: "192.168.56.#{seq}"
      vagrant.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--name", "k8s-#{seq}"]
        vb.gui = false
        vb.memory = "3072"
        vb.cpus = "4"
      end
    end
  end
end
```
Start the virtual machines:

```shell
vagrant up
```
The steps above create two virtual machines: k8s-101 serves as the Kubernetes master, and k8s-102 as the worker node.
- Docker must be installed on both the master and the node
- The rest of this section assumes you have switched to the root user
Install Docker:

```shell
apt-get install -y docker.io
```
Verify:

```shell
docker --version
# Output: Docker version 20.10.7, build 20.10.7-0ubuntu5~18.04.3
```
Set a proxy for dockerd (optional):

Docker can be slow when pulling images such as Calico, so you can either set a proxy for dockerd or use registry-mirrors. I chose the former; my host's proxy listens on port 7890.
```shell
mkdir -p /etc/systemd/system/docker.service.d
# The file can have any name of the form *.conf
touch /etc/systemd/system/docker.service.d/proxy.conf
```
Add the following to proxy.conf:

```ini
[Service]
Environment="HTTP_PROXY=http://192.168.56.1:7890"
Environment="HTTPS_PROXY=http://192.168.56.1:7890"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"
```
Restart Docker and enable it at boot:

```shell
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
```
- Run on all nodes

```shell
# Disable swap (older Kubernetes releases require kubelet to run with swap
# off; newer kubelets support swap, so this step may be optional)
swapoff -a
# To disable swap permanently, comment out the swap entry in /etc/fstab
vim /etc/fstab
```
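Editing /etc/fstab by hand works; a scriptable alternative is a sed that comments out the swap entry. A sketch, demonstrated on a sample file first:

```shell
# Demonstrate on a sample; once satisfied, run the same sed against /etc/fstab.
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234  /     ext4  defaults  0  1
/swap.img       none  swap  sw        0  0
EOF
# Prefix any uncommented line whose filesystem type field is "swap" with '#'
sed -i -E 's@^([^#].*[[:space:]]swap[[:space:]].*)@#\1@' /tmp/fstab.sample
cat /tmp/fstab.sample
```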
```shell
# Load the kernel module required for bridged traffic filtering
apt-get install -y bridge-utils
modprobe br_netfilter
lsmod | grep br_netfilter
# If a package cannot be found, run apt-get update -y first
```
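Loading br_netfilter by itself is not quite enough for kubeadm's preflight checks; the Kubernetes install docs also have you enable the bridge-nf sysctls and IP forwarding, and persist the module across reboots. A sketch of that companion step (my addition; the original text only loads the module):

```shell
# The directories exist on a normal Ubuntu install; created here for safety
mkdir -p /etc/modules-load.d /etc/sysctl.d
# Persist the module across reboots
echo 'br_netfilter' > /etc/modules-load.d/k8s.conf
# Let iptables see bridged traffic and allow IP forwarding
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
```

Run `sysctl --system` afterwards to apply the settings without rebooting.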
```shell
# Install prerequisite packages
apt-get install -y ca-certificates curl software-properties-common apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

# Configure the Aliyun Kubernetes apt source
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

# Refresh the package index
apt-get update -y

# Install kubeadm, kubectl, and kubelet
apt-get install -y kubelet=1.23.1-00 kubeadm=1.23.1-00 kubectl=1.23.1-00

# Hold the packages to block automatic upgrades; unhold before upgrading,
# then hold again afterwards
apt-mark hold kubelet kubeadm kubectl
```
- Run on the master

Create a kubeadm-config.yaml file with the following content:
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.101
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.1
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
#cgroupDriver: systemd
cgroupDriver: cgroupfs
```
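cgroupDriver is set to cgroupfs above because Ubuntu's docker.io package runs Docker with its default cgroupfs driver, and kubelet's driver must match Docker's. If you prefer the systemd driver, both sides need switching; a sketch of the Docker half via /etc/docker/daemon.json (this file is my assumption, not something the original configures):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

You would then change cgroupDriver to systemd in the KubeletConfiguration above and restart Docker before running kubeadm init.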
In this file, advertiseAddress is the master's IP, imageRepository points at the Aliyun mirror, and podSubnet must match the CIDR configured for the network plugin later.

Because I have already configured a proxy for dockerd, I can run the following directly:

```shell
kubeadm init --config kubeadm-config.yaml
```
On success, you will see output similar to:

```text
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.101:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6600bd855b98dd613ff260dd50c730583237b3a36c2a1bda0a5476fe82c0e9a1
```
Follow the prompts and run:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Or, as root:
export KUBECONFIG=/etc/kubernetes/admin.conf
```
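Note that the export only lasts for the current shell; to make it persistent for the root user I append it to the shell profile (my own habit, not part of the kubeadm output):

```shell
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> "$HOME/.bashrc"
```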
Generate the node join command:

```shell
kubeadm token create --print-join-command
# Output: kubeadm join 192.168.56.101:6443 --token 26cbgg.eu062cu2px67aqmf --discovery-token-ca-cert-hash sha256:6600bd855b98dd613ff260dd50c730583237b3a36c2a1bda0a5476fe82c0e9a1
```
- Run on the node

On the master, run kubeadm token create --print-join-command to get the join command, then run it on the node, for example:

```shell
kubeadm join 192.168.56.101:6443 --token 26cbgg.eu062cu2px67aqmf --discovery-token-ca-cert-hash sha256:6600bd855b98dd613ff260dd50c730583237b3a36c2a1bda0a5476fe82c0e9a1
```
On success you will see output like:

```text
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
- Run on the master
- When editing YAML, mind the indentation and do not use tabs
Check the node status:

```shell
kubectl get nodes
```
You will see output similar to:

```text
NAME      STATUS     ROLES                  AGE     VERSION
k8s-102   NotReady   <none>                 5m55s   v1.23.1
master    NotReady   control-plane,master   16m     v1.23.1
```
Check the kubelet logs:

```shell
journalctl -xeu kubelet
```
You will see an error like:

```text
network plugin is not ready: cni config uninitialized
```
To fix this, install the Calico network plugin:

```shell
curl -O https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
```
Modify the CIDR so that it matches the podSubnet configured in kubeadm-config.yaml.
Change the following configuration:

```yaml
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
```
to:

```yaml
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
```
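If you prefer not to edit the manifest by hand, the two lines can be patched with sed; a sketch that assumes the default commented-out block shown above, demonstrated here on an excerpt:

```shell
# Demonstrate on an excerpt; run the same sed on the full calico.yaml.
cat > /tmp/cidr-excerpt.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
sed -i \
  -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
  -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' \
  /tmp/cidr-excerpt.yaml
cat /tmp/cidr-excerpt.yaml
```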
Specify the network interface. When a machine has multiple NICs and no interface is specified, Pod creation will fail. After the following configuration:

```yaml
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
  value: "k8s,bgp"
```
add:

```yaml
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth1"
```
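This insertion can also be scripted; a GNU sed sketch, again demonstrated on an excerpt (check that the twelve- and fourteen-space indentation matches the env list in your copy of calico.yaml):

```shell
# Demonstrate on an excerpt; run the same sed on the full calico.yaml.
cat > /tmp/env-excerpt.yaml <<'EOF'
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
EOF
# Append the IP_AUTODETECTION_METHOD entry right after the CLUSTER_TYPE value
sed -i '/value: "k8s,bgp"/a\            - name: IP_AUTODETECTION_METHOD\n              value: "interface=eth1"' /tmp/env-excerpt.yaml
cat /tmp/env-excerpt.yaml
```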
My network interfaces are:

```text
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:feb1:dbbc  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:b1:db:bc  txqueuelen 1000  (Ethernet)
        RX packets 378112  bytes 205793020 (205.7 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 314559  bytes 19151149 (19.1 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.101  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:feb2:41aa  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:b2:41:aa  txqueuelen 1000  (Ethernet)
        RX packets 169203  bytes 237919632 (237.9 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 51270  bytes 4136713 (4.1 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
So I configured eth1; adjust this to match your environment.

Write each node's own IP into /var/lib/calico/nodename. In my test environment:
```shell
# On the master
mkdir -p /var/lib/calico/ && echo "192.168.56.101" > /var/lib/calico/nodename
```
```shell
# On the node
mkdir -p /var/lib/calico/ && echo "192.168.56.102" > /var/lib/calico/nodename
```
Apply the manifest:

```shell
kubectl apply -f calico.yaml
```
Watch the command's output; if your edited YAML has syntax errors, fix them.

```shell
kubectl -n kube-system get pods
```
Wait until all Pods are Running, then check the cluster status:

```shell
kubectl get nodes
```
You will see output similar to:

```text
NAME      STATUS   ROLES                  AGE   VERSION
k8s-102   Ready    <none>                 32m   v1.23.1
master    Ready    control-plane,master   43m   v1.23.1
```
- Run on all nodes

On Ubuntu, open /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; it shows that KUBELET_EXTRA_ARGS is read from /etc/default/kubelet. Set each node's kubelet --node-ip startup argument to its own IP. In my test environment:
On the master:

```shell
mkdir -p /etc/default/ && echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.56.101' > /etc/default/kubelet
```
On the node:

```shell
mkdir -p /etc/default/ && echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.56.102' > /etc/default/kubelet
```
Finally, restart kubelet on all nodes:

```shell
systemctl restart kubelet
```
Deploy an Nginx service to test the cluster:

```shell
kubectl create deployment nginx --image=nginx
# Output: deployment.apps/nginx created
kubectl expose deployment nginx --port=80 --type=NodePort
# Output: service/nginx exposed
```
Check the status:

```shell
kubectl get pod,svc
```
Once the deployment finishes, you will see output similar to:

```text
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-85b98978db-xzn79   1/1     Running   0          2m10s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        61m
service/nginx        NodePort    10.107.18.122   <none>        80:32114/TCP   114s
```
Note the NodePort in the service/nginx row (32114 here); it is used below.

From any of the virtual machines or the host, visit:

```text
http://192.168.56.101:32114
```
or

```text
http://192.168.56.102:32114
```
You should see the Nginx welcome page.

Use the following command to get a shell inside the container:

```shell
kubectl -n default exec -it nginx-85b98978db-xzn79 -c nginx -- /bin/sh
```