Building a Highly Available Kubernetes (k8s) v1.24.3 Cluster from Binaries on Tencent Cloud Ubuntu

With the 1.24 release, Kubernetes removed dockershim and containerd became the container runtime. With some free time over the weekend, I used Tencent Cloud servers plus a cloud load balancer to proxy the apiserver, and installed k8s 1.24.3 from binaries on Ubuntu.

Updated 2022-08-02 with fixes for related errors

Environment Overview

Purchase servers

1658046118182.png

Purchase a load balancer

1658047129538.png

  • OS: Ubuntu Server 20.04 LTS 64-bit
  • K8s version: 1.24.3
  • containerd v1.6.4
Hostname | IP address | Role | Services | Server spec
k8s-01 | 10.206.16.2 | master node | containerd, kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy | 2c4g
k8s-02 | 10.206.16.5 | master node | containerd, kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy | 2c4g
k8s-03 | 10.206.16.3 | master node | containerd, kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy | 2c4g

The VIP is the Tencent Cloud load balancer.

Initialize the Environment

Run the initialization steps on every server

  • The initialization steps must be performed on all servers
#Configure the apt mirror source
sed -i 's/archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list

#Disable the firewall
systemctl disable --now ufw

#Disable swap. Cloud servers come without a swap partition, so I skip this step here.
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0
  • Set up time synchronization on all servers
#Install the ntpdate tool.
apt-get install ntpdate

#Sync the system time with network time.
ntpdate cn.pool.ntp.org

#Finally, write the time back to the hardware clock.
hwclock --systohc
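
ntpdate only performs a one-shot sync. If you want the clocks to stay aligned over time, one option (my addition here, not part of the original steps) is a periodic cron job:

#Resync once an hour via cron (sketch; adjust the NTP server to taste)
cat > /etc/cron.d/ntpdate <<EOF
0 * * * * root /usr/sbin/ntpdate cn.pool.ntp.org >/dev/null 2>&1
EOF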
  • Configure ulimit
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
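
The new limits only apply to sessions opened after this change. A quick sanity check after logging in again (my addition):

#Soft and hard open-file limits should both report 655360
ulimit -Sn
ulimit -Hn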
  • Configure /etc/hosts entries
cat >> /etc/hosts <<EOF
10.206.16.2  k8s-01
10.206.16.5  k8s-02
10.206.16.3  k8s-03
10.206.16.14 apiserver.frps.cn
EOF
  • Configure passwordless SSH

Passwordless SSH only needs to be set up on the k8s-01 node.

apt install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="k8s-01 k8s-02 k8s-03"
export SSHPASS=Haomima123..
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done

#Enable SSH root login yourself beforehand
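
To confirm the key distribution worked, a quick loop (my addition) that should print each hostname without prompting for a password:

for HOST in k8s-01 k8s-02 k8s-03;do
    ssh -o BatchMode=yes root@$HOST hostname
done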
  • Install ipvs (if traffic volume is small, iptables performance is fine too)
apt install ipvsadm ipset sysstat conntrack -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 155648  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,btrfs,raid456,ip_vs
  • Tune kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 0
EOF

sysctl --system
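
A quick spot check that the values took effect (my addition; the bridge-related keys only resolve after br_netfilter is loaded in the containerd section below):

sysctl net.ipv4.ip_forward net.core.somaxconn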

Install Containerd on All Nodes

wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

#abcdocker mirror download
wget https://d.frps.cn/file/kubernetes/install/1.24.3/cni-plugins-linux-amd64-v1.1.1.tgz

#Create the directories required by the CNI plugins
mkdir -p /etc/cni/net.d /opt/cni/bin 
#Extract the CNI plugin binaries
tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/

wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz

#abcdocker mirror download
wget https://d.frps.cn/file/kubernetes/install/1.24.3/cri-containerd-cni-1.6.4-linux-amd64.tar.gz

#Extract
tar -C / -xzf cri-containerd-cni-1.6.4-linux-amd64.tar.gz

#Create the systemd service file
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
  • Configure the kernel modules Containerd needs
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
  • Load the modules and enable loading at boot
systemctl restart systemd-modules-load.service
  • Configure the kernel parameters Containerd needs
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the kernel parameters

sysctl --system
  • Create the Containerd configuration file
#Generate the default configuration
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

#Modify the Containerd configuration: switch to the systemd cgroup driver
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml

cat /etc/containerd/config.toml | grep SystemdCgroup

sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/abcdocker#g" /etc/containerd/config.toml

cat /etc/containerd/config.toml | grep sandbox_image

Manually delete the flagged line systemd_cgroup = false (line 67 of the generated config), otherwise kubelet will fail to start with the error below:
https://i4t.com/5633.html

1658199678604.png

  • Enable at boot
systemctl daemon-reload
systemctl enable --now containerd

Verify containerd works on every node

root@VM-16-2-ubuntu:~# ctr version
Client:
  Version:  v1.6.4
  Revision: 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
  Go version: go1.17.9

Server:
  Version:  v1.6.4
  Revision: 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
  UUID: f3171aee-67b0-4e01-871b-2e93674af2ad
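
The cri-containerd-cni bundle also ships crictl. Pointing it at the containerd socket is optional (my addition, not required by the rest of this guide), but it lets you inspect the runtime the same way kubelet will talk to it:

#Tell crictl which CRI endpoint to use
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF

#Should print the runtime status as JSON
crictl info | head -n 20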

Kubernetes Master Installation

Download the Binaries

Operate on the k8s-01 node only.

Download the packages

wget https://dl.k8s.io/v1.24.3/kubernetes-server-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz

#abcdocker mirror download
wget https://d.frps.cn/file/kubernetes/install/1.24.3/kubernetes-server-linux-amd64.tar.gz
wget https://d.frps.cn/file/kubernetes/install/1.24.3/etcd-v3.5.4-linux-amd64.tar.gz

Extract the k8s files

tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

Extract the etcd files

tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}

Check the contents of /usr/local/bin

ls /usr/local/bin/
etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler

Verify that kubelet and etcdctl work

[root@k8s-01 ~]# kubelet --version
Kubernetes v1.24.3
[root@k8s-01 ~]# etcdctl version
etcdctl version: 3.5.4
API version: 3.5
[root@k8s-01 ~]# 

Copy the extracted binaries to the other servers

for i in k8s-02 k8s-03;do
    scp /usr/local/bin/kube* root@$i:/usr/local/bin/
    scp /usr/local/bin/{etcd,etcdctl}   root@$i:/usr/local/bin/
done
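
To confirm every node received working binaries, a quick check loop (my addition):

for i in k8s-01 k8s-02 k8s-03;do
    echo "$i"
    ssh root@$i "kubelet --version; etcdctl version | head -1"
done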

Download and Configure cfssl

Operate on k8s-01 only.

Download the cfssl certificate tools

wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson

chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

Create the certificate request files

mkdir pki
cd pki
cat > admin-csr.json << EOF 
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

cat > ca-config.json << EOF 
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

cat > etcd-ca-csr.json  << EOF 
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

cat > front-proxy-ca-csr.json  << EOF 
{
  "CN": "kubernetes",
  "key": {
     "algo": "rsa",
     "size": 2048
  },
  "ca": {
    "expiry": "876000h"
  }
}
EOF

cat > kubelet-csr.json  << EOF 
{
  "CN": "system:node:\$NODE",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "system:nodes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

cat > manager-csr.json << EOF 
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

cat > apiserver-csr.json << EOF 
{
  "CN": "kube-apiserver",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

cat > ca-csr.json   << EOF 
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

cat > etcd-csr.json << EOF 
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF

cat > front-proxy-client-csr.json  << EOF 
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF

cat > kube-proxy-csr.json  << EOF 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

cat > scheduler-csr.json << EOF 
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

cd ..
mkdir bootstrap
cd bootstrap
cat > bootstrap.secret.yaml << EOF 
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet'."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF

ETCD Certificates

Create the certificate directories

for i in k8s-01 k8s-02 k8s-03;do
   ssh root@$i mkdir /etc/etcd/ssl -p
done

The operations below are all performed on the k8s-01 node.

Generate the etcd certificate and its key

cd /root/pki

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=k8s-01,k8s-02,k8s-03,10.206.16.2,10.206.16.5,10.206.16.3 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
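
Optionally (my addition), confirm that all three hostnames and IPs made it into the certificate's SAN list:

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A 1 'Subject Alternative Name'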

Copy the certificates to the other nodes

for i in k8s-02 k8s-03;do
    ssh $i "mkdir -p /etc/etcd/ssl"
    scp /etc/etcd/ssl/* $i:/etc/etcd/ssl/
done

k8s Cluster Certificates

Operate on the k8s-01 node.

Directory for all certificates

mkdir -p /etc/kubernetes/pki

Generate a root CA certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

10.96.0.1 is the first address of the service CIDR
apiserver.frps.cn is the high-availability VIP domain
10.206.16.14 is the high-availability IP

cfssl gencert   \
-ca=/etc/kubernetes/pki/ca.pem   \
-ca-key=/etc/kubernetes/pki/ca-key.pem   \
-config=ca-config.json   \
-hostname=10.96.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,k8s-01,k8s-02,k8s-03,k8s-04,k8s-05,apiserver.frps.cn,10.206.16.2,10.206.16.3,10.206.16.5   \
-profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

Generate the front-proxy CA and client certificates

cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 

cfssl gencert  \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem   \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   \
-config=ca-config.json   \
-profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

Generate the controller-manager certificate

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set a cluster entry

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://apiserver.frps.cn:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a context entry

kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a user entry

kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context

kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://apiserver.frps.cn:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

kubectl config set-cluster kubernetes     \
  --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  --embed-certs=true     \
  --server=https://apiserver.frps.cn:8443     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin  \
  --client-certificate=/etc/kubernetes/pki/admin.pem     \
  --client-key=/etc/kubernetes/pki/admin-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes    \
  --cluster=kubernetes     \
  --user=kubernetes-admin     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/etc/kubernetes/admin.kubeconfig

Generate the kube-proxy certificate

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy

kubectl config set-cluster kubernetes     \
  --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  --embed-certs=true     \
  --server=https://apiserver.frps.cn:8443     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy  \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem     \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kube-proxy@kubernetes    \
  --cluster=kubernetes     \
  --user=kube-proxy     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kube-proxy@kubernetes  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

Create the ServiceAccount Key

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

Create the directory on the other nodes and copy the certificates

for i in k8s-02 k8s-03;do
    ssh $i "mkdir  /etc/kubernetes/pki/ -p"
    scp -r /etc/kubernetes/pki $i:/etc/kubernetes/
done

Check the certificates

ls /etc/kubernetes/pki/
admin.csr          ca.csr                      front-proxy-ca.csr          kube-proxy.csr      scheduler-key.pem
admin-key.pem      ca-key.pem                  front-proxy-ca-key.pem      kube-proxy-key.pem  scheduler.pem
admin.pem          ca.pem                      front-proxy-ca.pem          kube-proxy.pem
apiserver.csr      controller-manager.csr      front-proxy-client.csr      sa.key
apiserver-key.pem  controller-manager-key.pem  front-proxy-client-key.pem  sa.pub
apiserver.pem      controller-manager.pem      front-proxy-client.pem      scheduler.csr

# 26 files in total means everything is in place

ls /etc/kubernetes/pki/ |wc -l
26

Configure ETCD

k8s-01 configuration file; modify to your needs

  • 10.206.16.2 is this node's ETCD address
  • k8s-01 is the name of this ETCD member
# To use IPv6, just change the IPv4 addresses to IPv6
cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.206.16.2:2380'
listen-client-urls: 'https://10.206.16.2:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.206.16.2:2380'
advertise-client-urls: 'https://10.206.16.2:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-01=https://10.206.16.2:2380,k8s-02=https://10.206.16.5:2380,k8s-03=https://10.206.16.3:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

k8s-02 configuration file; modify to your needs

# To use IPv6, just change the IPv4 addresses to IPv6
cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.206.16.5:2380'
listen-client-urls: 'https://10.206.16.5:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.206.16.5:2380'
advertise-client-urls: 'https://10.206.16.5:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-01=https://10.206.16.2:2380,k8s-02=https://10.206.16.5:2380,k8s-03=https://10.206.16.3:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

k8s-03 configuration file; modify to your needs

# To use IPv6, just change the IPv4 addresses to IPv6
cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.206.16.3:2380'
listen-client-urls: 'https://10.206.16.3:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.206.16.3:2380'
advertise-client-urls: 'https://10.206.16.3:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-01=https://10.206.16.2:2380,k8s-02=https://10.206.16.5:2380,k8s-03=https://10.206.16.3:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

Create the etcd systemd service (on all master nodes)

cat > /usr/lib/systemd/system/etcd.service << EOF

[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

EOF

Link the ETCD certificates and start etcd

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

Check etcd status

# To use IPv6, just change the IPv4 addresses to IPv6
export ETCDCTL_API=3
etcdctl --endpoints="k8s-01:2379,k8s-02:2379,k8s-03:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
+-------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|  ENDPOINT   |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| k8s-01:2379 | 7f38447ea06fe963 |   3.5.4 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
| k8s-02:2379 | 7074c298e1728385 |   3.5.4 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| k8s-03:2379 | ca5c37bfc23da20f |   3.5.4 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
+-------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
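
Besides endpoint status, a quick health probe of all three members works too (my addition):

etcdctl --endpoints="k8s-01:2379,k8s-02:2379,k8s-03:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  endpoint health --write-out=table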

Configure the Tencent Cloud Load Balancer

The load balancer is the entry point the cluster components use to talk to the apiserver, so I add a hosts entry that resolves the domain apiserver.frps.cn to the load balancer's IP.

Configure the listener
1658063839151.png

  • Load balancer port: 8443
  • apiserver service port: 6443

1658063926007.png

1658064079012.png

1658064101426.png

Bind the backend nodes

1658064132035.png

1658064178500.png

Final result
1658064200367.png

Run a for loop on k8s-01 to map the apiserver VIP to the domain apiserver.frps.cn on every node

for i in k8s-01 k8s-02 k8s-03;do
    ssh $i "echo '10.206.16.14       apiserver.frps.cn' >>/etc/hosts"
done

ApiServer Configuration

Create the apiserver service startup files

k8s-01 node

Modify the corresponding values on each node yourself:

  • --advertise-address: the current node's IP
  • --etcd-servers: the ETCD endpoints
  • --secure-port: the apiserver port number
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --advertise-address=10.206.16.2 \
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \
      --feature-gates=IPv6DualStack=true  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://k8s-01:2379,https://k8s-02:2379,https://k8s-03:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --enable-aggregator-routing=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF

k8s-02 node

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --advertise-address=10.206.16.5 \
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \
      --feature-gates=IPv6DualStack=true  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://k8s-01:2379,https://k8s-02:2379,https://k8s-03:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --enable-aggregator-routing=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF

k8s-03 node

cat > /usr/lib/systemd/system/kube-apiserver.service  << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --advertise-address=10.206.16.3 \
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \
      --feature-gates=IPv6DualStack=true  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://k8s-01:2379,https://k8s-02:2379,https://k8s-03:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --enable-aggregator-routing=true

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF

Start apiserver (all master nodes)

for i in k8s-01 k8s-02 k8s-03;do
    ssh $i "systemctl daemon-reload && systemctl enable --now kube-apiserver"
    echo "$i"
    sleep 5
    ssh $i "systemctl status kube-apiserver"

done

Once started, the Tencent Cloud load balancer console shows the backends as healthy.

1658082817319.png
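
You can also probe the apiserver through the load balancer from any node (my addition). /healthz is readable without authentication thanks to the default system:public-info-viewer RBAC binding, so this should print ok:

curl -k https://apiserver.frps.cn:8443/healthz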

Controller-Manager

Operate only on k8s-01, then copy to the other nodes.

172.16.0.0/12 is the pod CIDR; set your own CIDR as needed.

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --bind-address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --feature-gates=IPv6DualStack=true \
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \
      --cluster-cidr=172.16.0.0/12,fc00::/48 \
      --node-cidr-mask-size-ipv4=24 \
      --node-cidr-mask-size-ipv6=64 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem 

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

Copy the configuration files to the other nodes

for i in k8s-02 k8s-03;do
    scp /usr/lib/systemd/system/kube-controller-manager.service  $i:/usr/lib/systemd/system/
    scp /etc/kubernetes/controller-manager.kubeconfig  $i:/etc/kubernetes/
done

Start the service on all nodes

for i in k8s-01 k8s-02 k8s-03;do
      ssh $i "systemctl daemon-reload && systemctl enable --now kube-controller-manager && systemctl  status kube-controller-manager"
done

Scheduler

Operate only on k8s-01, then copy to the other nodes.

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --bind-address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

Copy the configuration files to the other nodes

for i in k8s-02 k8s-03;do
    scp /usr/lib/systemd/system/kube-scheduler.service  $i:/usr/lib/systemd/system/
    scp /etc/kubernetes/scheduler.kubeconfig $i:/etc/kubernetes/
done

Start the service on all nodes

for i in k8s-01 k8s-02 k8s-03;do
      ssh $i "systemctl daemon-reload && systemctl enable --now kube-scheduler && systemctl  status kube-scheduler"
done

Context Configuration

Operate only on k8s-01.

cd /root/bootstrap

kubectl config set-cluster kubernetes     \
--certificate-authority=/etc/kubernetes/pki/ca.pem     \
--embed-certs=true     --server=https://apiserver.frps.cn:8443     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user     \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes     \
--cluster=kubernetes     \
--user=tls-bootstrap-token-user     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# The token lives in bootstrap.secret.yaml; if you change it, edit that file

mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

Check cluster status

root@VM-16-2-ubuntu:~/bootstrap# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-2               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-1               Healthy   {"health":"true","reason":""}  

# Be sure to run this, don't forget!!!

kubectl create -f bootstrap.secret.yaml
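
To confirm the bootstrap objects landed (my addition):

kubectl get secret bootstrap-token-c8ad9c -n kube-system
kubectl get clusterrolebinding kubelet-bootstrap node-autoapprove-bootstrap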

Kubelet

Operate only on k8s-01, then copy to the other nodes.

Create the kubelet startup file

cat > /usr/lib/systemd/system/kubelet.service << EOF

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --config=/etc/kubernetes/kubelet-conf.yml \
    --container-runtime=remote  \
    --runtime-request-timeout=15m  \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \
    --cgroup-driver=systemd \
    --node-labels=node.kubernetes.io/node='' \
    --feature-gates=IPv6DualStack=true

[Install]
WantedBy=multi-user.target
EOF

Note: on CentOS, remove the single quotes after --node-labels=node.kubernetes.io/node=

Create the kubelet configuration file

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

Copy the configuration files to the other nodes

for i in k8s-01 k8s-02 k8s-03;do
   ssh $i "mkdir -p /var/lib/kubelet /var/log/kubernetes  /etc/kubernetes/manifests/"
   scp /etc/kubernetes/kubelet-conf.yml $i:/etc/kubernetes/
   scp /usr/lib/systemd/system/kubelet.service  $i:/usr/lib/systemd/system/
   scp /etc/kubernetes/bootstrap-kubelet.kubeconfig $i:/etc/kubernetes/
done

Start the service

All nodes

for i in k8s-01 k8s-02 k8s-03;do
    ssh $i "systemctl daemon-reload"
    ssh $i "systemctl enable --now kubelet"
    sleep 3
    ssh $i "systemctl status kubelet"

done

Check the cluster

root@VM-16-2-ubuntu:~# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
vm-16-2-ubuntu   Ready    <none>   15m   v1.24.3
vm-16-3-ubuntu   Ready    <none>   15s   v1.24.3
vm-16-5-ubuntu   Ready    <none>   20s   v1.24.3

Kube-Proxy

Create the systemd startup file

cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

Create the kube-proxy configuration file

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12,fc00::/48 
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms

EOF

The kube-proxy.kubeconfig file was generated earlier; now copy it to the other master nodes

for i in k8s-02 k8s-03;do
    scp /etc/kubernetes/kube-proxy.kubeconfig $i:/etc/kubernetes/
    scp /etc/kubernetes/kube-proxy.yaml $i:/etc/kubernetes/
    scp  /usr/lib/systemd/system/kube-proxy.service $i:/usr/lib/systemd/system/
done

Start the services in batch

for i in k8s-01 k8s-02 k8s-03;do
      ssh $i "systemctl daemon-reload"
      ssh $i "systemctl restart kube-proxy"
      ssh $i "systemctl enable --now kube-proxy"
done

Check service status

for i in k8s-01 k8s-02 k8s-03;do
   echo "$i"
   sleep 2
   ssh $i "systemctl status kube-proxy"
done
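
Since kube-proxy runs in ipvs mode, the service rules should now appear as IPVS virtual servers. A quick look (my addition):

#The kubernetes service VIP 10.96.0.1:443 should be listed with rr scheduling
ipvsadm -Ln | head -n 20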

Calico Installation

Calico CIDRs

  • service:10.96.0.0/12
  • pod:172.16.0.0/12
wget http://down.i4t.com/k8s1.24/calico.yaml
kubectl apply -f calico.yaml

Check calico

root@VM-16-2-ubuntu:~# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-56cdb7c587-7ckj6   1/1     Running   0             10m
calico-node-6xlkh                          1/1     Running   0             10m
calico-node-n66nr                          1/1     Running   0             10m
calico-node-t27pp                          1/1     Running   0             10m
calico-typha-6775694657-4m28f              1/1     Running   0             10m
coredns-6d86b45487-fpvd6                   1/1     Running   0             35m
metrics-server-6d6549d5d4-kxkxn            1/1     Running   1 (37m ago)   72m

CoreDNS Installation

mkdir /root/coredns && cd /root/coredns

cat > coredns.yaml << EOF 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-beijing.aliyuncs.com/abcdocker/coredns:1.8.6 
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10 
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

Create coredns

root@VM-16-2-ubuntu:~/coredns# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Check the DNS service

root@VM-16-2-ubuntu:~# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-56cdb7c587-7ckj6   1/1     Running   0             10m
calico-node-6xlkh                          1/1     Running   0             10m
calico-node-n66nr                          1/1     Running   0             10m
calico-node-t27pp                          1/1     Running   0             10m
calico-typha-6775694657-4m28f              1/1     Running   0             10m
coredns-6d86b45487-fpvd6                   1/1     Running   0             35m

Cluster Verification

Wait until all Pods in the kube-system namespace are Running, then start by testing whether DNS works.

cat<<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30001
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: abcdocker9/centos:v1
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

After creating them, check the Pods

root@VM-16-2-ubuntu:~/metrics-server# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/busybox                  1/1     Running   0          50s
pod/nginx-6fb79bc456-vskg6   1/1     Running   0          50s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        35h
service/nginx        NodePort    10.109.148.61   <none>        80:30001/TCP   50s
  • Test DNS resolution
root@VM-16-2-ubuntu:~/metrics-server# kubectl exec -ti busybox -- nslookup kubernetes
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1
  • Test the Nginx pod IP and svc IP
for i in k8s-01 k8s-02 k8s-03;do
   ssh root@$i curl -s 10.109.148.61  #nginx svc ip
   ssh root@$i curl -s 10.88.0.2   #pod ip
done

1658209915430.png

  • Test NodePort

Access port 30001 on any node's IP to test nginx.
1658209961965.png

If the security group doesn't allow the port, you'll need to open it up.

1658210301817.png

1658214305424.png

Metrics Server


mkdir /root/metrics-server
cd /root/metrics-server
cat > metrics-server.yaml << EOF 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/abcdocker/metrics-server:0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki

---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

Create metrics-server

root@VM-16-2-ubuntu:~/metrics-server# ls
metrics-server.yaml
root@VM-16-2-ubuntu:~/metrics-server# kubectl apply -f metrics-server.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

After a few minutes, kubectl top node shows resource usage for each node

root@VM-16-2-ubuntu:~/metrics-server# kubectl top node
NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
vm-16-2-ubuntu   81m          4%     1704Mi          51%
vm-16-3-ubuntu   91m          4%     1625Mi          48%
vm-16-5-ubuntu   63m          3%     1637Mi          49%

Pods are visible too

root@VM-16-2-ubuntu:~/metrics-server# kubectl top pod -n kube-system
NAME                              CPU(cores)   MEMORY(bytes)
coredns-6d86b45487-2rctn          1m           11Mi
kube-flannel-ds-52b25             2m           12Mi
kube-flannel-ds-mbc64             2m           11Mi
kube-flannel-ds-qzw95             2m           11Mi
metrics-server-6d6549d5d4-kxkxn   3m           13Mi

Dashboard Installation

wget https://d.frps.cn/file/kubernetes/dashboard/dashboard.yaml
wget https://d.frps.cn/file/kubernetes/dashboard/dashboard-user.yaml

kubectl  apply -f dashboard.yaml
kubectl  apply -f dashboard-user.yaml

Change the dashboard svc to NodePort

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  type: NodePort

Check the NodePort port number

root@VM-16-2-ubuntu:~# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.54.145   <none>        443:31816/TCP   38s

Create a token for access

root@VM-16-2-ubuntu:~# kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IkN3b3lVOVRmM0owZEktNVZ6b0l1a293aGlDRzRzcDg2Y1AwNDV2elRmQUkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU4MjE0NTY5LCJpYXQiOjE2NTgyMTA5NjksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMmUwOWYwNTctZDgxOS00YTliLWE5NzYtNWJmMzlhOWI3N2M5In19LCJuYmYiOjE2NTgyMTA5NjksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.w1BnhhMJZ6BAKDm5HT7PZECM5o7cCe_bcJXpnkIqqxgDPyExBX8ax7jZpZ4EbLP5RvjUzdDfiHrc1A6XGwUBqupMmqk6kXxRQwSmnugTn7LSXNw1PEgc9VgbbWhcapov3gF_aASIqAGO2ER1fxvbDBoOc33HxTp1rsl49GsDqRFPH1hUaCf284oG5zkhA4lh7ubvtYq9wYKLzIM1HomBCGWFKbssh0jYHdhoASYIFnRbUX_B7ADTJS2AQd13xF7mlup9cETe51_KG_f6M1nEbqhMihNU11lkO-67cz0Jv632Uvh-qfTj96SnaFEg4kgH69KKrquZ1929_8u5qnMu2Q
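
By default kubectl create token issues a short-lived token (roughly an hour). If you need a longer session for testing, the --duration flag extends it (my addition; keep long-lived tokens out of production):

kubectl -n kubernetes-dashboard create token admin-user --duration=24h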

Access via a browser

Any node IP:31816 (get the 31816 port number from kubectl get svc; it is randomly assigned, though you can also specify one directly)

1658211070208.png

1658211090144.png

1658214188869.png

Enter the token from above here
1658214257413.png

26 Replies
  1. bright

    Why do the Ubuntu deployment steps look so much more tedious than the earlier CentOS 7 ones?

    • 新闻联播老司机

      The CentOS 7 guide was scripted, and going forward I basically won't keep updating binary installs. This one is written in a bit more detail; the most detailed is the 1.11 binary install guide. You can skim most of the steps, essentially nothing has changed. The scripted version simply omits a few steps.

  2. 七里香pro

    At the step cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca,
    why does generating the etcd certificate never produce the files? Neither etcd-ca.pem nor etcd-ca-key.pem comes out.

    • 七里香pro

      The final output is [INFO] signed certificate with serial number 637985514960070859755298862828136500426425403854
      Segmentation fault
      Is that trailing Segmentation fault an overflow?

    • 七里香pro

      Thanks. One more issue: kubectl responses are delayed; out of several consecutive runs, one will stall, and telnet to the apiserver's SLB address occasionally fails to connect.

    • 新闻联播老司机

      Check your backends in the SLB console; one of the apiservers' health checks may be timing out. You can test the apiservers one by one from the SLB console.

    • 七里香pro

      Testing showed that removing one node makes everything normal, but looking at that node alone, the apiserver process is fine, telnet connects, and the service has been restarted; after re-adding it to the SLB, the earlier problem reappears.

    • 新闻联播老司机

      If that machine won't cooperate, reinstall it and test with the other two. Then just rejoin the node.

    • 七里香pro

      OK, I'll reinstall it. The logs showed no errors. Can this version, deployed as described above, be used in production?

  3. 七里香pro

    While troubleshooting the kubectl latency/timeout issue above, I found a problem: if you load-balance with a layer-4 SLB and a backend node keeps issuing requests, there will be moments when that node is both the client and the server. Layer-4 load balancers currently don't support a backend acting as client and server at the same time, hence the apiserver timeouts above; during those windows the requester and the responder may be the same machine.

    • 新闻联播老司机

      Alternatively, try nginx nodes: slb -> nginx -> apiserver. I think that handles requests better than slb -> apiserver directly.

    • 七里香pro

      When adding a node, the node's kubelet comes up but keeps logging kubelet.go:2424] "Error getting node" err="node "nodex" not found"; the master can't see the node and no CSR appears. I set --hostname-override in the kubelet service file, but it still doesn't work. Could this be a version problem?

    • 新闻联播老司机

      Did you add the hosts entries everywhere? Look at the kubelet error logs on the node, as well as the apiserver logs. The logs will point at this problem.

  4. 七里香pro

    The hosts entries were added, and nothing is abnormal on the apiserver side. The node's kubelet reports this error: kubelet.go:2424] "Error getting node" err="node "nodex" not found"