Deploying a Kubernetes Cluster from Binaries

Date: 2022-07-22

This deployment installs Kubernetes v1.13.1 from the binary packages, using three machines. The master is not set up as a highly available cluster. The machines are as follows:

Role     IP           Components
master   10.0.3.99    kube-apiserver, kube-controller-manager, kube-scheduler, etcd (etcd01)
node1    10.0.3.41    kubelet, kube-proxy, docker, flannel, etcd (etcd02)
node2    10.0.3.44    kubelet, kube-proxy, docker, flannel, etcd (etcd03)

A brief introduction to the role of each component:

  • etcd is an efficient key-value store for shared configuration and service discovery, with distributed and strongly consistent properties. In a Kubernetes environment it stores all data that needs to be persisted
  • kube-apiserver exposes the Kubernetes API. Whether you use kubectl or call the HTTP API directly, every operation on cluster resources goes through the interfaces provided by kube-apiserver
  • kube-controller-manager is responsible for the overall management of the cluster, ensuring that every resource stays in its desired state. When it detects that a resource has drifted from that state, it triggers the corresponding reconciliation. It is mainly made up of the following controllers:
    • Node controller: notices and responds when nodes go down or are removed
    • Replication controller: maintains the correct number of pods for every replication controller object in the system
    • Endpoints controller: populates Endpoint objects
    • Service account and token controllers: create default accounts and API access tokens for new namespaces
  • kube-scheduler performs the actual scheduling work of the cluster: it picks up pods that have not yet been placed, computes a placement based on the pod spec, scheduling constraints and the overall resource situation, and binds each pod to a target node, whose kubelet then runs it
  • kube-proxy manages the forwarding and load-balancing rules that route Service/Endpoint traffic to pod instances
  • kubelet is the core component on each Node and is responsible for the node's actual workload. Its tasks include:
    • watching for pods assigned to the node by the scheduler
    • mounting the volumes a pod needs
    • downloading the secrets a pod needs
    • running containers by interacting with the docker daemon
    • running periodic container health checks
    • monitoring and reporting pod status to the control plane
    • monitoring and reporting node status to the control plane
  • Flannel is an overlay network for containers, originally designed by CoreOS. In a Flannel-managed container network, each host owns an independent subnet that is allocated to the containers running on it. Traffic between hosts is encapsulated, transported and decapsulated over UDP or VXLAN tunnels

Download the required binary packages

On the master machine:

wget https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.13.1/kubernetes-client-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

On the node machines:

wget https://dl.k8s.io/v1.13.1/kubernetes-node-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

Generate certificates

1. Install the cfssl tools

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
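
To confirm the tools are installed and on the PATH, a quick sanity check:

# verify the binaries are reachable and report a version
which cfssl cfssljson cfssl-certinfo
cfssl version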

2. Create the required directories (the same directories on all three machines)

mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

Generate the etcd certificates

First switch to the etcd certificate directory:

cd /k8s/etcd/ssl/

1. Create the etcd CA config

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

2. Create the etcd CA certificate signing request (CSR) file

cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

3. Create the etcd server CSR file

cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "10.0.3.99",
    "10.0.3.41",
    "10.0.3.44"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

4. Initialize the CA to generate the CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
This produces several files beginning with ca (alongside the JSON files created above):
ca.csr  ca-key.pem  ca.pem

5. Generate the server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
This produces several files beginning with server:
server.csr  server-key.pem  server.pem
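
Optionally, you can inspect the generated server certificate to confirm that the hosts/SANs and validity period look right:

# print the parsed certificate, including SANs and expiry
cfssl-certinfo -cert server.pem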

Generate the Kubernetes certificates and private keys

1. Switch to the directory where the Kubernetes certificates are stored

cd /k8s/kubernetes/ssl

2. Generate the Kubernetes CA certificate

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF


# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
This produces several files beginning with ca:
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

3. Create the apiserver certificate

cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.254.0.1",
      "127.0.0.1",
      "10.0.3.99",
	  "10.0.3.41",
	  "10.0.3.44",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
This produces several certificate files beginning with server:
server.csr  server-csr.json  server-key.pem  server.pem

4. Create the kube-proxy certificate

cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
This produces several files beginning with kube-proxy:
kube-proxy.csr  kube-proxy-key.pem  kube-proxy.pem

The certificate operations above only need to be performed on the master machine. Once generated, copy the etcd and Kubernetes certificates to node1 and node2, keeping the same directory layout, for example as shown below.
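
A minimal sketch of the copy step, assuming the hostnames node1 and node2 resolve (adjust to your own hostnames or IPs):

scp /k8s/etcd/ssl/*.pem node1:/k8s/etcd/ssl/
scp /k8s/etcd/ssl/*.pem node2:/k8s/etcd/ssl/
scp /k8s/kubernetes/ssl/*.pem node1:/k8s/kubernetes/ssl/
scp /k8s/kubernetes/ssl/*.pem node2:/k8s/kubernetes/ssl/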

Deploy etcd (on all three machines)

1. Unpack and copy the binaries into the directories created earlier

tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/

2. Create the main etcd configuration file

vim /k8s/etcd/cfg/etcd.conf 
#[Member]
ETCD_NAME="etcd01"    #etcd节点的名字
ETCD_DATA_DIR="/data1/etcd"   #存放etcd数据的目录
ETCD_LISTEN_PEER_URLS="https://10.0.3.99:2380"  
ETCD_LISTEN_CLIENT_URLS="https://10.0.3.99:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.3.99:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.3.99:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.0.3.99:2380,etcd02=https://10.0.3.41:2380,etcd03=https://10.0.3.44:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

3. Create the etcd startup unit

etcd is managed with systemd here.

# cat /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd --name="${ETCD_NAME}" --data-dir="${ETCD_DATA_DIR}" --listen-client-urls="${ETCD_LISTEN_CLIENT_URLS}" --listen-peer-urls="${ETCD_LISTEN_PEER_URLS}" --advertise-client-urls="${ETCD_ADVERTISE_CLIENT_URLS}" --initial-cluster-token="${ETCD_INITIAL_CLUSTER_TOKEN}" --initial-cluster="${ETCD_INITIAL_CLUSTER}" --initial-cluster-state="${ETCD_INITIAL_CLUSTER_STATE}" --cert-file="${ETCD_CERT_FILE}" --key-file="${ETCD_KEY_FILE}" --trusted-ca-file="${ETCD_TRUSTED_CA_FILE}" --client-cert-auth="${ETCD_CLIENT_CERT_AUTH}" --peer-cert-file="${ETCD_PEER_CERT_FILE}" --peer-key-file="${ETCD_PEER_KEY_FILE}" --peer-trusted-ca-file="${ETCD_PEER_TRUSTED_CA_FILE}" --peer-client-cert-auth="${ETCD_PEER_CLIENT_CERT_AUTH}""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4. Enable at boot and start (before starting etcd on the first node, complete the same configuration on node1 and node2; otherwise the start will hang and fail while waiting for the other members)

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Perform the same steps on all three machines; the only differences are the IP addresses and the ETCD_NAME value (etcd01, etcd02, etcd03), which must match each machine.

5. Verify the service is healthy

# /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.3.99:2379,https://10.0.3.41:2379,https://10.0.3.44:2379" cluster-health
member 1f6e58fb396ee915 is healthy: got healthy result from https://10.0.3.41:2379
member c10e4e3f55c5daba is healthy: got healthy result from https://10.0.3.44:2379
member c18567296e3b5878 is healthy: got healthy result from https://10.0.3.99:2379
cluster is healthy
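
As an optional extra smoke test, you can write a key and read it back using the v2 API that etcdctl 3.3 speaks by default (the key name here is arbitrary):

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.3.99:2379" set /test "hello"
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.3.99:2379" get /test
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.3.99:2379" rm /test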

At this point etcd installation is complete!

Deploy the Kubernetes server (master) components

1. Unpack

tar -zxvf kubernetes-server-linux-amd64.tar.gz 
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/

2. Deploy the kube-apiserver component. First generate a TLS bootstrapping token:

# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
7f08cbe36bec26a1a33923ae062c32de

Create the token file (use the token you just generated; this deployment uses 08e0de89211cb24b2a96a6d9ba011773, and the same value must be used later as BOOTSTRAP_TOKEN in the bootstrap kubeconfig):

# cat /k8s/kubernetes/cfg/token.csv 
08e0de89211cb24b2a96a6d9ba011773,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
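
The two steps above can also be combined into one small snippet; this is just a convenience sketch, and the token value is simply whatever /dev/urandom produces on your machine:

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /k8s/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo "BOOTSTRAP_TOKEN=${BOOTSTRAP_TOKEN}"   # note it down for the bootstrap kubeconfig later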

Create the apiserver configuration file

# cat /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true 
--v=4 
--etcd-servers=https://10.0.3.99:2379,https://10.0.3.41:2379,https://10.0.3.44:2379 
--bind-address=10.0.3.99 
--secure-port=6443 
--advertise-address=10.0.3.99 
--allow-privileged=true 
--service-cluster-ip-range=10.254.0.0/16 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction 
--authorization-mode=RBAC,Node 
--enable-bootstrap-token-auth 
--token-auth-file=/k8s/kubernetes/cfg/token.csv 
--service-node-port-range=30000-50000 
--tls-cert-file=/k8s/kubernetes/ssl/server.pem  
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem 
--client-ca-file=/k8s/kubernetes/ssl/ca.pem 
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem 
--etcd-cafile=/k8s/etcd/ssl/ca.pem 
--etcd-certfile=/k8s/etcd/ssl/server.pem 
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Create the apiserver systemd unit

# cat /usr/lib/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Start kube-apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
# ps -ef | grep kube-apiserver
root     15963     1  1 Mar18 ?        00:34:36 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.3.99:2379,https://10.0.3.41:2379,https://10.0.3.44:2379 --bind-address=10.0.3.99 --secure-port=6443 --advertise-address=10.0.3.99 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem

# netstat -ntplu | grep kube-apiserve
tcp        0      0 10.0.3.99:6443          0.0.0.0:*               LISTEN      15963/kube-apiserve 
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      15963/kube-apiserve
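
A quick health check against the local insecure port (127.0.0.1:8080, shown listening above) should simply return ok:

curl http://127.0.0.1:8080/healthz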

3. Deploy the kube-scheduler component. Create the kube-scheduler configuration file:

# cat /k8s/kubernetes/cfg/kube-scheduler 
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

--address: listens for http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support serving https
--kubeconfig: path of the kubeconfig file that kube-scheduler uses to connect to and authenticate against kube-apiserver
--leader-elect=true: cluster mode with leader election enabled; the instance elected as leader does the work while the others stay blocked

Create the kube-scheduler systemd unit

# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Start kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl start kube-scheduler.service
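
As with the apiserver, a quick check that the scheduler is up (assuming the default --port of 10251; kube-controller-manager exposes the same endpoint on 10252 once it is running in the next step):

curl http://127.0.0.1:10251/healthz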

4. Deploy the kube-controller-manager component. Create the kube-controller-manager configuration file:

# cat /k8s/kubernetes/cfg/kube-controller-manager 
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true 
--v=4 
--master=127.0.0.1:8080 
--leader-elect=true 
--address=127.0.0.1 
--service-cluster-ip-range=10.254.0.0/16 
--cluster-name=kubernetes 
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem 
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  
--root-ca-file=/k8s/kubernetes/ssl/ca.pem 
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"

Create the kube-controller-manager systemd unit

# cat /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Start kube-controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

5. Verify the deployment

Set the environment variable

# tail /etc/profile

PATH=/k8s/kubernetes/bin:$PATH    # line added

# source /etc/profile
# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}

This completes the Kubernetes deployment on the master machine: kube-apiserver, kube-scheduler and kube-controller-manager all run there. After deploying each service, verify that it started correctly before moving on!

Node deployment

1. Install Docker

yum install -y yum-utils    # provides yum-config-manager
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker

2. Deploy the kubelet component

Install

wget https://dl.k8s.io/v1.13.1/kubernetes-node-linux-amd64.tar.gz
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/

Copy the relevant certificates to the node (run from the master)

# cd /k8s/kubernetes/ssl/
# scp *.pem node1:/k8s/kubernetes/ssl/

Create the kubelet bootstrap kubeconfig files via a script

# cat /k8s/kubernetes/cfg/environment.sh
#!/bin/bash
# create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=08e0de89211cb24b2a96a6d9ba011773
KUBE_APISERVER="https://10.0.3.99:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# create the kube-proxy kubeconfig

kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Run the script

# sh environment.sh 
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".

Several files are generated:
bootstrap.kubeconfig  environment.sh  kube-proxy.kubeconfig
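
You can inspect the generated kubeconfig files to confirm that the server address and credentials were embedded as expected:

kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig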

Create the kubelet parameter configuration template file

# cat /k8s/kubernetes/cfg/kubelet.config 
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.3.44    # use the IP address of the local node here
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Create the kubelet configuration file

# cat /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true 
--v=4 
--hostname-override=10.0.3.44 
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig 
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig 
--config=/k8s/kubernetes/cfg/kubelet.config 
--cert-dir=/k8s/kubernetes/ssl 
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Create the kubelet systemd unit

# cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role

This step is executed on the master machine:
# kubectl create clusterrolebinding kubelet-bootstrap \
    --clusterrole=system:node-bootstrapper \
    --user=kubelet-bootstrap

Start the kubelet service

systemctl daemon-reload 
systemctl enable kubelet 
systemctl start kubelet

Approve the node

This is done on the master!
The master approves the kubelet CSR requests. CSRs can be approved manually or automatically; the automatic way is recommended, because from v1.8 onward the certificates generated after approving a CSR can be rotated automatically. The manual approve flow is shown below. First, list the pending CSRs:
# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc   102s   kubelet-bootstrap   Pending

# kubectl certificate approve node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc
certificatesigningrequest.certificates.k8s.io/node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc approved

# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc   5m13s   kubelet-bootstrap   Approved,Issued

3. Deploy the kube-proxy component

kube-proxy runs on every node. It watches the apiserver for changes to Services and Endpoints and creates the routing rules that load-balance service traffic.

Create the kube-proxy configuration file

# cat /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true 
--v=4 
--hostname-override=10.0.3.44 
--cluster-cidr=10.254.0.0/16 
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

Create the kube-proxy systemd unit

# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Start the kube-proxy service

systemctl daemon-reload 
systemctl enable kube-proxy 
systemctl start kube-proxy
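
If kube-proxy came up in its default iptables mode, you should be able to see the chains it manages in the nat table (an optional sanity check):

iptables -t nat -S | grep KUBE-SERVICES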

4. Check the cluster status

# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.0.3.41   Ready    <none>   46h   v1.13.1
10.0.3.44   Ready    <none>   46h   v1.13.1

Flannel network deployment

There is no flannel network by default, so pods on different nodes cannot communicate; only pods on the same node can. To keep the deployment steps clear, flannel is installed last. The flannel service must start before docker. On startup, flanneld mainly does three things: it reads the network configuration from etcd, allocates a subnet for this host and registers it in etcd, and writes the subnet information to /run/flannel/subnet.env.

1. Register the pod network segment in etcd

Execute on the node machines (the key only needs to be written once, so running this on a single machine is enough; repeating it simply overwrites the same value)
# /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.3.99:2379,https://10.0.3.41:2379,https://10.0.3.44:2379"  set /k8s/network/config  '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
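
You can read the key back to confirm the network configuration was stored:

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.3.99:2379,https://10.0.3.41:2379,https://10.0.3.44:2379" get /k8s/network/config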

2. Install flanneld

tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/

3. Configure flanneld

# cat /k8s/kubernetes/cfg/flanneld 
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.3.99:2379,https://10.0.3.41:2379,https://10.0.3.44:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network"

Create the flanneld systemd unit

# cat /usr/lib/systemd/system/flanneld.service 
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
 
[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

4. Configure Docker to start with the flannel-assigned subnet. It is enough to change EnvironmentFile to /run/flannel/subnet.env and ExecStart to /usr/bin/dockerd $DOCKER_NETWORK_OPTIONS:

# cat /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
EnvironmentFile=/run/flannel/subnet.env           # only these two lines need to be changed
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

5. Start the services

Note: before starting flannel, stop docker (and restart kubelet afterwards) so that flannel's subnet settings can take over the docker0 bridge

systemctl daemon-reload
systemctl stop docker
systemctl start flanneld
systemctl enable flanneld
systemctl start docker
systemctl restart kubelet
systemctl restart kube-proxy

6. Verify the service

# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=10.254.97.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.254.97.1/24 --ip-masq=false --mtu=1450"

The network interfaces have also changed accordingly

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:7e:6d:a0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.44/24 brd 10.0.3.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet 10.0.3.42/23 brd 10.0.3.255 scope global dynamic enp0s3
       valid_lft 581645sec preferred_lft 581645sec
    inet6 fe80::40a3:99c:2118:ee95/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 02:42:3a:4d:00:fe brd ff:ff:ff:ff:ff:ff
    inet 10.254.97.1/24 brd 10.254.97.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:3aff:fe4d:fe/64 scope link 
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether c2:58:f2:91:bd:57 brd ff:ff:ff:ff:ff:ff
    inet 10.254.97.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::c058:f2ff:fe91:bd57/64 scope link 
       valid_lft forever preferred_lft forever
6: veth61df0b0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master docker0 state UP group default 
    link/ether a2:56:f5:e2:6e:de brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::a056:f5ff:fee2:6ede/64 scope link 
       valid_lft forever preferred_lft forever
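
As an optional final smoke test, you can start a small workload and check that pods land on both nodes and are reachable across them (a sketch only; the nginx image and replica count are arbitrary):

kubectl run nginx --image=nginx --replicas=2
kubectl get pods -o wide
# from one node, ping a pod IP that is running on the other node
ping <pod-ip-on-the-other-node>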

At this point the entire k8s cluster is installed!

Reference: https://www.kubernetes.org.cn/5025.html

I followed that installation document step by step and it worked on the first attempt without any problems; this article was written on the basis of that reference.