Heading into battle and still can't deploy a Kubernetes cluster?

Date: 2022-07-24

I want to show you that building a Kubernetes cluster is not hard, so roll up your sleeves and try it! How can a soldier head into battle without knowing how to deploy a Kubernetes cluster? This article uses kubeadm to initialize the cluster.

Table of Contents

1. What is Kubernetes

2. Environment preparation

3. Disabling the firewall

4. Disabling SELinux

5. Clock synchronization

6. Disabling the swap partition

7. Bridge traffic filtering

8. Enabling ipvs

9. Kubernetes package installation

10. kubelet configuration

11. Kubernetes core component installation

12. Kubernetes cluster initialization

13. Steps after a successful initialization

14. Conclusion

1

What is Kubernetes

Docker container technology lets us make better use of resources; the job of orchestrating and scheduling those containers is handled by Kubernetes, abbreviated k8s. The pattern is the same as internationalization, shortened to i18n because there are 18 characters between the i and the n, and localization, shortened to L10n. There are 8 characters between the K and the s in Kubernetes, hence k8s. For installing Docker, see the earlier article "Heading into battle and still can't use Docker?".

What is Kubernetes? See "Understand the core Kubernetes concepts in ten minutes": http://www.dockone.io/article/932

2

Environment preparation

# 3 hosts: 1 master that manages the cluster and 2 worker machines
192.168.229.129  master
192.168.229.130  node2
192.168.229.131  node1

Prerequisite: Docker has already been installed as described in the previous article, "Heading into battle and still can't use Docker?".

With Docker deployed on the master machine, how can we quickly clone two worker nodes?

This article uses VMware Workstation 15 Pro and starts from a single master VM.
1. Copy the master's VMware folder twice and rename the copies node1 and node2.
2. Open node1 and node2 in VMware.
3. Change the hostname and IP address of node1 and node2.
#1 Run the following on node1 and node2 respectively

hostnamectl set-hostname node1
hostnamectl set-hostname node2

After a reboot the new hostname is visible everywhere (the change made by hostnamectl itself takes effect immediately; logging out and back in refreshes the shell prompt).
#2 Change the IP address; do this on node1 and node2 respectively
[root@node1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=8a5c3f1d-668c-46a4-a385-cb4c4f77f9b1
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.229.131   # change this value #
GATEWAY=192.168.229.2
NETMASK=255.255.255.0
DNS1=119.29.29.29
After editing, restart the network with systemctl restart network.


#3 Edit the hosts file; add the following entries on all 3 hosts

[root@node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.229.129 master
192.168.229.130 node2
192.168.229.131 node1
# 4 Test name resolution; make sure master, node1 and node2 can all ping each other

[root@master ~]# ping node1
PING node1 (192.168.229.131) 56(84) bytes of data.
64 bytes from node1 (192.168.229.131): icmp_seq=1 ttl=64 time=0.647 ms
64 bytes from node1 (192.168.229.131): icmp_seq=2 ttl=64 time=0.354 ms
64 bytes from node1 (192.168.229.131): icmp_seq=3 ttl=64 time=0.576 ms
64 bytes from node1 (192.168.229.131): icmp_seq=4 ttl=64 time=0.397 ms
64 bytes from node1 (192.168.229.131): icmp_seq=5 ttl=64 time=0.647 ms

3

Disabling the firewall

CentOS 7 uses firewalld (earlier releases used iptables). The operation is the same on all 3 hosts.

  • Check the firewall status; Active: active (running) means it is enabled
[root@master ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-04-07 10:17:36 EDT; 53min ago
     Docs: man:firewalld(1)
Main PID: 752 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─752 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid


Apr 07 10:17:34 localhost.localdomain systemd[1]: Starting firewalld - dynami....
Apr 07 10:17:36 localhost.localdomain systemd[1]: Started firewalld - dynamic....
Hint: Some lines were ellipsized, use -l to show in full.
[root@master ~]#
  • Stop the firewall
[root@master ~]# systemctl stop firewalld
[root@master ~]#
  • Disable it at boot
[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]#
  • Verify firewalld's running state
[root@master ~]# firewall-cmd --state
not running
[root@master ~]#

4

Disabling SELinux

SELinux is a security module built into Linux and it also needs to be disabled. Its main purpose is to minimize the resources that service processes can access (the principle of least privilege). Perform the same operation on master, node1 and node2.

  • Check the SELinux status
[root@master ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31
[root@master ~]#
  • View the SELinux configuration
[root@node2 ~]# cat /etc/selinux/config


# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
[root@node2 ~]#
  • Edit the configuration: use sed to replace SELINUX=enforcing with SELINUX=disabled, then reboot

[root@node1 ~]# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@node1 ~]# reboot

  • Check again whether SELinux is disabled
[root@master ~]# getenforce
Disabled
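
If you would rather not reboot immediately, SELinux can also be switched off for the running system with setenforce. This is an extra step not in the original walkthrough; the config-file edit above is still what makes the change permanent.

setenforce 0        # permissive mode for the current boot only
getenforce          # now reports Permissive instead of Enforcing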

5

Clock synchronization

To keep time consistent across the whole cluster, synchronize the clocks. This article uses Alibaba Cloud's time source time1.aliyun.com; the operation is the same on all 3 hosts.

  • Install the clock synchronization tool ntpdate
[root@master ~]# yum -y install ntpdate
  • Synchronize the clock against the Alibaba Cloud time source
[root@master ~]# ntpdate time1.aliyun.com
8 Apr 07:49:22 ntpdate[3382]: adjust time server 203.107.6.88 offset 0.001845 sec
[root@master ~]#
  • Use crontab to re-synchronize the clock every hour
[root@master ~]# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab


# add the scheduled job
0 */1 * * * ntpdate time1.aliyun.com
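
As an aside not taken from the original article: CentOS 7 also ships chronyd, which keeps the clock continuously synchronized instead of relying on an hourly cron job. A minimal sketch, assuming you point it at the same Alibaba Cloud source:

yum -y install chrony
# edit /etc/chrony.conf and replace the default "server ..." lines with:
#   server time1.aliyun.com iburst
systemctl enable --now chronyd
chronyc sources      # verify the time source is reachable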

6

Disabling the swap partition

Because we deploy with kubeadm, and kubeadm requires the swap partition to be disabled, turn swap off. The operation is the same on all 3 hosts.

  • Edit the swap configuration, comment out the /dev/mapper/centos-swap swap entry, and reboot the system
[root@node1 ~]# vi /etc/fstab


#
# /etc/fstab
# Created by anaconda on Thu Sep  6 11:22:21 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=ac09ee1e-ad38-4b52-bbde-ebb1fa9b0720 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0   # comment out this entry
  • Check the result; swap is now off
[root@node1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1819         120        1395           9         303        1545
Swap:          0          0        0
[root@node1 ~]#
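
If you do not want to wait for a reboot, swap can also be turned off on the running system. A small sketch added here; the fstab edit above remains the permanent part:

swapoff -a          # disable all swap devices immediately
# optional: comment out the swap line in /etc/fstab non-interactively
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
free -m             # the Swap row should now show 0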

7

Bridge traffic filtering

Traffic inside the cluster crosses Linux bridges, so bridged packets need to be handed to iptables for filtering and IP forwarding must be enabled. The operation is the same on all 3 hosts.

  • Add the bridge filtering and address forwarding settings
[root@master ~]# vi /etc/sysctl.d/k8s.conf
[root@master ~]#
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
  • Load the br_netfilter module
[root@master ~]# modprobe br_netfilter
[root@master ~]#
  • Check whether the module is loaded
[root@master ~]# lsmod |grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
[root@master ~]#

[root@node1 ~]# lsmod |grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
[root@node1 ~]#

[root@node2 ~]# lsmod |grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
[root@node2 ~]#
  • Apply the bridge filtering configuration file
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
[root@master ~]#


[root@node1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
[root@node1 ~]#
[root@node1 ~]#

[root@node2 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
[root@node2 ~]#
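
Note that modprobe only loads br_netfilter until the next reboot. An extra step not covered in the original text, assuming the stock systemd-modules-load setup on CentOS 7, makes the module load automatically at every boot:

cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF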

8

Enabling ipvs

ipvs (IP Virtual Server) implements transport-layer load balancing, commonly called layer-4 LAN switching, as part of the Linux kernel. ipvs runs on a host and acts as a load balancer in front of a cluster of real servers. It can forward TCP/UDP service requests to the real servers and make their services appear as a virtual service on a single IP address. Install the tools on all 3 servers.

  • Install ipset and ipvsadm
[root@master ~]# yum -y install ipset ipvsadm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.huaweicloud.com
* extras: mirrors.nju.edu.cn
* updates: mirrors.huaweicloud.com
base                                                      | 3.6 kB  00:00:00
docker-ce-stable                                          | 3.5 kB  00:00:00
extras                                                    | 2.9 kB  00:00:00
updates                                                   | 2.9 kB  00:00:00
Package ipset-7.1-1.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package ipvsadm.x86_64 0:1.27-7.el7 will b
  • Add the kernel modules that need to be loaded
[root@master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
  • Make the script executable, run it, and check that the modules are loaded
[root@master modules]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod|grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139224  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
[root@master modules]#

[root@node1 modules]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod|grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139224  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
[root@node1 modules]#


[root@node2 modules]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod|grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139224  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
[root@node2 modules]#
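
Loading these modules only prepares the hosts; kube-proxy still runs in iptables mode by default. As a hedged sketch of a later step, not part of this walkthrough, you can switch it to ipvs once the cluster is up:

# run on the master after cluster initialization
kubectl -n kube-system edit configmap kube-proxy            # set mode: "ipvs"
kubectl -n kube-system delete pod -l k8s-app=kube-proxy     # recreate kube-proxy pods so they pick up the change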

9

Kubernetes package installation

kubeadm: initializes and manages the cluster

kubectl: the Kubernetes command-line client

kubelet: receives instructions from the apiserver and manages the pod lifecycle on the node

  • Create a Kubernetes yum repository file
[root@master]# cat <<EOF > k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg 
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master]# cp k8s.repo /etc/yum.repos.d/
  • Run yum list and import the GPG key; type y to import.
[root@node1 yum.repos.d]# yum list|grep kubeadm
Importing GPG key 0xA7317B0F:
Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
y
kubeadm.x86_64                              1.18.1-0                   kubernetes
[root@node1 yum.repos.d]#

You can see that the available kubeadm version is 1.18.1-0.

  • Copy k8s.repo to the other two hosts
### copy the yum repository file k8s.repo to the other two machines
[root@node1 yum.repos.d]# scp k8s.repo master:/etc/yum.repos.d/
The authenticity of host 'master (192.168.229.129)' can't be established.
ECDSA key fingerprint is SHA256:nwOdDM46149aQR0Z1xWI28SdUx7Pt1k46JCquIUYhKs.
ECDSA key fingerprint is MD5:ef:af:7b:62:fa:f7:fe:3c:86:87:bb:9a:f2:2a:46:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.229.129' (ECDSA) to the list of known hosts.
root@master's password:
k8s.repo                                                                                                                           100%  283   306.7KB/s   00:00
[root@node1 yum.repos.d]# scp k8s.repo node2:/etc/yum.repos.d/
The authenticity of host 'node2 (192.168.229.130)' can't be established.
ECDSA key fingerprint is SHA256:nwOdDM46149aQR0Z1xWI28SdUx7Pt1k46JCquIUYhKs.
ECDSA key fingerprint is MD5:ef:af:7b:62:fa:f7:fe:3c:86:87:bb:9a:f2:2a:46:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.229.130' (ECDSA) to the list of known hosts.
root@node2's password:
k8s.repo
  • Install kubeadm, kubelet and kubectl on all 3 servers
[root@node1 yum.repos.d]# yum -y install kubeadm kubelet kubectl
Loaded plugins: fastestmirror

[root@node2 ~]# yum -y install kubeadm kubelet kubectl
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com

[root@master ~]# yum -y install kubeadm kubelet kubectl
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.huaweicloud.com
* extras: mirrors.nju.edu.cn
* updates: mirrors.huaweicloud.com
Resolving Dependencies
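
yum -y install kubeadm kubelet kubectl installs whatever version is newest in the repository. If you want the install to stay reproducible and match the v1.18.1 images prepared later, you could pin the versions explicitly (a hedged variant; 1.18.1-0 is the version shown by yum list above):

yum -y install kubeadm-1.18.1-0 kubelet-1.18.1-0 kubectl-1.18.1-0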

10

kubelet configuration

Once kubeadm, kubelet and kubectl are installed, kubelet has to be configured first; do not run kubeadm init yet.

  • Align Docker's cgroup driver with kubelet's cgroup driver

cgroups are mainly about resource limits. Set both Docker's cgroup driver and kubelet's cgroup driver to systemd. Initially my Docker cgroup driver was cgroupfs, so I only changed kubelet's cgroup driver to cgroupfs to match, but the later initialization failed and said it had to be systemd.

  • Change Docker's cgroup driver
# you can check the current Cgroup Driver with the docker info command


To change Docker's Cgroup Driver,
edit the /etc/docker/daemon.json file:
{ 
"exec-opts": ["native.cgroupdriver=systemd"] 
}


Restart Docker:
systemctl daemon-reload 
systemctl restart docker
  • Change kubelet's Cgroup Driver

Edit /etc/sysconfig/kubelet

and add this line:
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

Notes:
--cgroup-driver specifies which cgroup driver kubelet uses.
In practice it has to be systemd.
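
A quick way to confirm that the two drivers now agree (a small check added here, not in the original article):

docker info --format '{{.CgroupDriver}}'     # should print: systemd
grep cgroup-driver /etc/sysconfig/kubelet    # should show --cgroup-driver=systemd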
  • Enable kubelet at boot, but do not start it yet. Same operation on all 3 hosts.
[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: inactive (dead)
     Docs: https://kubernetes.io/docs/
[root@master ~]#

11

Kubernetes core component installation

Because the cluster is deployed with kubeadm, all of its core components run as pods, so the required images have to be prepared on each host; different hosts need different images.

  • Installing the master images
Use kubeadm to list the required images
[root@master ~]# kubeadm config images list
W0412 02:51:21.169553    4554 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0412 02:51:21.169944    4554 version.go:103] falling back to the local client version: v1.18.1
W0412 02:51:21.170836    4554 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.1
k8s.gcr.io/kube-controller-manager:v1.18.1
k8s.gcr.io/kube-scheduler:v1.18.1
k8s.gcr.io/kube-proxy:v1.18.1
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
[root@master ~]#

k8s.gcr.io is a Google-hosted registry that cannot be reached from mainland China, so we need a domestic mirror. The strategy is to pull the images from a domestic mirror and then re-tag them as k8s.gcr.io. A small script does this; the Alibaba Cloud mirror found online is fast.

#!/bin/bash
images=(
    kube-apiserver:v1.18.1
    kube-controller-manager:v1.18.1
    kube-scheduler:v1.18.1
    kube-proxy:v1.18.1
    pause:3.2
    etcd:3.4.3-0
    coredns:1.6.7
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done

Save the script as kubeadm-images.sh and run it.

[root@master ~]# sh kubeadm-images.sh
v1.18.1: Pulling from google_containers/kube-apiserver
597de8ba0c30: Pull complete

As a result, the kube-* images have now been pulled down:
[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.1             4e68534e24f6        3 days ago          117MB
k8s.gcr.io/kube-controller-manager   v1.18.1             d1ccdd18e6ed        3 days ago          162MB
k8s.gcr.io/kube-apiserver            v1.18.1             a595af0107f9        3 days ago          173MB
k8s.gcr.io/kube-scheduler            v1.18.1             6c9320041a7b        3 days ago          95.3MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        8 weeks ago         683kB
python                               latest              efdecc2e377a        2 months ago        933MB
ruby                                 latest              0c1ee6efe061        2 months ago        842MB
<none>                               <none>              eb05bdb8e897        2 months ago        97.8MB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        2 months ago        43.8MB
php                                  crmeb-dt            abf1fc6b60ce        2 months ago        431MB
python                               3.7-alpine          6e6836872132        2 months ago        97.8MB
ubuntu                               18.04               ccc6e87d482b        2 months ago        64.2MB
mysql                                5.7                 b598110d0fff        2 months ago        435MB
mysql                                latest              3a5e53f63281        2 months ago        465MB
phpmyadmin/phpmyadmin                latest              c24a75debb40        2 months ago        469MB
nginx                                latest              c7460dfcab50        3 months ago        126MB
redis                                5.0                 9b188f5fb1e6        3 months ago        98.2MB
redis                                latest              9b188f5fb1e6        3 months ago        98.2MB
php                                  7.3-fpm             5be5d776e10e        3 months ago        398MB
php                                  7.4-fpm             fa37bd6db22a        3 months ago        405MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        5 months ago        288MB
[root@master ~]#
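
As an alternative to the re-tagging script (an assumption on my part, not what the article did), kubeadm itself can pull from a different registry. The images then keep that registry's name, so kubeadm init would also need the matching --image-repository flag:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.1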
  • Installing the core component images on worker nodes node1 and node2

The worker nodes only need two images, kube-proxy and pause. Copy the script to node1 and node2.

[root@master ~]# scp kubeadm-images.sh node1:/
The authenticity of host 'node1 (192.168.229.131)' can't be established.
ECDSA key fingerprint is SHA256:nwOdDM46149aQR0Z1xWI28SdUx7Pt1k46JCquIUYhKs.
ECDSA key fingerprint is MD5:ef:af:7b:62:fa:f7:fe:3c:86:87:bb:9a:f2:2a:46:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.229.131' (ECDSA) to the list of known hosts.
root@node1's password:
kubeadm-images.sh                                                                                                                  100%  490   503.4KB/s   00:00
[root@master ~]# scp kubeadm-images.sh node2:/
The authenticity of host 'node2 (192.168.229.130)' can't be established.
ECDSA key fingerprint is SHA256:nwOdDM46149aQR0Z1xWI28SdUx7Pt1k46JCquIUYhKs.
ECDSA key fingerprint is MD5:ef:af:7b:62:fa:f7:fe:3c:86:87:bb:9a:f2:2a:46:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.229.130' (ECDSA) to the list of known hosts.
root@node2's password:
Permission denied, please try again.
root@node2's password:
kubeadm-images.sh                                                                                                                  100%  490   483.5KB/s   00:00
[root@master ~]#

In kubeadm-images.sh, keep only kube-proxy and pause and delete the other entries:

#!/bin/bash
images=(
    kube-proxy:v1.18.1
    pause:3.2
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done

Run the script and check the result.

[root@node2 /]# sh kubeadm-images.sh
v1.18.1: Pulling from google_containers/kube-proxy
597de8ba0c30: Pull complete
3f0663684f29: Pull complete
e1f7f878905c: Pull complete
3029977cf65d: Pull complete
cc627398eeaa: Pull complete
d3609306ce38: Pull complete
492846b7a550: Pull complete
Digest: sha256:f9c0270095cdeac08d87d20828f3ddbc7cbc24b3cc6569aa9e7022e75c333d18
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.1
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.1
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy@sha256:f9c0270095cdeac08d87d20828f3ddbc7cbc24b3cc6569aa9e7022e75c333d18
3.2: Pulling from google_containers/pause
c74f8866df09: Pull complete
Digest: sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108
[root@node2 /]# docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy   v1.18.1             4e68534e24f6        3 days ago          117MB
k8s.gcr.io/pause        3.2                 80d28bedfe5d        8 weeks ago         683kB
python                  latest              efdecc2e377a        2 months ago        933MB
ruby                    latest              0c1ee6efe061        2 months ago        842MB
<none>                  <none>              eb05bdb8e897        2 months ago        97.8MB
php                     crmeb-dt            abf1fc6b60ce        2 months ago        431MB
python                  3.7-alpine          6e6836872132        2 months ago        97.8MB
ubuntu                  18.04               ccc6e87d482b        2 months ago        64.2MB
mysql                   5.7                 b598110d0fff        2 months ago        435MB
mysql                   latest              3a5e53f63281        2 months ago        465MB
phpmyadmin/phpmyadmin   latest              c24a75debb40        2 months ago        469MB
nginx                   latest              c7460dfcab50        3 months ago        126MB
redis                   5.0                 9b188f5fb1e6        3 months ago        98.2MB
redis                   latest              9b188f5fb1e6        3 months ago        98.2MB
php                     7.3-fpm             5be5d776e10e        3 months ago        398MB
php                     7.4-fpm             fa37bd6db22a        3 months ago        405MB
[root@node2 /]#


[root@node1 /]# docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy   v1.18.1             4e68534e24f6        3 days ago          117MB
k8s.gcr.io/pause        3.2                 80d28bedfe5d        8 weeks ago         683kB
python                  latest              efdecc2e377a        2 months ago        933MB

12

Cluster initialization

Run kubeadm init on the master node. --apiserver-advertise-address is the master host's IP and --pod-network-cidr is the pod network segment.

[root@node2 /]# kubeadm init --kubernetes-version=v1.18.1 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.229.129
W0412 04:26:46.161220    4999 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@node2 /]#

An error was reported because each of my three VMs had only 1 CPU; allocate 2 CPUs to each and this check passes. An error was reported once more later because Docker's and kubelet's cgroup driver had not been set to systemd.
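
For a throwaway test environment you could also skip that particular preflight check instead of adding CPUs (a hedged sketch; the flag takes the name of the check to ignore, which is why the bare --ignore-preflight-errors attempt in the transcript below fails with "flag needs an argument"):

kubeadm init --kubernetes-version=v1.18.1 --pod-network-cidr=172.16.0.0/16 \
    --apiserver-advertise-address=192.168.229.129 --ignore-preflight-errors=NumCPU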

  • If something goes wrong part-way, reset with kubeadm reset
[root@master ~]# kubeadm init --kubernetes-version=v1.18.1 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.229.129
W0412 05:48:23.971200    3631 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@master ~]# kubeadm init --kubernetes-version=v1.18.1 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.229.129 --ignore-preflight-errors
flag needs an argument: --ignore-preflight-errors
To see the stack trace of this error execute with --v=5 or higher
# reset the node
[root@master ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0412 05:51:31.866453    3712 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: configmaps "kubeadm-config" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0412 05:51:34.412980    3712 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]


The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d


The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.


If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.


The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@master ~]#

Run kubeadm init once more and the initialization finally succeeds, but we are not done yet.

[root@master ~]# kubeadm init --kubernetes-version=v1.18.1 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.229.129
W0412 05:53:17.658412    4025 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.229.129]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.229.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.229.129 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0412 05:53:26.428411    4025 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0412 05:53:26.430291    4025 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.504178 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: pa0yb0.r1cwvqexsr4nk1t0
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy


Your Kubernetes control-plane has initialized successfully!


To start using your cluster, you need to run the following as a regular user:


  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config


You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/


Then you can join any number of worker nodes by running the following on each as root:


kubeadm join 192.168.229.129:6443 --token pa0yb0.r1cwvqexsr4nk1t0 \
    --discovery-token-ca-cert-hash sha256:d0160e9bccfdf146fe8bbeb6d47ed7fd6bbb092a142c0a59ec07170b43579e1e
[root@master ~]#

13

Steps after a successful initialization

The output of a successful kubeadm init already spells out what to do next.

 1. Configure kubectl for the current user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 2. Deploy a pod network to the cluster:
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/


Then you can join any number of worker nodes by running the following on each as root:

3. Join the worker nodes:
kubeadm join 192.168.229.129:6443 --token pa0yb0.r1cwvqexsr4nk1t0 \
    --discovery-token-ca-cert-hash sha256:d0160e9bccfdf146fe8bbeb6d47ed7fd6bbb092a142c0a59ec07170b43579e1e
[root@master ~]#
  • A. Create the .kube directory and copy the config file
[root@master ~]# mkdir .kube
[root@master ~]# cp /etc/kubernetes/admin.conf ./.kube/config
[root@master ~]# cd .kube/
[root@master .kube]# ls
config
[root@master .kube]# ls -la
total 12
drwxr-xr-x   2 root root   20 Apr 12 09:44 .
dr-xr-x---. 11 root root 4096 Apr 12 09:43 ..
-rw-------   1 root root 5451 Apr 12 09:44 config
[root@master .kube]#
  • B. Create the pod network with the Calico plugin
# required images (plus the calico.yml manifest file)

calico/cni
calico/node
calico/kube-controllers
calico/pod2daemon-flexvol

Use a script to pull the required images:

#!/bin/bash
images=(
    calico/cni
    calico/node
    calico/kube-controllers
    calico/pod2daemon-flexvol
)
for imageName in ${images[@]} ; do
    docker pull ${imageName}
done
  • Run the script; same operation on all 3 hosts
[root@master ~]# sh calico-pull.sh

All 3 hosts need to download these Docker images, so copy the script to the other nodes:

[root@master ~]# scp calico-pull.sh node1:~/
root@node1's password:
calico-pull.sh                                                                                                                     100%  181    44.1KB/s   00:00
[root@master ~]#

[root@node1 ~]# scp calico-pull.sh node2:~
root@node2's password:
calico-pull.sh                                                                                                                     100%  181   131.3KB/s   00:00
[root@node1 ~]#

The Calico resource manifest file (calico.yml) to edit:

---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"
  # Configure the MTU to use
  veth_mtu: "1440"
  # The CNI network configuration to install on each node.  The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: blockaffinities.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BlockAffinity
    plural: blockaffinities
    singular: blockaffinity
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamblocks.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMBlock
    plural: ipamblocks
    singular: ipamblock
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamconfigs.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMConfig
    plural: ipamconfigs
    singular: ipamconfig
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamhandles.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMHandle
    plural: ipamhandles
    singular: ipamhandle
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networksets.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkSet
    plural: networksets
    singular: networkset
---
---
# Source: calico/templates/rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Nodes are watched to monitor for deletions.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - watch
      - list
      - get
  # Pods are queried to check for existence.
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
  # IPAM resources are manipulated when nodes are deleted.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
    verbs:
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  # Needs access to update clusterinformations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - clusterinformations
    verbs:
      - get
      - create
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
      - blockaffinities
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only requried for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
  # The Calico IPAM migration needs to get daemonsets. These permissions can be
  # removed if not upgrading from an installation using host-local IPAM.
  - apiGroups: ["apps"]
    resources:
      - daemonsets
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: calico/cni:v3.13.2
          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
          env:
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
          volumeMounts:
            - mountPath: /var/lib/cni/networks
              name: host-local-net-dir
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
          securityContext:
            privileged: true
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.13.2
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: calico/pod2daemon-flexvol:v3.13.2
          volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.13.2
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-live
              - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-ready
              - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the directory for host-local IPAM allocations. This is
        # used when upgrading from host-local to calico-ipam, and can be removed
        # if not using the upgrade-ipam init container.
        - name: host-local-net-dir
          hostPath:
            path: /var/lib/cni/networks
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: calico/kube-controllers:v3.13.2
          env:
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: node
            - name: DATASTORE_TYPE
              value: kubernetes
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml

Modify the resource manifest, then copy the modified file to node1 and node2.

Add lines 615 and 616 (the numbers refer to lines inside calico.yml):

  614               value: "autodetect"
  615             - name: IP_AUTODETECTION_METHOD
  616               value: "interface=ens.*"

Change line 630 to the pod network CIDR we chose earlier; lines 629 and 630 were previously commented out:

  628             # no effect. This should fall within `--cluster-cidr`.
  629             - name: CALICO_IPV4POOL_CIDR
  630               value: "172.16.0.0/16"
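
A quick check that both edits landed where expected (a small sketch added here; exact line numbers can shift between Calico releases):

grep -n -A1 "IP_AUTODETECTION_METHOD" calico.yml
grep -n -A1 "CALICO_IPV4POOL_CIDR" calico.yml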

  • Apply the resource manifest

The command kubectl apply -f calico.yml only needs to be run on the master.

[root@master ~]# kubectl apply -f calico.yml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@master ~]#

You can see that the network resources have been created.

  • C. Join the worker nodes node1 and node2 to the cluster.
Run the following on node1 and node2 respectively:

kubeadm join 192.168.229.129:6443 --token pa0yb0.r1cwvqexsr4nk1t0 \
    --discovery-token-ca-cert-hash sha256:d0160e9bccfdf146fe8bbeb6d47ed7fd6bbb092a142c0a59ec07170b43579e1e


That is all it takes.
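
The bootstrap token embedded in this command expires after 24 hours. If a node joins later, a fresh join command can be generated on the master (an added note, not from the original transcript):

kubeadm token create --print-join-command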


[root@node1 ~]# kubeadm join 192.168.229.129:6443 --token pa0yb0.r1cwvqexsr4nk1t0 \
>     --discovery-token-ca-cert-hash sha256:d0160e9bccfdf146fe8bbeb6d47ed7fd6bbb092a142c0a59ec07170b43579e1e
W0412 11:14:05.648400   15256 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


You have new mail in /var/spool/mail/root
[root@node1 ~]#



[root@node2 ~]# kubeadm join 192.168.229.129:6443 --token pa0yb0.r1cwvqexsr4nk1t0 \
>     --discovery-token-ca-cert-hash sha256:d0160e9bccfdf146fe8bbeb6d47ed7fd6bbb092a142c0a59ec07170b43579e1e
W0412 11:23:14.583015    2794 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


[root@node2 ~]#

4. Cluster check

Run kubectl get nodes on the master.


[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   5h30m   v1.18.1
node1    Ready    <none>   10m     v1.18.1
node2    Ready    <none>   66s     v1.18.1
[root@master ~]#

You can see that node1 and node2 have joined the cluster.
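
To go one step further (an added check, not in the original), confirm that the Calico and CoreDNS pods are Running as well:

kubectl get pods -n kube-system -o wide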

14

Conclusion

Next comes deploying my own applications, along with exploring the Kubernetes Dashboard and Rancher 2.0 for graphical management. Who doesn't love a GUI?

END