Installing kubectl and minikube on CentOS

Date: 2022-07-23

kubectl and minikube are two important tools for deploying a Kubernetes cluster. This article covers how to install both of them.

Installation environment: a CentOS 7 virtual machine

Part 1: Installing kubectl

kubectl is the Kubernetes command-line tool; against a cluster it lets you deploy applications and inspect and manage cluster resources. The kubectl version should not diverge too far from the cluster version (officially, kubectl is supported within one minor version of the cluster); when in doubt, use the latest release.
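The skew rule can be sketched in shell. This is a hedged illustration only: the version strings below are hard-coded examples, whereas in practice you would parse the output of `kubectl version`.

```shell
# Compare the minor versions of a client and a server version string.
# Values are hard-coded for illustration.
client="v1.17.0"
server="v1.15.3"
client_minor=$(echo "$client" | cut -d. -f2)
server_minor=$(echo "$server" | cut -d. -f2)
skew=$((client_minor - server_minor))
if [ "${skew#-}" -le 1 ]; then
    echo "within supported skew"
else
    echo "skew too large: $skew"   # prints "skew too large: 2" here
fi
```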

1. Download the binary. Of the two commands below, the first downloads the latest stable release and the second downloads a specific version:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl

2. The downloaded file is not executable; add execute permission:

chmod +x kubectl

3. Move the binary into a directory on your PATH:

mv ./kubectl /usr/local/bin/kubectl

4. Check the installed version:

kubectl version --client
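Before trusting the binary, it is also worth verifying it against a published checksum; the release server hosts a .sha256 file alongside each binary (an assumption worth confirming for your version). The mechanics are sketched here on a scratch file so the commands run without any download:

```shell
# Sketch of checksum verification, demonstrated on a local scratch file.
# For the real binary you would fetch kubectl.sha256 from the same
# release URL and check against that instead.
printf 'fake-binary-content' > /tmp/kubectl-demo
sha256sum /tmp/kubectl-demo | awk '{print $1}' > /tmp/kubectl-demo.sha256
# sha256sum --check expects "<hash>  <filename>" (two spaces) on stdin:
echo "$(cat /tmp/kubectl-demo.sha256)  /tmp/kubectl-demo" | sha256sum --check
# prints: /tmp/kubectl-demo: OK
```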

Part 2: Installing a hypervisor

Because my installation environment is already a virtual machine, I will not run another hypervisor inside it.

If you need one, VirtualBox is recommended.

Install it with yum as follows.

Create the file /etc/yum.repos.d/virtualbox.repo with this content:

[virtualbox]
name=Oracle Linux / RHEL / CentOS-$releasever / $basearch - VirtualBox
baseurl=http://download.virtualbox.org/virtualbox/rpm/el/$releasever/$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://www.virtualbox.org/download/oracle_vbox.asc

Then run:

yum install VirtualBox-6.0

Answer "y" at each prompt and the installation completes.
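For scripted installs, the repo file can be written with a heredoc and the prompts suppressed with -y. A minimal sketch, writing to /tmp instead of /etc/yum.repos.d so it runs without root:

```shell
# Write the VirtualBox repo file non-interactively. The heredoc delimiter
# is quoted so $releasever/$basearch stay literal for yum to expand later.
repo=/tmp/virtualbox.repo    # real path: /etc/yum.repos.d/virtualbox.repo
cat > "$repo" <<'EOF'
[virtualbox]
name=Oracle Linux / RHEL / CentOS-$releasever / $basearch - VirtualBox
baseurl=http://download.virtualbox.org/virtualbox/rpm/el/$releasever/$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://www.virtualbox.org/download/oracle_vbox.asc
EOF
grep '^baseurl' "$repo"
# yum install -y VirtualBox-6.0   # -y answers every prompt with "y"
```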

Part 3: Installing Minikube

1. Minikube runs a single-node Kubernetes cluster inside a virtual machine.

Before installing, check whether the machine supports hardware virtualization. Any output from the following command means it does:

grep -E --color 'vmx|svm' /proc/cpuinfo

My machine is a VMware virtual machine running on Windows, so as explained in Part 2 I will not install another hypervisor.

If the command above produces no output inside the VM, virtualization can be enabled as follows.

Shut down the VM, enable the CPU virtualization option in its settings, then start it again.

2. There are three ways to install minikube: a distribution package, a standalone binary, or Homebrew. Here I use the binary; download it with:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube

Add the minikube executable to your PATH:

sudo mkdir -p /usr/local/bin/
sudo install minikube /usr/local/bin/
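`install` both copies the file and sets executable permissions (its default mode is 0755), so the separate chmod step used for kubectl earlier is unnecessary here. A small sketch on scratch paths:

```shell
# `install` copies and sets mode 0755 in one step, unlike `mv`/`cp`,
# which keep the source file's mode. Demonstrated on scratch paths.
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho ok\n' > /tmp/minikube-demo
install /tmp/minikube-demo /tmp/demo-bin/minikube-demo
stat -c '%a' /tmp/demo-bin/minikube-demo   # prints: 755
```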

3. Start minikube:

minikube start --vm-driver=virtualbox

This failed. The error message indicated that when running directly inside a VM, the driver value should be none:

minikube start --vm-driver=none

It failed again. Shut down the VM, increase the number of CPUs (minikube requires at least 2), and start it again.

4. Start minikube again:

minikube start --vm-driver=none

Switch to the Aliyun image mirror and run the start command again:

minikube start --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --vm-driver=none

This step downloads kubectl, kubelet, and kubeadm (about 400 MB in total), so it is slow; be patient. Once the download finishes, the cluster starts automatically.

It failed again; the error showed that the apiserver did not start, which can be confirmed with: minikube status

Some online sources say swap must be disabled. I disabled it with swapoff -a and re-ran the minikube start command, but it still failed.
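Note that swapoff -a only disables swap until the next reboot; to make it permanent you would also comment out the swap entry in /etc/fstab. A sketch for checking whether swap is currently active, reading /proc/swaps (one header line plus one line per active swap device):

```shell
# Count active swap devices; 0 means swap is off, which is what
# kubeadm's preflight check wants.
swap_lines=$(tail -n +2 /proc/swaps 2>/dev/null | wc -l)
echo "active swap devices: $swap_lines"
# Permanent disable (shown commented out; it edits /etc/fstab):
# sudo swapoff -a && sudo sed -i '/ swap / s/^/#/' /etc/fstab
```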

I ran minikube delete and then the start command once more. This time there were many error logs:

X Error starting cluster: init failed. output:

-- stdout --
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.59.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.59.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
-- /stdout --

** stderr **
W0227 14:11:35.372509   50864 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0227 14:11:35.372637   50864 validation.go:28] Cannot validate kubelet config - no validator is available
        [WARNING Firewalld]: firewalld is active, please ensure ports [8443 10250] are open or your cluster may not function correctly
        [WARNING FileExisting-socat]: socat not found in system path
W0227 14:11:42.309644   50864 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0227 14:11:42.313619   50864 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
** /stderr **

Failed command: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": exit status 1

* minikube is exiting due to an error. If the above information is not helpful, please file an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose

That is a lot of information, and it is hard to spot the cause at a glance. Given the symptom, an apiserver that fails to start, look at the last line of the error log:

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

Searching for this message turns up many articles; some suggest disabling SELinux. I disabled SELinux and rebooted, and that fixed it: after the reboot the apiserver started successfully.
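Disabling SELinux permanently means editing /etc/selinux/config and rebooting; setenforce 0 switches to permissive mode immediately but does not survive a reboot. The edit is sketched here on a copy of the file so it runs without root:

```shell
# Change SELINUX=enforcing to SELINUX=disabled. Demonstrated on a
# scratch copy; the real file is /etc/selinux/config.
conf=/tmp/selinux-config-demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$conf"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$conf"
grep '^SELINUX=' "$conf"   # prints: SELINUX=disabled
# Immediate, reboot-volatile alternative: sudo setenforce 0  (permissive mode)
```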

References:

https://kubernetes.io/docs/tasks/tools/install-minikube/
https://www.itzgeek.com/how-tos/linux/centos-how-tos/install-virtualbox-4-3-on-centos-7-rhel-7.html
https://kubernetes.io/docs/tasks/tools/install-kubectl/#download-as-part-of-the-google-cloud-sdk
https://forum.level1techs.com/t/kubeadm-for-kubernetes-chicken-and-egg-problem-during-setup-what-am-i-doing-wrong/129086/4