Installing the Latest Version of Kubernetes: Process and Caveats

Written on 2019-02-06, the second day of the first lunar month in the Jihai (己亥) Year of the Pig.
The latest version at the time of writing is v1.13.3.

During a JD.com sale in June 2018 I bought a copy of Kubernetes 权威指南 (The Definitive Guide to Kubernetes) but never found time to read it, so the Spring Festival holiday was a good opportunity to study it. The book is based on version 1.6.0 from 2017; since I wanted to use the latest version instead, I kept this record of the process.

Although I bought the book, the whole process drew on many other resources, primarily the official kubeadm documentation.

System firewall configuration
Prevent the firewall from starting on boot:
systemctl disable firewalld
Stop the firewall:
systemctl stop firewalld
Disable SELinux so that containers can read the host file system:
setenforce 0
Make the SELinux change permanent:
vi /etc/sysconfig/selinux
Set SELINUX to disabled:
SELINUX=disabled
#SELINUX=enforcing
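
Instead of editing the file by hand, the permanent change can also be made with sed; a minimal sketch, assuming the file still contains the default SELINUX=enforcing line:

# switch SELinux to disabled in the config file and verify the result
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux
grep '^SELINUX=' /etc/sysconfig/selinux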

When following the step that installs kubeadm, kubelet and kubectl in the official documentation, the script it provides uses addresses under https://packages.cloud.google.com, which is blocked and therefore unusable; we can use the Kubernetes repository provided by the Alibaba open source mirror site instead.

1. Install kubelet, kubeadm and kubectl

Use the Alibaba open source mirror site:

https://opsx.alibaba.com/mirror

Find kubernetes in the list and click "帮助" (Help); the instructions it shows are reproduced below.

I am using the latest CentOS 7 in a VMware virtual machine (minimal install); see my earlier post "VMware 虚拟机 最小化安装 CentOS 7 的 IP 配置" for its IP configuration.

Debian / Ubuntu

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF  
apt-get update
apt-get install -y kubelet kubeadm kubectl

CentOS / RHEL / Fedora

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

Docker CE

The Alibaba open source mirror site also hosts docker-ce; its help page points to the following article:

https://yq.aliyun.com/articles/110806

Note in particular: this article uses Kubernetes v1.13.3, and because I installed the latest Docker with the official Docker install script, kubeadm init printed the warning below:

[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06

Installing Docker 18.06 instead removes this warning; the article above also explains how to install a specific version, as sketched below.
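
If you want to pin Docker to a validated release on CentOS, something like the following should work. This is only a sketch: it assumes the docker-ce yum repository from the article above is already configured, and the exact 18.06 version string may differ on your system.

# list the available docker-ce versions, then install an 18.06 release
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce
systemctl enable docker && systemctl start docker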

Disable the firewall
Check the firewall status: firewall-cmd --state
Stop the firewall: systemctl stop firewalld.service
Prevent it from starting on boot: systemctl disable firewalld.service
Reference: https://www.linuxidc.com/Linux/2016-12/138979.htm

2. Docker registry mirror

Pulling Docker images from within China is slow, so a registry mirror can be used to speed things up.

Create or edit /etc/docker/daemon.json and add the following:

{
  "registry-mirrors": [
    "https://dockerhub.azk8s.cn",
    "https://reg-mirror.qiniu.com"
  ]
}
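
After changing daemon.json, restart Docker so the mirror configuration takes effect; docker info should then list the mirrors under "Registry Mirrors":

systemctl daemon-reload
systemctl restart docker
docker info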

Reference: https://yeasy.gitbooks.io/docker_practice/install/mirror.html

3. Run kubeadm init

Running this command surfaces quite a few problems; they are all listed here.

When kubeadm init runs, it first requests https://dl.k8s.io/release/stable-1.txt to determine the latest stable version number. That URL actually redirects to https://storage.googleapis.com/kubernetes-release/release/stable-1.txt , which returned v1.13.3 at the time of writing. Since the address is blocked, the request fails; to avoid the problem, specify the version explicitly:

kubeadm init --kubernetes-version=v1.13.3

You may run into the errors below when executing this command.

3.1 ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables

Problem:

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

Solution:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
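
The echo commands above do not survive a reboot. A minimal sketch of making the settings persistent through sysctl (the br_netfilter module must be loaded for these keys to exist):

modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system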

3.2 ERROR Swap

Problem:

[ERROR Swap]: running with swap on is not supported. Please disable swap

Solution: disable the swap partition:

# turn off swap for the current session
sudo swapoff -a
# also disable swap permanently: open the file below and comment out the swap line
sudo vi /etc/fstab
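
If you prefer a one-liner over editing fstab by hand, something like the sed command below should comment out the swap entry; this is only a sketch, so check /etc/fstab afterwards (a .bak backup is kept):

sudo sed -ri.bak '/\sswap\s/s/^/#/' /etc/fstab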

3.3 Images cannot be pulled

Problem (abridged):

error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.3
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.3
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.3
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.3
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6

Because k8s.gcr.io is blocked, these images cannot be downloaded directly; we can fetch them through another channel first and then re-run the command.

4. Pre-pull the images

The error messages above tell us which images are needed. At the moment, the Docker Hub user mirrorgooglecontainers mirrors all of the latest Kubernetes images, so we can pull from there and then re-tag them.
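
If you want to confirm the exact image list and tags required by your kubeadm version before pulling, kubeadm can print it:

kubeadm config images list --kubernetes-version v1.13.3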

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.3
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.3
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.3
docker pull mirrorgooglecontainers/kube-proxy:v1.13.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6

After the pulls finish, docker images shows the following:

REPOSITORY                                       TAG                 IMAGE ID            CREATED             SIZE
mirrorgooglecontainers/kube-apiserver            v1.13.3             fe242e556a99        5 days ago          181MB
mirrorgooglecontainers/kube-controller-manager   v1.13.3             0482f6400933        5 days ago          146MB
mirrorgooglecontainers/kube-proxy                v1.13.3             98db19758ad4        5 days ago          80.3MB
mirrorgooglecontainers/kube-scheduler            v1.13.3             3a6f709e97a0        5 days ago          79.6MB
coredns/coredns                                  1.2.6               f59dcacceff4        3 months ago        40MB
mirrorgooglecontainers/etcd                      3.2.24              3cab8e1b9802        4 months ago        220MB
mirrorgooglecontainers/pause                     3.1                 da86e6ba6ca1        13 months ago       742kB

Re-tag each of the images above:

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag mirrorgooglecontainers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
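
The pull and tag steps can also be combined into a small script; a sketch using the same image list as above:

#!/bin/bash
# pull the required images from Docker Hub and re-tag them as k8s.gcr.io
images=(
  kube-apiserver:v1.13.3
  kube-controller-manager:v1.13.3
  kube-scheduler:v1.13.3
  kube-proxy:v1.13.3
  pause:3.1
  etcd:3.2.24
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
done
# coredns is published under its own Docker Hub organization
docker pull coredns/coredns:1.2.6
docker tag  coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6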

With the images tagged, run the earlier command again:

kubeadm init --kubernetes-version=v1.13.3

5. Successful installation

The command produced the log below (keep this log; you will need parts of it later):

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.13.3
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.200.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.200.131 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.507393 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7j01ut.pbdh60q732m1kd4v
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.200.131:6443 --token 7j01ut.pbdh60q732m1kd4v --discovery-token-ca-cert-hash sha256:de1dc033ae5cc27607b0f271655dd884c0bf6efb458957133dd9f50681fa2723

6. Follow-up step from the output above (part 1)

The output asks you to run the following:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Only after this configuration is in place can you run commands with kubectl.

If the system has been rebooted, running kubectl may fail with something like this:

[root@k8s-master ~]# kubectl get pods
The connection to the server 192.168.200.131:6443 was refused - did you specify the right host or port?

I am not sure of the exact mechanism, but the root cause turned out to be swap: if swap was not permanently disabled as in section 3.2, it comes back after a reboot and Kubernetes then fails with the error above. For that reason, disable it permanently:

# disable swap permanently: open the file below and comment out the swap line
sudo vi /etc/fstab

7. Follow-up step from the output above (part 2)

The output also asks you to deploy a pod network:

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Following Kubernetes 权威指南 (which is based on v1.6), I install the Weave Net add-on here; its documentation is at:

https://www.weave.works/docs/net/latest/kubernetes/kube-addon/

As the documentation describes, run the following command:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

The output looks like this:

serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created

8. Follow-up step from the output above (part 3)

The output asks you to run the following as root on each worker node:

kubeadm join 192.168.200.131:6443 --token 7j01ut.pbdh60q732m1kd4v --discovery-token-ca-cert-hash sha256:de1dc033ae5cc27607b0f271655dd884c0bf6efb458957133dd9f50681fa2723

Note: do not copy the command shown here; use the one from your own installation output.
The token in this command may only be valid for 24 hours; if it has expired, see the following page for how to obtain a new token:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#join-nodes
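
In practice, the simplest way to get a fresh join command is to generate one on the master; kubeadm can print a complete command with a new token:

# run on the master; prints a ready-to-use "kubeadm join ..." command
kubeadm token create --print-join-command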

If you are building a cluster from multiple hosts, run the command above on each additional host to join it to the cluster.

Since my setup is only for experimentation, I use a single-node cluster instead.

9. Single-node cluster

Official documentation: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

First check the current pod status with the following command:

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-9s65p             0/1     Error     0          59m
kube-system   coredns-86c58d9df4-dvg7b             0/1     Error     0          59m
kube-system   etcd-k8s-master                      1/1     Running   3          58m
kube-system   kube-apiserver-k8s-master            1/1     Running   3          58m
kube-system   kube-controller-manager-k8s-master   1/1     Running   3          58m
kube-system   kube-proxy-5p4d8                     1/1     Running   3          59m
kube-system   kube-scheduler-k8s-master            1/1     Running   3          58m
kube-system   weave-net-j87km                      1/2     Running   2          16m

Two pods are in the Error state and I am not sure why. Next, run the single-node cluster command, which removes the master taint so pods can be scheduled on it:

kubectl taint nodes --all node-role.kubernetes.io/master-

Check the status again:

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-9s65p             1/1     Running   1          60m
kube-system   coredns-86c58d9df4-dvg7b             1/1     Running   1          60m
kube-system   etcd-k8s-master                      1/1     Running   3          59m
kube-system   kube-apiserver-k8s-master            1/1     Running   3          59m
kube-system   kube-controller-manager-k8s-master   1/1     Running   3          59m
kube-system   kube-proxy-5p4d8                     1/1     Running   3          60m
kube-system   kube-scheduler-k8s-master            1/1     Running   3          59m
kube-system   weave-net-j87km                      2/2     Running   3          16m

Everything is Running now.

Log out and take a VM snapshot (I took three snapshots at different stages while working through this article).

10. Virtual machine backup download

To make things easier for myself and others (for example if the installation keeps failing with inexplicable errors and you just want to skip it), here is a backup of the virtual machine that can be used directly.

VMware version: 15.0.0 build-10134415
VM backup link: https://pan.baidu.com/s/1s3FZtcvONgFXAmz1AUU9_w
Extraction code: tbi2

Login user: root
Login password: jj
For the VM's IP configuration, see my post "VMware 虚拟机 最小化安装 CentOS 7 的 IP 配置".

What should you do if you want to change the IP address? (The whole procedure is sketched as a script after this list.)

  1. First change the IP, then restart the network service.
  2. Reset Kubernetes: kubeadm reset
  3. Install again: kubeadm init --kubernetes-version=v1.13.3
  4. You may hit the problem from section 3.1 again; fix it the same way.
  5. First delete the $HOME/.kube directory created in section 6: rm -rf $HOME/.kube
  6. Repeat the steps from "6. Follow-up step from the output above (part 1)".
  7. Repeat the steps from "7. Follow-up step from the output above (part 2)".
  8. For a single-node cluster, also run kubectl taint nodes --all node-role.kubernetes.io/master-
  9. Done, back at full strength: run kubectl get pods --all-namespaces to check the status.
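
A minimal sketch of the same recovery procedure as one script, assuming the IP has already been changed and the network service restarted; kubeadm reset -f skips the confirmation prompt, and the Weave URL is the one used in section 7:

#!/bin/bash
set -e
kubeadm reset -f                                        # wipe the old cluster state
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables   # the section 3.1 fix, in case it was lost
kubeadm init --kubernetes-version=v1.13.3
rm -rf "$HOME/.kube"                                    # drop the stale kubeconfig
mkdir -p "$HOME/.kube"
cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
chown "$(id -u):$(id -g)" "$HOME/.kube/config"
# re-deploy the pod network (Weave Net)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# single-node cluster: allow pods to be scheduled on the master
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get pods --all-namespaces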

11. Summary

This article was written while performing and verifying each step; by the time it was finished, my own setup was working. Going through everything from scratch took quite a while, but it went fairly smoothly. It only sets up an experimental Kubernetes environment, so everything is just getting started!

12. Additional common problems

Problems I ran into during later use are collected here.

12.1 Pod stuck in Pending

For example:

[root@k8s-master chapter01]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
mysql-4wvzz   0/1     Pending   0          4m1s

You can inspect the pod's state with the following command:

[root@k8s-master chapter01]# kubectl describe pods mysql-4wvzz
Name:               mysql-4wvzz
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=mysql
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicationController/mysql
Containers:
  mysql:
    Image:      mysql
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      MYSQL_ROOT_PASSWORD:  123456
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rksdn (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-rksdn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rksdn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  28s (x3 over 103s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

One warning stands out: 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

The pod cannot be deployed because no node is available for scheduling. If you are running a single-node cluster, you probably forgot to run the following command:

kubectl taint nodes --all node-role.kubernetes.io/master-

For a multi-node cluster, check the node status with kubectl get nodes and make sure at least one node is available.

After the problem is fixed, describe the pod again; the Events section now shows the following:

Events:
  Type     Reason            Age                 From                 Message
  ----     ------            ----                ----                 -------
  Warning  FailedScheduling  51s (x9 over 6m6s)  default-scheduler    0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Normal   Scheduled         10s                 default-scheduler    Successfully assigned default/mysql-4wvzz to k8s-master
  Normal   Pulling           9s                  kubelet, k8s-master  pulling image "mysql"
  Normal   Pulled            7s                  kubelet, k8s-master  Successfully pulled image "mysql"
  Normal   Created           7s                  kubelet, k8s-master  Created container
  Normal   Started           7s                  kubelet, k8s-master  Started container

The problem is now resolved.

12.2 Weave CNI not working; switching to flannel
