Bug report
kubeadm version (from kubeadm version): "v1.12.0-alpha.0.957+1235adac3802fd-dirty"
I created a control-plane node with kubeadm init. I ran kubeadm join on a separate node and got the following error message:
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:4ipkdk" cannot get configmaps in the namespace "kube-system"
I expected kubeadm join to complete successfully.
As far as I know, I just ran kubeadm init and then kubeadm join on another node. I have a bunch of extra code/yaml that should not affect the configmaps (it is needed for a happy AWS deployment). If this turns out not to be reproducible, though, I will provide more detailed instructions.
I also think kubeadm join and kubeadm init are naming the configmap inconsistently. The init command uses the kubernetesVersion specified in the config file, while the join command uses the kubelet version as the name of the configmap (e.g. kubelet-config-1.1). Unless you have mismatched versions, this is fine.
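The mismatch described above can be reproduced with string handling alone, no cluster needed: both sides build the name as kubelet-config-&lt;major&gt;.&lt;minor&gt;, but from different version sources. A minimal sketch (the two version values are hypothetical examples):

```shell
# Build the ConfigMap name the way kubeadm does: kubelet-config-<major>.<minor>.
# init derives the version from kubernetesVersion in the config file;
# join derives it from the locally installed kubelet.
config_map_name() {
  v="${1#v}"                      # strip leading "v": v1.11.0 -> 1.11.0
  echo "kubelet-config-${v%.*}"   # drop the patch level: 1.11.0 -> 1.11
}

config_map_name "v1.11.0"   # what init writes  -> kubelet-config-1.11
config_map_name "v1.12.0"   # what join fetches -> kubelet-config-1.12
```

When the two inputs differ in their minor version, join asks for a ConfigMap that was never created.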
The init command sets up RBAC rules that allow anonymous access to configmaps in the kube-public namespace, but it does not appear to put the kubelet configuration in that public namespace, so a joining node has no permission to read it.
@chuckha If I remember correctly, the kubelet-* configmaps should be in kube-system, and kubeadm creates the rules that grant access to bootstrap tokens and nodes. But I will check again after the last round of changes.
@chuckha I successfully completed a join on a cluster with all components built from master + version numbers forced to v1.11.0:
- kubelet-config-1.11 is created in kube-system
- a role kubeadm:kubelet-config-1.11 is created in kube-system, granting get permission on the configmap
- a role binding kubeadm:kubelet-config-1.11 is created for system:nodes and system:bootstrappers:kubeadm:default-node-token
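For reference, the RBAC objects listed above would look roughly like this. This is a sketch reconstructed from the object names in this thread, not a dump from a real cluster:

```yaml
# Role granting read access to the versioned kubelet ConfigMap (sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeadm:kubelet-config-1.11
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubelet-config-1.11"]
  verbs: ["get"]
---
# RoleBinding granting that role to nodes and bootstrap tokens (sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeadm:kubelet-config-1.11
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:kubelet-config-1.11
subjects:
- kind: Group
  name: system:nodes
- kind: Group
  name: system:bootstrappers:kubeadm:default-node-token
```

Note that the role name and the resourceNames entry both carry the version suffix, which is why a node asking for a differently versioned ConfigMap gets a 403 rather than a 404.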
So, IMO, the part that still needs investigation is:
"The init command uses the kubernetesVersion specified in the config file, while the join command uses the kubelet version as the name of the configmap (e.g. kubelet-config-1.1)."
Yes, this has to be a version problem.
I didn't set a version at build time, so the binaries all think they are 1.12.0+, but I installed and forced kubeadm to use v1.11.
This resulted in:
root@ip-10-0-0-7:~# k get cm -n kube-system
NAME DATA AGE
...
kubelet-config-1.11 1 32m
Then, when joining:
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:u3ns6m" cannot get configmaps in the namespace "kube-system"
I will rebuild the binaries, force them to the correct version, and try again.
Everything works when the kubelet and kubeadm versions match. As a final, less urgent suggestion: fix the (possibly intentional) inconsistency between how the configmap is written and how it is fetched.
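Until that inconsistency is resolved, one workaround (assuming the only problem is the name mismatch) is to copy the ConfigMap the master wrote under the name the joining node asks for. The kubectl pipeline needs a real cluster, so it appears only as a comment; the name rewrite itself can be sketched locally:

```shell
# Rewrite the version suffix in a ConfigMap manifest so a 1.12 node can find it.
# Locally we feed sed a tiny stand-in manifest; against a real cluster you would
# pipe `kubectl get` into `kubectl apply` instead (see comment below).
printf 'kind: ConfigMap\nmetadata:\n  name: kubelet-config-1.11\n' \
  | sed 's/kubelet-config-1\.11/kubelet-config-1.12/'

# Against a real cluster (not run here), roughly:
#   kubectl -n kube-system get cm kubelet-config-1.11 -o yaml \
#     | sed 's/kubelet-config-1\.11/kubelet-config-1.12/' \
#     | kubectl -n kube-system apply -f -
# The matching role and rolebinding would need the same duplication.
```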
I'm not using kubeadm init; I'm invoking the individual phases separately. The configmap is missing from kube-system, and the permissions for it are not set up correctly either. Which phase is responsible for this?
@drewwells I'm hitting the same problem you are. I'm running the phases one by one and there are no configmaps.
sudo kubectl get cm -n kube-system --kubeconfig=/etc/kubernetes/admin.conf
NAME DATA AGE
calico-config 2 11m
coredns 1 15m
extension-apiserver-authentication 6 15m
kube-proxy 2 15m
Did you find a solution?
I should also mention that all components are at 1.11.4.
To take this further: I bootstrapped a cluster with kubeadm init, and now I do have the correct configmap:
ubuntu@master-1-test2:~$ sudo kubectl get cm -n kube-system --kubeconfig=/etc/kubernetes/admin.conf
NAME DATA AGE
coredns 1 41m
extension-apiserver-authentication 6 41m
kube-proxy 2 41m
kubeadm-config 1 41m
kubelet-config-1.11 1 41m
ubuntu@master-1-test2:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.4", GitCommit:"bf9a868e8ea3d3a8fa53cbb22f566771b3f8068b", GitTreeState:"clean", BuildDate:"2018-10-25T19:13:39Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
On the node:
$ sudo kubeadm -v=10 join k8s.oz.noris.de:6443 --token 36etul.nv5lz6hjfifdd4c9 --discovery-token-unsafe-skip-ca-verification I1107 12:57:01.340638 3631 join.go:226] [join] found NodeName empty
I1107 12:57:01.340816 3631 join.go:227] [join] considered OS hostname as NodeName
[preflight] running pre-flight checks
I1107 12:57:01.341152 3631 join.go:238] [preflight] running various checks on all nodes
I1107 12:57:01.341265 3631 checks.go:253] validating the existence and emptiness of directory /etc/kubernetes/manifests
I1107 12:57:01.341677 3631 checks.go:291] validating the existence of file /etc/kubernetes/pki/ca.crt
I1107 12:57:01.341774 3631 checks.go:291] validating the existence of file /etc/kubernetes/kubelet.conf
I1107 12:57:01.341857 3631 checks.go:291] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1107 12:57:01.341947 3631 kernelcheck_linux.go:45] validating the kernel module IPVS required exists in machine or not
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_
vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I1107 12:57:01.349448 3631 checks.go:138] validating if the service is enabled and active
I1107 12:57:01.361957 3631 checks.go:340] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1107 12:57:01.362034 3631 checks.go:340] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1107 12:57:01.362076 3631 checks.go:653] validating whether swap is enabled or not
I1107 12:57:01.362134 3631 checks.go:381] validating the presence of executable crictl
I1107 12:57:01.362204 3631 checks.go:381] validating the presence of executable ip
I1107 12:57:01.362244 3631 checks.go:381] validating the presence of executable iptables
I1107 12:57:01.362281 3631 checks.go:381] validating the presence of executable mount
I1107 12:57:01.362320 3631 checks.go:381] validating the presence of executable nsenter
...
[discovery] Trying to connect to API Server "mycluster.example.com:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://mycluster.example.com:6443"
I1107 12:57:01.487256 3631 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.11.4 (linux/amd64) kubernetes/bf9a868" 'https://mycluster.example.com:6443/api/v1/namespaces/kube-public/config
maps/cluster-info'
I1107 12:57:01.504539 3631 round_trippers.go:405] GET https://mycluster.example.com:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 17 milliseconds
I1107 12:57:01.504720 3631 round_trippers.go:411] Response Headers:
I1107 12:57:01.504818 3631 round_trippers.go:414] Content-Type: application/json
I1107 12:57:01.504914 3631 round_trippers.go:414] Content-Length: 2217
I1107 12:57:01.505003 3631 round_trippers.go:414] Date: Wed, 07 Nov 2018 12:57:01 GMT
I1107 12:57:01.505174 3631 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"cluster-info","namespace":"kube-public","selfLink":"/api/v1/namespaces/kube-public/configmaps/cluster-info","uid":"97e9
a3d1-e286-11e8-9272-fa163ef9e3af","resourceVersion":"425","creationTimestamp":"2018-11-07T12:13:53Z"},"data":{"jws-kubeconfig-36etul":"eyJhbGciOiJIUzI1NiIsImtpZCI6IjM2ZXR1bCJ9..zRgexonkjOpLJS0q3IignURwTcpBuQy7gv35Qhhsl_k","jws-kubeconfig-
eth6o8":"eyJhbGciOiJIUzI1NiIsImtpZCI6ImV0aDZvOCJ9..kWj4cI2j1WgKfNG07IGiIij4CSb9kWUbaM2mixlYThY","jws-kubeconfig-rbxd02":"eyJhbGciOiJIUzI1NiIsImtpZCI6InJieGQwMiJ9..HwIWDwfIbAjNM1EGbWdXYOhC8z1MxgwuzhjlJRaZ_pc","kubeconfig":"apiVersion: v1\n
clusters:\n- cluster:\n certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1URXdNekEwTlRrME4xb1hEVEk0TVRBek1UQTB
OVGswTjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTU9BCkNHQU5jUjVRQWV3MlljY2V0eWVyYktiODd4RWRPVlp2aUdneElrbkpKTTZwZFVBbzMwSWVxckRqSnlFaTFVeDcKU0c5NS9sRlBqU1htdHhhNHMvc1g1KzNTVW4zZ
EtFRWw5TFhXa0lzeTRJYzRFUTMwWE9WcnNuYTYwN1UzNmQyaAp3NHdTK1dveE5QR3dqZDM2bXQzMFR4bUluYk54ZVl5d2NnVU1tMlZFZXM4dGhVaVhZMXB1N1Y2SUNCY243cE9NCkdoT2xlRXg4SmlEVnhuSGlpSm9oYytCbGNIdHdLU1pzK2cvZUhwdGdlSDdaQlZNRC8zZVFvZXVsUGVvTEkwamEKc09jTENMTkpEVVB
LUWJqRnRNbkFZSXVvOENHSXpFTzBDaDZNeW5vb1pTL0E0bEs1MXJmTVdkTkZ4N0dVdnQxYQo5KzZzMHo2NEpHeVFBdmtBcWhVQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEbWs1NEYzZ1BqOS91NzlRbTg2V1Mzc
k5YaFoKZG16Wmt3TXRDajRuTXdsSndGQy9iZUU4ZUdsWnFxWDcrdEpYUDVaY0xLNE1pSnM1U2JTMjd5NDF3WTRRTFFWaQpVWmRocEFHUTBOSlpHSGhWMTVDczlVQTA1ZTFNajNCaHZ6SG5VV2t1ZUhYbW84VmI4SkI5RGloeGdiUW5GY2FQCjRWcVhWY0pBemxVQ0V5aXhreVRGendZTklJbzJHdGtCdlI1YkxCM0doT2R
sQURmQzEwdzgvTmQveFFmRnRWdmYKL3lHaktpbW8rT2xERkV5YittcHVKMVdiN3Y3bnJJSzlSSy9WbVhUWENiOWZLQ3BmQ0hMU0hpa0lEWklZK0wxTQpwbWpXYXZFcjFLSlE5UEJIYmdZSHkxK1F0bkpXRDNjNnJrOUtoNU1zMFhTVmpBc2Z1RWdXaG9CYnlVdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=\n
server: https://k8s.oz.noris.de:6443\n name: \"\"\ncontexts: []\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers: []\n"}}
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "k8s.oz.noris.de:6443"
[discovery] Successfully established connection with API Server "mycluster.example.com:6443"
I1107 12:57:01.509945 3631 join.go:260] [join] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1107 12:57:01.617006 3631 loader.go:359] Config loaded from file /etc/kubernetes/bootstrap-kubelet.conf
I1107 12:57:01.617871 3631 join.go:283] Stopping the kubelet
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
I1107 12:57:01.627838 3631 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.11.4 (linux/amd64) kubernetes/bf9a868" -H "Authorization: Bearer 36etul.nv5lz6hjfifdd4c9" 'https://mycluster.example.com:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.12'
I1107 12:57:01.639396 3631 round_trippers.go:405] GET https://mycluster.example.com:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.12 403 Forbidden in 11 milliseconds
I1107 12:57:01.639427 3631 round_trippers.go:411] Response Headers:
I1107 12:57:01.639443 3631 round_trippers.go:414] Content-Length: 311
I1107 12:57:01.639464 3631 round_trippers.go:414] Date: Wed, 07 Nov 2018 12:57:01 GMT
I1107 12:57:01.639477 3631 round_trippers.go:414] Content-Type: application/json
I1107 12:57:01.639492 3631 round_trippers.go:414] X-Content-Type-Options: nosniff
I1107 12:57:01.639525 3631 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"kubelet-config-1.12\" is forbidden: User \"system:bootstrap:36etul\" cannot get confi
gmaps in the namespace \"kube-system\"","reason":"Forbidden","details":{"name":"kubelet-config-1.12","kind":"configmaps"},"code":403}
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:36etul" cannot get configmaps in the namespace "kube-system"
Notice anything strange?
Apparently it all comes down to the kubelet version, if you check it! My kubelet is at version 1.12.2.
I got the hint from the code.
I ran into the same issue, just one version newer.
On the master:
$ lsb_release -d
Description: Ubuntu 16.04.5 LTS
$ dpkg -l | grep kub
ii kubeadm 1.12.1-00 amd64 Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.12.1-00 amd64 Kubernetes Command Line Tool
ii kubelet 1.12.1-00 amd64 Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 amd64 Kubernetes CNI
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Wait, where did that Server Version of 1.13.0 come from? I never installed it.
Anyway, before running kubeadm init on this VM I had cloned it, so I would have another one ready to act as a second node in the cluster. Being a clone, it also has 1.12.1. When I try to join:
$ kubeadm join --token blahblah 10.138.0.3:6443 --discovery-token-ca-cert-hash sha256:deadbeefdeadbeefetc
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
[discovery] Trying to connect to API Server "10.138.0.3:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.138.0.3:6443"
[discovery] Requesting info from "https://10.138.0.3:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will
use API Server "10.138.0.3:6443"
[discovery] Successfully established connection with API Server "10.138.0.3:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:3ai26q" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
Why can't it get the kubelet-config-1.12 configmap? Because there isn't one. Back on the master:
$ sudo kubectl get cm -n kube-system --kubeconfig=/etc/kubernetes/admin.conf
NAME DATA AGE
calico-config 4 26m
coredns 1 29m
extension-apiserver-authentication 6 29m
kube-proxy 2 29m
kubeadm-config 2 29m
kubelet-config-1.13 1 29m
@brianriceca: facing exactly the same issue as you... any solution for this?
Master:
ram@k8master1:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
ram@k8master1:~$ dpkg -l | grep kub
ii kubeadm 1.12.1-00 amd64 Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.12.1-00 amd64 Kubernetes Command Line Tool
ii kubelet 1.12.1-00 amd64 Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 amd64 Kubernetes CNI
ram@k8master1:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
ram@k8master1:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8master1.example.com Ready master 101m v1.12.1
ram@k8master1:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-r248v 2/2 Running 0 99m
kube-system coredns-869f847d58-72lqd 1/1 Running 0 101m
kube-system coredns-869f847d58-p2zzs 1/1 Running 0 101m
kube-system etcd-k8master1.example.com 1/1 Running 0 100m
kube-system kube-apiserver-k8master1.example.com 1/1 Running 0 100m
kube-system kube-controller-manager-k8master1.example.com 1/1 Running 0 100m
kube-system kube-proxy-77qbx 1/1 Running 0 101m
kube-system kube-scheduler-k8master1.example.com 1/1 Running 0 100m
Worker node:
root@k8worker1:~# dpkg -l | grep -i kub
ii kubeadm 1.12.1-00 amd64 Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.12.1-00 amd64 Kubernetes Command Line Tool
ii kubelet 1.12.1-00 amd64 Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 amd64 Kubernetes CNI
root@k8worker1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@k8worker1:~# kubeadm join 10.0.0.61:6443 --token xjxgqa.h2vnld3x9ztgf3pr --discovery-token-ca-cert-hash sha256:7c18b654b623ee84164bb0dfa79409c821398f1a968843446af525ec72e0fdad
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[nf_conntrack_ipv4:{} ip_vs_wrr:{} ip_vs_sh:{}]
you can solve this problem with following methods:
[discovery] Trying to connect to API Server "10.0.0.61:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.0.61:6443"
[discovery] Requesting info from "https://10.0.0.61:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.0.61:6443"
[discovery] Successfully established connection with API Server "10.0.0.61:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:xjxgqa" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
root@k8worker1:~# kubeadm join 10.0.0.61:6443 --token xjxgqa.h2vnld3x9ztgf3pr --discovery-token-ca-cert-hash sha256:7c18b654b623ee84164bb0dfa79409c821398f1a968843446af525ec72e0fdad
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
[preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
Same as @brianriceca and @ramanjk. kubeadm and kubelet are at version 1.12.1 on both nodes, the kubelet-config-1.13 configmap is there, and I also see configmaps "kubelet-config-1.12" is forbidden.
D'oh! I didn't realize that kubeadm always downloads the latest version of the Kubernetes control plane from gcr.io unless you tell it otherwise. So even though I wanted to install 1.12.1 while 1.13 was available, I needed to do
kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr _whatever_
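The same pinning can be expressed in a kubeadm config file instead of a flag. A minimal sketch; the apiVersion must match your kubeadm release (v1beta1 here, matching the JoinConfiguration shown later in this thread), and kubernetesVersion is the field that stops kubeadm from defaulting to the newest stable release:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
# Pin the control plane to the installed kubeadm/kubelet version, instead of
# letting kubeadm resolve and pull the latest stable release from gcr.io.
kubernetesVersion: v1.12.1
```

This would be passed as kubeadm init --config &lt;file&gt;, which also keeps the versioned kubelet-config-1.12 ConfigMap name in sync with the joining nodes.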
I deleted both nodes and tried again with everything at version 1.12.2, and this time the problem did not occur. There was some output about a newer version being available, but it fell back to 1.12 (I have since lost that output).
I hit the same error while (apparently) using the same version on the seeder as on the node.
On the seeder (after init):
$ kubeadm version -o json
{
"clientVersion": {
"major": "1",
"minor": "13",
"gitVersion": "v1.13.0",
"gitCommit": "ddf47ac13c1a9483ea035a79cd7c10005ff21a6d",
"gitTreeState": "clean",
"buildDate": "2018-12-11T17:03:40Z",
"goVersion": "go1.11.2",
"compiler": "gc",
"platform": "linux/amd64"
}
}
$
$ kubectl get cm --all-namespaces
NAMESPACE NAME DATA AGE
kube-public cluster-info 2 174m
kube-system coredns 1 174m
kube-system extension-apiserver-authentication 6 174m
kube-system flannel-plugin-config-map 2 174m
kube-system kube-proxy 2 174m
kube-system kubeadm-config 2 174m
kube-system kubelet-config-1.13 1 174m
kube-system kubic-init-config-seeder 1 174m
$
$
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"archive", BuildDate:"2018-12-07T12:00:00Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
On the node:
$ cat config.txt
apiVersion: kubeadm.k8s.io/v1beta1
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.100.1:6443
    token: 94dcda.c271f4ff502789ca
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: 94dcda.c271f4ff502789ca
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  kubeletExtraArgs:
    cni-bin-dir: /var/lib/kubelet/cni/bin
    cni-conf-dir: /etc/cni/net.d
    container-runtime-endpoint: unix:///var/run/crio/crio.sock
    network-plugin: cni
$
$ kubeadm join --v=8 --config=config.txt
I1220 11:55:56.879023 7 join.go:299] [join] found NodeName empty; using OS hostname as NodeName
I1220 11:55:56.880357 7 joinconfiguration.go:72] loading configuration from the given file
[preflight] Running pre-flight checks
I1220 11:55:56.890498 7 join.go:328] [preflight] Running general checks
I1220 11:55:56.891937 7 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests
I1220 11:55:56.893051 7 checks.go:283] validating the existence of file /etc/kubernetes/kubelet.conf
I1220 11:55:56.894239 7 checks.go:283] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1220 11:55:56.895384 7 checks.go:104] validating the container runtime
I1220 11:55:57.072517 7 checks.go:373] validating the presence of executable crictl
I1220 11:55:57.073553 7 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1220 11:55:57.074479 7 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1220 11:55:57.075518 7 checks.go:644] validating whether swap is enabled or not
I1220 11:55:57.076499 7 checks.go:373] validating the presence of executable ip
I1220 11:55:57.077424 7 checks.go:373] validating the presence of executable iptables
I1220 11:55:57.078594 7 checks.go:373] validating the presence of executable mount
I1220 11:55:57.079564 7 checks.go:373] validating the presence of executable nsenter
I1220 11:55:57.080425 7 checks.go:373] validating the presence of executable ebtables
I1220 11:55:57.081391 7 checks.go:373] validating the presence of executable ethtool
I1220 11:55:57.082170 7 checks.go:373] validating the presence of executable socat
I1220 11:55:57.084207 7 checks.go:373] validating the presence of executable tc
I1220 11:55:57.085250 7 checks.go:373] validating the presence of executable touch
I1220 11:55:57.086132 7 checks.go:515] running all checks
I1220 11:55:57.137681 7 checks.go:403] checking whether the given node name is reachable using net.LookupHost
I1220 11:55:57.150619 7 checks.go:613] validating kubelet version
I1220 11:55:57.450319 7 checks.go:130] validating if the service is enabled and active
I1220 11:55:57.554984 7 checks.go:208] validating availability of port 10250
I1220 11:55:57.556700 7 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt
I1220 11:55:57.557579 7 checks.go:430] validating if the connectivity type is via proxy or direct
[preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
sh-4.4# rm -f /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/pki/ca.crt
sh-4.4# kubeadm join --v=8 --config=config.txt
I1220 11:56:10.073469 30 join.go:299] [join] found NodeName empty; using OS hostname as NodeName
I1220 11:56:10.074575 30 joinconfiguration.go:72] loading configuration from the given file
[preflight] Running pre-flight checks
I1220 11:56:10.085937 30 join.go:328] [preflight] Running general checks
I1220 11:56:10.086871 30 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests
I1220 11:56:10.087809 30 checks.go:283] validating the existence of file /etc/kubernetes/kubelet.conf
I1220 11:56:10.088573 30 checks.go:283] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1220 11:56:10.089370 30 checks.go:104] validating the container runtime
I1220 11:56:10.126939 30 checks.go:373] validating the presence of executable crictl
I1220 11:56:10.128075 30 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1220 11:56:10.129096 30 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1220 11:56:10.129993 30 checks.go:644] validating whether swap is enabled or not
I1220 11:56:10.131006 30 checks.go:373] validating the presence of executable ip
I1220 11:56:10.132983 30 checks.go:373] validating the presence of executable iptables
I1220 11:56:10.139740 30 checks.go:373] validating the presence of executable mount
I1220 11:56:10.140267 30 checks.go:373] validating the presence of executable nsenter
I1220 11:56:10.140738 30 checks.go:373] validating the presence of executable ebtables
I1220 11:56:10.141092 30 checks.go:373] validating the presence of executable ethtool
I1220 11:56:10.141459 30 checks.go:373] validating the presence of executable socat
I1220 11:56:10.142799 30 checks.go:373] validating the presence of executable tc
I1220 11:56:10.145062 30 checks.go:373] validating the presence of executable touch
I1220 11:56:10.145954 30 checks.go:515] running all checks
I1220 11:56:10.189173 30 checks.go:403] checking whether the given node name is reachable using net.LookupHost
I1220 11:56:10.204103 30 checks.go:613] validating kubelet version
I1220 11:56:10.529594 30 checks.go:130] validating if the service is enabled and active
I1220 11:56:10.556043 30 checks.go:208] validating availability of port 10250
I1220 11:56:10.557915 30 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt
I1220 11:56:10.559380 30 checks.go:430] validating if the connectivity type is via proxy or direct
I1220 11:56:10.560242 30 join.go:334] [preflight] Fetching init configuration
I1220 11:56:10.561013 30 join.go:601] [join] Discovering cluster-info
[discovery] Trying to connect to API Server "192.168.100.1:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.100.1:6443"
I1220 11:56:10.567171 30 round_trippers.go:383] GET https://192.168.100.1:6443/api/v1/namespaces/kube-public/configmaps/cluster-info
I1220 11:56:10.568131 30 round_trippers.go:390] Request Headers:
I1220 11:56:10.568891 30 round_trippers.go:393] Accept: application/json, */*
I1220 11:56:10.569609 30 round_trippers.go:393] User-Agent: kubeadm/v1.13.0 (linux/amd64) kubernetes/ddf47ac
I1220 11:56:10.586461 30 round_trippers.go:408] Response Status: 200 OK in 16 milliseconds
I1220 11:56:10.587241 30 round_trippers.go:411] Response Headers:
I1220 11:56:10.588006 30 round_trippers.go:414] Content-Type: application/json
I1220 11:56:10.588757 30 round_trippers.go:414] Content-Length: 1991
I1220 11:56:10.589497 30 round_trippers.go:414] Date: Thu, 20 Dec 2018 11:56:11 GMT
I1220 11:56:10.590141 30 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"cluster-info","namespace":"kube-public","selfLink":"/api/v1/namespaces/kube-public/configmaps/cluster-info","uid":"c8b93b6b-0436-11e9-b4e4-4845202d6379","resourceVersion":"368","creationTimestamp":"2018-12-20T09:08:15Z"},"data":{"jws-kubeconfig-94dcda":"eyJhbGciOiJIUzI1NiIsImtpZCI6Ijk0ZGNkYSJ9..qJePAaUQp5APwTC-dSSzvL3MEVE8PQxgbvipbsC1faA","kubeconfig":"apiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1USXlNREE1TURjMU0xb1hEVEk0TVRJeE56QTVNRGMxTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTkF3CmRET2tVSXI4M3BDMkY2bFE2UFZMYzlMbGhtenNnc3NPMWRQZWhVWTZ0azJwNUZIdmRlNEwwdkVNWHpaZU5oSGUKNFNnd1A1cTMxd1F0Wkx3aEFKWDRDR1dzNGVFZG9LNnZqMDVJYzQ1SDZhMDNvN3RqSWhNcUsvTHVrdDAzR2Q1Lwp0OENXUllHYjV5TWtBNldIMkxDZkwvSXlYQWU1NTV0STR1RTRuTTFSUXNSMFNMdy8rR2FK [truncated 967 chars]
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.100.1:6443"
[discovery] Successfully established connection with API Server "192.168.100.1:6443"
I1220 11:56:10.596836 30 join.go:608] [join] Retrieving KubeConfig objects
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I1220 11:56:10.600835 30 round_trippers.go:383] GET https://192.168.100.1:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config
I1220 11:56:10.601554 30 round_trippers.go:390] Request Headers:
I1220 11:56:10.602287 30 round_trippers.go:393] User-Agent: kubeadm/v1.13.0 (linux/amd64) kubernetes/ddf47ac
I1220 11:56:10.603124 30 round_trippers.go:393] Accept: application/json, */*
I1220 11:56:10.603831 30 round_trippers.go:393] Authorization: Bearer 94dcda.c271f4ff502789ca
I1220 11:56:10.633321 30 round_trippers.go:408] Response Status: 200 OK in 28 milliseconds
I1220 11:56:10.634283 30 round_trippers.go:411] Response Headers:
I1220 11:56:10.635127 30 round_trippers.go:414] Date: Thu, 20 Dec 2018 11:56:11 GMT
I1220 11:56:10.635912 30 round_trippers.go:414] Content-Type: application/json
I1220 11:56:10.636635 30 round_trippers.go:414] Content-Length: 1316
I1220 11:56:10.637413 30 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"c8069fb3-0436-11e9-b4e4-4845202d6379","resourceVersion":"173","creationTimestamp":"2018-12-20T09:08:14Z"},"data":{"ClusterConfiguration":"apiServer:\n certSANs:\n - 192.168.100.1\n extraArgs:\n authorization-mode: Node,RBAC\n oidc-ca-file: /etc/kubernetes/pki/ca.crt\n oidc-client-id: kubernetes\n oidc-groups-claim: group\n oidc-issuer-url: https://192.168.0.154:32000\n oidc-username-claim: email\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: \"\"\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n local:\n dataDir: /var/lib/etcd\n imageRepository: registry.opensuse.org/devel/kubic/containers/container/kubic\n imageTag: \"3.3\"\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVer [truncated 292 chars]
I1220 11:56:10.643565 30 round_trippers.go:383] GET https://192.168.100.1:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy
I1220 11:56:10.644430 30 round_trippers.go:390] Request Headers:
I1220 11:56:10.645126 30 round_trippers.go:393] Accept: application/json, */*
I1220 11:56:10.645791 30 round_trippers.go:393] User-Agent: kubeadm/v1.13.0 (linux/amd64) kubernetes/ddf47ac
I1220 11:56:10.646455 30 round_trippers.go:393] Authorization: Bearer 94dcda.c271f4ff502789ca
I1220 11:56:10.654053 30 round_trippers.go:408] Response Status: 200 OK in 6 milliseconds
I1220 11:56:10.655099 30 round_trippers.go:411] Response Headers:
I1220 11:56:10.655921 30 round_trippers.go:414] Content-Type: application/json
I1220 11:56:10.656796 30 round_trippers.go:414] Content-Length: 1655
I1220 11:56:10.657597 30 round_trippers.go:414] Date: Thu, 20 Dec 2018 11:56:11 GMT
I1220 11:56:10.658883 30 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kube-proxy","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kube-proxy","uid":"c8de0370-0436-11e9-b4e4-4845202d6379","resourceVersion":"229","creationTimestamp":"2018-12-20T09:08:15Z","labels":{"app":"kube-proxy"}},"data":{"config.conf":"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nbindAddress: 0.0.0.0\nclientConnection:\n acceptContentTypes: \"\"\n burst: 10\n contentType: application/vnd.kubernetes.protobuf\n kubeconfig: /var/lib/kube-proxy/kubeconfig.conf\n qps: 5\nclusterCIDR: 172.16.0.0/13\nconfigSyncPeriod: 15m0s\nconntrack:\n max: null\n maxPerCore: 32768\n min: 131072\n tcpCloseWaitTimeout: 1h0m0s\n tcpEstablishedTimeout: 24h0m0s\nenableProfiling: false\nhealthzBindAddress: 0.0.0.0:10256\nhostnameOverride: \"\"\niptables:\n masqueradeAll: false\n masqueradeBit: 14\n minSyncPeriod: 0s\n syncPeriod: 30s\nipvs:\n excludeCIDRs: null\n minSyncPeriod: 0s\n scheduler: \"\"\n syncPeriod: 30s\nkind: Kub [truncated 631 chars]
I1220 11:56:10.664746 30 round_trippers.go:383] GET https://192.168.100.1:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13
I1220 11:56:10.665883 30 round_trippers.go:390] Request Headers:
I1220 11:56:10.666731 30 round_trippers.go:393] User-Agent: kubeadm/v1.13.0 (linux/amd64) kubernetes/ddf47ac
I1220 11:56:10.667616 30 round_trippers.go:393] Authorization: Bearer 94dcda.c271f4ff502789ca
I1220 11:56:10.668451 30 round_trippers.go:393] Accept: application/json, */*
I1220 11:56:10.676896 30 round_trippers.go:408] Response Status: 200 OK in 7 milliseconds
I1220 11:56:10.677820 30 round_trippers.go:411] Response Headers:
I1220 11:56:10.680010 30 round_trippers.go:414] Content-Type: application/json
I1220 11:56:10.681115 30 round_trippers.go:414] Content-Length: 2134
I1220 11:56:10.682015 30 round_trippers.go:414] Date: Thu, 20 Dec 2018 11:56:11 GMT
I1220 11:56:10.683204 30 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13","uid":"c80c1601-0436-11e9-b4e4-4845202d6379","resourceVersion":"176","creationTimestamp":"2018-12-20T09:08:14Z"},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n anonymous:\n enabled: false\n webhook:\n cacheTTL: 2m0s\n enabled: true\n x509:\n clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n mode: Webhook\n webhook:\n cacheAuthorizedTTL: 5m0s\n cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 172.24.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: tr [truncated 1110 chars]
I1220 11:56:10.688139 30 interface.go:384] Looking for default routes with IPv4 addresses
I1220 11:56:10.688797 30 interface.go:389] Default route transits interface "eth0"
I1220 11:56:10.689612 30 interface.go:196] Interface eth0 is up
I1220 11:56:10.690375 30 interface.go:244] Interface "eth0" has 2 addresses :[192.168.100.220/24 fe80::d0a8:62ff:fe54:b6e9/64].
I1220 11:56:10.690995 30 interface.go:211] Checking addr 192.168.100.220/24.
I1220 11:56:10.691796 30 interface.go:218] IP found 192.168.100.220
I1220 11:56:10.692489 30 interface.go:250] Found valid IPv4 address 192.168.100.220 for interface "eth0".
I1220 11:56:10.693168 30 interface.go:395] Found active IP 192.168.100.220
I1220 11:56:10.694393 30 join.go:341] [preflight] Running configuration dependant checks
I1220 11:56:10.695211 30 join.go:478] [join] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1220 11:56:10.942159 30 loader.go:359] Config loaded from file /etc/kubernetes/bootstrap-kubelet.conf
I1220 11:56:10.943961 30 join.go:503] Stopping the kubelet
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
I1220 11:56:10.977300 30 round_trippers.go:383] GET https://192.168.100.1:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.12
I1220 11:56:10.978035 30 round_trippers.go:390] Request Headers:
I1220 11:56:10.978844 30 round_trippers.go:393] User-Agent: kubeadm/v1.13.0 (linux/amd64) kubernetes/ddf47ac
I1220 11:56:10.979502 30 round_trippers.go:393] Accept: application/json, */*
I1220 11:56:10.980081 30 round_trippers.go:393] Authorization: Bearer 94dcda.c271f4ff502789ca
I1220 11:56:10.983223 30 round_trippers.go:408] Response Status: 403 Forbidden in 2 milliseconds
I1220 11:56:10.984240 30 round_trippers.go:411] Response Headers:
I1220 11:56:10.985065 30 round_trippers.go:414] Content-Type: application/json
I1220 11:56:10.985883 30 round_trippers.go:414] X-Content-Type-Options: nosniff
I1220 11:56:10.987515 30 round_trippers.go:414] Content-Length: 342
I1220 11:56:10.989207 30 round_trippers.go:414] Date: Thu, 20 Dec 2018 11:56:11 GMT
I1220 11:56:10.990506 30 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"kubelet-config-1.12\" is forbidden: User \"system:bootstrap:94dcda\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"name":"kubelet-config-1.12","kind":"configmaps"},"code":403}
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:94dcda" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
$
$
$ kubeadm version -o json
{
"clientVersion": {
"major": "1",
"minor": "13",
"gitVersion": "v1.13.0",
"gitCommit": "ddf47ac13c1a9483ea035a79cd7c10005ff21a6d",
"gitTreeState": "archive",
"buildDate": "2018-12-07T12:00:00Z",
"goVersion": "go1.11.2",
"compiler": "gc",
"platform": "linux/amd64"
}
}
For some reason it seems to be looking for kubelet-config-1.12, while the correct ConfigMap should be kubelet-config-1.13.
Checking kubelet --version I see:
$ kubelet --version
Kubernetes v1.12.0
Is that where the ConfigMap name is derived from?
@inercia The ConfigMap name is derived from the kubelet version. See the link above.
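Roughly, the name is kubelet-config-&lt;major&gt;.&lt;minor&gt;, taken from the kubelet's own reported version rather than the control plane's kubernetesVersion. A minimal sketch of that derivation (the derive_configmap_name helper is hypothetical, not kubeadm's actual code):

```shell
#!/bin/sh
# Hypothetical sketch: build the ConfigMap name from the string that
# `kubelet --version` prints, keeping only the major.minor part.
derive_configmap_name() {
  # $1 looks like "Kubernetes v1.12.0"
  ver=$(printf '%s\n' "$1" | sed -n 's/^Kubernetes v\([0-9][0-9]*\.[0-9][0-9]*\)\..*$/\1/p')
  printf 'kubelet-config-%s\n' "$ver"
}

derive_configmap_name "Kubernetes v1.12.0"   # prints: kubelet-config-1.12
```

This is why a kubelet one minor version behind kubeadm makes join look up a ConfigMap that init never wrote.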
Thanks for clarifying, @oz123.
I wonder what happens with updates. For example:
1) a seeder machine installs my-distribution-1.13, which includes kubeadm-1.13
2) the seeder has been init-ed
3) some time after that, a node installs the same distribution, with kubeadm-1.13 and kubelet-1.13
4) but at the end of the installation some updates are applied, and a new kubelet-1.14 is installed
5) kubeadm join then looks for a 1.14 ConfigMap, but it does not exist...
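This kind of skew can be caught before joining by comparing the two binaries' major.minor versions. A hedged sketch (the version strings below are hardcoded examples; in practice they would come from kubeadm version -o short and kubelet --version):

```shell
#!/bin/sh
# Hypothetical pre-join check: kubeadm and kubelet should agree on major.minor,
# because join looks up the kubelet-config-<major.minor> ConfigMap.
minor_of() {
  printf '%s\n' "$1" | sed -n 's/^v\{0,1\}\([0-9][0-9]*\.[0-9][0-9]*\)\..*$/\1/p'
}

kubeadm_ver="v1.13.0"   # example; e.g. from: kubeadm version -o short
kubelet_ver="v1.14.0"   # example; e.g. from: kubelet --version | awk '{print $2}'

if [ "$(minor_of "$kubeadm_ver")" != "$(minor_of "$kubelet_ver")" ]; then
  echo "version skew: kubeadm=$kubeadm_ver kubelet=$kubelet_ver" >&2
fi
```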
I ran into this issue installing k8s 1.13.1 with matching kubeadm versions, but only for kube-proxy:
kubeadm join --config /etc/kubernetes/kubeadm-client.conf --ignore-preflight-errors=all
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "xxx.xxx.xxx.xxx:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://xxx.xxx.xxx.xxx:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "xxx.xxx.xxx.xxx:6443"
[discovery] Successfully established connection with API Server "xxx.xxx.xxx.xxx:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
unable to fetch the kubeadm-config ConfigMap: failed to get component configs: configmaps "kube-proxy" is forbidden: User "system:bootstrap:3tw24k" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
If I manually create a workaround RoleBinding, the node can join:
# on controlplane node
kubectl create rolebinding -n kube-system --role kube-proxy --group system:bootstrappers:kubeadm:default-node-token kubeadm:kube-proxy-bootstrap
# on joining node
...
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ci-pdk1-debug4144-k8sne-1" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Shouldn't a kube-proxy RoleBinding for the system:bootstrappers:kubeadm:default-node-token group be created automatically? Any idea what is going on here?
Edit: I also found that explicitly running kubeadm init phase addon kube-proxy after my first kubeadm init creates a kube-proxy RoleBinding for system:bootstrappers:kubeadm:default-node-token. Still not sure why my initial kubeadm init skipped creating this RoleBinding.
This worked for me:
Definitely check your kubeadm and kubelet versions, and make sure the same package versions are used on all nodes. You should "mark and hold" these versions on your hosts before installing.
Check the current kubelet version:
kubelet --version
Check kubeadm:
kubeadm version
If they differ, you have a problem. You should reinstall the same versions across all nodes, allowing downgrades. The versions in the command below may be older than the current releases; you can substitute more recent version numbers, but this works:
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.12.2-00 kubeadm=1.12.2-00 kubectl=1.12.2-00 --allow-downgrades
Then, after installing them, mark and hold them so they cannot auto-upgrade and break your system:
sudo apt-mark hold docker-ce kubelet kubeadm kubectl