Bug report
kubeadm version: 1.9.0-00 amd64
kubelet version: 1.9.0-00 amd64
kubernetes-cni: 0.6.0-00 amd64
docker-ce: 0~ubuntu amd64
OS version: Ubuntu 16.04.3 LTS
Bare-metal machine
Installing a Kubernetes cluster on Ubuntu 16.04. When running kubeadm init, I got this error:
[init] This might take a minute or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] It seems like the kubelet isn't running or healthy.
After checking the syslog at /var/log/syslog, I saw errors like the following:
Jan 04 16:20:58 master03 kubelet[10360]: W0104 16:20:58.268285 10360 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 04 16:20:58 master03 kubelet[10360]: W0104 16:20:58.269487 10360 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 04 16:20:58 master03 kubelet[10360]: I0104 16:20:58.269527 10360 docker_service.go:232] Docker cri networking managed by cni
Jan 04 16:20:58 master03 kubelet[10360]: I0104 16:20:58.274386 10360 docker_service.go:237] Docker Info: &{ID:3XXZ:XEDW:ZDQS:A2MI:5AEN:CFEP:44AQ:YDS4:CRME:UBRS:46LI:MXNS Containers:0 ContainersRunning:0 Cont…
Jan 04 16:20:58 master03 kubelet[10360]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Then I checked the docker cgroup driver with docker info | grep -i cgroup:
Cgroup Driver: systemd
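To compare that with the driver kubelet is actually using, the running process's flags can be inspected. This is a sketch assuming a Linux host with procps-style ps; the running kubelet's PID and flags will differ:

```shell
# What docker uses:
docker info 2>/dev/null | grep -i 'cgroup driver'
# What kubelet uses: show its --cgroup-driver flag, if any
ps -o args= -C kubelet | tr ' ' '\n' | grep -- '--cgroup-driver' \
  || echo "kubelet has no --cgroup-driver flag (it defaults to cgroupfs)"
```

If the two values disagree, kubelet refuses to start with exactly the misconfiguration error shown above.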
kubeadm version (use kubeadm version):
Environment:
kubectl version:
uname -a:
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
docker info | grep -i cgroup
Cgroup Driver: systemd
I can confirm this.
@lavender2020 you need to manually append --cgroup-driver=systemd to the kubelet startup arguments, then reload the kubelet unit file and restart the service.
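A minimal sketch of that fix, assuming the stock kubeadm drop-in path shown later in this thread (adjust the path and the target driver to your setup):

```shell
# Sketch: make kubelet's cgroup driver match docker's (here: systemd).
# The drop-in path is the one installed by the kubeadm packages.
sed -i 's/--cgroup-driver=cgroupfs/--cgroup-driver=systemd/' \
  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload      # re-read the changed unit file
systemctl restart kubelet    # restart with the new flag
```

Without the daemon-reload step, systemd keeps starting kubelet with the old arguments, which is a recurring pitfall later in this thread.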
The default driver kubelet uses to manipulate cgroups on the host is cgroupfs.
Most people turn to kubeadm mainly to stand up a cluster very quickly. Simple and practical.
@luxas should we add a preflight check for cgroup driver consistency between docker and kubelet, to give a clearer warning? Or add another drop-in for kubelet.service? Or just modify /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in place?
If so, though, we would probably need root privileges to make those changes.
I hit the same problem with kubeadm v1.9.2, but I can see that kubelet is configured to use the systemd cgroup driver.
kubelet is running with --cgroup-driver=systemd:
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
docker info | grep -i cgroup
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Cgroup Driver: systemd
kubelet logs
I0206 16:20:40.010949 5712 feature_gate.go:220] feature gates: &{{} map[]}
I0206 16:20:40.011054 5712 controller.go:114] kubelet config controller: starting controller
I0206 16:20:40.011061 5712 controller.go:118] kubelet config controller: validating combination of defaults and flags
W0206 16:20:40.015566 5712 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
I0206 16:20:40.019079 5712 server.go:182] Version: v1.9.2
I0206 16:20:40.019136 5712 feature_gate.go:220] feature gates: &{{} map[]}
I0206 16:20:40.019240 5712 plugins.go:101] No cloud provider specified.
W0206 16:20:40.019273 5712 server.go:328] standalone mode, no API client
W0206 16:20:40.041031 5712 server.go:236] No api server defined - no events will be sent to API server.
I0206 16:20:40.041058 5712 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0206 16:20:40.041295 5712 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
I0206 16:20:40.041308 5712 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
I0206 16:20:40.041412 5712 container_manager_linux.go:266] Creating device plugin manager: false
W0206 16:20:40.043521 5712 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0206 16:20:40.043541 5712 kubelet.go:571] Hairpin mode set to "hairpin-veth"
I0206 16:20:40.044909 5712 client.go:80] Connecting to docker on unix:///var/run/docker.sock
I0206 16:20:40.044937 5712 client.go:109] Start docker client with request timeout=2m0s
W0206 16:20:40.046785 5712 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
I0206 16:20:40.049953 5712 docker_service.go:232] Docker cri networking managed by kubernetes.io/no-op
I0206 16:20:40.055138 5712 docker_service.go:237] Docker Info: &{ID:ZXWO:G2FL:QM3S:IAWM:ITQL:XHRH:ZA3T:FJMV:5JDW:IMKI:NIFS:2Z4M Containers:8 ContainersRunning:0 ContainersPaused:0 ContainersStopped:8 Images:11 Driver:devicemapper DriverStatus:[[Pool Name docker-253:0-33593794-pool] [Pool Blocksize 65.54 kB] [Base Device Size 10.74 GB] [Backing Filesystem xfs] [Data file /dev/loop0] [Metadata file /dev/loop1] [Data Space Used 1.775 GB] [Data Space Total 107.4 GB] [Data Space Available 14.72 GB] [Metadata Space Used 2.093 MB] [Metadata Space Total 2.147 GB] [Metadata Space Available 2.145 GB] [Thin Pool Minimum Free Space 10.74 GB] [Udev Sync Supported true] [Deferred Removal Enabled true] [Deferred Deletion Enabled true] [Deferred Deleted Device Count 0] [Data loop file /var/lib/docker/devicemapper/devicemapper/data] [Metadata loop file /var/lib/docker/devicemapper/devicemapper/metadata] [Library Version 1.02.140-RHEL7 (2017-05-03)]] SystemStatus:[] Plugins:{Volume:[local] Network:[overlay host null bridge] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:true NFd:16 OomKillDisable:true NGoroutines:25 SystemTime:2018-02-06T16:20:40.054685386Z LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-693.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc42021a380 NCPU:2 MemTotal:2097782784 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:master1 Labels:[] ExperimentalBuild:false ServerVersion:1.12.6 ClusterStore: ClusterAdvertise: Runtimes:map[docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]} runc:{Path:docker-runc Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: 
RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc420472640} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[seccomp]}
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Version info:
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
kubelet --version
Kubernetes v1.9.2
docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64
@dkirrane have you reloaded the kubelet.service unit file?
Run systemctl daemon-reload, then systemctl restart kubelet.
This issue is not fixed in 1.9.3.
Version info:
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
kubelet --version
Kubernetes v1.9.3
docker version
Client:
Version: 1.13.1
API version: 1.26
Go version: go1.6.2
Git commit: 092cba3
Built: Thu Nov 2 20:40:23 2017
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Go version: go1.6.2
Git commit: 092cba3
Built: Thu Nov 2 20:40:23 2017
OS/Arch: linux/amd64
Experimental: false
@gades what is your cgroup driver?
$ docker info | grep -i cgroup
Same problem here.
docker info | grep -i cgroup
Cgroup Driver: systemd
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
I0227 13:17:43.802942 3493 docker_service.go:237] Docker Info: &{ID:RJUG:6DLB:A4JM:4T6H:JYKO:7JUC:NQCI:SLI2:DC64:ZXOT:DIX6:ASJY Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay DriverStatus:[[Backing Filesystem extfs]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge overlay null host] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:47 SystemTime:2018-02-27T13:17:43.802488651-08:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-693.11.6.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc42033d7a0 NCPU:64 MemTotal:270186274816 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:param03.lancelot.cluster.bds Labels:[] ExperimentalBuild:false ServerVersion:1.12.6 ClusterStore: ClusterAdvertise: Runtimes:map[docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]} runc:{Path:docker-runc Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc420360640} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[seccomp]}
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Is kubelet picking up the cgroupfs driver directive from somewhere else?
@mas-dse-greina please see the solution in my comment above.
@dixudx even after appending --cgroup-driver=systemd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the problem persists.
Here is the latest file:
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet --cgroup-driver=systemd $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
PS: it's fixed now. After reloading the daemon and restarting kubelet, I ran kubeadm init --pod-network-cidr=10.244.0.0/16
Yes. I'm seeing the same thing. Appending --cgroup-driver=systemd doesn't seem to have any effect. I've restarted the service and even rebooted the machine.
The behavior appears on only one machine. I've had success on 4 other machines, but this one doesn't seem to want to join the cluster.
- Tony
After changing the unit file, you need to run systemctl daemon-reload for the changes to take effect.
FWIW this is the default in the RPMs, but not in the .debs. Are there any currently supported mainstream distros that don't default to systemd?
/assign @detiber
I hit the same problem with kubeadm v1.9.3 and v1.9.4.
kubelet is started with --cgroup-driver=systemd:
$ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
Reload the service
$ systemctl daemon-reload
$ systemctl restart kubelet
Check docker info
$ docker info |grep -i cgroup
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Cgroup Driver: systemd
kubelet logs
$ kubelet logs
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Version info
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
$ kubelet --version
Kubernetes v1.9.3
$ docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64
$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
@FrostyLeaf can you look at the command line of the running kubelet to see whether the cgroup driver is specified there? Something like ps aux |grep kubelet or cat /proc/<kubelet pid>/cmdline should help you see that.
@bart0sh here it is:
$ ps aux |grep /bin/kubelet
root 13025 0.0 0.0 112672 980 pts/4 S+ 01:49 0:00 grep --color=auto /bin/kubelet
root 30495 4.5 0.6 546152 76924 ? Ssl 00:14 4:22 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=systemd --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki --fail-swap-on=false
@FrostyLeaf thanks! I can reproduce this too. Looks like a bug. Looking into it.
As a temporary workaround, you can switch both docker and kubelet to the cgroupfs driver. That should work.
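A sketch of that workaround on the docker side, assuming docker reads /etc/docker/daemon.json (on the RHEL/CentOS docker packages seen in this thread, the driver may instead be set via --exec-opt in /etc/sysconfig/docker):

```shell
# Sketch: switch docker to the cgroupfs driver so it matches
# kubelet's built-in default.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
systemctl restart docker
systemctl restart kubelet
```

The same result can be reached from the other direction by leaving docker on systemd and pointing kubelet's --cgroup-driver at systemd; the only requirement is that both ends agree.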
@bart0sh great, thanks a lot. I'll try that.
Same here.
[root@kubernetes ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@kubernetes ~]# kubelet --version
Kubernetes v1.9.4
[root@kubernetes ~]# docker version
Client:
Version: 1.13.1
API version: 1.26
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Experimental: false
[root@kubernetes ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4", GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean", BuildDate:"2018-03-12T16:21:35Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
docker Cgroup driver is systemd:
[root@kubernetes ~]# docker info | grep Cgroup
WARNING: You're not using the default seccomp profile
Cgroup Driver: systemd
[root@kubernetes ~]# grep cgroup /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
[root@kubernetes ~]# systemctl daemon-reload
[root@kubernetes ~]# systemctl stop kubelet.service
[root@kubernetes ~]# systemctl start kubelet.service
[root@kubernetes ~]# kubelet logs
I0318 02:07:10.006151 29652 feature_gate.go:226] feature gates: &{{} map[]}
I0318 02:07:10.006310 29652 controller.go:114] kubelet config controller: starting controller
I0318 02:07:10.006315 29652 controller.go:118] kubelet config controller: validating combination of defaults and flags
I0318 02:07:10.018880 29652 server.go:182] Version: v1.9.4
I0318 02:07:10.018986 29652 feature_gate.go:226] feature gates: &{{} map[]}
I0318 02:07:10.019118 29652 plugins.go:101] No cloud provider specified.
W0318 02:07:10.019239 29652 server.go:328] standalone mode, no API client
W0318 02:07:10.068650 29652 **server.go:236] No api server defined - no events will be sent to API server.**
I0318 02:07:10.068670 29652 **server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /**
I0318 02:07:10.069130 29652 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
I0318 02:07:10.069306 29652 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
I0318 02:07:10.069404 29652 container_manager_linux.go:266] Creating device plugin manager: false
W0318 02:07:10.072836 29652 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0318 02:07:10.072860 29652 kubelet.go:576] Hairpin mode set to "hairpin-veth"
I0318 02:07:10.075139 29652 client.go:80] Connecting to docker on unix:///var/run/docker.sock
I0318 02:07:10.075156 29652 client.go:109] Start docker client with request timeout=2m0s
I0318 02:07:10.080336 29652 docker_service.go:232] Docker cri networking managed by kubernetes.io/no-op
I0318 02:07:10.090943 29652 docker_service.go:237] Docker Info: &{ID:DUEI:P7Y3:JKGP:XJDI:UFXG:NAOX:K7ID:KHCF:PCGW:46QA:TQZB:WEXF Containers:18 ContainersRunning:17 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:89 OomKillDisable:true NGoroutines:98 SystemTime:2018-03-18T02:07:10.083543475+01:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-693.21.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc42027b810 NCPU:2 MemTotal:2097364992 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes.master Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]} docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc4202a8f00} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:N/A Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:N/A Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
**error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"**
[root@kubernetes ~]# ps aux | grep -i kube
root 10182 0.4 1.2 54512 25544 ? Ssl mars17 1:10 kube-scheduler --leader-elect=true --kubeconfig=/etc/kubernetes/scheduler.conf --address=127.0.0.1
root 10235 1.8 12.7 438004 261948 ? Ssl mars17 4:44 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota --allow-privileged=true --requestheader-group-headers=X-Remote-Group --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-allowed-names=front-proxy-client --service-account-key-file=/etc/kubernetes/pki/sa.pub --client-ca-file=/etc/kubernetes/pki/ca.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-username-headers=X-Remote-User --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --insecure-port=0 --enable-bootstrap-token-auth=true --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --secure-port=6443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.70 --service-cluster-ip-range=10.96.0.0/12 --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --authorization-mode=Node,RBAC --etcd-servers=http://127.0.0.1:2379
root 10421 0.1 1.0 52464 22052 ? Ssl mars17 0:20 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
root 12199 1.7 8.5 326552 174108 ? Ssl mars17 4:11 kube-controller-manager --address=127.0.0.1 --leader-elect=true --controllers=*,bootstrapsigner,tokencleaner --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --use-service-account-credentials=true --kubeconfig=/etc/kubernetes/controller-manager.conf --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key
root 22928 0.0 1.0 279884 20752 ? Sl 01:10 0:00 /home/weave/weaver --port=6783 --datapath=datapath --name=fe:9b:da:25:e2:b2 --host-root=/host --http-addr=127.0.0.1:6784 --status-addr=0.0.0.0:6782 --docker-api= --no-dns --db-prefix=/weavedb/weave-net --ipalloc-range=10.32.0.0/12 --nickname=kubernetes.master --ipalloc-init consensus=1 --conn-limit=30 --expect-npc 192.168.1.70
root 23308 0.0 0.7 38936 15340 ? Ssl 01:10 0:01 /kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2
65534 23443 0.0 0.8 37120 18028 ? Ssl 01:10 0:03 /sidecar --v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
root 29547 1.6 2.9 819012 61196 ? Ssl 02:07 0:22 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
v1.9.5 fixed this problem, awesome! @bart0sh
@FrostyLeaf I can still reproduce it with 1.9.5:
$ rpm -qa | grep kube
kubeadm-1.9.5-0.x86_64
kubelet-1.9.5-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.9.5-0.x86_64
$ docker info 2>/dev/null | grep -i cgroup
Cgroup Driver: systemd
$ ps aux | grep cgroup-driver
root 29078 1.9 0.1 1222632 91824 ? Ssl 13:45 0:04 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=systemd --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
I0321 13:50:29.901008 30817 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:…
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Are you still seeing this with the systemd cgroup driver?
I'd suggest closing this issue.
I've observed two causes behind most of the reports here:
1. Forgetting to run 'systemctl daemon-reload' after editing the systemd drop-in. Although --cgroup-driver=systemd was added to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, it had no effect, and the default driver (or the one previously specified with --cgroup-driver) was used.
2. Running the 'kubelet logs' command to look at the kubelet log. There is no 'logs' subcommand in kubelet, so 'kubelet logs' and 'kubelet' are the same command. 'kubelet logs' runs kubelet with the default cgroup driver 'cgroupfs', and that kubelet complains about the inconsistency between the kubelet and docker drivers. 'journalctl -ux kubelet' should be used to look at the logs instead.
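Per point 2, the service journal is where the real kubelet log lives. For example (unit name as installed by the kubeadm packages):

```shell
# Read the kubelet service log instead of running 'kubelet logs':
journalctl -u kubelet.service --no-pager | tail -n 50
# Or follow it live while reproducing the failure:
journalctl -u kubelet.service -f
```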
I tested the --cgroup-driver=systemd option with kubelet 1.8.0, 1.9.0, 1.9.3 and 1.9.5. There was no "cgroupfs is different from docker cgroup driver: systemd" error message in the logs.
@timothysc there have been no objections to my last comment. Could you close this issue? It's not a bug, as it's caused by insufficient familiarity with kubelet and/or systemd.
From my point of view, two things might make sense:
We may want to consider creating separate issues for those.
In any case, this issue can be closed.
Thanks, with v1.9.5 everything works fine for me.
Agree with @bart0sh on adding a preflight check for cgroup driver consistency between kubelet and docker.
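A rough shell sketch of what such a consistency check could look like (the function name and wiring here are hypothetical; kubeadm's actual preflight checks are implemented in Go):

```shell
#!/bin/sh
# Hypothetical sketch of a cgroup-driver consistency check.
# docker_driver: e.g. from `docker info 2>/dev/null | awk '/Cgroup Driver/{print $3}'`
# kubelet_args:  e.g. from `tr '\0' ' ' < /proc/$(pidof kubelet)/cmdline`
check_cgroup_drivers() {
    docker_driver=$1
    kubelet_args=$2
    case $kubelet_args in
        *--cgroup-driver=*)
            # extract the value following the last --cgroup-driver= flag
            kubelet_driver=${kubelet_args##*--cgroup-driver=}
            kubelet_driver=${kubelet_driver%% *} ;;
        *)
            kubelet_driver=cgroupfs ;;  # kubelet's built-in default
    esac
    if [ "$docker_driver" != "$kubelet_driver" ]; then
        echo "WARNING: kubelet cgroup driver ($kubelet_driver) differs from docker's ($docker_driver)"
        return 1
    fi
}
```

For example, `check_cgroup_drivers systemd "$(tr '\0' ' ' < /proc/$(pidof kubelet)/cmdline)"` would warn when the running kubelet has fallen back to cgroupfs while docker uses systemd.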
Maybe `kubelet logs` should point to journalctl -u kubelet.service.
Just my 2 cents.
Hi, I ran into the same problem.
CentOS 7
kubeadm version: 1.9.6
docker version: 1.13.1, API version: 1.26
When I run docker info | grep -i cgroup,
I get:
WARNING: You're not using the default seccomp profile
Cgroup Driver: systemd
When I run cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf,
I can see the setting Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd".
I did run systemctl daemon-reload *and* systemctl restart kubelet, but it still shows
misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Another strange thing: when I run sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I can see --cgroup-driver change to cgroupfs.
But when I run kubelet status again, exactly the same error message shows up:
misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
I can't figure out what the problem is.
I'll try the versions mentioned above. Does anyone know how to install an older version of kubernetes? Thank you.
@moqichenle that's weird. It should work. Can you show the output of the following commands?
systemctl daemon-reload
systemctl restart kubelet
docker info 2>/dev/null |grep -i group
ps aux |grep group-driver
journalctl -u kubelet.service | grep "is different from docker cgroup driver"
This is what I see on my system:
# systemctl daemon-reload
# systemctl restart kubelet
# docker info 2>/dev/null |grep -i group
Cgroup Driver: systemd
# ps aux |grep group-driver
root 25062 5.7 0.1 983760 78888 ? Ssl 15:26 0:00 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=NN.NN.NN.NN --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=systemd --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
root 25520 0.0 0.0 9288 1560 pts/0 R+ 15:26 0:00 grep --color=auto group-driver
# journalctl -u kubelet.service | grep "is different from docker cgroup driver"
#
@bart0sh hi, thanks for your help.
Here's what I have (before running kubeadm init):
[root@localhost bin]# docker info 2>/dev/null |grep -i group
Cgroup Driver: systemd
[root@localhost bin]# ps aux |grep group-driver
root 13472 0.0 0.1 12476 984 pts/0 R+ 13:23 0:00 grep --color=auto group-driver
After running the kubeadm init command, this is what I have:
[vagrant@localhost ~]$ ps aux |grep group-driver
root 13606 5.1 4.5 605240 22992 ? Ssl 13:25 0:03 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --cgroup-driver=systemd --hostname-override=default
vagrant 13924 0.0 0.1 12476 984 pts/1 R+ 13:26 0:00 grep --color=auto group-driver
However, kubeadm init fails because the kubelet is unhealthy or not running.
@moqichenle Did you run systemctl daemon-reload and systemctl restart kubelet before running kubeadm init?
Can you run journalctl -u kubelet.service after kubeadm init and show its output here?
Yes, I did run those two commands before init.
The strange thing is: I don't see any output when running journalctl -u kubelet.service | grep "is different from docker cgroup driver".
I only see that error when I run kubelet status.
@moqichenle The kubelet status command doesn't exist. That means you ran kubelet with default parameters (and the default cgroup driver). That's why you get the error. See my earlier message about kubelet logs for more details.
Do you see anything suspicious (errors, warnings) in the output of journalctl -u kubelet.service?
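As an aside, the flags the systemd-managed kubelet actually received can be read from its process command line, while an ad-hoc invocation such as kubelet status carries none of them. A minimal sketch of extracting the driver from such a command line; the sample string below is hypothetical and stands in for real ps output:

```shell
# Pull the --cgroup-driver value out of a captured kubelet command line.
# On a live host the string would come from something like: ps aux | grep kubelet
cmdline='/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd --network-plugin=cni'

# grep -o prints only the matching fragment; `--` stops option parsing
# so the leading dashes of the pattern are not treated as a grep flag.
driver=$(printf '%s\n' "$cmdline" | grep -o -- '--cgroup-driver=[a-z]*' | cut -d= -f2)
echo "$driver"
```

If this prints nothing, the running kubelet was started without an explicit --cgroup-driver flag at all.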
Ah, I see. Thank you. :)
Hmm.. there are errors as shown below:
Mar 26 13:39:40 localhost.localdomain kubelet[13606]: E0326 13:39:34.198202 13606 kuberuntime_image.go:140] ListImages failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:39:45 localhost.localdomain kubelet[13606]: E0326 13:39:44.824222 13606 kubelet.go:1259] Container garbage collection failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:39:47 localhost.localdomain kubelet[13606]: W0326 13:39:44.749819 13606 image_gc_manager.go:173] [imageGCManager] Failed to monitor images: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:39:49 localhost.localdomain kubelet[13606]: E0326 13:39:49.486990 13606 kubelet.go:1281] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get image stats: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:42:03 localhost.localdomain kubelet[13606]: E0326 13:42:03.934312 13606 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:42:03 localhost.localdomain kubelet[13606]: E0326 13:42:03.934359 13606 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:42:03 localhost.localdomain kubelet[13606]: E0326 13:42:03.934374 13606 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:42:03 localhost.localdomain kubelet[13606]: E0326 13:42:03.936761 13606 remote_image.go:67] ListImages with filter nil from image service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:42:03 localhost.localdomain kubelet[13606]: E0326 13:42:03.936788 13606 kuberuntime_image.go:106] ListImages failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:42:03 localhost.localdomain kubelet[13606]: W0326 13:42:03.936795 13606 image_gc_manager.go:184] [imageGCManager] Failed to update image list: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:42:03 localhost.localdomain kubelet[13606]: E0326 13:42:03.937002 13606 remote_runtime.go:69] Version from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 26 13:42:03 localhost.localdomain kubelet[13606]: E0326 13:42:03.937020 13606 kuberuntime_manager.go:245] Get remote runtime version failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
When I run kubeadm init with mismatched cgroup driver settings, it shows:
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
When the cgroup driver settings are the same, it just hangs at the control-plane pulling step, and the kubelet eventually ends up unhealthy or not running.
@moqichenle That looks like a different issue to me.
You can search for "context deadline exceeded" for more information.
@bart0sh Yes, I don't think it's related to this issue anymore. Will do. Thank you very much :D
This PR should help reduce the confusion caused by running "kubelet logs", "kubelet status", and other non-existent kubelet commands: #61833
It makes kubelet produce an error and exit if run with an incorrect command line.
Please review.
Hi, I can reproduce this issue on 1.10. Just want to check: is this a bug, and will it be fixed in v1.11?
@dixudx I was installing k8s following the installation guide at https://kubernetes.io/docs/setup/independent/install-kubeadm/ and ran into this issue. Below are the details of my environment:
OS:
CentOS Linux release 7.4.1708 (Core)
Server Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Experimental: false
kubeadm.x86_64 1.10.1-0
kubectl.x86_64 1.10.1-0
kubelet.x86_64 1.10.1-0
kubernetes-cni.x86_64 0.6.0-0
cgroup driver consistency between docker and kubelet:
docker info | grep -i cgroup
WARNING: You're not using the default seccomp profile
Cgroup Driver: systemd
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep -i cgroup
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
The cgroup driver is the same (systemd) on both sides, so there should be no need to adjust kubelet's cgroup driver manually. But when I started kubelet, it failed with the error message mentioned:
[root@K8S-Master /]# kubelet logs
I0424 10:41:29.240854 19245 feature_gate.go:226] feature gates: &{{} map[]}
W0424 10:41:29.247770 19245 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
W0424 10:41:29.253069 19245 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
I0424 10:41:29.253111 19245 server.go:376] Version: v1.10.1
I0424 10:41:29.253175 19245 feature_gate.go:226] feature gates: &{{} map[]}
I0424 10:41:29.253290 19245 plugins.go:89] No cloud provider specified.
W0424 10:41:29.253327 19245 server.go:517] standalone mode, no API client
W0424 10:41:29.283851 19245 server.go:433] No api server defined - no events will be sent to API server.
I0424 10:41:29.283867 19245 server.go:613] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0424 10:41:29.284091 19245 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
I0424 10:41:29.284101 19245 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
I0424 10:41:29.284195 19245 container_manager_linux.go:266] Creating device plugin manager: true
I0424 10:41:29.284242 19245 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0424 10:41:29.284292 19245 state_mem.go:87] [cpumanager] updated default cpuset: ""
I0424 10:41:29.284326 19245 state_mem.go:95] [cpumanager] updated cpuset assignments: "map[]"
W0424 10:41:29.286890 19245 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0424 10:41:29.286912 19245 kubelet.go:556] Hairpin mode set to "hairpin-veth"
I0424 10:41:29.288233 19245 client.go:75] Connecting to docker on unix:///var/run/docker.sock
I0424 10:41:29.288268 19245 client.go:104] Start docker client with request timeout=2m0s
W0424 10:41:29.289762 19245 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
W0424 10:41:29.292669 19245 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
I0424 10:41:29.293904 19245 docker_service.go:244] Docker cri networking managed by kubernetes.io/no-op
I0424 10:41:29.302849 19245 docker_service.go:249] Docker Info: &{ID:UJ6K:K2AW:HKQY:5MRL:KROX:FTJV:3TKY:GHGI:L7GV:UQFP:AU2Q:AKC6 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:16 OomKillDisable:true NGoroutines:26 SystemTime:2018-04-24T10:41:29.295491971+08:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-693.5.2.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc4203dcbd0 NCPU:4 MemTotal:8371650560 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:K8S-Master Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]} runc:{Path:docker-runc Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc4201317c0} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:N/A Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:N/A Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
F0424 10:41:29.302989 19245 server.go:233] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
The key information I see in the log is that CgroupDriver is still cgroupfs, which I guess is what causes the cgroup mismatch, but I don't know how to adjust this default. Could you help clarify? Thanks!
@wshandao please stop using kubelet logs; that is not the correct way to view the logs.
The correct way to view the logs is journalctl -f -u kubelet.
Thanks @dixudx, my mistake, but that didn't solve my installation problem.
I agree with the request to close this issue.
The documentation already covers that users need to verify matching cgroup drivers.
This is independent of kubeadm; it is more of a kubelet vs docker issue.
Similar reports:
https://github.com/kubernetes/kubernetes/issues/59794
https://github.com/openshift/origin/issues/18776
https://github.com/kubernetes/kubernetes/issues/43805
@timstclair
FWIW this is the default in the RPMs, but there is no default in the .debs. Are there any distros in mainstream support that don't currently default to systemd?
I tested this on 3 different bare-bones Ubuntu releases (16.04.2, 16.04.0, 17.04), and it appears the docker driver is cgroupfs, which matches kubelet's default parameter value.
This is unlike the user report in the original post, where docker uses systemd on 16.04.3, so it may be a change in docker configuration between docker versions. Hard to say.
Given my testing, I don't think Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" needs to be added to the deb, because at least for these Ubuntu versions it would be wrong.
For friendly UX, what the kubelet should do is always match the docker driver automatically.
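The auto-matching idea could be sketched as a small script. Everything below is an assumption for illustration, not what kubeadm or the packages actually ship; in particular the drop-in path in the comments is hypothetical:

```shell
#!/bin/sh
# Sketch: make kubelet follow whatever cgroup driver docker reports.

# detect_driver parses the "Cgroup Driver" line of `docker info` output
# read from stdin and prints just the driver name.
detect_driver() {
  sed -n 's/^ *Cgroup Driver: *//p'
}

# Demonstrate on captured sample output; a live host would instead run:
#   driver=$(docker info 2>/dev/null | detect_driver)
sample='Cgroup Driver: systemd'
driver=$(printf '%s\n' "$sample" | detect_driver)
echo "kubelet should be started with --cgroup-driver=$driver"

# On a real host one would then (as root) write a drop-in and reload, e.g.:
#   printf '[Service]\nEnvironment="KUBELET_EXTRA_ARGS=--cgroup-driver=%s"\n' \
#     "$driver" > /etc/systemd/system/kubelet.service.d/20-cgroup.conf
#   systemctl daemon-reload && systemctl restart kubelet
```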
@neolit123 Agreed.
However, I do think we should open a troubleshooting docs issue JIC.
Closing this; I will start on my doc.
I ran into the same problem on Ubuntu 16.04 (Kube version v1.10.4), Docker version 1.13.1.
Docker starts with native.cgroupdriver=systemd. I set this configuration in /etc/docker/daemon.json:
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
I modified the configuration in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
added a new line Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
and added the $KUBELET_CGROUP_ARGS parameter to ExecStart.
Then I ran systemctl daemon-reload and service kubelet restart.
Kubelet started correctly.
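For reference, the edit described above would leave the drop-in looking roughly like this. This is a sketch, not the exact file: the kubeconfig flags are the ones quoted earlier in this thread, and the full ExecStart flag list varies by package version:

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt, sketch)
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CGROUP_ARGS
```

The empty ExecStart= line is systemd's convention for clearing the inherited ExecStart before redefining it with the extra variable.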
@dragosrosculete
We are improving the troubleshooting docs, but in 1.11 and later kubeadm should automatically match docker's cgroup driver.
I do think this is a bug. I checked the docker version and the kubeadm files, and of course the kubeadm script performs the check as well. But I still get the mismatch error. If you read carefully, you will see that several of the issues above occurred even after the parameters were set correctly.
This is still happening; nothing works!