Thanks for filing an issue! Before hitting submit, please answer these questions.

It is, but I have searched StackOverflow and Googled many times without finding the problem. Also, this seems to affect other people as well.

The error messages I see are in journalctl.

BUG REPORT
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
kubectl version will not show the server version.
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Linux kubernetes 3.10.0-862.9.1.el7.x86_64 #1 SMP Mon Jul 16 16:29:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
The kubelet service does not start.
The kubelet service should start.
journalctl logs:
Jul 27 14:46:17 kubernetes systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Jul 27 14:46:17 kubernetes kubelet[1619]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 14:46:17 kubernetes kubelet[1619]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.608612 1619 server.go:408] Version: v1.11.1
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.609679 1619 plugins.go:97] No cloud provider specified.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.613651 1619 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.709720 1619 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710299 1619 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710322 1619 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710457 1619 container_manager_linux.go:267] Creating device plugin manager: true
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710515 1619 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710600 1619 state_mem.go:84] [cpumanager] updated default cpuset: ""
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710617 1619 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710751 1619 kubelet.go:274] Adding pod path: /etc/kubernetes/manifests
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710814 1619 kubelet.go:299] Watching apiserver
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711655 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711661 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711752 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.717242 1619 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.717277 1619 client.go:104] Start docker client with request timeout=2m0s
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.718726 1619 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.718756 1619 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.721656 1619 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.721975 1619 docker_service.go:253] Docker cri networking managed by cni
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.733083 1619 docker_service.go:258] Docker Info: &{ID:V36L:ETJO:IECX:PJF4:G3GB:JHA6:LGCF:VQBJ:D2GY:PVFO:567O:545Y Containers:66 ContainersRunning:0 ContainersPaused:0 ContainersStopped:66 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:15 OomKillDisable:true NGoroutines:22 SystemTime:2018-07-27T14:46:17.727178862+02:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-862.9.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420ebd110 NCPU:12 MemTotal:33386934272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]} docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc421016140} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.733181 1619 docker_service.go:271] Setting cgroupDriver to systemd
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.825381 1619 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.839306 1619 csi_plugin.go:111] kubernetes.io/csi: plugin initializing...
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.840955 1619 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841036 1619 server.go:986] Started kubelet
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841423 1619 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841448 1619 status_manager.go:152] Starting to sync pod status with apiserver
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841462 1619 kubelet.go:1758] Starting kubelet main sync loop.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841479 1619 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841710 1619 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841754 1619 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.842653 1619 server.go:302] Adding debug handlers to kubelet server.
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.868316 1619 kubelet.go:1261] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.872508 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-hostnamed.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.872925 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-journal-flush.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.873312 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.873703 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-remount-fs.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874064 1619 container.go:393] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874452 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-readahead-collect.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874765 1619 container.go:393] Failed to create summary reader for "/system.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875097 1619 container.go:393] Failed to create summary reader for "/system.slice/kmod-static-nodes.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875392 1619 container.go:393] Failed to create summary reader for "/system.slice/irqbalance.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875679 1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-dmesg.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876007 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-readahead-replay.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876289 1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876567 1619 container.go:393] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876913 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-udev-trigger.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877200 1619 container.go:393] Failed to create summary reader for "/system.slice/kubelet.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877503 1619 container.go:393] Failed to create summary reader for "/system.slice/network.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877792 1619 container.go:393] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878118 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878486 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-user-sessions.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878912 1619 container.go:393] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.879312 1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-domainname.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.879802 1619 container.go:393] Failed to create summary reader for "/system.slice/lvm2-monitor.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880172 1619 container.go:393] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880491 1619 container.go:393] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880788 1619 container.go:393] Failed to create summary reader for "/system.slice/docker.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881112 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881402 1619 container.go:393] Failed to create summary reader for "/system.slice/kdump.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881710 1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-import-state.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882166 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-random-seed.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882509 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup-dev.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882806 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883115 1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-readonly.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883420 1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager-dispatcher.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883704 1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager-wait-online.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884005 1619 container.go:393] Failed to create summary reader for "/system.slice/crond.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884329 1619 container.go:393] Failed to create summary reader for "/system.slice/system-selinux\\x2dpolicy\\x2dmigrate\\x2dlocal\\x2dchanges.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884617 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-sysctl.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884907 1619 container.go:393] Failed to create summary reader for "/system.slice/k8s-self-hosted-recover.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885213 1619 container.go:393] Failed to create summary reader for "/system.slice/lvm2-lvmetad.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885466 1619 container.go:393] Failed to create summary reader for "/user.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885730 1619 container.go:393] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.886098 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-update-utmp.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.886384 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-vconsole-setup.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.913789 1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917905 1619 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917923 1619 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917935 1619 policy_none.go:42] [cpumanager] none policy: Start
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.926164 1619 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.932356 1619 container.go:393] Failed to create summary reader for "/libcontainer_1619_systemd_test_default.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.941592 1619 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.941762 1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.944471 1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.944714 1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: Starting Device Plugin manager
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.986308 1619 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986668 1619 container_manager_linux.go:792] CPUAccounting not enabled for pid: 998
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986680 1619 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 998
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986749 1619 container_manager_linux.go:792] CPUAccounting not enabled for pid: 1619
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986755 1619 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 1619
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.144855 1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.148528 1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.148933 1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.158503 1619 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "rook-ceph-mon0-4txgr_rook-ceph": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5b910771d1fd895b3b8d2feabdeb564cc57b213ae712416bdffec4a414dc4747"
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.300596 1619 pod_container_deletor.go:75] Container "5b910771d1fd895b3b8d2feabdeb564cc57b213ae712416bdffec4a414dc4747" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.323729 1619 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "rook-ceph-osd-id-0-54d59fc64b-c5tw4_rook-ceph": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a73305551840113b16cedd206109a837f57c6c3b2c8b1864ed5afab8b40b186d"
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.516802 1619 pod_container_deletor.go:75] Container "a73305551840113b16cedd206109a837f57c6c3b2c8b1864ed5afab8b40b186d" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.549067 1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.552841 1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.553299 1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.674143 1619 pod_container_deletor.go:75] Container "96b85439f089170cf6161f5410f8970de67f0609d469105dff4e3d5ec2d10351" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.712440 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.713284 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.714397 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:19 kubernetes kubelet[1619]: W0727 14:46:19.139032 1619 pod_container_deletor.go:75] Container "7b9757b85bc8ee4ce6ac954acf0bcd5c06b2ceb815aee802a8f53f9de18d967f" not found in pod's containers
And it goes on like this, repeating that it cannot register kubernetes
(that is my hostname) and cannot list kubernetes resources.
From the start, I applied the self-hosted recovery script (https://github.com/xetys/k8s-self-hosted-recovery) so that a reboot would not affect me. Here are its logs:
Jul 27 14:46:09 kubernetes systemd[1]: Starting Recovers self-hosted k8s after reboot...
Jul 27 14:46:09 kubernetes k8s-self-hosted-recover[1001]: [k8s-self-hosted-recover] Restoring old plane...
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
Jul 27 14:46:17 kubernetes k8s-self-hosted-recover[1001]: [k8s-self-hosted-recover] Waiting while the api server is back..
I am running out of ideas and would appreciate any help you can offer.
Hi @PierrickI3,
do you have connectivity to:
https://192.168.1.19:6443
I don't, but that is my local IP address:
[pierrick@kubernetes ~]$ curl https://192.168.1.19:6443
curl: (7) Failed connect to 192.168.1.19:6443; Connection refused
[pierrick@kubernetes ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fc:aa:14:9a:97:e4 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.19/24 brd 192.168.1.255 scope global noprefixroute dynamic eno1
valid_lft 86196sec preferred_lft 86196sec
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:4e:90:66:f7 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
6443 is the API server's secure port.
Is the API server running?
what is the output of kubectl get pods --all-namespaces
I have not seen a failure right after Watching apiserver
...
Do you have a firewall blocking 6443?
Also, can you share your API server manifest? Redact any sensitive data where necessary.
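The two checks suggested above (is anything listening on 6443, or is something blocking it) can be sketched as a small shell snippet. The `/proc/net/tcp` parsing is an assumption about a standard Linux host; the function name is made up for illustration:

```shell
# Check whether anything on this host is listening on the apiserver's
# secure port. 6443 in hex is 192B; /proc/net/tcp and /proc/net/tcp6
# list local endpoints as hex ADDR:PORT pairs.
check_port_6443() {
  if grep -qsi ':192B ' /proc/net/tcp /proc/net/tcp6; then
    echo "something is listening on 6443"
  else
    echo "nothing is listening on 6443 (apiserver not up?)"
  fi
}
check_port_6443
```

If nothing is listening, the "connection refused" from curl and kubectl is expected regardless of any firewall, and the problem is the apiserver container itself.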
Here is the kubectl output:
[pierrick@kubernetes ~]$ kubectl get pods --all-namespaces
The connection to the server 192.168.1.19:6443 was refused - did you specify the right host or port?
I don't have any firewall blocking this port, and I ran this directly on the machine.
Here is the manifest (there is no sensitive data to redact, since it is not exposed):
apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --authorization-mode=Node,RBAC
- --advertise-address=192.168.1.19
- --allow-privileged=true
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --disable-admission-plugins=PersistentVolumeLabel
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: k8s.gcr.io/kube-apiserver-amd64:v1.11.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 192.168.1.19
path: /healthz
port: 6443
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-apiserver
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
here is the output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0ab99c997dc4 272b3a60cd68 "kube-scheduler --..." 37 minutes ago Up 37 minutes k8s_kube-scheduler_kube-scheduler-kubernetes_kube-system_537879acc30dd5eff5497cb2720a6d64_8
058ee146b06f 52096ee87d0e "kube-controller-m..." 37 minutes ago Up 37 minutes k8s_kube-controller-manager_kube-controller-manager-kubernetes_kube-system_0da157f6a48b9a49830c56c19af5c954_0
dca22f2e66c1 k8s.gcr.io/pause:3.1 "/pause" 37 minutes ago Up 37 minutes k8s_POD_kube-scheduler-kubernetes_kube-system_537879acc30dd5eff5497cb2720a6d64_8
301d6736b6ad k8s.gcr.io/pause:3.1 "/pause" 37 minutes ago Up 37 minutes k8s_POD_kube-controller-manager-kubernetes_kube-system_0da157f6a48b9a49830c56c19af5c954_0
10cf027f301b k8s.gcr.io/pause:3.1 "/pause" 37 minutes ago Up 37 minutes k8s_POD_kube-apiserver-kubernetes_kube-system_b784b670ba660d7fe4b0407690d68d81_4
c7c59c971636 k8s.gcr.io/pause:3.1 "/pause" 37 minutes ago Up 37 minutes k8s_POD_etcd-kubernetes_kube-system_6fd4d3c9fe373df920ce5e1e4572fd1d_8
--disable-admission-plugins=PersistentVolumeLabel
this should be deprecated and disabled by default in 1.11.1; did you adapt an old kubeadm configuration?
(edit: oh wait, we didn't pick this one: https://github.com/kubernetes/kubernetes/pull/65827)
What are the contents of your kubeadm configuration (again, redact sensitive data where necessary)?
How can I retrieve the kubeadm configuration? Running kubeadm config view results in the same connection refused error.
To deploy, I followed the kubeadm instructions. Here is what I ran:
# update yum packages
yum update -y
# install git, wget & docker
yum install -y git wget nano go docker
# install CRI
rpm --import https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO
curl -s https://mirror.go-repo.io/centos/go-repo.repo | tee /etc/yum.repos.d/go-repo.repo
yum update -y golang
# start Docker
systemctl enable docker && systemctl start docker
# disable swap (not supported by kubeadm)
swapoff -a
# add kubernetes repo to yum
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0 # required to allow containers to access the host filesystem (https://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-enable-disable-enforcement.html). To disable permanently: https://www.tecmint.com/disable-selinux-temporarily-permanently-in-centos-rhel-fedora/
# disable firewall (I know, not great but I am fed up with opening ports and I am behind another firewall and I can do whatever I want)
systemctl disable firewalld && systemctl stop firewalld
###########
# KUBEADM #
###########
# install kubelet, kubeadm and kubectl
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
# prevent issues with traffic being routed incorrectly due to iptables being bypassed
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
# install CRICTL (https://github.com/kubernetes-incubator/cri-tools), required by kubeadm
go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
# deploy kubernetes
kubeadm init --pod-network-cidr=10.244.0.0/16
# allow kubectl for non sudoers (run this as a regular user)
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
# For the root user, run this:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'KUBECONFIG=/etc/kubernetes/admin.conf' >> $HOME/.bashrc
# deploy pod network (flannel)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl taint nodes --all node-role.kubernetes.io/master- # allow pods to be scheduled on master
###################
# REBOOTING ISSUE #
###################
# At the time of writing this, rebooting causes kubernetes to no longer work. This will fix it (http://stytex.de/blog/2018/01/16/how-to-recover-self-hosted-kubeadm-kubernetes-cluster-after-reboot/)
git clone https://github.com/xetys/k8s-self-hosted-recovery
cd k8s-self-hosted-recovery
chmod +x install.sh
./install.sh
cd ..
kubeadm init --pod-network-cidr=10.244.0.0/16
in this case, you are not passing --config
so your configuration is more or less:
kubeadm config print-default
+ the CIDR change.
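The effective configuration above can be sketched as a kubeadm config file. This is a hypothetical reconstruction, assuming the `kubeadm.k8s.io/v1alpha2` API group that kubeadm v1.11 uses; the file name is made up:

```yaml
# master-config.yaml (hypothetical) -- roughly what
# `kubeadm init --pod-network-cidr=10.244.0.0/16` expands to on v1.11
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
api:
  advertiseAddress: 192.168.1.19
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
```

Such a file could then be passed explicitly as `kubeadm init --config master-config.yaml`.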
What happens if you use a different CNI?
(I know it worked before the reboot, but just testing...).
first this, without the CIDR:
# deploy kubernetes
kubeadm init
then:
# deploy pod network (weave)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl taint nodes --all node-role.kubernetes.io/master- # allow pods to be scheduled on master
edit: also play with removing the taint line:
kubectl taint nodes --all node-role.kubernetes.io/master- ...
Should I run kubeadm reset before trying again? Or just run kubeadm init and then apply the CIDR?
Should I run kubeadm reset before trying again?
yes, unless you have some reason not to?
I ran kubeadm reset, and then an error occurred after [init] this might take a minute or longer if the control plane images have to be pulled:
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0727 16:36:29.663874 26847 kernel_validator.go:81] Validating kernel version
I0727 16:36:29.664814 26847 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubernetes kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kubernetes localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubernetes localhost] and IPs [192.168.1.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-amd64:v1.11.1
- k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
- k8s.gcr.io/kube-scheduler-amd64:v1.11.1
- k8s.gcr.io/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
Here is the kubeadm reset output:
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
Unfortunately, an error has occurred: timed out waiting for the condition
I've been seeing this from users over the last few days, for some reason...
Can you pre-pull the images using:
kubeadm config images pull --kubernetes-version 1.11.1
then try init again.
If it fails again, what does journalctl say about the kubelet?
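A quick way to pull the relevant kubelet logs is a sketch like the following; it assumes a systemd host with journalctl, and falls back with a notice where that is unavailable (the function name is made up for illustration):

```shell
# Show the most recent kubelet log lines; fall back to a notice when
# journalctl is not usable (e.g. inside a container or without access
# to the journal).
kubelet_logs() {
  local n="${1:-50}"
  journalctl -u kubelet --no-pager -n "$n" 2>/dev/null \
    || echo "journalctl not available on this host"
}
kubelet_logs 50
```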
here is the output of kubeadm config images pull --kubernetes-version 1.11.1:
[root@kubernetes ~]# kubeadm config images pull --kubernetes-version 1.11.1
[config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
[config/images] Pulled k8s.gcr.io/coredns:1.1.3
Then I ran kubeadm reset:
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
Then kubeadm init:
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0727 17:02:09.000020 3408 kernel_validator.go:81] Validating kernel version
I0727 17:02:09.000209 3408 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubernetes kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kubernetes localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubernetes localhost] and IPs [192.168.1.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-amd64:v1.11.1
- k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
- k8s.gcr.io/kube-scheduler-amd64:v1.11.1
- k8s.gcr.io/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
Latest journalctl entries:
Jul 27 17:03:00 kubernetes kubelet[3636]: I0727 17:03:00.604819 3636 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"
Jul 27 17:03:00 kubernetes kubelet[3636]: I0727 17:03:00.604946 3636 kuberuntime_manager.go:767] Back-off 20s restarting failed container=etcd pod=etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)
Jul 27 17:03:00 kubernetes kubelet[3636]: E0727 17:03:00.604982 3636 pod_workers.go:186] Error syncing pod 6fd4d3c9fe373df920ce5e1e4572fd1d ("etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 20s restarting failed container=etcd pod=etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.175841 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.179832 3636 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.180286 3636 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.414597 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.415518 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.416549 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.428500 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.732777 3636 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.1 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=192.168.1.19 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:etc-pki ReadOnly:true MountPath:/etc/pki SubPath: MountPropagation:<nil>}] VolumeDevices:[] 
LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.1.19,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.732893 3636 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.325269 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.325333 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:02 kubernetes kubelet[3636]: W0727 17:03:02.329421 3636 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.415253 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.416228 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.417254 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629551 3636 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.1 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=192.168.1.19 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:etc-pki ReadOnly:true MountPath:/etc/pki SubPath: MountPropagation:<nil>}] VolumeDevices:[] 
LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.1.19,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629669 3636 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629850 3636 kuberuntime_manager.go:767] Back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.629891 3636 pod_workers.go:186] Error syncing pod 32544bee4c007108f4b6c54da83cc67e ("kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.415923 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.416815 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.417995 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.416640 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.417583 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.418535 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.417293 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.418283 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.419337 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: W0727 17:03:05.511995 3636 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.512167 3636 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:03:06 kubernetes kubelet[3636]: I0727 17:03:06.396897 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:06 kubernetes kubelet[3636]: W0727 17:03:06.401380 3636 status_manager.go:482] Failed to get status for pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.417876 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.418801 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.419836 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: I0727 17:03:06.577952 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:06 kubernetes kubelet[3636]: W0727 17:03:06.582165 3636 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
I still can't add any network add-on.
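For reference, the `Unable to update cni config: No networks found in /etc/cni/net.d` warnings in the logs only mean that no pod network add-on (flannel, calico, ...) has been applied yet; that is expected on a fresh control plane and is not what keeps the apiserver from listening. A minimal check, assuming the default `--cni-conf-dir` path shown in the kubelet command line above:

```shell
# An empty or missing /etc/cni/net.d matches the kubelet warning and simply
# means no network add-on has been applied yet.
if [ -d /etc/cni/net.d ] && [ -n "$(ls -A /etc/cni/net.d 2>/dev/null)" ]; then
  echo "CNI config present"
else
  echo "no CNI config installed"
fi
```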
Yes, this is failing earlier. A successful "pull images" implies that you have connectivity to the gcr.io registry.
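If needed, basic registry connectivity can be re-checked from the node. A minimal sketch that treats any HTTP response (even 302/401) as reachable:

```shell
# Probe the registry endpoint; any HTTP response means the node can reach
# gcr.io, which a successful image pull already implied.
if curl -s -o /dev/null --max-time 5 https://gcr.io/; then
  echo "gcr.io reachable"
else
  echo "gcr.io unreachable"
fi
```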
Please restart the kubelet manually and see what the logs show:
systemctl restart kubelet
systemctl status kubelet # <---- ?
journalctl -xeu kubelet # <---- ?
I'm running out of ideas.
systemctl restart kubelet
systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Fri 2018-07-27 17:22:15 CEST; 6s ago
Docs: http://kubernetes.io/docs/
Main PID: 10401 (kubelet)
CGroup: /system.slice/kubelet.service
└─10401 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network...
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.636513 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.482761 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&li...: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.483784 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkuberne...: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.484750 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.483505 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&li...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.484300 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkuberne...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.485469 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116 10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Hint: Some lines were ellipsized, use -l to show in full.
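Every error above is the same symptom: `dial tcp 192.168.1.19:6443: connect: connection refused`, i.e. nothing is listening on the apiserver port because the kube-apiserver container keeps crash-looping. A minimal local check, assuming `ss` from iproute2 is available on the node:

```shell
# "connection refused" means there is no listener on 6443; confirm locally.
if ss -tln 2>/dev/null | grep -q ':6443'; then
  echo "apiserver port 6443 open"
else
  echo "apiserver port 6443 closed"
fi
```

When the port is closed, the next step is the crashing container itself (e.g. `docker logs` on the exited kube-apiserver container), since the kubelet log only shows the fallout, not the apiserver's own failure reason.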
journalctl -xeu kubelet | less:
-- Subject: Unit kubelet.service has begun shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun shutting down.
Jul 27 17:22:15 kubernetes systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Jul 27 17:22:15 kubernetes systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Jul 27 17:22:15 kubernetes kubelet[10401]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 17:22:15 kubernetes kubelet[10401]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.439279 10401 server.go:408] Version: v1.11.1
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.439636 10401 plugins.go:97] No cloud provider specified.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.443829 10401 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478490 10401 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478906 10401 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478931 10401 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479017 10401 container_manager_linux.go:267] Creating device plugin manager: true
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479052 10401 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479103 10401 state_mem.go:84] [cpumanager] updated default cpuset: ""
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479113 10401 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479206 10401 kubelet.go:274] Adding pod path: /etc/kubernetes/manifests
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479256 10401 kubelet.go:299] Watching apiserver
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.479973 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.480000 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.480053 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.484688 10401 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.484709 10401 client.go:104] Start docker client with request timeout=2m0s
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.486438 10401 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.486467 10401 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.486599 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.488739 10401 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.488845 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.488882 10401 docker_service.go:253] Docker cri networking managed by cni
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.499501 10401 docker_service.go:258] Docker Info: &{ID:V36L:ETJO:IECX:PJF4:G3GB:JHA6:LGCF:VQBJ:D2GY:PVFO:567O:545Y Containers:8 ContainersRunning:6 ContainersPaused:0 ContainersStopped:2 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:47 SystemTime:2018-07-27T17:22:15.493615517+02:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-862.9.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420f3e000 NCPU:12 MemTotal:33386934272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]} runc:{Path:docker-runc Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc420f48000} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.499629 10401 docker_service.go:271] Setting cgroupDriver to systemd
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.510960 10401 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.511524 10401 csi_plugin.go:111] kubernetes.io/csi: plugin initializing...
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.512249 10401 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.512297 10401 kubelet.go:1261] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.512389 10401 server.go:986] Started kubelet
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.512717 10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513432 10401 server.go:302] Adding debug handlers to kubelet server.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513637 10401 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513742 10401 status_manager.go:152] Starting to sync pod status with apiserver
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513810 10401 kubelet.go:1758] Starting kubelet main sync loop.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513901 10401 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.516479 10401 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.516557 10401 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.518992 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.519170 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557259 10401 container.go:393] Failed to create summary reader for "/system.slice/docker.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557505 10401 container.go:393] Failed to create summary reader for "/system.slice/irqbalance.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557765 10401 container.go:393] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.561683 10401 container.go:393] Failed to create summary reader for "/system.slice/k8s-self-hosted-recover.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.565386 10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.565666 10401 container.go:393] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.567498 10401 container.go:393] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.567815 10401 container.go:393] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.568113 10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.568957 10401 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.569544 10401 container.go:393] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.572074 10401 container.go:393] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.572295 10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.575556 10401 container.go:393] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.580473 10401 container.go:393] Failed to create summary reader for "/system.slice/crond.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.580825 10401 container.go:393] Failed to create summary reader for "/system.slice/lvm2-lvmetad.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.614059 10401 kubelet.go:1775] skipping pod synchronization - [container runtime is down]
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.614738 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.617620 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.618177 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.618610 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620065 10401 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620083 10401 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620097 10401 policy_none.go:42] [cpumanager] none policy: Start
Jul 27 17:22:15 kubernetes kubelet[10401]: Starting Device Plugin manager
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621716 10401 container_manager_linux.go:792] CPUAccounting not enabled for pid: 986
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621726 10401 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 986
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621797 10401 container_manager_linux.go:792] CPUAccounting not enabled for pid: 10401
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621804 10401 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 10401
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.622220 10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.818778 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.822804 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.823257 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: I0727 17:22:16.223461 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:16 kubernetes kubelet[10401]: I0727 17:22:16.227305 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.227718 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.480650 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.481646 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.482678 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: I0727 17:22:17.027967 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:17 kubernetes kubelet[10401]: I0727 17:22:17.031757 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.032125 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.481348 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.482224 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.483219 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.482059 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.483034 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.484060 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: I0727 17:22:18.632340 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:18 kubernetes kubelet[10401]: I0727 17:22:18.636096 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.636513 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.482761 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.483784 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.484750 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.483505 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.484300 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.485469 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116 10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.484293 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.485115 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.486258 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: I0727 17:22:21.836741 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:21 kubernetes kubelet[10401]: I0727 17:22:21.840909 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.841341 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.484956 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.485979 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.487497 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.485724 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.486543 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.488107 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.486391 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.487305 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.488708 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.487061 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.488108 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.489302 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.622507 10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:25 kubernetes kubelet[10401]: W0727 17:22:25.624576 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.624815 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.487846 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.488682 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.489896 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.488510 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.489607 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.490628 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: I0727 17:22:28.241605 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:28 kubernetes kubelet[10401]: I0727 17:22:28.245892 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.246319 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.489234 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.490271 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.491127 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.489955 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.490777 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.491942 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.490671 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.491715 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.492608 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: W0727 17:22:30.626001 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.626177 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.058799 10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.491375 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.492253 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.493403 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.492126 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.493049 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.494127 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.492838 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.493750 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.494858 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.493558 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.494514 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.495496 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.246559 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.251191 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.251606 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.493606 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.494144 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.495144 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.496164 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.530217 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.530338 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.534682 10401 status_manager.go:482] Failed to get status for pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.552571 10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/6fd4d3c9fe373df920ce5e1e4572fd1d-etcd-certs") pod "etcd-kubernetes" (UID: "6fd4d3c9fe373df920ce5e1e4572fd1d")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.552633 10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/6fd4d3c9fe373df920ce5e1e4572fd1d-etcd-data") pod "etcd-kubernetes" (UID: "6fd4d3c9fe373df920ce5e1e4572fd1d")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.563624 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.563750 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.567753 10401 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.596873 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.596962 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.601046 10401 status_manager.go:482] Failed to get status for pod "kube-controller-manager-kubernetes_kube-system(fe4b0bda62e8e0df1386dc034ba16ee3)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.622704 10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.627154 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.627334 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.630351 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.634273 10401 status_manager.go:482] Failed to get status for pod "kube-scheduler-kubernetes_kube-system(537879acc30dd5eff5497cb2720a6d64)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.652980 10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/32544bee4c007108f4b6c54da83cc67e-k8s-certs") pod "kube-apiserver-kubernetes" (UID: "32544bee4c007108f4b6c54da83cc67e")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.653036 10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/fe4b0bda62e8e0df1386dc034ba16ee3-ca-certs") pod "kube-controller-manager-kubernetes" (UID: "fe4b0bda62e8e0df1386dc034ba16ee3")
What happens if you stop everything Kubernetes-related on this node and try to run a small server listening on 192.168.1.19:6443?
kubeadm reset
systemctl stop kubelet
netstat -tulpn
to make sure nothing is listening there.
Write this to a file called test.go:
package main

import (
    "net/http"
    "strings"
)

func sayHello(w http.ResponseWriter, r *http.Request) {
    message := r.URL.Path
    message = strings.TrimPrefix(message, "/")
    message = "Hello " + message
    w.Write([]byte(message))
}

func main() {
    http.HandleFunc("/", sayHello)
    if err := http.ListenAndServe("192.168.1.19:6443", nil); err != nil {
        panic(err)
    }
}
go run test.go
$ curl 192.168.1.19:6443
Does it work?
I know this is silly, but I'm out of ideas.
Output of netstat -tulpn:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 989/sshd
tcp6 0 0 :::22 :::* LISTEN 989/sshd
udp 0 0 0.0.0.0:68 0.0.0.0:* 807/dhclient
Output of curl after running test.go:
[pierrick@kubernetes ~]$ curl 192.168.1.19:6443
Hello [pierrick@kubernetes ~]$
Great, it works!
Sorry, but I can't help any more today.
Maybe someone else can take a look at the logs and figure out what's going on.
Anyone in particular you have in mind? Could you tag them on this issue?
@kubernetes/sig-cluster-lifecycle-bugs
@PierrickI3 can you check whether the API server is listening on port 6443? If so, can you check your proxy configuration (if applicable)?
netstat -tulpn |grep 6443
set |grep -i proxy
@bart0sh Thanks, but I had to start over this weekend, so I can no longer troubleshoot this issue. I'll close it.
FYI, the output of netstat -tulpn is shown above (https://github.com/kubernetes/kubeadm/issues/1026#issuecomment-408457991). I have no proxy (direct internet connection).
Why is this closed? I'm facing a similar issue on my VirtualBox VM with CentOS 7.
So the only solution was to create an entirely new box?
@PierrickI3 I'm having exactly the same problem and most of the logs are similar. Did you reach a solution or workaround?
@kheirp unfortunately not. I gave up and went back to minikube for development purposes. But I'd love to go back to kubeadm if a solution is found.
@PierrickI3 ok... when I get somewhere, I'll let you know.
Same problem running Ubuntu 16 on VMware.
Same problem running on RHEL 7.5.
Same problem running on a CentOS 7 cluster.
Same here with a Kubic cluster.
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -tnat --flush
systemctl start kubelet
systemctl start docker
Maybe you forgot to run this after installing, @PierrickI3:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
We're having exactly the same problems on RHEL 7. This also stopped working out of nowhere. This is critical for us.
I have run
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
etc., etc., but the master node refuses connections even from all the nodes in our cluster.
I was never able to reproduce this issue; the cluster simply starts working fine again 2-3 minutes after the reboot.
It must be something related to individual setups.
It looks like my API server is no longer running. I've tried restarting kubelet several times, but to no avail. The funny thing is that it had been working and I hadn't touched anything until yesterday, when I went to add more Docker images for our cluster.
I figured out what was going on. The mounted /var directory had filled up. It's now working as expected.
I assumed that, since a rule for port 6443 had been added to iptables and I was continuously getting connection refused even from localhost, and since docker ps showed no running containers, the API service (and other services) was not running, which meant something strange was going on. And sure enough... something strange, although nothing in the kubectl logs indicates why the API service failed to start.
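For anyone hitting the same thing, a quick way to spot this failure mode is to check whether the filesystem holding /var (Docker images, container logs, etcd data) is full; this is a generic sketch, not a command from the original report:

```shell
# Check free space on the filesystem that holds /var:
df -h /var
# Find the biggest directories under it (run as root for full coverage);
# errors from unreadable paths are discarded:
du -xh --max-depth=2 /var 2>/dev/null | sort -rh | head
```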
I figured out what was going on. The mounted /var directory had filled up.
OMG THANK YOU!!!
The systemctl status kubelet command shows this error info:
"Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116 10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)"
So:
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
add these parameters: --network-plugin=cni --cni-conf-dir=/etc/cni/ --cni-bin-dir=/opt/cni/bin
and then:
kubeadm reset
kubeadm init
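For reference, on older kubeadm drop-ins those flags were passed through an Environment line; a sketch of what the edited fragment might look like (the variable name and layout vary by kubeadm version, so treat this as an assumption, not the exact file contents):

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (fragment; layout varies by version)
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/ --cni-bin-dir=/opt/cni/bin"
```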
Try this on every node, @PierrickI3:
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -tnat --flush
systemctl start kubelet
systemctl start docker
This fixed my problem, although I only did this on the master node and only reset kubeadm on the other nodes.
Thanks @manoj-bandara, but I've given up for now and will come back to this later this year. See https://github.com/kubernetes/kubeadm/issues/1026#issuecomment-420948092
Same problem here - I tried many things. It looks like this was the issue:
failed: open /run/systemd/resolve/resolv.conf: no such file or directory
I linked /etc/resolv.conf to that location and restarted kubelet (on the master only), then the master and all the nodes came back up again.
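A sketch of that workaround, assuming the default paths from the error message (run as root on the master):

```shell
# Recreate the directory the kubelet expects and point the missing file
# at the real resolv.conf:
mkdir -p /run/systemd/resolve
ln -sf /etc/resolv.conf /run/systemd/resolve/resolv.conf
# then restart the kubelet:
#   systemctl restart kubelet
```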
In my case it was a swap issue; I fixed it by turning swap off:
sudo swapoff -a
sudo systemctl restart kubelet.service
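Note that swapoff alone does not survive a reboot; to keep swap off persistently you also need to comment out the swap entries in /etc/fstab. A hedged sketch (run as root):

```shell
# Turn swap off now; ignore the error if it is already off or we lack privileges:
swapoff -a 2>/dev/null || true
# Comment out swap lines in fstab so the setting survives a reboot
# (a .bak backup is kept; skipped if fstab is not writable):
[ -w /etc/fstab ] && sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab || true
```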
Try this on every node, @PierrickI3:
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -tnat --flush
systemctl start kubelet
systemctl start docker
OMG, amazing command!! Thank you so much, really!!
Try this on every node, @PierrickI3:
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -tnat --flush
systemctl start kubelet
systemctl start docker
OMG, amazing command!! Thank you so much, really!!
@JTRNEO, I have to remember this one too :)
I've faced the same issue on several installations. I haven't figured it out yet. I tried the Flannel, Calico and Weave plugins and the result is the same. So the problem is probably not related to the network plugins.
I'm not sure it's related to the issue, but even though I run "sudo swapoff -a", after rebooting the server it tells me to turn swap off again.
Same problem here. My kube-apiserver logs are full of:
I1124 18:44:13.180672 1 log.go:172] http: TLS handshake error from 192.168.1.235:56160: EOF
I1124 18:44:13.186601 1 log.go:172] http: TLS handshake error from 192.168.1.235:56244: EOF
I1124 18:44:13.201880 1 log.go:172] http: TLS handshake error from 192.168.1.192:56340: EOF
I1124 18:44:13.208991 1 log.go:172] http: TLS handshake error from 192.168.1.235:56234: EOF
I1124 18:44:13.248214 1 log.go:172] http: TLS handshake error from 192.168.1.235:56166: EOF
I1124 18:44:13.292943 1 log.go:172] http: TLS handshake error from 192.168.1.235:56272: EOF
I1124 18:44:13.332362 1 log.go:172] http: TLS handshake error from 192.168.1.235:56150: EOF
I1124 18:44:13.352911 1 log.go:172] http: TLS handshake error from 192.168.1.249:41300: EOF
Flushing iptables didn't work for me, and my swap is already off.
Did you install flannel on the worker nodes?
No, I'm using kube-router for networking. Its pod is in CrashLoopBackOff and full of errors about not being able to talk to the API server.
The cluster worked fine for months, but then the power went out, so I rebooted my machines and ended up with a broken cluster.
@cjbottaro
Could it be that your kubelet client certificates have expired?
See the second warning here:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration
"On nodes created with kubeadm init, prior to kubeadm version 1.17..."
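A quick way to check for the expired-certificate case; the path below is the kubeadm default, so adjust it if your PKI lives elsewhere:

```shell
# Print the expiry date of the kubelet client certificate used by the apiserver;
# the path is the kubeadm default and may differ on your setup:
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver-kubelet-client.crt || true
# kubeadm 1.15+ also has a built-in check:
#   kubeadm alpha certs check-expiration
```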
Same problem running Ubuntu 16 on VMware.
I'm also running a cluster on VMware; what solved your issue? Thanks
Tuve los mismos problemas hoy después de editar la configuración de service-cidr en mi nuevo clúster de kube. El problema para mí fue que el contenedor de la ventana acoplable kube-apiserver se agitaba. Después de mirar los registros usando los registros de la ventana acoplable
Error: error determining service IP ranges for primary service cidr: The service cluster IP range must be at least 8 IP addresses.
I thought I could give the services a smaller /30 CIDR, but I needed to open it up to a /29.
Not sure if it's related, but note that 1.17.0 has CIDR-related bugs, so hopefully you are running a more recent 1.17 patch release.
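To see why the /30 failed, count the addresses in each prefix. This quick sketch with Python's ipaddress module mirrors the apiserver's "at least 8 IP addresses" check (the 10.96.0.0 range is just an example value, not taken from the report above):

```python
import ipaddress

for cidr in ("10.96.0.0/30", "10.96.0.0/29"):
    net = ipaddress.ip_network(cidr)
    # The apiserver rejects a primary service CIDR with fewer than 8 addresses.
    verdict = "ok" if net.num_addresses >= 8 else "too small"
    print(cidr, net.num_addresses, verdict)
# 10.96.0.0/30 4 too small
# 10.96.0.0/29 8 ok
```

A /30 only spans 2^2 = 4 addresses, while a /29 spans 2^3 = 8, which is exactly the minimum the error message demands.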
You may need to change the iptables rules.
Run iptables -L --line-numbers to find the REJECT rule with icmp-host-prohibited,
then run iptables -D INPUT 153 to delete it (here 153 is that rule's line number),
and finally restart kubelet.
I had a similar problem with a kubeadm cluster. I just ran docker restart $(docker ps -qa) twice.
I have the same problem as @PierrickI3. After a reboot, the control-plane node is down. kubelet is running, trying to connect to a server that is not running. etcd is running. There are no CNI network interfaces, only loopback, ethernet and docker.
No particular error shows up anywhere; things simply don't start, even though everything worked fine for a long time before the reboot. I've tried everything mentioned in this thread as well as in many others.
I'm completely confused about what starts what here, so I can investigate the problem further. Does the kubelet service bring up the CNI network and then start the control-plane pods (e.g. apiserver, scheduler, etc.)? I've checked docker ps -a and there was no attempt to start the control-plane containers, nor do they have a restart policy. So why does kubelet try to talk to the API server when it hasn't started it?
I had a similar problem; it's not resolved for me yet, but it happened because docker was upgraded to an incompatible version. Just check whether your docker service is working or not.
@neolit123
'@cjbottaro'
Could it be that your kubelet client certificates have expired? See the second warning here:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration
On nodes created with kubeadm init, before kubeadm version 1.17 ...
Thanks. My machine could not connect after a reboot. After checking the certificates, I found that 3 of them had expired.
[root@localhost home]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
W0128 14:02:50.815166 21689 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Sep 17, 2021 06:56 UTC 232d no
apiserver Sep 17, 2021 06:55 UTC 232d ca no
apiserver-etcd-client Sep 17, 2021 06:55 UTC 232d etcd-ca no
apiserver-kubelet-client Sep 17, 2021 06:55 UTC 232d ca no
controller-manager.conf Sep 17, 2021 06:56 UTC 232d no
etcd-healthcheck-client Dec 19, 2020 07:56 UTC <invalid> etcd-ca no
etcd-peer Dec 19, 2020 07:56 UTC <invalid> etcd-ca no
etcd-server Dec 19, 2020 07:56 UTC <invalid> etcd-ca no
front-proxy-client Sep 17, 2021 06:55 UTC 232d front-proxy-ca no
scheduler.conf Sep 17, 2021 06:56 UTC 232d no
Then renew the certificates:
kubeadm alpha certs renew etcd-healthcheck-client
kubeadm alpha certs renew etcd-peer
kubeadm alpha certs renew etcd-server
Restart the services:
systemctl daemon-reload
systemctl restart kubelet
systemctl restart docker
It works fine now.
Hi,
Check whether swap is enabled on the master and worker nodes. Disable it and restart the service.
Most helpful comment
try this on each node @PierrickI3
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker