Kubeadm: kubelet does not restart after a reboot - unable to register node with the API server (connection refused)

Created on 27 Jul 2018  ·  60 comments  ·  Source: kubernetes/kubeadm

Thanks for filing an issue! Before hitting the button, please answer these questions.

Is this a request for help?

It is, but I have searched StackOverflow and googled many times without finding the problem. Also, this seems likely to affect more people.

What keywords did you search in kubeadm issues before filing this one?

The error messages shown in journalctl.

Is this a BUG REPORT or FEATURE REQUEST?

Bug report

Versions

kubeadm version (use kubeadm version): kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
    The server is also 1.11, but since it is not up at the moment it does not show in kubectl version.
  • Cloud provider or hardware configuration: local hardware (self-hosted)
  • OS (e.g. from /etc/os-release):
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • Kernel (e.g. uname -a): Linux kubernetes 3.10.0-862.9.1.el7.x86_64 #1 SMP Mon Jul 16 16:29:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

What happened?

The kubelet service does not start.

What you expected to happen?

The kubelet service should start.

それを可胜な限り最小限か぀正確に再珟する方法は

  • Deployed kubernetes using kubeadm
  • Deployed several services and could verify everything was working properly
  • Rebooted
  • The kubelet service no longer starts

Anything else we need to know?

journalctl logs:

Jul 27 14:46:17 kubernetes systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Jul 27 14:46:17 kubernetes kubelet[1619]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.608612    1619 server.go:408] Version: v1.11.1
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.609679    1619 plugins.go:97] No cloud provider specified.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.613651    1619 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.709720    1619 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710299    1619 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710322    1619 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710457    1619 container_manager_linux.go:267] Creating device plugin manager: true
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710515    1619 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710600    1619 state_mem.go:84] [cpumanager] updated default cpuset: ""
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710617    1619 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710751    1619 kubelet.go:274] Adding pod path: /etc/kubernetes/manifests
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710814    1619 kubelet.go:299] Watching apiserver
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711655    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711661    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711752    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.717242    1619 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.717277    1619 client.go:104] Start docker client with request timeout=2m0s
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.718726    1619 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.718756    1619 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.721656    1619 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.721975    1619 docker_service.go:253] Docker cri networking managed by cni
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.733083    1619 docker_service.go:258] Docker Info: &{ID:V36L:ETJO:IECX:PJF4:G3GB:JHA6:LGCF:VQBJ:D2GY:PVFO:567O:545Y Containers:66 ContainersRunning:0 ContainersPaused:0 ContainersStopped:66 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:15 OomKillDisable:true NGoroutines:22 SystemTime:2018-07-27T14:46:17.727178862+02:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-862.9.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420ebd110 NCPU:12 MemTotal:33386934272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]} docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc421016140} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.733181    1619 docker_service.go:271] Setting cgroupDriver to systemd
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.825381    1619 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.839306    1619 csi_plugin.go:111] kubernetes.io/csi: plugin initializing...
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.840955    1619 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841036    1619 server.go:986] Started kubelet
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841423    1619 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841448    1619 status_manager.go:152] Starting to sync pod status with apiserver
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841462    1619 kubelet.go:1758] Starting kubelet main sync loop.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841479    1619 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841710    1619 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841754    1619 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.842653    1619 server.go:302] Adding debug handlers to kubelet server.
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.868316    1619 kubelet.go:1261] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.872508    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-hostnamed.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.872925    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-journal-flush.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.873312    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.873703    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-remount-fs.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874064    1619 container.go:393] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874452    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-readahead-collect.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874765    1619 container.go:393] Failed to create summary reader for "/system.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875097    1619 container.go:393] Failed to create summary reader for "/system.slice/kmod-static-nodes.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875392    1619 container.go:393] Failed to create summary reader for "/system.slice/irqbalance.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875679    1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-dmesg.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876007    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-readahead-replay.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876289    1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876567    1619 container.go:393] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876913    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-udev-trigger.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877200    1619 container.go:393] Failed to create summary reader for "/system.slice/kubelet.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877503    1619 container.go:393] Failed to create summary reader for "/system.slice/network.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877792    1619 container.go:393] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878118    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878486    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-user-sessions.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878912    1619 container.go:393] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.879312    1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-domainname.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.879802    1619 container.go:393] Failed to create summary reader for "/system.slice/lvm2-monitor.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880172    1619 container.go:393] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880491    1619 container.go:393] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880788    1619 container.go:393] Failed to create summary reader for "/system.slice/docker.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881112    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881402    1619 container.go:393] Failed to create summary reader for "/system.slice/kdump.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881710    1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-import-state.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882166    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-random-seed.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882509    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup-dev.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882806    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883115    1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-readonly.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883420    1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager-dispatcher.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883704    1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager-wait-online.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884005    1619 container.go:393] Failed to create summary reader for "/system.slice/crond.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884329    1619 container.go:393] Failed to create summary reader for "/system.slice/system-selinux\\x2dpolicy\\x2dmigrate\\x2dlocal\\x2dchanges.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884617    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-sysctl.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884907    1619 container.go:393] Failed to create summary reader for "/system.slice/k8s-self-hosted-recover.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885213    1619 container.go:393] Failed to create summary reader for "/system.slice/lvm2-lvmetad.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885466    1619 container.go:393] Failed to create summary reader for "/user.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885730    1619 container.go:393] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.886098    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-update-utmp.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.886384    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-vconsole-setup.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.913789    1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917905    1619 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917923    1619 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917935    1619 policy_none.go:42] [cpumanager] none policy: Start
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.926164    1619 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.932356    1619 container.go:393] Failed to create summary reader for "/libcontainer_1619_systemd_test_default.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.941592    1619 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.941762    1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.944471    1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.944714    1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: Starting Device Plugin manager
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.986308    1619 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986668    1619 container_manager_linux.go:792] CPUAccounting not enabled for pid: 998
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986680    1619 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 998
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986749    1619 container_manager_linux.go:792] CPUAccounting not enabled for pid: 1619
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986755    1619 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 1619
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.144855    1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.148528    1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.148933    1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.158503    1619 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "rook-ceph-mon0-4txgr_rook-ceph": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5b910771d1fd895b3b8d2feabdeb564cc57b213ae712416bdffec4a414dc4747"
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.300596    1619 pod_container_deletor.go:75] Container "5b910771d1fd895b3b8d2feabdeb564cc57b213ae712416bdffec4a414dc4747" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.323729    1619 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "rook-ceph-osd-id-0-54d59fc64b-c5tw4_rook-ceph": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a73305551840113b16cedd206109a837f57c6c3b2c8b1864ed5afab8b40b186d"
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.516802    1619 pod_container_deletor.go:75] Container "a73305551840113b16cedd206109a837f57c6c3b2c8b1864ed5afab8b40b186d" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.549067    1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.552841    1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.553299    1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.674143    1619 pod_container_deletor.go:75] Container "96b85439f089170cf6161f5410f8970de67f0609d469105dff4e3d5ec2d10351" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.712440    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.713284    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.714397    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:19 kubernetes kubelet[1619]: W0727 14:46:19.139032    1619 pod_container_deletor.go:75] Container "7b9757b85bc8ee4ce6ac954acf0bcd5c06b2ceb815aee802a8f53f9de18d967f" not found in pod's containers

And it goes on and on like that, failing to register kubernetes (which is my hostname) and failing to list kubernetes resources.

From the start I had applied the self-hosted-recover script (https://github.com/xetys/k8s-self-hosted-recovery) so that reboots would not affect the cluster. Here are its logs:

Jul 27 14:46:09 kubernetes systemd[1]: Starting Recovers self-hosted k8s after reboot...
Jul 27 14:46:09 kubernetes k8s-self-hosted-recover[1001]: [k8s-self-hosted-recover] Restoring old plane...
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
Jul 27 14:46:17 kubernetes k8s-self-hosted-recover[1001]: [k8s-self-hosted-recover] Waiting while the api server is back..

I am running out of ideas, so any help you can bring is welcome.

area/HA help wanted kind/bug priority/awaiting-more-evidence sig/cluster-lifecycle

Most helpful comment

Try this on each node, @PierrickI3:

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -tnat --flush
systemctl start kubelet
systemctl start docker
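
Presumably this helps because kube-proxy and CNI NAT rules written before the reboot linger in iptables and can black-hole traffic to the API server; flushing them lets a clean rule set be rewritten once everything restarts. To see whether stale rules are present before flushing (the grep is just an illustrative filter for the API server port):

iptables -t nat -L -n | grep 6443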

All 60 comments

Hi @PierrickI3,

Are you able to connect to:

https://192.168.1.19:6443

I can't, but that is my local IP address:

[pierrick@kubernetes ~]$ curl https://192.168.1.19:6443
curl: (7) Failed connect to 192.168.1.19:6443; Connection refused
[pierrick@kubernetes ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fc:aa:14:9a:97:e4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.19/24 brd 192.168.1.255 scope global noprefixroute dynamic eno1
       valid_lft 86196sec preferred_lft 86196sec
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:4e:90:66:f7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever

6443 is the secure port of the API server.
Is the API server running?

What is the output of kubectl get pods --all-namespaces?

I don't see failures right after Watching apiserver.

Is a firewall blocking 6443 by any chance?
Also, could you share the API server manifest? Hide sensitive data where needed.
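
To tell a firewall block apart from the API server simply not listening, a couple of quick checks help (a minimal sketch; assumes the ss and docker CLIs are available):

# is anything listening on the secure port?
ss -tlnp | grep 6443

# did the kube-apiserver container start at all?
docker ps -a | grep kube-apiserver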

Here is the output from kubectl:

[pierrick@kubernetes ~]$ kubectl get pods --all-namespaces
The connection to the server 192.168.1.19:6443 was refused - did you specify the right host or port?

There is no firewall blocking this port, and I ssh directly into the machine.

Here is the manifest (there is no sensitive data to hide since it is not publicly exposed):

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=192.168.1.19
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --disable-admission-plugins=PersistentVolumeLabel
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver-amd64:v1.11.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.1.19
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki

Here is the output from docker ps:

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
0ab99c997dc4        272b3a60cd68           "kube-scheduler --..."   37 minutes ago      Up 37 minutes                           k8s_kube-scheduler_kube-scheduler-kubernetes_kube-system_537879acc30dd5eff5497cb2720a6d64_8
058ee146b06f        52096ee87d0e           "kube-controller-m..."   37 minutes ago      Up 37 minutes                           k8s_kube-controller-manager_kube-controller-manager-kubernetes_kube-system_0da157f6a48b9a49830c56c19af5c954_0
dca22f2e66c1        k8s.gcr.io/pause:3.1   "/pause"                 37 minutes ago      Up 37 minutes                           k8s_POD_kube-scheduler-kubernetes_kube-system_537879acc30dd5eff5497cb2720a6d64_8
301d6736b6ad        k8s.gcr.io/pause:3.1   "/pause"                 37 minutes ago      Up 37 minutes                           k8s_POD_kube-controller-manager-kubernetes_kube-system_0da157f6a48b9a49830c56c19af5c954_0
10cf027f301b        k8s.gcr.io/pause:3.1   "/pause"                 37 minutes ago      Up 37 minutes                           k8s_POD_kube-apiserver-kubernetes_kube-system_b784b670ba660d7fe4b0407690d68d81_4
c7c59c971636        k8s.gcr.io/pause:3.1   "/pause"                 37 minutes ago      Up 37 minutes                           k8s_POD_etcd-kubernetes_kube-system_6fd4d3c9fe373df920ce5e1e4572fd1d_8
--disable-admission-plugins=PersistentVolumeLabel

This was deprecated in 1.11.1 and should be disabled by default - did you adapt an old kubeadm config?
EDIT: ah wait, we didn't cherry-pick this: https://github.com/kubernetes/kubernetes/pull/65827

What are the contents of your kubeadm config? (again, hide sensitive data where needed)

How do I get the kubeadm config? Running kubeadm config view gives me the same connection refused error.
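
Even with the API server down, most of what kubeadm rendered can still be inspected on disk; these are the same paths that appear in the kubeadm init output quoted further down in this thread:

# kubelet flags and config files written by kubeadm 1.11
cat /var/lib/kubelet/kubeadm-flags.env
cat /var/lib/kubelet/config.yaml

# static pod manifests for the control plane components
ls /etc/kubernetes/manifests/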

To deploy, I followed the kubeadm instructions. Here is what I ran:

# update yum packages
yum update -y

# install git, wget & docker
yum install -y git wget nano go docker

# set up the Go repo (used to build crictl below)
rpm --import https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO
curl -s https://mirror.go-repo.io/centos/go-repo.repo | tee /etc/yum.repos.d/go-repo.repo
yum update -y golang

# start Docker
systemctl enable docker && systemctl start docker

# disable swap (not supported by kubeadm)
swapoff -a

# add kubernetes repo to yum
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

setenforce 0 # required to allow containers to access the host filesystem (https://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-enable-disable-enforcement.html). To disable permanently: https://www.tecmint.com/disable-selinux-temporarily-permanently-in-centos-rhel-fedora/

# disable firewall (I know, not great but I am fed up with opening ports and I am behind another firewall and I can do whatever I want)
systemctl disable firewalld && systemctl stop firewalld

###########
# KUBEADM #
###########

# install kubelet, kubeadm and kubectl
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

# prevent issues with traffic being routed incorrectly due to iptables being bypassed
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# install CRICTL (https://github.com/kubernetes-incubator/cri-tools), required by kubeadm
go get github.com/kubernetes-incubator/cri-tools/cmd/crictl

# deploy kubernetes
kubeadm init --pod-network-cidr=10.244.0.0/16

# allow kubectl for non sudoers (run this as a regular user)
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc

# For the root user, run this:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'KUBECONFIG=/etc/kubernetes/admin.conf' >> $HOME/.bashrc

# deploy pod network (flannel)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl taint nodes --all node-role.kubernetes.io/master- # allow pods to be scheduled on master

###################
# REBOOTING ISSUE #
###################
# At the time of writing this, rebooting causes kubernetes to no longer work. This will fix it (http://stytex.de/blog/2018/01/16/how-to-recover-self-hosted-kubeadm-kubernetes-cluster-after-reboot/)
git clone https://github.com/xetys/k8s-self-hosted-recovery
cd k8s-self-hosted-recovery
chmod +x install.sh
./install.sh
cd ..
kubeadm init --pod-network-cidr=10.244.0.0/16

In this case you are not passing --config, so your config is pretty much:
kubeadm config print-default + the CIDR change.
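
One thing that stands out in the deploy script above: swapoff -a only disables swap until the next reboot, and by default the kubelet refuses to start while swap is enabled, so if the swap entry is still in /etc/fstab it comes back after a restart. A minimal sketch to keep it off permanently (assuming a standard fstab entry containing the word swap):

swapoff -a
# comment out the swap line so it stays disabled across reboots
sed -i '/ swap / s/^/#/' /etc/fstab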

What happens if you use a different CNI?
I know it worked before the reboot, but just testing....

First this, without the CIDR:

# deploy kubernetes
kubeadm init

then:

# deploy pod network (weave)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl taint nodes --all node-role.kubernetes.io/master- # allow pods to be scheduled on master

線集汚れた線を削陀しお遊んでください
kubectl taint nodes --all node-role.kubernetes.io/master- ...

Should I run kubeadm reset before retrying? Or just run kubeadm init and then load the CIDR?

Should I run kubeadm reset before retrying?

Yes, unless you have a reason not to.
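
In other words, roughly the same sequence as the original deployment (the CIDR flag matching your flannel setup):

kubeadm reset
kubeadm init --pod-network-cidr=10.244.0.0/16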

I ran kubeadm reset, and then got an error after [init] this might take a minute or longer if the control plane images have to be pulled:

[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0727 16:36:29.663874   26847 kernel_validator.go:81] Validating kernel version
I0727 16:36:29.664814   26847 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubernetes kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kubernetes localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubernetes localhost] and IPs [192.168.1.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled


                Unfortunately, an error has occurred:
                        timed out waiting for the condition

                This error is likely caused by:
                        - The kubelet is not running
                        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
                        - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                                - k8s.gcr.io/kube-apiserver-amd64:v1.11.1
                                - k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
                                - k8s.gcr.io/kube-scheduler-amd64:v1.11.1
                                - k8s.gcr.io/etcd-amd64:3.2.18
                                - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                                  are downloaded locally and cached.

                If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                        - 'systemctl status kubelet'
                        - 'journalctl -xeu kubelet'

                Additionally, a control plane component may have crashed or exited when started by the container runtime.
                To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
                Here is one example how you may list all Kubernetes containers running in docker:
                        - 'docker ps -a | grep kube | grep -v pause'
                        Once you have found the failing container, you can inspect its logs with:
                        - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

Here is the kubeadm reset output:

[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
       Unfortunately, an error has occurred:
                   timed out waiting for the condition

For some reason we have been seeing this from users over the last few days.

Can you pre-pull the images using:

kubeadm config images pull --kubernetes-version 1.11.1

Then retry init.
If it fails again, what does journalctl say about the kubelet?
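
For example, to grab just the tail of the kubelet unit's log (a plain journalctl invocation; the -xeu kubelet variant from the error text above works too):

journalctl -u kubelet --no-pager | tail -n 100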

Here is the output from kubeadm config images pull --kubernetes-version 1.11.1:

[root@kubernetes ~]# kubeadm config images pull --kubernetes-version 1.11.1
[config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
[config/images] Pulled k8s.gcr.io/coredns:1.1.3

Then kubeadm reset:

[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

Then kubeadm init:

[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0727 17:02:09.000020    3408 kernel_validator.go:81] Validating kernel version
I0727 17:02:09.000209    3408 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubernetes kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kubernetes localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubernetes localhost] and IPs [192.168.1.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

                Unfortunately, an error has occurred:
                        timed out waiting for the condition

                This error is likely caused by:
                        - The kubelet is not running
                        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
                        - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                                - k8s.gcr.io/kube-apiserver-amd64:v1.11.1
                                - k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
                                - k8s.gcr.io/kube-scheduler-amd64:v1.11.1
                                - k8s.gcr.io/etcd-amd64:3.2.18
                                - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                                  are downloaded locally and cached.

                If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                        - 'systemctl status kubelet'
                        - 'journalctl -xeu kubelet'

                Additionally, a control plane component may have crashed or exited when started by the container runtime.
                To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
                Here is one example how you may list all Kubernetes containers running in docker:
                        - 'docker ps -a | grep kube | grep -v pause'
                        Once you have found the failing container, you can inspect its logs with:
                        - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

Last entries from journalctl:

Jul 27 17:03:00 kubernetes kubelet[3636]: I0727 17:03:00.604819    3636 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"
Jul 27 17:03:00 kubernetes kubelet[3636]: I0727 17:03:00.604946    3636 kuberuntime_manager.go:767] Back-off 20s restarting failed container=etcd pod=etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)
Jul 27 17:03:00 kubernetes kubelet[3636]: E0727 17:03:00.604982    3636 pod_workers.go:186] Error syncing pod 6fd4d3c9fe373df920ce5e1e4572fd1d ("etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 20s restarting failed container=etcd pod=etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.175841    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.179832    3636 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.180286    3636 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.414597    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.415518    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.416549    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.428500    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.732777    3636 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.1 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=192.168.1.19 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:etc-pki ReadOnly:true MountPath:/etc/pki SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.1.19,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.732893    3636 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.325269    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.325333    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:02 kubernetes kubelet[3636]: W0727 17:03:02.329421    3636 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.415253    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.416228    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.417254    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629551    3636 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.1 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=192.168.1.19 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:etc-pki ReadOnly:true MountPath:/etc/pki SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.1.19,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629669    3636 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629850    3636 kuberuntime_manager.go:767] Back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.629891    3636 pod_workers.go:186] Error syncing pod 32544bee4c007108f4b6c54da83cc67e ("kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.415923    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.416815    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.417995    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.416640    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.417583    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.418535    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.417293    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.418283    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.419337    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: W0727 17:03:05.511995    3636 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.512167    3636 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:03:06 kubernetes kubelet[3636]: I0727 17:03:06.396897    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:06 kubernetes kubelet[3636]: W0727 17:03:06.401380    3636 status_manager.go:482] Failed to get status for pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.417876    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.418801    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.419836    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: I0727 17:03:06.577952    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:06 kubernetes kubelet[3636]: W0727 17:03:06.582165    3636 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
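
Reading these entries together: etcd is in CrashLoopBackOff, so kube-apiserver (which points at --etcd-servers=https://127.0.0.1:2379) cannot come up either, and every kubelet request to https://192.168.1.19:6443 is refused. One way to confirm, following the docker commands kubeadm itself suggests (the container ID below is a placeholder):

docker ps -a | grep etcd | grep -v pause   # locate the crashing etcd container
docker logs CONTAINERID                    # see why etcd keeps exiting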

I still can't add the network pod.

Yes, this was already failing before.

The image pulls working means you can connect to the gcr.io bucket.

Restart the kubelet manually and see what shows up in the logs:

systemctl restart kubelet

systemctl status kubelet # <---- ?
journalctl -xeu kubelet   # <---- ?

I'm running out of ideas.

  • systemctl restart kubelet
  • systemctl status kubelet 
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2018-07-27 17:22:15 CEST; 6s ago
     Docs: http://kubernetes.io/docs/
 Main PID: 10401 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─10401 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network...

Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.636513   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.482761   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&li...: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.483784   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkuberne...: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.484750   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.483505   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&li...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.484300   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkuberne...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.485469   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116   10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Hint: Some lines were ellipsized, use -l to show in full.
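
(The recurring "No networks found in /etc/cni/net.d" warning in this output is expected at this point: the pod network add-on has not been applied yet, and applying one requires a working API server, so it is a symptom rather than the cause. You can verify the directory is simply empty, e.g.:)

ls /etc/cni/net.d    # empty until a CNI add-on such as flannel or calico is installed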

journalctl -xeu kubelet | less

-- Subject: Unit kubelet.service has begun shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun shutting down.
Jul 27 17:22:15 kubernetes systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Jul 27 17:22:15 kubernetes systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Jul 27 17:22:15 kubernetes kubelet[10401]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 17:22:15 kubernetes kubelet[10401]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.439279   10401 server.go:408] Version: v1.11.1
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.439636   10401 plugins.go:97] No cloud provider specified.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.443829   10401 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478490   10401 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478906   10401 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478931   10401 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479017   10401 container_manager_linux.go:267] Creating device plugin manager: true
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479052   10401 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479103   10401 state_mem.go:84] [cpumanager] updated default cpuset: ""
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479113   10401 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479206   10401 kubelet.go:274] Adding pod path: /etc/kubernetes/manifests
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479256   10401 kubelet.go:299] Watching apiserver
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.479973   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.480000   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.480053   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.484688   10401 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.484709   10401 client.go:104] Start docker client with request timeout=2m0s
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.486438   10401 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.486467   10401 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.486599   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.488739   10401 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.488845   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.488882   10401 docker_service.go:253] Docker cri networking managed by cni
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.499501   10401 docker_service.go:258] Docker Info: &{ID:V36L:ETJO:IECX:PJF4:G3GB:JHA6:LGCF:VQBJ:D2GY:PVFO:567O:545Y Containers:8 ContainersRunning:6 ContainersPaused:0 ContainersStopped:2 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:47 SystemTime:2018-07-27T17:22:15.493615517+02:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-862.9.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420f3e000 NCPU:12 MemTotal:33386934272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]} runc:{Path:docker-runc Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc420f48000} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.499629   10401 docker_service.go:271] Setting cgroupDriver to systemd
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.510960   10401 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.511524   10401 csi_plugin.go:111] kubernetes.io/csi: plugin initializing...
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.512249   10401 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.512297   10401 kubelet.go:1261] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.512389   10401 server.go:986] Started kubelet
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.512717   10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513432   10401 server.go:302] Adding debug handlers to kubelet server.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513637   10401 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513742   10401 status_manager.go:152] Starting to sync pod status with apiserver
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513810   10401 kubelet.go:1758] Starting kubelet main sync loop.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513901   10401 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.516479   10401 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.516557   10401 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.518992   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.519170   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557259   10401 container.go:393] Failed to create summary reader for "/system.slice/docker.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557505   10401 container.go:393] Failed to create summary reader for "/system.slice/irqbalance.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557765   10401 container.go:393] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.561683   10401 container.go:393] Failed to create summary reader for "/system.slice/k8s-self-hosted-recover.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.565386   10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.565666   10401 container.go:393] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.567498   10401 container.go:393] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.567815   10401 container.go:393] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.568113   10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.568957   10401 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.569544   10401 container.go:393] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.572074   10401 container.go:393] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.572295   10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.575556   10401 container.go:393] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.580473   10401 container.go:393] Failed to create summary reader for "/system.slice/crond.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.580825   10401 container.go:393] Failed to create summary reader for "/system.slice/lvm2-lvmetad.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.614059   10401 kubelet.go:1775] skipping pod synchronization - [container runtime is down]
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.614738   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.617620   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.618177   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.618610   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620065   10401 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620083   10401 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620097   10401 policy_none.go:42] [cpumanager] none policy: Start
Jul 27 17:22:15 kubernetes kubelet[10401]: Starting Device Plugin manager
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621716   10401 container_manager_linux.go:792] CPUAccounting not enabled for pid: 986
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621726   10401 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 986
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621797   10401 container_manager_linux.go:792] CPUAccounting not enabled for pid: 10401
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621804   10401 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 10401
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.622220   10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.818778   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.822804   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.823257   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: I0727 17:22:16.223461   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:16 kubernetes kubelet[10401]: I0727 17:22:16.227305   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.227718   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.480650   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.481646   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.482678   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: I0727 17:22:17.027967   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:17 kubernetes kubelet[10401]: I0727 17:22:17.031757   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.032125   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.481348   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.482224   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.483219   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.482059   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.483034   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.484060   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: I0727 17:22:18.632340   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:18 kubernetes kubelet[10401]: I0727 17:22:18.636096   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.636513   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.482761   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.483784   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.484750   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.483505   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.484300   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.485469   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116   10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.484293   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.485115   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.486258   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: I0727 17:22:21.836741   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:21 kubernetes kubelet[10401]: I0727 17:22:21.840909   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.841341   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.484956   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.485979   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.487497   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.485724   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.486543   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.488107   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.486391   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.487305   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.488708   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.487061   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.488108   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.489302   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.622507   10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:25 kubernetes kubelet[10401]: W0727 17:22:25.624576   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.624815   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.487846   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.488682   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.489896   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.488510   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.489607   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.490628   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: I0727 17:22:28.241605   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:28 kubernetes kubelet[10401]: I0727 17:22:28.245892   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.246319   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.489234   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.490271   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.491127   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.489955   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.490777   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.491942   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.490671   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.491715   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.492608   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: W0727 17:22:30.626001   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.626177   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.058799   10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.491375   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.492253   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.493403   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.492126   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.493049   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.494127   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.492838   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.493750   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.494858   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.493558   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.494514   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.495496   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.246559   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.251191   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.251606   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.493606   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.494144   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.495144   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.496164   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.530217   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.530338   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.534682   10401 status_manager.go:482] Failed to get status for pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.552571   10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/6fd4d3c9fe373df920ce5e1e4572fd1d-etcd-certs") pod "etcd-kubernetes" (UID: "6fd4d3c9fe373df920ce5e1e4572fd1d")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.552633   10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/6fd4d3c9fe373df920ce5e1e4572fd1d-etcd-data") pod "etcd-kubernetes" (UID: "6fd4d3c9fe373df920ce5e1e4572fd1d")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.563624   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.563750   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.567753   10401 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.596873   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.596962   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.601046   10401 status_manager.go:482] Failed to get status for pod "kube-controller-manager-kubernetes_kube-system(fe4b0bda62e8e0df1386dc034ba16ee3)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.622704   10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.627154   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.627334   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.630351   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.634273   10401 status_manager.go:482] Failed to get status for pod "kube-scheduler-kubernetes_kube-system(537879acc30dd5eff5497cb2720a6d64)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.652980   10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/32544bee4c007108f4b6c54da83cc67e-k8s-certs") pod "kube-apiserver-kubernetes" (UID: "32544bee4c007108f4b6c54da83cc67e")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.653036   10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/fe4b0bda62e8e0df1386dc034ba16ee3-ca-certs") pod "kube-controller-manager-kubernetes" (UID: "fe4b0bda62e8e0df1386dc034ba16ee3")

What happens if you stop everything Kubernetes-related on this node and try to run a small server listening on 192.168.1.19:6443?

kubeadm reset
systemctl stop kubelet

netstat -tulpn
to confirm that nothing is listening.
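
(If netstat isn't available on the box, the iproute2 equivalent should show any listener on that port just as well:)

ss -tlnp | grep 6443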

Write this into a test.go file:

package main

import (
  "net/http"
  "strings"
)

// sayHello replies with "Hello <path>" so it is obvious the request reached us.
func sayHello(w http.ResponseWriter, r *http.Request) {
  message := strings.TrimPrefix(r.URL.Path, "/")
  w.Write([]byte("Hello " + message))
}

func main() {
  http.HandleFunc("/", sayHello)
  // Bind to the exact address:port the kube-apiserver would normally use.
  if err := http.ListenAndServe("192.168.1.19:6443", nil); err != nil {
    panic(err)
  }
}

go run test.go

$ curl 192.168.1.19:6443
Does it work?

I know this is silly, but I'm out of ideas.

netstat -tulpn output:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      989/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      989/sshd
udp        0      0 0.0.0.0:68              0.0.0.0:*                           807/dhclient

Output of curl after running test.go:

[pierrick@kubernetes ~]$ curl 192.168.1.19:6443
Hello [pierrick@kubernetes ~]$

It works!
Sorry, I can't be of more help today.

Maybe someone else can look at the logs and figure out what's going on?

Is there anyone in particular you have in mind? Could you tag them on this issue?

@kubernetes/sig-cluster-lifecycle-bugs

@PierrickI3 can you check whether the API server is listening on port 6443? If so, can you check your proxy settings (if you have any)?

netstat -tulpn | grep 6443
set | grep -i proxy

Thanks @bart0sh, I'll get to it this weekend.

FYI, the netstat -tulpn output is shown above (https://github.com/kubernetes/kubeadm/issues/1026#issuecomment-408457991). There is no proxy (direct connection to the Internet).

Why is this closed? I'm having a similar issue on a VirtualBox VM with CentOS 7.
So the only solution was to build a completely new box???

@PierrickI3 I'm having exactly the same problem, and most of the logs look similar. Did you reach a solution or workaround?

@kheirp unfortunately not. I gave up and went back to minikube for development purposes. But I'd like to come back to kubeadm once a solution is found.

@PierrickI3 ok.. I'll let you know if I get anywhere.

Same issue running Ubuntu 16 on VMware.

Same issue running on RHEL 7.5.

Same issue running on a CentOS 7 cluster.

Same with a Kubic cluster.

Try this on each node @PierrickI3:

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker

Maybe you forgot to run this after installing, @PierrickI3:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

Exactly the same problem on RHEL 7 here. Again, it suddenly stopped working. This is critical for us.

I ran:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

etc., etc., but the master node is refusing connections from every node in the cluster.

I wasn't able to reproduce this issue; 2-3 minutes after a reboot the cluster comes back up fine.
It must be something related to individual setups.

It looks like the API server isn't running. I tried restarting the kubelet several times, which didn't help at all. The funny thing is, this was working, and I hadn't touched anything until I added a Docker image to the cluster yesterday.

I figured out what was going on: the directory mounted at /var was full. It's now working as expected.
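
(For anyone hitting the same thing, a quick way to check for that kind of disk pressure, assuming /var is its own mount, might be:)

df -h /var                       # is the filesystem at or near 100%?
du -sh /var/lib/docker /var/log  # common culprits for runaway growth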

The API service (and others) wasn't running because a rule for port 6443 had been added to iptables, so connections were constantly refused even from localhost, and running docker ps showed no running containers; in other words, something strange was definitely going on... There's nothing in the kubectl logs, and nothing indicating why the API service failed to start.
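
(A minimal way to check whether such a rule exists, assuming plain iptables, might be:)

iptables -S | grep 6443                # list any rules mentioning the apiserver port
iptables -L INPUT -n --line-numbers    # inspect the INPUT chain with rule numbers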

I figured out what was going on: the directory mounted at /var was full.

OMG thank you!!!

The systemctl status kubelet cmd shows the error info:
Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116   10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)

So:

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Add the following parameters: --network-plugin=cni --cni-conf-dir=/etc/cni/ --cni-bin-dir=/opt/cni/bin
Then:
kubeadm reset
kubeadm init
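
(For reference, older kubeadm drop-ins carried those flags in an Environment line roughly like this; the exact paths can differ per distro, and /etc/cni/net.d is the more common conf dir:)

Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"

Don't forget a systemctl daemon-reload before restarting the kubelet after editing the unit.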

Try this on each node @PierrickI3:

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker

This fixed the problem for me, though I only ran it on the master node and did a kubeadm reset on the other nodes.

Thanks @manoj-bandara, but I've given up for now and will come back to this later in the year. See https://github.com/kubernetes/kubeadm/issues/1026#issuecomment-420948092.

Same problem here - tried a ton of things. This seems to have been the issue:
failed: open /run/systemd/resolve/resolv.conf: no such file or directory

I symlinked /etc/resolv.conf to that location and restarted the kubelet (master only); after that, the master and all the nodes came back up.
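
(Concretely, that workaround might look like this on a host where systemd-resolved isn't running; note that /run is a tmpfs, so the link disappears at reboot unless the service is enabled:)

mkdir -p /run/systemd/resolve
ln -s /etc/resolv.conf /run/systemd/resolve/resolv.conf
systemctl restart kubelet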

In my case it was a swap issue; I fixed it by turning swap off:

sudo swapoff -a
sudo systemctl restart kubelet.service
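
(Note that swapoff -a only lasts until the next reboot; to make it permanent, the swap entry in /etc/fstab has to be disabled as well, e.g.:)

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap line so the change survives reboots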

Try this on each node @PierrickI3:

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker

OMG, awesome commands!! Thank you so much!!


@JTRNEO, I need to remember this one too :)

I've hit the same problem on several installs. I still haven't figured it out. I've tried the Flannel, Calico, and Weave drivers with the same result, so the problem is probably not related to the network plugin.
Not sure if it's related to the problem, but I ran 'sudo swapoff -a', and after a reboot the server wants swap turned off again.

Same problem here. My kube-apiserver logs are full of:

I1124 18:44:13.180672       1 log.go:172] http: TLS handshake error from 192.168.1.235:56160: EOF
I1124 18:44:13.186601       1 log.go:172] http: TLS handshake error from 192.168.1.235:56244: EOF
I1124 18:44:13.201880       1 log.go:172] http: TLS handshake error from 192.168.1.192:56340: EOF
I1124 18:44:13.208991       1 log.go:172] http: TLS handshake error from 192.168.1.235:56234: EOF
I1124 18:44:13.248214       1 log.go:172] http: TLS handshake error from 192.168.1.235:56166: EOF
I1124 18:44:13.292943       1 log.go:172] http: TLS handshake error from 192.168.1.235:56272: EOF
I1124 18:44:13.332362       1 log.go:172] http: TLS handshake error from 192.168.1.235:56150: EOF
I1124 18:44:13.352911       1 log.go:172] http: TLS handshake error from 192.168.1.249:41300: EOF

Flushing the ip tables didn't work, and swap is already off.
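
(One way to look at the client side of those handshakes, assuming openssl is available and substituting your apiserver's address for the placeholder, might be:)

openssl s_client -connect <apiserver-ip>:6443 </dev/null
# a healthy apiserver should present its serving certificate here;
# an immediate EOF or reset points at whatever sits in between
# (load balancer, conntrack, firewall rules)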

Did you install flannel on the worker nodes?


No, I'm using kube-router for networking. Its pods are in a crash-backoff state, full of errors about not being able to talk to the API server.

The cluster ran fine for a few months, then the power went out, so the machines and the cluster restarted.

@cjbottaro
is it possible that your kubelet client certificates have expired?

See the second warning here:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration

This applies to nodes created with kubeadm init, with kubeadm versions prior to 1.17.
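
(A quick way to check the kubelet client certificate directly, without going through kubeadm, might be:)

openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem
# prints notAfter=<date>; if that date is in the past, the kubelet can no
# longer authenticate to the apiserver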

Same issue running Ubuntu 16 on VMware.

I'm also running my cluster on VMware; what fixed the problem for you? Thanks!

I ran into the same problem today after editing the service-cidr setting on a new kube cluster. For me, the problem was that the kube-apiserver Docker container was flapping. After checking the logs with docker logs, I saw:
Error: error determining service IP ranges for primary service cidr: The service cluster IP range must be at least 8 IP addresses.
I had provisioned a small /30 CIDR for services, but I had to open it up to a /29.

Not sure if it's related, but note that 1.17.0 had a CIDR-related bug, so I hope you're running the latest .17 patch.
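
(For the arithmetic behind that error: a /30 spans 2^(32-30) = 4 addresses, which is below the 8-address minimum the apiserver enforces, while a /29 spans 2^(32-29) = 8, the smallest range that passes the check.)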

You may need to change your iptables rules.
Run the cmd iptables -L --line-numbers to find the REJECT rule (reject-with icmp-host-prohibited).
Then delete it with iptables -D INPUT 153 (153 means the rule's line number).
Finally, restart the kubelet.
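
(A sketch of what that session might look like; the chain and line number will differ on each host:)

iptables -L INPUT --line-numbers | grep icmp-host-prohibited
# e.g.  153  REJECT  all  --  anywhere  anywhere  reject-with icmp-host-prohibited
iptables -D INPUT 153   # delete by the line number reported above
systemctl restart kubelet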

I had a similar problem with a kubeadm cluster. I used docker restart $(docker ps -qa) twice.

I have the same problem as @PierrickI3. After a reboot, the node (control plane) is down. The kubelet service is running, trying to connect to an apiserver that isn't running. etcd is running. There is no CNI network interface, only loopback, ethernet, and Docker.

It worked fine for a long time before the reboot; I don't see any particular errors anywhere, it just doesn't come up. I've tried everything mentioned in this thread as well as many others.

I'm completely confused about what starts what here, in order to investigate further. Does the kubelet service start the CNI network and then the control-plane pods (API, scheduler, etc.)? I checked docker ps -a, and none of the control-plane containers are even attempted; they also have no restart policy. So why is the kubelet trying to talk to the API server when it isn't starting the API server?

I had a similar problem; it's not solved yet, but in my case it was caused by Docker being upgraded to an incompatible version. Just check whether the Docker service is running.

@neolit123

@cjbottaro
is it possible that your kubelet client certificates have expired?

See the second warning here:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration

This applies to nodes created with kubeadm init, with kubeadm versions prior to 1.17.

Thank you! After a reboot my machine couldn't connect. I checked the certificates and found that three of them had expired:

[root@localhost home]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

W0128 14:02:50.815166   21689 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 17, 2021 06:56 UTC   232d                                    no      
apiserver                  Sep 17, 2021 06:55 UTC   232d            ca                      no      
apiserver-etcd-client      Sep 17, 2021 06:55 UTC   232d            etcd-ca                 no      
apiserver-kubelet-client   Sep 17, 2021 06:55 UTC   232d            ca                      no      
controller-manager.conf    Sep 17, 2021 06:56 UTC   232d                                    no      
etcd-healthcheck-client    Dec 19, 2020 07:56 UTC   <invalid>       etcd-ca                 no      
etcd-peer                  Dec 19, 2020 07:56 UTC   <invalid>       etcd-ca                 no      
etcd-server                Dec 19, 2020 07:56 UTC   <invalid>       etcd-ca                 no      
front-proxy-client         Sep 17, 2021 06:55 UTC   232d            front-proxy-ca          no      
scheduler.conf             Sep 17, 2021 06:56 UTC   232d                                    no   

So, renew the certificates:

kubeadm alpha certs renew etcd-healthcheck-client
kubeadm alpha certs renew etcd-peer
kubeadm alpha certs renew etcd-server

Then restart the services:

systemctl daemon-reload
systemctl restart kubelet
systemctl restart docker

And it works!

Hi,
please check whether swap is enabled on the master and worker nodes. If it is, disable it and restart the services.
